

SIX SIGMA
AND
BEYOND
Design for
Six Sigma
SIX SIGMA AND
BEYOND
A series by D.H. Stamatis
Volume I
Foundations of Excellent Performance
Volume II
Problem Solving and Basic Mathematics
Volume III
Statistics and Probability
Volume IV
Statistical Process Control
Volume V
Design of Experiments
Volume VI
Design for Six Sigma
Volume VII
The Implementation Process
D. H. Stamatis
SIX SIGMA
AND
BEYOND
Design for
Six Sigma
ST. LUCIE PRESS
A CRC Press Company
Boca Raton London New York Washington, D.C.
SL3151 FMFrame Page 4 Friday, September 27, 2002 3:14 PM
Library of Congress Cataloging-in-Publication Data
Stamatis, D. H., 1947–
Six sigma and beyond : design for six sigma, volume VI
p. cm. -- (Six sigma and beyond series)
Includes bibliographical references.
ISBN 1-57444-315-1 (v. 1 : alk. paper)
1. Quality control--Statistical methods. 2. Production management--Statistical
methods. 3. Industrial management. I. Title. II. Series.
TS156 .S73 2001
658.5′62--dc21
2001041635
This book contains information obtained from authentic and highly regarded sources. Reprinted material
is quoted with permission, and sources are indicated. A wide variety of references are listed. Reasonable
efforts have been made to publish reliable data and information, but the author and the publisher cannot
assume responsibility for the validity of all materials or for the consequences of their use.
Neither this book nor any part may be reproduced or transmitted in any form or by any means, electronic
or mechanical, including photocopying, microfilming, and recording, or by any information storage or
retrieval system, without prior permission in writing from the publisher.
The consent of CRC Press LLC does not extend to copying for general distribution, for promotion, for
creating new works, or for resale. Specific permission must be obtained in writing from CRC Press LLC
for such copying.
Direct all inquiries to CRC Press LLC, 2000 N.W. Corporate Blvd., Boca Raton, Florida 33431.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are
used only for identification and explanation, without intent to infringe.
Visit the CRC Press Web site at www.crcpress.com
© 2003 by CRC Press LLC
St. Lucie Press is an imprint of CRC Press LLC
No claim to original U.S. Government works
International Standard Book Number 1-57444-315-1
Library of Congress Card Number 2001041635
Printed in the United States of America 1 2 3 4 5 6 7 8 9 0
Printed on acid-free paper
To
Christine
Preface
A collage of historical facts brings us to the realization that concerns about quality
are present not only in the minds of top management when things go wrong but also
in the minds of customers when they buy something and it does not work.
We begin the collage 20 years ago, with Wayne’s (1982) proclamation in the
New York Times of “management gospel gone wrong.” Wayne quoted two Harvard
professors, Hayes and Abernathy, as saying, “You may have your eye on the wrong
ball.” In a discussion of the cost differential between American and Japanese companies, Wayne said that American business executives argue that the Japanese advantage is largely rooted in factors unique to Japan: lower labor costs, more automated
and newer factories, strong government support, and a homogeneous culture.
The professors, though, argue differently, Wayne said. They claim that Japanese
businesses are better because they pay attention to such basics as a clean workplace,
preventive maintenance for machinery, a desire to make their production process
error free, and an attitude that “thinks quality.”
Other authors writing in the early 1980s made similar points. Blotnick (1982)
wrote, “If it’s American, it must be bad.” The headline of an anonymous article in
The Sentinel Star (1982) referred to “retailers relearning lesson of customer’s always
right.” Ohmae (1982) wrote an article titled “Quality control circles: They work and
don’t work.” Imai (1982) wrote that unless organizations control (eliminate) their
waste, they would have problems. He identified waste as:
1. The waste of making too many units
2. The waste of waiting time at the machine
3. The waste of transporting units
4. The waste of processing itself
5. The waste of inventory
6. The waste of motion
7. The waste of making defective units
Imai pointed out some of Toyota’s advantages, specifically its autonomation
system. Autonomation means that the machine is equipped with human wisdom to
stop automatically whenever something goes wrong with it.
Wight (1982) urged management to “learn to live with the truth.” When Honda
rolled out its first American-built car, Lewin (1982) wrote, “Japanese bosses ponder
mysterious U.S. workers.” Among other things, the Japanese wondered why Americans have so many lawyers, Lewin pointed out. Lohr (1983) wrote that “it’s just
wishful thinking to say that Japan cannot catch up in software. That is what a lot of
people were saying about the semiconductor industry a few years ago and the auto
industry a decade ago.”
Holusha (1983) wrote of the “U.S. striving for efficiency.” Serrin (1983) described
a study that showed that the work ethic is “alive but neglected.” Halloran (1983) wrote
that an “army staff chief faults industry as producing defective materials.”
Almost twenty years later, Zachary (2001) reported that Toyota strives to retain
its benchmark status by continuing its focus on the Kaizen approach and genchi
genbutsu (a “go and see” attitude). Winter (2001) wrote that GM is “now trying to show
it understands importance of product.” McElroy (2001) wrote, “Customers don’t
care how well your stock is performing. They do not care you are the lowest producer.
They do not care you are the fastest to market. All they care about is the car they
are buying. That is why it all comes down to product.”
Morais (2001), quoting O’Connell (2000), claimed that over 100,000 focus
groups were fielded in 1999, even though marketing and advertising professionals
have mixed feelings about their value. Steel (1998, pp. 79 and 202–205) expressed
industry’s ambivalence about focus groups. Among other things, he claimed that
they are not very representative at all. The odd thing about focus groups is that we
still use them to predict the sales potential of new products primarily because of
their instant judgments, non-projectable conclusions, and comparatively low costs,
even though we know better — that is, we know that we could do better by learning
about consumers’ product needs and attitudes and understanding their lives.
In the automotive industry, the evidence that something is wrong is abundantly
clear, as Mayne et al. (2001) have reported. Here are some key points:
1. National Highway Traffic Safety Administration records showed more
than 250 vehicle recalls as of mid-June 2001 — well on pace to exceed
the previous year’s record 12-month total of 483. The 2000 total broke
the previous high of 370 — set in 1999 — and shattered the next-highest
mark of 328, set the year before.
2. Numbers of recalled vehicles have risen correspondingly — 23.4 million
in 2000, 19.2 million the previous year, and 17.2 million in 1998.
3. The number of vehicles snared by non-compliance recalls — issued for
failure to meet the Federal Motor Vehicle Safety Standards — increased
to 4.5 million in 2000. This represents a 61% hike compared to 1999’s
2.8 million, and it is nearly three times the 1.6 million recorded in 1998.
4. A total of 18.9 million vehicles were recalled in 2000 because of safety-related defects. That is 81% of the overall recall total and a sharp increase
compared to 1999 and 1998, when safety-related defects prompted recalls
of 16.4 million and 15.6 million vehicles, respectively. Even more telling,
perhaps, it is 9% more than the 17.3 million new light vehicles sold last
year in the U.S.
5. A supplier executive, who wants to remain anonymous, bristles at the
suggestion that quality problems fall at the feet of suppliers. He says
quality has suffered because of the Big Three’s relentless pursuit of cost
reduction. He also suggests that buyers at the Big Three are evaluated
primarily on the basis of cost savings rather than on the quality of the
parts they procure. In the final analysis, “Americans build to print and
specification, whereas Japanese build to function.”
Powerful statements indeed, yet I could go on with examples involving home
appliances, food, electronics, health devices, and many other types of products.
However, the point is that the problems we are having are not new. The actions
necessary to fix these problems are not new. What we need is a new commitment
to pursue customer satisfaction and mean it. We must put quality in the design of
all our products and services in such a way that the customer sees value in them.
We must become like a philologist who believes that there is truth and falsehood in
a comma. The pleasures of philology are such that by merely changing the placement
of a comma, you can make sense out of nonsense; you can claim a small victory
over ignorance and error. So, we in quality must learn to persevere and learn as
much as possible about the customer. We must make strides to identify what customers need, want, and expect and then provide them with that product or service.
We must do what the French philosopher Étienne Souriau observed: pour inventer, il faut penser à côté. To invent, you must think aside — that is, slightly askew.
Or we must follow the lead of Emily Dickinson when she wrote, “My business is
circumstances,” and her readers understood the serendipity of ideas and the rewards
of looking aside to see those ideas’ unlikely, or at least less than obvious, connections.
This is the essence of Design for Six Sigma (DFSS). The upfront analysis and
investigation of the customer is of paramount importance. So is trying to identify
what is really needed (trade-off analysis) to make the difference. The DFSS approach
is based on a systems overhaul and a new mindset to cure the ailments of organizations (profitability) and provide satisfaction to the customer (functionality and
value). It is a proactive approach rather than a reactive approach, unlike the regular
six sigma methodology. DFSS is a methodology that works for the future, rather
than the present or past.
DFSS is a holistic system that is based on challenging the status quo and
providing a product or a service that not only is accepted by the customer but is
financially rewarding for the organization. To do this, of course, managers must take
risks. They must allow their engineers to design robust designs — and that means
that the traditional Y = F(x) is not good enough. Now we must look for Y = F(x,n).
In these equations, x is the traditional customer characteristic (cascaded to smaller
and precise characteristics), but now we add the n, which is the noise. In other words,
we must design our products and services in the presence of noise for maximum
satisfaction.
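The shift from Y = F(x) to Y = F(x, n) can be illustrated numerically. The sketch below is ours, not the author's, and the transfer function in it is purely hypothetical: it estimates the mean and the spread of a response Y at a candidate design setting x while a random noise term n is active, which is exactly the kind of evaluation that designing in the presence of noise calls for.

```python
import random
import statistics

def response(x, n):
    """Hypothetical transfer function Y = F(x, n): x is the controlled
    design characteristic, n is uncontrolled noise (illustrative only)."""
    return x * (1.0 + n) + 0.5 * n

def evaluate(x, trials=10_000, noise_sd=0.1, seed=42):
    """Estimate the mean and standard deviation of Y at setting x
    when the noise n is drawn from a normal distribution."""
    rng = random.Random(seed)
    ys = [response(x, rng.gauss(0.0, noise_sd)) for _ in range(trials)]
    return statistics.mean(ys), statistics.stdev(ys)

# Compare two candidate settings: the robust choice is the one whose
# output varies least relative to its mean, not merely the one with
# the better average.
for x in (1.0, 2.0):
    mean_y, sd_y = evaluate(x)
    print(f"x = {x}: mean Y = {mean_y:.3f}, spread = {sd_y:.3f}")
```

Designing for maximum satisfaction in the presence of noise then means choosing x to keep the spread acceptable, not merely hitting the target on average.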
The best way to predict the future is to invent it. This suggests that the best way
to know what is coming is to put yourself in charge of creating the situation you
want. Be purposeful. Look at what is needed now, and set about doing it. Action
works like a powerful drug to relieve feelings of fear, helplessness, anger, uncertainty,
or depression. Mobilize yourself as well as the organization because you will be the
primary architect of your future.
One of the keys to being successful in your efforts is to anticipate. Accept the
past, focus on the future, and anticipate. Consider what is coming, what needs to
happen, and how you can rise to the occasion. Stay loose. Remain flexible. Be light
on your feet. Instead of changing with the times, make a habit of changing a little
ahead of the times. This change can happen with Designing for Six Sigma and
beyond. The only requirement is that we must take advantage of the future before
we are ready for it. I am reminded of Flint’s (2001), Visnic’s (2001), and Mayne’s
(2001) comments, respectively. American automotive companies, for example, have
abandoned the car market because they do not make money on cars. They forget
that the Japanese companies not only sell cars but make money from them. So what
does Detroit do to sell? It focuses on price — rebates, discounts, 0% finance, lease
subsidies, and so on. What does the competition do? Not only have they developed
an engine/transmission with variable valve breathing, they are already using it. We
are trying to perfect the five-speed, and the competition is installing six speeds; we
talk about CVTs, and Audi is putting one in its new A4. We are focusing on 10
years and 150,000 miles reliability, and our competitors are pushing for 15 years
and 200,000 miles reliability.
In diesel technology, the Europeans and Americans are worlds apart. Even in
this age of globalization, the light duty diesel markets in Europe have become more
sophisticated and demanding to the point where policy makers have recognized the
environmental advantages of diesel and have allowed new diesel vehicles to prove
themselves as efficient, quiet, and powerful alternatives. What do we do? Our policy
makers have created a regulatory structure that greatly impedes the widespread use
of diesel vehicles. Consequently, Americans may be denied the performance, fuel
economy, and environmental benefits of advanced diesel technology.
A third example comes again from the automotive world in reference to fuel
economy. One of the issues in fuel economy is the underbody design. Early on,
American companies paid great attention to the design of the underbody. As time
went on, the emphasis shifted to shapes that channel airflow over the bodywork,
instead of what lies beneath. But while U.S. automakers were accustomed to being
on top, BMW AG was redefining airflow from the ground up. Underbodies have
been a priority with the Munich-based automaker since 1980. That is when BMW
acquired its first wind tunnel and began development of the 1986 7-series — code
named E32. Today, underbodies rank second behind rear ends, wheel housing and
cooling airflow. As of right now, the initiative for BMW has gained them 2 miles
per hour.
When we talk about customer satisfaction we must do certain things that will
help or improve the image of the organization in the perception of the customer. We
are talking about prestige and reputation. Prestige and reputation differ from each
other in three ways:
1. Reputation applies to individual products or services, while prestige is a
characteristic of the organization as a whole.
2. Reputation can be measured on absolute scales, but prestige can only be
judged in relative terms.
3. Prestige is judged relative to other organizations; reputation is not.
It is prestige that we are interested in from a Design for Six Sigma perspective.
The reason for this is that prestige compels each organization to perform better than
its competitors, thereby promoting excellence and continuously raising industry
standards not only for the customer but also for the competitors. To achieve prestige,
we must be cognizant of some basic yet inherent items, including the following:
• Be ready to engage our customers in conversation every second of the
day. In the digital age, this means having an interactive medium where
people can tell you what they think about your brand and your product
or service whenever they have an idea, a complaint, or a compliment, or
when they just want to air some ideas with somebody who knows where
you are going. Easy places to start are always-open discussion boards and
focus groups. When you get more sophisticated, you can try regularly
scheduled special events or special meetings. The best solution? Set up
an Internet communication structure that lets you have a 24 × 7 open line
of communication.
• Make customer relations a two-way street. Today’s customers not only
want to be heard, they want to respond. They want to engage you in
conversation, brainstorming, and relationship building. To facilitate this,
you may want to build two-way communication into your Web site
that provides a means for real-time sharing of ideas, debate, and interaction.
Another way to facilitate this is through moderated chat rooms or other
more organized techniques. Online events and presentations allow you to
show off new ideas or developments to customers, then take questions in
a moderated and controlled manner, across time zones and around the
world. Online meetings allow you to have customers attend “by invitation
only.” Keys to success: make sure your communication is honest and
credible and that the “idea flow” is going both ways. In today’s world, an
organization can design digital communication systems that can provide
instant information. This system can be used to brainstorm, to test concepts and features, and more importantly, to consider trade-offs.
• Get your customers to help design your products and services. Most
organizations ignore the best product and service designers and
consultants — people who know your product or service inside out and
know intimately what the market needs, more often than not. They are
your customers. They can tell you a lot more than just what is right and
wrong with your current products. They can tell you what they really need
in future products — in functional terms.
• Let your customers get to know each other. Word of mouth is a concept
that no one should ever underestimate in the Internet age. The power of
conversation has the lightning-quick ability to create trends, fads, and
brands. People talking to each other in a moderated environment and
sharing unprompted, honest opinions about your brand of product or service
remains the number one way for you to get new satisfied customers.
• Make your customers feel special. When you get down to it, we have
been talking about delighting the customer for at least 20 years, but that
is where we have stopped. We have forgotten that relationships with
customers should not be any different from relationships you have with
close friends. You need to keep in touch. You need to be honest. You need
to tell people they matter to you. To facilitate this “special attitude,” an
organization may have special days for the customer, special product
anniversaries and so on. However, in every special situation, representatives of the organization should be identifying new functionalities for new
or pending products and shortcomings of the current products.
• Never try to “understand” your customer. (This is not a contradiction
of the above points. Rather, it emphasizes the notion of change in expectations.) Customers are fickle. They change. As a consequence, the organization must be vigilant in tracking the changes, the wants, and the
expectations of customers. To make sure that customers are being satisfied
and that they will continue to be loyal to your products and services, make
sure you have a system that allows you to listen, listen, and then listen
some more to what they have to say.
• Shrink the globe. The world is shrinking. It has become commonplace
to discuss the information revolution in terms of the creation of “global
markets.” To “think global” is in vogue with the majority of large corporations. But global thinking presupposes that we also understand the
“global customer.” Do we really understand? Or do we merely think we
do? How do we treat all our customers as though they live right next
door? One way, of course, is through a combination of modern communication technology and old-fashioned neighborliness. You need good,
solid, two-way conversation with someone half a globe away that is as
immediate, as powerful, and as intimate as a conversation with someone
right in front of you. This obviously is difficult and demanding, and in
the following chapters we are going to establish a flow of disciplines that
perhaps can help us in formulating those “global customers” with their
specific needs, wants, and expectations.
• Design for customer satisfaction and loyalty. Some time ago I heard a
saying that is quite appropriate here. The saying goes something like
“everything is in a state of flux, including the status quo.” I happen to
agree. Never in human history has so much change affected so many
people in so many ways.
The winds of change keep building, blowing harder than ever, hitting more
people, reshaping all kinds of organizations. Incredible as it may sound, all these
changes are happening even in organizations that think that they have understood
the customer and the market. To their surprise, they have not. How else can we
explain some of the latest statistics that tell us the following:
1. Business failures topped 400,000 in the first half of the 1990s and exceeded
500,000 by the end of the decade. That is double the number of the
previous decade. The same trend is projected for the first decade of the
new century.
2. Eighty-five percent of all U.S. organizations now outsource services once
performed in house.
3. More than three million layoffs have occurred in the last five years.
What can be done to reverse this trend? Well, some will ride the wind based only
on their size, some will not make it, and some will make it. The ones that will make it
must learn to operate under different conditions — conditions that delight the customer
with the service or the product that an organization is offering. The organization must
learn to design services or products so that customers see value in them and cannot
wait to take possession of them. As customers’ desire for a service or product
increases, so will their demand for quality.
Designing for Six Sigma is not a small thing, nor should it be a lighthearted
undertaking. It is a very difficult road to follow, but the results are worthwhile.
The structure of this volume is straightforward and follows the pattern of the
model of DFSS, which is Recognize, Define, Characterize, Optimize, and Verify
(RDCOV). Specifically, with each stage of the model, we will explain some of the
most important tools and methodologies.
Our introduction is the stage where we address the basic and fundamental
characteristics of any DFSS program. It is our version of the Recognize step.
Specifically, we address:
1. Partnering
2. Robust teams
3. Systems engineering
4. Advanced quality planning
We follow with the Define stage, where we discuss customer concerns by first
explaining the notion of “function” and then continuing with three very important
methodologies in the pursuit of satisfying the customer. Those methodologies are:
1. Kano model
2. Quality function deployment (QFD)
3. Conjoint analysis
We move into a discussion of “Best in class” by discussing benchmarking. We
continue the discussion with advanced topics relating to design, specifically:
1. Monte Carlo
2. Finite element analysis
3. Excel’s solver
4. Failure mode and effect analysis (FMEA)
5. Reliability and R&M
6. DOE
7. Parameter design
8. Tolerance design
We continue with relatively short discussions of manufacturing topics, specifically:
1. Design for manufacturing/assembly (DFM/DFA)
2. Mistake proofing
Our discussion on miscellaneous topics is geared to enhance the overall design
function and to sensitize readers to the fact that the pursuit of DFSS is a team
orientation with many disciplines interwoven to produce the optimum design. Of
course, we do not pretend to have exhaustively identified all methodologies and all
tools, but we believe that we have identified the most critical ones. Specifically, we
discuss:
1. Theory of constraints
2. Design review
3. Trade-off analysis
4. Cost of quality
5. Reengineering
6. GD&T
7. Metrology
We follow with a chapter on innovative methodologies in pursuing DFSS such
as signal process flow, axiomatic designs, and TRIZ, and then we return to classic
discussions on value analysis, project management, an overview of mathematical
concepts for reliability, and Taylor’s theorem and financial concepts.
We conclude our discussion of Design for Six Sigma and Beyond with a formal
summary in a matrix format of all the tools used, following the model of DCOV:
1. Define
2. Characterize
3. Optimize
4. Verify
REFERENCES
Anon., Retailers Relearning Lesson of Customer’s Always Right, The Sentinel Star, Jan. 17,
1982, p. 4.
Blotnick, S., If It’s American, It Must Be Bad, Forbes, Feb. 1, 1982, p. 146.
Flint, J., Where’s the Cars? You Can Make Money on Cars If You Really Want To, Ward’s
AUTOWORLD, Sept. 2001, p. 21.
Halloran, R., Chief of Army Assails Industry on Arms Flaw, The New York Times, Aug. 9,
1983, p. 1.
Holusha, J., Why G.M. Needs Toyota: U.S. Striving for Efficiency, The New York Times, Feb.
16, 1983, p. 1 (of business section).
Imai, M., From Taylor to Ford to Toyota: Kanban System — Another Challenge from Japan,
The Japan Economic Journal, Mar. 30, 1982, p. 12.
Lewin, T., Japanese Bosses Ponder Mysterious U.S. Workers, The New York Times, Nov. 7,
1982, p. 2 (of business section).
Lohr, S., Japan’s Hard Look at Software, The New York Times, Jan. 9, 1983, p. 3 (of business
section).
Mayne, E., Bottoms Up! Fuel Economy Pressure Underscores Underbody Debate. Ward’s
AUTOWORLD, Sept. 2001, p. 58.
Mayne, E. et al., Quality Crunch, Ward’s AUTOWORLD, July 2001, p. 14.
McElroy, J., Rendezvous Captures Consumer Interest, Ward’s AUTOWORLD, Jan. 2001, p. 12.
Morais, R., The End of Focus Groups, Quirk’s Marketing Research Review, May 2001, p. 154.
O’Connell, V., advertising column, Wall Street Journal, Nov. 27, 2000, p. B21.
Ohmae, K., Quality Control Circles: They Work and Don’t Work, The Wall Street Journal,
Mar. 29, 1982, p. 2.
Serrin, W., Study Says Work Ethic Is Alive But Neglected, The New York Times, Sept. 5,
1983, p. 4.
Steel, J., Truth, Lies and Advertising, Wiley, New York, 1998.
Visnic, B., Super Diesel! Anyone in the Industry Will Tell You: Forget Hybrids; Diesels Are
Our One Stop Cure All, Ward’s AUTOWORLD, Sept. 2001, p. 34.
Wayne, L., Management Gospel Gone Wrong, The New York Times, May 30, 1982, p. 1 (of
business section).
Wight, O.W., Learning To Tell the Truth, Purchasing, May 13, 1982, p. 5.
Winter, D., One Last Speed, Ward’s AUTOWORLD, July 2001, p. 9.
Zachary, K., Toyota Strives To Retain Its Benchmark Status, Supplement to Ward’s AUTOWORLD, Aug. 6–10, 2001, p. 11.
Acknowledgments
I want to thank Dr. A. Stuart for granting me permission to use some of the material
in Chapter 14. The summaries of the different distributions and reliability have added
much to the volume. I am really indebted for his contribution.
As with the other volumes in this series, many people have helped in many ways
to make this book a reality. I am afraid that I will miss some, even though their help
was invaluable.
Dr. H. Hatzis, Dr. E. Panos, and Dr. E. Kelly have been indispensable in reviewing and commenting freely on previous drafts and throughout this project.
I would like to thank Dr. L. Lamberson for his thoughtful comments and suggestions on reliability, G. Burke for his suggestions on R&M, and R. Kapur for his
valuable comments about the flow and content of the material.
I want to thank Ford Motor Company and especially Richard Rossier and David
Kelley for their efforts to obtain permission for using the introductory material on
“robust teams.”
I want to thank Prentice Hall for granting me permission to use the material on
conjoint and MANOVA analysis in Chapter 2. That material was taken from the
1998 book Multivariate Data Analysis, 5th ed., by J.F. Hair, R.E. Anderson, R.L.
Tatham, and W.C. Black.
I want to thank McGraw-Hill and D.R. Bothe for granting me permission to use
some material on six sigma taken from the 1997 book Measuring Process Capability,
by D.R. Bothe.
I want to thank J. Wiley and the Buffa Foundation for granting me permission
to use material on the Monte Carlo method from the 1973 book Modern Production
Management, 4th ed., by E.S. Buffa.
I want to thank the American Supplier Institute for granting me permission to
use the L8 interaction table as well as some of their OA and linear graphs.
I want to thank M.A. Anleitner, from Livonia Technical Services, for his contribution to the topic of “function” in Chapter 2, for helping me articulate some of
the key points on APQP, and for serving as a sounding board on issues of value
analysis. Thanks, Mike.
I also want to thank J. Ondrus, from General Dynamics — Land System Division, for introducing me to Value Analysis and serving as a reviewer for earlier drafts
on this topic.
I want to thank T. Panson, P. Rageas, and J. Golematis, all of them certified
public accountants, for their guidance and help in articulating the basics of accounting and financial concerns presented in Chapter 15. Of course, the ultimate responsibility for interpreting their guidance is solely mine.
Special thanks go to the editors at CRC for putting up with me, as well as for
transforming my notes and the manuscript into a user-friendly product.
I want to thank the participants in my seminars for their comments and recommendations. They actually piloted the material in their own organizations and saw
firsthand the results of some of the techniques and methodologies discussed in this
particular volume. Their comments were incorporated with much appreciation.
Finally, as always, this volume would not have been completed without the
support of my family and especially my navigator, chief editor, and supporter —
my wife, Carla.
About the Author
D. H. Stamatis, Ph.D., ASQC-Fellow, CQE, CMfgE, is president of Contemporary
Consultants, in Southgate, Michigan. He received his B.S. and B.A. degrees in
marketing from Wayne State University, his master’s degree from Central Michigan
University, and his Ph.D. degree in instructional technology and business/statistics
from Wayne State University.
Dr. Stamatis is a certified quality engineer for the American Society of Quality
Control, a certified manufacturing engineer for the Society of Manufacturing Engineers, and a graduate of BSI’s ISO 9000 lead assessor training program. He is a
specialist in management consulting, organizational development, and quality science and has taught these subjects at Central Michigan University, the University
of Michigan, and Florida Institute of Technology.
With more than 30 years of experience in management, quality training, and
consulting, Dr. Stamatis has served and consulted for numerous industries in the
private and public sectors. His consulting extends across the United States, Southeast
Asia, Japan, China, India, and Europe. Dr. Stamatis has written more than 60 articles
and presented many speeches at national and international conferences on quality.
He is a contributing author in several books and the sole author of 20 books. In
addition, he has performed more than 100 automotive-related audits and 25 preassessment ISO 9000 audits, and has helped several companies attain certification. He
is an active member of the Detroit Engineering Society, the American Society for
Training and Development, the American Marketing Association, and the American
Research Association, and a fellow of the American Society for Quality Control.
List of Figures
Figure 2.1 Paper pencil assembly.
Figure 2.2 Function diagram for a mechanical pencil.
Figure 2.3 Ten symbols for process flow charting.
Figure 2.4 Process flow for complaint handling.
Figure 2.5 Kano model framework.
Figure 2.6 Basic quality depicted in the Kano model.
Figure 2.7 Performance quality depicted in the Kano model.
Figure 2.8 Excitement quality depicted in the Kano model.
Figure 2.9 Excitement quality depicted over time in the Kano model.
Figure 2.10 A typical House of Quality matrix.
Figure 2.11 The initial “what” of the customer.
Figure 2.12 The iterative process of “what” to “how.”
Figure 2.13 The relationship matrix.
Figure 2.14 The conversion of “how” to “how much.”
Figure 2.15 The flow of information in the process of developing the final “House of Quality.”
Figure 2.16 Alternative method of calculating importance.
Figure 2.17 The development of QFD.
Figure 3.1 The benchmarking continuum process.
Figure 5.1 Trade-off relationships between program objectives (balance design).
Figure 5.2 Sequential approach.
Figure 5.3 Simultaneous approach.
Figure 5.4 Tomorrow’s approach … if not today’s.
Figure 5.5 The product development map/guide.
Figure 5.6 Manufacturing system schematic.
Figure 5.7 Approaches to mistake proofing.
Figure 5.8 Major inspection techniques.
Figure 5.9 Function of mistake-proofing devices.
Figure 6.1 Types of FMEA.
Figure 6.2 Payback effort.
Figure 6.3 Kano model.
Figure 6.4 A Pugh matrix — shaving with a razor.
Figure 6.5 Scope for DFMEA — braking system.
Figure 6.6 Scope for PFMEA — printed circuit board screen printing process.
Figure 6.7 Typical FMEA header.
Figure 6.8 Typical FMEA body.
Figure 6.9 Function tree process.
Figure 6.10 Example of ballpoint pen.
Figure 6.11 FMEA body.
Figure 6.12 Transferring the failure modes to the FMEA form.
Figure 6.13 Transferring severity and classification to the FMEA form.
Figure 6.14 Transferring causes and occurrences to the FMEA form.
Figure 6.15 Transferring current controls and detection to the FMEA form.
Figure 6.16 Area chart.
Figure 6.17 Transferring the RPN to the FMEA form.
Figure 6.18 Action plans and results analysis.
Figure 6.19 Transferring action plans and action results on the FMEA form.
Figure 6.20 FMEA linkages.
Figure 6.21 The learning stages.
Figure 6.22 Pen assembly process.
Figure 7.1 Bathtub curve.
Figure 7.2 A series block diagram.
Figure 7.3 A parallel reliability block diagram.
Figure 7.4 A complex reliability block diagram.
Figure 7.5 The Weibull distribution for the example.
Figure 7.6 Control factors and noise interactions.
Figure 7.7 An example of a parameter design in reliability usage.
Figure 9.1 An example of a partially completed fishbone diagram.
Figure 9.2 An example of interaction.
Figure 9.3 Example of cause-and-effect diagram.
Figure 9.4 Plots of averages (higher responses are better).
Figure 9.5 A linear example of a process with several factors.
Figure 9.6 Contrasts shown in a graphical presentation.
Figure 9.7 First round testing.
Figure 9.8 Second round testing.
Figure 9.9 Linear graph for L4.
Figure 9.10 The orthogonal array (OA), linear graph (LG), and column interaction for L9.
Figure 9.11 Three-level factors in an L8 array.
Figure 9.12 Traditional approach.
Figure 9.13 Nominal the best.
Figure 9.14 Smaller the better.
Figure 9.15 Larger the better.
Figure 9.16 A comparison of Cpk and loss function.
Figure 9.17 Plots of averages (higher responses are better).
Figure 9.18 ANOVA decomposition of multi-level factors.
Figure 9.19 Factors not linear.
Figure 9.20 Plots of the average standard deviation by factor level.
Figure 9.21 Factor effects.
Figure 9.22 Factor effects.
Figure 10.1 Quality cost: The quality control system.
Figure 10.2 Costs.
Figure 11.1 A typical branching using signal flow graph.
Figure 11.2 A simple example with signal flow graph.
Figure 11.3 A hypothetical design process.
Figure 11.4 The graph transmission.
Figure 11.5 First few terms of the probability.
Figure 11.6 The effect of a self loop.
Figure 11.7 Node absorption.
Figure 11.8 Order of design matrix showing functional coupling between FRs and DPs.
Figure 11.9 Relationship of axiomatic design framework and other tools.
Figure 12.1 Relationship of savings potential to time.
Figure 12.2 Project identification sheet.
Figure 12.3 Cost visibility sheet.
Figure 12.4 Cost function worksheet.
Figure 12.5 A form that may be used to direct effort.
Figure 12.6 Second step in the FAST diagram block process.
Figure 12.7 A partial cost function FAST diagram.
Figure 15.1 Life cycle of a typical company or product.
Figure 15.2 A pictorial approach of duPont’s formula.
Figure 15.3 Breakeven analysis.
Figure 16.1 The DFSS model.
List of Tables
Table I.1 Probability of a completely conforming product.
Table 1.1 Customer/supplier expanded partnering interface meetings.
Table 1.2 A typical questionnaire.
Table 1.3 A general questionnaire.
Table 2.1 Characteristic matrix for a machining process.
Table 2.2 Benefits of improved total development process.
Table 2.3 Stimuli descriptions and respondent rankings for conjoint analysis of industrial cleanser.
Table 2.4 Average ranks and deviations for respondents 1 and 2.
Table 2.5 Estimated part-worths and factor importance for respondents 1 and 2.
Table 2.6 Predicted part-worth totals and comparison of actual and estimated preference rankings.
Table 4.1 Simulated samples of 20 performance time values for operations A and B.
Table 4.2 Simulated operation of the two-station assembly line when operation A precedes operation B.
Table 4.3 Simulated operation of the two-station assembly line when operation B precedes operation A.
Table 5.1 Customer attributes for a car door.
Table 5.2 Relative importance of weights.
Table 5.3 Customer’s evaluation of competitive products.
Table 5.4 Examples of mistakes and defects.
Table 6.1 DFMEA — severity rating.
Table 6.2 PFMEA — severity rating.
Table 6.3 DFMEA — occurrence rating.
Table 6.4 PFMEA — occurrence rating.
Table 6.5 DFMEA detection table.
Table 6.6 PFMEA detection table.
Table 6.7 Special characteristics for both design and process.
Table 6.8 Manufacturing process control matrix.
Table 6.9 Machinery guidelines for severity, occurrence, and detection.
Table 7.1 Failure rates with median ranks.
Table 7.2 Median ranks.
Table 7.3 Five percent rank table.
Table 7.4 Ninety-five percent rank table.
Table 7.5 Department of Defense reliability and maintainability — standards and data items.
Table 8.1 Activities in the first three phases of the R&M process.
Table 8.2 Cost comparison of two machines.
Table 8.3 Thermal calculation values.
Table 8.4 Guidelines for the Duane model.
Table 9.1 One factor at a time.
Table 9.2 Test numbers for comparison.
Table 9.3 The group runs using DOE configurations.
Table 9.4 Comparisons using DOE.
Table 9.5 Comparisons of the two means.
Table 9.6 The test matrix for the seven factors.
Table 9.7 Test results.
Table 9.8 An example of contrasts.
Table 9.9 L4 setup.
Table 9.10 The L8 interaction table.
Table 9.11 An L9 with a two-level column.
Table 9.12 Combination method.
Table 9.13 Modified L8 array.
Table 9.14 An L8 with an L4 outer array.
Table 9.15 Recommended factor assignment by column.
Table 9.16 Formulas for calculating S/N.
Table 9.17 Concerns with NTB S/N ratio.
Table 9.18 L8 with test results.
Table 9.19 ANOVA table.
Table 9.20 Higher order relationships.
Table 9.21 Inner OA (L8) with outer OA (L4) and test results.
Table 9.22 The STB ANOVA table.
Table 9.23 The LTB ANOVA table.
Table 9.24 The NTB ANOVA table.
Table 9.25 Raw data ANOVA table.
Table 9.26 Combination design.
Table 9.27 L9 OA with test results.
Table 9.28 ANOVA table.
Table 9.29 Second run of ANOVA.
Table 9.30 L8 with test results and S/N values.
Table 9.31 ANOVA table for data from Table 9.30.
Table 9.32 Significant figures from Table 9.31.
Table 9.33 Observed versus cumulative frequency.
Table 9.34 Attribute test setup and results.
Table 9.35 ANOVA table (for cumulative frequency).
Table 9.36 The effect of the significant factors.
Table 9.37 Rate of occurrence at the optimum settings.
Table 9.38 Door closing effort: test setup and results.
Table 9.39 ANOVA table for door closing effort.
Table 9.40 The effects of the door closing effort.
Table 9.41 Rate of occurrence at the optimum settings.
Table 9.42 OA and test setup and results.
Table 9.43 ANOVA for the raw data.
Table 9.44 ANOVA table for the NTB S/N ratios.
Table 9.45 Typical ANOVA table setup.
Table 9.46 L4 OA with test results.
Table 9.47 ANOVA table raw data.
Table 9.48 ANOVA table (S/N ratio used as raw data).
Table 9.49 Level averages — raw data.
Table 9.50 OA setup and test results for Example 2.
Table 9.51 ANOVA table (S/N ratio used as raw data).
Table 9.52 Transformed data.
Table 9.53 ANOVA table for the transformed data.
Table 9.54 Components and their levels.
Table 9.55 L8 inner OA with L8 outer OA and test results.
Table 9.56 ANOVA table (NTB) and level averages for the most significant factors.
Table 9.57 Variation runs using recommended factor target values.
Table 9.58 Calculated response variance.
Table 9.59 Cost of reducing tolerances.
Table 9.60 The impact of tightening the tolerance.
Table 9.61 Reduction of 20% in the tolerance limits of component A.
Table 9.62 Reduction of tolerance limits for component D.
Table 9.63 Reduction of tolerance limits for component C.
Table 9.64 L8 OA used for the confirmation runs with the levels set, test setup, ANOVA table and level averages.
Table 9.65 Response variance.
Table 10.1 Design review objectives.
Table 10.2 Design review checklist.
Table 10.3 Comparison between traditional and concurrent engineering.
Table 10.4 Typical monthly quality cost report (values in thousands of dollars).
Table 10.5 Prevention costs.
Table 10.6 Appraisal costs.
Table 10.7 Internal failure costs.
Table 10.8 External failure costs.
Table 10.9 Seven-step process redesign model.
Table 10.10 GD&T characteristics and symbols.
Table 12.1 Project identification checklist.
Table 12.2 Idea needlers or thought stimulators.
Table 12.3 The worksheet for setting the list.
Table 12.4 Evolution summary.
Table 12.5 Ranking and weighting.
Table 12.6 Criteria affecting car purchase XXXX — pair comparison.
Table 12.7 Criteria weighing.
Table 12.8 Criteria comparison.
Table 12.9 Criteria weight comparison — completed matrix.
Table 13.1 Key integrative processes.
Table 13.2 The characteristics of the DFSS implementation model using project
management.
Table 13.3 The process of six sigma and DFSS implementation using project
management.
Table 14.1 Possibilities of selecting a DFSS problem.
Table 15.1 A summary of debits and credits.
Table 15.2 Summary of normal debit/credit balances.
Table 15.3 The Z score.
Contents
Introduction Understanding the Six Sigma Philosophy.......................................1
A Static versus a Dynamic Process ..........................................................................1
Products with Multiple Characteristics .....................................................................2
Short- and Long-Term Six Sigma Capability ...........................................................4
Design for Six Sigma and the Six Sigma Philosophy..............................................5
Design Phase.........................................................................................................5
Internal Manufacturing .........................................................................................5
External Manufacturing ........................................................................................6
References..................................................................................................................7
Chapter 1 Prerequisites to Design for Six Sigma (DFSS) ...................................9
Partnering ...................................................................................................................9
The Principles of Partnering...............................................................................11
View of Buyer/Supplier Relationship: A Paradigm Shift ..................................11
Characteristics of Expanded Partnering .............................................................12
Evaluating Suppliers and Selecting Supplier Partners.......................................14
Implementing Partnering ....................................................................................14
1. Establish Top Management Enrollment
(Role of Top Management — Leadership)....................................................14
2. Establish Internal Organization.................................................................14
Option 1: Supplier Partnering Manager....................................................14
Option 2: Supplier Council/Team .............................................................15
Option 3: Commodity Management Organization ...................................15
3. Establish Supplier Involvement ................................................................15
4. Establish Responsibility for Implementation ...........................................15
5. Reevaluate the Partnering Process............................................................17
Ratings.......................................................................................................17
Terms Used in Specific Questions ............................................................19
Major Issues with Supplier Partnering Relationships........................................19
How Can We Improve?.......................................................................................20
Basic Partnering Checklist..................................................................................21
1. Leadership .................................................................................................21
2. Information and Analysis..........................................................................22
3. Strategic Quality Planning ........................................................................22
4. Human Resource Development and Management ...................................22
5. Management of Process Quality...............................................................23
6. Quality and Operational Results...............................................................23
7. Customer Focus and Satisfaction .............................................................23
Expanded Partnering Checklist ..........................................................................23
1. Leadership .................................................................................................23
2. Information and Analysis..........................................................................24
3. Strategic Quality Planning ........................................................................24
4. Human Resource Development and Management ...................................24
5. Management of Process Quality...............................................................25
6. Quality and Operational Results...............................................................25
7. Customer Focus and Satisfaction .............................................................25
The Robust Team: A Quality Engineering Approach .............................................25
Team Systems .....................................................................................................26
Input ...............................................................................................................27
Signal..............................................................................................................27
The System.....................................................................................................27
Output/Response ............................................................................................28
The Environment............................................................................................28
External Variation...........................................................................................28
Internal Variation............................................................................................29
The Boundary.................................................................................................29
Controlling a Team Process: Conformance in Teams........................................29
Strategies for Dealing with Variation .................................................................30
Controlling or Eliminating Variation .............................................................30
Compensating for Variation ...........................................................................30
System Feedback............................................................................................31
Minimizing the Effect of Variation................................................................31
Monitoring Team Performance...........................................................................33
System Interrelationships...............................................................................33
Systems Engineering ...............................................................................................34
“Systems” Defined ..............................................................................34
Implications of the Systems Concept for the Manager .....................................35
Defining Systems Engineering ...........................................................................37
Pre-Feasibility Analysis ......................................................................................38
Requirement Analysis .........................................................................................38
Design Synthesis.................................................................................................38
Verification ..........................................................................................................39
Advanced Quality Planning.....................................................................................40
When Do We Use AQP?.....................................................................................42
What Is the Difference between AQP and APQP? ............................................43
How Do We Make AQP Work?..........................................................................43
Are There Pitfalls in Planning? ..........................................................................43
Do We Really Need Another Qualitative Tool to Gauge Quality?....................44
How Do We Use the Qualitative Methodology in an AQP Setting?.................44
APQP Initiative and Relationship to DFSS .......................................................45
References................................................................................................................47
Selected Bibliography..............................................................................................47
Chapter 2 Customer Understanding....................................................................49
The Concept of Function.........................................................................................52
Understanding Customer Wants and Needs .......................................................54
Creating a Function Diagram .............................................................................55
The Product Flow Diagram and the Concept of Functives ...............................56
The Process Flow Diagram ................................................................................61
Using Function Concepts with Productivity and Quality Methodologies.........64
Kano Model .............................................................................................................68
Basic Quality.......................................................................................................69
Performance Quality ...........................................................................................69
Excitement Quality .............................................................................................69
Quality Function Deployment (QFD) .....................................................................71
Terms Associated with QFD...............................................................................73
Benefits of QFD..................................................................................................73
Issues with Traditional QFD...............................................................................75
Process Overview................................................................................................76
Developing a “QFD” Project Plan .....................................................................76
The Customer Axis ........................................................................................77
Technical Axis................................................................................................79
Internal Standards and Tests ..........................................................................79
The QFD Approach ............................................................................................79
QFD Methodology..............................................................................................80
QFD and Planning ..............................................................................................84
Product Development Process ............................................................................86
Conjoint Analysis ....................................................................................................88
What Is Conjoint Analysis?................................................................................88
A Hypothetical Example of Conjoint Analysis..................................................89
An Empirical Example .......................................................................................90
The Managerial Uses of Conjoint Analysis .......................................................95
References................................................................................................................95
Selected Bibliography..............................................................................................95
Chapter 3 Benchmarking ....................................................................................97
General Introduction to Benchmarking...................................................................97
A Brief History of Benchmarking......................................................................97
Potential Areas of Application of Benchmarking ..............................................97
Benchmarking and Business Strategy Development ..............................................99
Least Cost and Differentiation............................................................................99
Characteristics of a Least Cost Strategy ..........................................................100
Characteristics of a Differentiated Strategy .....................................................101
Benchmarking and Strategic Quality Management ..............................................102
Benchmarking and Six Sigma ..........................................................................105
National Quality Award Winners and Benchmarking......................................107
Example — Cadillac ....................................................................................107
A Second Example — Xerox ......................................................................108
Third Example — IBM Rochester...............................................................109
Fourth Example — Motorola.......................................................................110
Benchmarking and the Deming Management Method ....................................110
Benchmarking and the Shewhart Cycle or Deming Wheel.............................111
Plan...............................................................................................................111
Do .................................................................................................................111
Study — Observe the Effects ......................................................................111
Act ................................................................................................................111
Why Do People Buy? .......................................................................................111
Alternative Definitions of Quality ....................................................................112
Determining the Customer’s Perception of Quality.........................................117
Quality, Pricing, and Return on Investment (ROI) — The PIMS Results .......119
Benchmarking as a Management Tool..................................................................119
What Benchmarking Is and Is Not...................................................................120
The Benchmarking Process ..............................................................................121
Types of Benchmarking....................................................................................122
Organization for Benchmarking .......................................................................123
Requirements for Success.................................................................................124
Benchmarking and Change Management .............................................................126
Structural Pressure ............................................................................................128
Aspiration for Excellence .................................................................................128
Force Field Analysis .........................................................................................128
Identification of Benchmarking Alternatives ........................................................129
Externally Identified Benchmarking Candidates..............................................129
Industry Analysis and Critical Success Factors ..........................................129
PIMS Par Report ..........................................................................................130
Financial Comparison ..................................................................................130
Competitive Evaluations ..............................................................................131
Focus Groups ...............................................................................................131
Importance/Performance Analysis ...............................................................131
Internally Identified Benchmarking Candidates — Internal
Assessment Surveys..........................................................................................132
Nominal Group Process: General Areas in Greatest Need
of Improvement............................................................................................132
Pareto Analysis.............................................................................................132
Statistical Process Control ...........................................................................133
Trend Charting .............................................................................................133
Product and Company Life Cycle Position.................................................133
Failure Mode and Effect Analysis ...............................................................134
Cost/Time Analysis ......................................................................................134
Need to Identify Underlying Causes ................................................................134
Problem, Causes, Solutions .........................................................................134
The Five Whys .............................................................................................134
Cause and Effect Diagram ...........................................................................134
Business Assessment — Strengths and Weaknesses.............................................135
Prioritization of Benchmarking Alternatives — Prioritization Process................139
Prioritization Matrix .........................................................................................139
Quality Function Deployment (House of Quality) ..........................................140
Importance/Feasibility Matrix ..........................................................................141
Paired Comparisons .....................................................................................141
Improvement Potential .................................................................................141
Prioritization Factors....................................................................................141
Are There Any Other Problems? What Is the Relative Importance
of Each of These? .............................................................................................142
Identification of Benchmarking Sources...............................................................142
Types of Benchmark Sources ...........................................................................142
Internal Best Performers ..............................................................................143
Competitive Best Performers .......................................................................143
Best of Class ................................................................................................143
Selection Criteria ..............................................................................................144
Sources of Competitive Information ................................................................144
Gaining the Cooperation of the Benchmark Partner........................................148
Making the Contact ..........................................................................................149
Benchmarking — Performance and Process Analysis..........................................149
Preparation of the Benchmarking Proposal......................................................149
Activity before the Visit....................................................................................149
Understanding Your Own Operations..........................................................149
Activity Analysis..........................................................................................150
1. Define the Activity .............................................................................150
2. Determine the Triggering Event ........................................................150
3. Define the Activity .............................................................................150
4. Determine the Resource Requirements .............................................151
5. Determine the Activity Drivers ..........................................................151
6. Determine the Output of the Activity ................................................151
7. Determine the Activity Performance Measure ..................................151
Model the Activity .......................................................................................152
Examples of Modeling.................................................................................152
Flow Chart the Process ................................................................................153
Activities during the Visit.................................................................................155
Understand the Benchmark Partner’s Activities..........................................155
Identification of All of the Factors Required for Success ..........................155
Activities after the Visit ....................................................................................156
1. Functional Analysis.................................................................................156
2. Cost Analysis...........................................................................................156
3. Technology Forecasting ..........................................................................156
4. Financial Benchmarking .........................................................................157
5. Sales Promotion and Advertising ...........................................................157
6. Warehouse Operations ............................................................................157
7. PIMS Analysis.........................................................................................157
8. Purchasing Performance Benchmarks ....................................................157
Motorola Example........................................................................................158
Gap Analysis..........................................................................................................158
Definition of Gap Analysis ...............................................................................158
Current versus Future Gap ...............................................................................158
Goal Setting ...........................................................................................................159
Goal Definition .................................................................................................159
Goal Characteristics ..........................................................................................159
Result versus Effort Goals................................................................................159
Goal Setting Philosophy ...................................................................................159
Best of the Best versus Optimization ..........................................................159
Kaizen versus Breakthrough Strategies .......................................................160
Guiding Principle Implications.........................................................................160
Goal Structure ...................................................................................................160
Cascading Goal Structure ............................................................................160
Interdepartmental Goals...............................................................................161
Action Plan Identification and Implementation ....................................................161
A Creative Planning Process ............................................................................162
Action Plan Prioritization .................................................................................162
Action Plan Documentation..............................................................................162
Monitoring and Control ....................................................................................162
Financial Analysis of Benchmarking Alternatives................................................163
Managing Benchmarking for Performance...........................................................164
References..............................................................................................................166
Selected Bibliography............................................................................................167
Chapter 4 Simulation ........................................................................................169
What Is Simulation? ..............................................................................................169
Simulated Sampling...............................................................................................170
Finite Element Analysis (FEA) .............................................................................175
Types of Finite Elements ..................................................................................175
Types of Analyses .............................................................................................176
Procedures Involved in FEA.............................................................................178
Steps in Analysis Procedure .............................................................................178
Overview of Finite Element Analysis — Solution Procedure .........................179
Input to the Finite Element Model ...................................................................180
Outputs from the Finite Element Analysis.......................................................180
Analysis of Redesigns of Refined Model ........................................................181
Summary — Finite Element Technique: A Design Tool .................................182
Excel’s Solver ........................................................................................................182
Design Optimization..............................................................................................182
How To Do Design Optimization.....................................................................184
Understanding the Optimization Algorithm .....................................................184
Conversion to an Unconstrained Problem........................................................185
Simulation and DFSS ............................................................................................185
References..............................................................................................................186
Selected Bibliography............................................................................................186
Chapter 5 Design for Manufacturability/Assembly
(DFM/DFA or DFMA) ..........................................................................................187
Business Expectations and the Impact from a Successful DFM/DFA.................189
The Essential Elements for Successful DFM/DFA ..............................................192
The Product Plan ..............................................................................................194
Product Design.............................................................................................194
Criteria for Decision between Crash Program and Perfect Product...........195
Case #1 — Crash Program .....................................................................195
Case #2 — Perfect Product Design ........................................................196
The Product Plan — Product Design Itself.................................................196
Define Product Performance Requirement ..................................................198
Available Tools and Methods for DFMA .............................................................198
Cookbooks for DFM/DFA................................................................................199
Use of the Human Body ..............................................................................199
Arrangement of the Work Place ..................................................................200
Design of Tools and Equipment ..................................................................200
Mitsubishi Method............................................................................................200
U-MASS Method..............................................................................................202
MIL-HDBK-727 ...............................................................................................203
Fundamental Design Guidance .............................................................................204
The Manufacturing Process...................................................................................206
Mistake Proofing....................................................................................................208
Definition ..........................................................................................................208
The Strategy ......................................................................................................208
Defects ..............................................................................................................209
Mistake Proof System Is a Technique for Avoiding Errors
in the Workplace ...............................................................................................210
Types of Human Mistakes ................................................................................210
Forgetfulness ................................................................................................210
Mistakes of Misunderstanding.....................................................................210
Identification Mistakes .................................................................................210
Amateur Errors.............................................................................................211
Willful Mistakes...........................................................................................211
Inadvertent Mistakes ....................................................................................211
Slowness Mistakes .......................................................................................211
Lack of Standards Mistakes.........................................................................211
Surprise Mistakes .........................................................................................211
Intentional Mistakes .....................................................................................212
Defects and Errors ............................................................................................212
Mistake Types and Accompanying Causes ......................................................213
Signals that Alert ..............................................................................................215
Approaches to Mistake Proofing ......................................................................215
Major Inspection Techniques.......................................................................216
Mistake Proof System Devices....................................................................216
Devices Used as “Detectors of Mistakes” ..............................................217
Devices Used as “Preventers of Mistakes”.............................................217
Equation for Success ........................................................................................218
Typical Error Proofing Devices ...................................................................219
References..............................................................................................................219
Selected Bibliography............................................................................................219
Chapter 6 Failure Mode and Effect Analysis (FMEA) ....................................223
Definition of FMEA ..............................................................................................224
Types of FMEAs....................................................................................................224
Is FMEA Needed? .................................................................................................225
Benefits of FMEA .................................................................................................226
FMEA History .......................................................................................................226
Initiation of the FMEA..........................................................................................227
Getting Started .......................................................................................................228
1. Understand Your Customers and Their Needs ............................................228
2. Know the Function ......................................................................................230
3. Understand the Concept of Priority ............................................................230
4. Develop and Evaluate Conceptual Designs/Processes Based
on Customer Needs and Business Strategy......................................................230
5. Be Committed to Continual Improvement ..................................................231
6. Create an Effective FMEA Team ................................................................231
7. Define the FMEA Project and Scope ..........................................................234
The FMEA Form ...................................................................................................235
Developing the Function...................................................................................238
Organizing Product Functions ..........................................................................239
Failure Mode Analysis......................................................................................240
Understanding Failure Mode .......................................................................240
Failure Mode Questions...............................................................................240
Determining Potential Failure Modes..........................................................242
Failure Mode Effects ........................................................................................243
Effects and Severity Rating .........................................................................244
Severity Rating (Seriousness of the Effect) ................................................245
Failure Cause and Occurrence..........................................................................246
Popular Ways (Techniques) to Determine Causes ......................................247
Occurrence Rating........................................................................................249
Current Controls and Detection Ratings .....................................................249
Detection Rating ..........................................................................................250
Understanding and Calculating Risk................................................................251
Action Plans and Results.......................................................................................253
Classification and Characteristics.....................................................................254
Product Characteristics/“Root Causes” .......................................................255
Process Parameters/“Root Causes”..............................................................255
Driving the Action Plan ....................................................................................255
Linkages among Design and Process FMEAs and Control Plan.........................258
Getting the Most from FMEA...............................................................................260
System or Concept FMEA ....................................................................................262
Design Failure Mode and Effects Analysis (DFMEA).........................................262
Objective ...........................................................................................................263
Timing ...............................................................................................................263
Requirements ....................................................................................................263
Discussion .........................................................................................................263
Forming the Appropriate Team....................................................................263
Describing the Function of the Design/Product..........................................264
Describing the Failure Mode Anticipated ...................................................264
Describing the Effect of the Failure ............................................................264
Describing the Cause of the Failure ............................................................264
Estimating the Frequency of Occurrence of Failure ...................................265
Estimating the Severity of the Failure.........................................................265
Identifying System and Design Controls ....................................................265
Estimating the Detection of the Failure ......................................................266
Calculating the Risk Priority Number .........................................................267
Recommending Corrective Action...............................................................267
Strategies for Lowering Risk: (System/Design) — High Severity
or Occurrence ..........................................................................................267
Strategies for Lowering Risk: (System/Design) — High Detection
Rating ......................................................................................................267
Process Failure Mode and Effects Analysis (FMEA)...........................................268
Objective ...........................................................................................................268
Timing ...............................................................................................................268
Requirements ....................................................................................................268
Discussion .........................................................................................................269
Forming the Team ........................................................................................269
Describing the Process Function .................................................................269
Manufacturing Process Functions...........................................................269
The PFMEA Function Questions............................................................270
Describing the Failure Mode Anticipated ...................................................270
Describing the Effect(s) of the Failure........................................................271
Describing the Cause(s) of the Failure........................................................272
Estimating the Frequency of Occurrence of Failure ...................................273
Estimating the Severity of the Failure.........................................................273
Identifying Manufacturing Process Controls...............................................273
Estimating the Detection of the Failure ......................................................274
Calculating the Risk Priority Number .........................................................275
Recommending Corrective Action...............................................................275
Strategies for Lowering Risk: (Manufacturing) — High Severity
or Occurrence ..........................................................................................275
Strategies for Lowering Risk: (Manufacturing) — High Detection
Rating ......................................................................................................276
Machinery FMEA (MFMEA) ...............................................................................277
Identify the Scope of the MFMEA ..................................................................277
Identify the Function ........................................................................................277
Failure Mode.....................................................................................................277
Potential Effects ................................................................................................278
Severity Rating..................................................................................................279
Classification .....................................................................................................279
Potential Causes................................................................................................279
Occurrence Ratings...........................................................................................282
Surrogate MFMEAs..........................................................................................282
Current Controls...........................................................................................282
Detection Rating ..........................................................................................282
Risk Priority Number (RPN)............................................................................282
Recommended Actions .....................................................................................283
Date, Responsible Party....................................................................................283
Actions Taken/Revised RPN.............................................................................283
Revised RPN.....................................................................................................284
Summary ................................................................................................................284
Selected Bibliography............................................................................................284
Chapter 7 Reliability .........................................................................................287
Probabilistic Nature of Reliability ........................................................................287
Performing the Intended Function Satisfactorily..................................................288
Specified Time Period.......................................................................................288
Specified Conditions .........................................................................................289
Environmental Conditions Profile ....................................................................289
Reliability Numbers ..........................................................................................290
Indicators Used to Quantify Product Reliability..............................................290
Reliability and Quality ..........................................................................................291
Product Defects.................................................................................................291
Customer Satisfaction .......................................................................................292
Product Life and Failure Rate ..........................................................................293
Product Design and Development Cycle ..............................................................295
Reliability in Design.........................................................................................296
Cost of Engineering Changes and Product Life Cycle....................................297
Reliability in the Technology Deployment Process.........................................298
1. Pre-Deployment Process .........................................................................298
2. Core Engineering Process.......................................................................299
3. Quality Support .......................................................................................300
Reliability Measures — Testing ............................................................................300
What Is a Reliability Test? ...............................................................................300
When Does Reliability Testing Occur?............................................................301
Reliability Testing Objectives...........................................................................301
Sudden-Death Testing ..................................................................................302
Accelerated Testing ......................................................................................305
Accelerated Test Methods .....................................................................................305
Constant-Stress Testing.....................................................................................305
Step-Stress Testing............................................................................................306
Progressive-Stress Testing ................................................................................306
Accelerated-Test Models ..................................................................................306
Inverse Power Law Model ...........................................................................307
Arrhenius Model ..........................................................................................308
AST/PASS..............................................................................................................310
Purpose of AST.................................................................................................310
AST Pre-Test Requirements .............................................................................311
Objective and Benefits of AST.........................................................................311
Purpose of PASS...............................................................................................311
Objective and Benefits of PASS .......................................................................312
Characteristics of a Reliability Demonstration Test .............................................312
The Operating Characteristic Curve.................................................................313
Attributes Tests .................................................................................................313
Variables Tests ..................................................................................................314
Fixed-Sample Tests ...........................................................................................314
Sequential Tests ................................................................................................314
Reliability Demonstration Test Methods...............................................................314
Small Populations — Fixed-Sample Test
Using the Hypergeometric Distribution ...........................................................315
Large Population — Fixed-Sample Test
Using the Binomial Distribution ......................................................................315
Large Population — Fixed-Sample Test
Using the Poisson Distribution.........................................................................316
Success Testing ......................................................................................................316
Sequential Test Plan for the Binomial Distribution .........................................317
Graphical Solution ............................................................................................318
Variables Demonstration Tests ..............................................................................318
Failure-Truncated Test Plans — Fixed-Sample Test
Using the Exponential Distribution ..................................................................318
Time-Truncated Test Plans — Fixed-Sample Test
Using the Exponential Distribution ..................................................................319
Weibull and Normal Distributions....................................................................320
Sequential Test Plans .............................................................................................321
Exponential Distribution Sequential Test Plan.................................................321
Weibull and Normal Distributions....................................................................323
Interference (Tail) Testing ................................................................................323
Reliability Vision ..............................................................................................323
Reliability Block Diagrams ..............................................................................323
Weibull Distribution — Instructions for Plotting and Analyzing Failure
Data on a Weibull Probability Chart ................................................................325
Instructions for Plotting Failure and Suspended Items Data
on a Weibull Probability Chart.........................................................................331
Additional Notes on the Use of the Weibull....................................................334
Design of Experiments in Reliability Applications ..............................................335
Reliability Improvement through Parameter Design ............................................336
Department of Defense Reliability and Maintainability — Standards
and Data Items.......................................................................................................337
References..............................................................................................................342
Selected Bibliography............................................................................................343
Chapter 8 Reliability and Maintainability ........................................................345
Why Do Reliability and Maintainability?.............................................................345
Objectives...............................................................................................................346
Making Reliability and Maintainability Work ......................................................346
Who’s Responsible? ..............................................................................................347
Tools.......................................................................................................................347
Sequence and Timing ............................................................................................348
Concept ..................................................................................................................349
Bookshelf Data .................................................................................................349
Manufacturing Process Selection .....................................................................350
R&M and Preventive Maintenance (PM) Needs Analysis ..............................350
Development and Design.......................................................................................350
R&M Planning..................................................................................................350
Process Design for R&M .................................................................................351
Machinery FMEA Development ......................................................................351
Design Review ..................................................................................................352
Build and Install ....................................................................................................352
Equipment Run-Off ..........................................................................................352
Operation of Machinery....................................................................................352
Operations and Support .........................................................................................353
Conversion/Decommission ....................................................................................353
Typical R&M Measures ........................................................................................353
R&M Matrix .....................................................................................................353
Reliability Point Measurement .........................................................................354
MTBE................................................................................................................354
MTBF................................................................................................................355
Failure Rate.......................................................................................................355
MTTR................................................................................................................355
Availability ........................................................................................................356
Overall Equipment Effectiveness (OEE)..........................................................356
Life Cycle Costing (LCC) ................................................................................356
Top 10 Problems and Resolutions....................................................................357
Thermal Analysis ..............................................................................................357
Electrical Design Margins ................................................................................359
Safety Margins (SM) ........................................................................................359
Interference .......................................................................................................360
Conversion of MTBF to Failure Rate and Vice Versa .....................................361
Reliability Growth Plots ...................................................................................361
Machinery FMEA .............................................................................................361
Key Definitions in R&M .......................................................................................362
DFSS and R&M ....................................................................................................364
References..............................................................................................................365
Selected Bibliography............................................................................................365
Chapter 9 Design of Experiments.....................................................................367
Setting the Stage for DOE.....................................................................................367
Why DOE (Design of Experiments) Is a Valuable Tool..................................367
Taguchi’s Approach ..........................................................................................370
Miscellaneous Thoughts ...................................................................................371
Planning the Experiment .......................................................................................372
Brainstorming....................................................................................................372
Choice of Response ..........................................................................................373
Miscellaneous Thoughts ...................................................................................379
Setting Up the Experiment ....................................................................................380
Choice of the Number of Factor Levels...........................................................380
Linear Graphs ...................................................................................................382
Degrees of Freedom..........................................................................................383
Using Orthogonal Arrays and Linear Graphs ..................................................383
Column Interaction (Triangular) Table.............................................................384
Factors with Three Levels ................................................................................385
Interactions and Hardware Test Setup..............................................................385
Choice of the Test Array...................................................................................387
Factors with Four Levels ..................................................................................389
Factors with Eight Levels .................................................................................389
Factors with Nine Levels..................................................................................390
Using Factors with Two Levels in a Three-Level Array .................................390
Dummy Treatment .......................................................................................390
Combination Method ...................................................................................390
Using Factors with Three Levels in a Two-Level Array .................................391
Other Techniques ..............................................................................................391
Nesting of Factors ........................................................................................392
Setting Up Experiments with Factors with Large Numbers of Levels.......392
Inner Arrays and Outer Arrays .........................................................................393
Randomization of the Experimental Tests .......................................................394
Miscellaneous Thoughts ...................................................................................394
Loss Function and Signal-to-Noise.......................................................................397
Loss Function and the Traditional Approach ...................................................397
Calculation of the Loss Function .....................................................................398
Comparison of the Loss Function and Cpk .....................................................402
Signal-to-Noise (S/N) .......................................................................................403
Miscellaneous Thoughts ...................................................................................404
Analysis..................................................................................................................405
Graphical Analysis ............................................................................................405
Analysis of Variance (ANOVA) .......................................................................407
Estimation at the Optimum Level ....................................................................408
Confidence Interval around the Estimation......................................................409
Interpretation and Use ......................................................................................410
ANOVA Decomposition of Multi-Level Factors .............................................410
S/N Calculations and Interpretations................................................................411
Smaller-the-Better (STB) .............................................................................412
Larger-the-Better (LTB) ...............................................................................413
Nominal-the-Best (NTB) .............................................413
Combination Design .........................................................................................415
Miscellaneous Thoughts ...................................................................................418
Analysis of Classified Data ...................................................................................421
Classified Responses.........................................................................................422
Classified Attribute Analysis.............................................................................422
Class 1 ..........................................................................................................425
Class 2 ..........................................................................................................426
Classified Variable Analysis..............................................................................426
Discussion of the Degrees of Freedom ............................................................428
Miscellaneous Thoughts ...................................................................................429
Dynamic Situations................................................................................................430
Definition ..........................................................................................................430
Discussion .........................................................................................................431
Conditions ....................................................................................................431
Analysis ........................................................................................................432
Miscellaneous Thoughts ...................................................................................439
For Example 1..............................................................................................440
For Example 2..............................................................................................440
Parameter Design...................................................................................................441
Discussion .........................................................................................................441
Example........................................................................................................441
Tolerance Design ...................................................................................................447
Discussion .........................................................................................................447
Example........................................................................................................448
Humidity..................................................................................................454
Testing .....................................................................................................454
DOE Checklist .......................................................................................................454
Selected Bibliography............................................................................................455
Chapter 10 Miscellaneous Topics — Methodologies .......................................457
Theory of Constraints (TOC) ................................................................................457
The Goal ...........................................................................................................457
Strategic Measures ............................................................................................458
Net Profit, Return on Investment, and Productivity.........................................459
Measurement Focus ..........................................................................................460
Throughput versus Cost World.........................................................................461
Obstacles to Moving into the Throughput World ............................................461
The Foundation Elements of TOC ...................................................................463
The Theory of Non-Constraints .......................................................................463
The Five-Step Framework of TOC...................................................................464
Selected Bibliography............................................................................................465
Design Review .......................................................................................................465
Failure Mode and Effect Analysis (FMEA).....................................................467
References..............................................................................................................470
Selected Bibliography............................................................................................470
Trade-Off Studies...................................................................................................470
How to Conduct a Trade-Off Study: The Process ...........................................471
1. Construct the Preliminary Matrix ...........................................................471
2. Select and Assemble the Cross-Functional Team ..................................472
3. Assign Team Members’ Roles and Responsibilities ..............................472
4. Assign Ranking Teams to Evaluate the Alternatives ............................473
Identification of Ranking Methods .........................................................473
Development of Standardized Documentation .......................................474
Timing for Report out of Selection Process...........................................474
5. Weight the Various Categories................................................................474
6. Compile the Evidence Book ...................................................................475
7. Present the Results ..................................................................................475
Glossary of Terms.............................................................................................476
Selected Bibliography............................................................................................477
Cost of Quality ......................................................................................................477
Cost Monitoring System...................................................................................478
Standard Cost ...............................................................................................478
Actual Costs .................................................................................................478
Variance ........................................................................................................480
Cost Reduction Efforts.................................................................................480
Concepts of Quality Costs................................................................................480
J. Juran .........................................................................................................480
W.E. Deming ................................................................................................480
P. Crosby ......................................................................................................481
G. Taguchi ....................................................................................................481
Definition of Quality Components ...................................................................481
Methods of Measuring Quality.........................................................................483
Complaint Indices .............................................................................................484
Processing and Resolution of Customer Complaints.......................................484
Techniques for Analyzing Data ........................................................................484
Format for Presentation of Costs......................................................................485
Laws of Cost of Quality ...................................................................................485
Data Sources .....................................................................................................487
Inspection Decisions .........................................................................................487
Prevention Costs (See Table 10.5) ...................................................................487
Appraisal Costs (See Table 10.6) .....................................................................487
Internal Failure Costs (See Table 10.7)............................................................487
External Failure Costs (See Table 10.8)...........................................................487
Diagnostic Guidelines to Identify Manufacturing Process
Improvement Opportunities ..............................................................................489
Diagnostic Guidelines to Identify Administrative Process
Improvement Opportunities ..............................................................................490
Steps for Quality Improvement — Using Cost of Quality ..............................492
Procedure......................................................................................................492
Examples ......................................................................................................492
Guideline Cost of Quality Elements by Discipline .........................................502
Cost of Quality and DFSS Relationship ..........................................................509
References..............................................................................................................511
Selected Bibliography............................................................................................511
Reengineering ........................................................................................................511
Process Redesign ..............................................................................................511
The Restructuring Approach.............................................................................512
The Conference Method ...................................................................................513
The OOAD Method ..........................................................................................515
Reengineering and DFSS..................................................................................516
References..............................................................................................................517
Selected Bibliography............................................................................................518
Geometric Dimensioning and Tolerancing (GD&T) ............................................518
References..............................................................................................................523
Selected Bibliography............................................................................................523
Metrology...............................................................................................................524
Understanding the Problem ..............................................................................524
Metrology’s Role in Industry and Quality .......................................................525
Measurement Techniques and Equipment........................................................527
Purpose of Inspection .......................................................................................528
How Do We Use Inspection and Why? ...........................................................529
Methods of Testing ...........................................................................................529
Interpreting Results of Inspection and Testing ................................................530
Technique for Wringing Gage Blocks..............................................................531
Length Combinations........................................................................................532
References..............................................................................................................533
Chapter 11 Innovation Techniques Used in Design for Six Sigma (DFSS)....535
Modeling Design Iteration Using Signal Flow Graphs as Introduced
by Eppinger, Nukala and Whitney (1997) ............................................................535
Rules and Definitions of Signal Flow Graphs as Introduced
by Howard (1971) and Truxal (1955) ..............................................................538
Basic Operations on Signal Flow Graphs ........................................................538
The Effect of a Self Loop.................................................................................538
Solution by Node Absorption ...........................................................................539
References..............................................................................................................539
Selected Bibliography............................................................................................540
Axiomatic Designs ................................................................................................541
So, What Is an Axiomatic Design? ..................................................................542
Axiomatic and Other Design Methodologies...................................................542
Applying Axiomatic Design to Cars ................................................................543
New Designs ................................................................................................544
Diagnosis of Existing Design ......................................................................544
Extensions and Engineering Changes to Existing Designs ........................544
Efficient Project Work-Flow ........................................................................545
Effective Change Management ....................................................................545
Efficient Design Function ............................................................................545
References..............................................................................................................547
Selected Bibliography............................................................................................547
TRIZ — The Theory of Inventive Problem Solving ............................................548
References..............................................................................................................551
Selected Bibliography............................................................................................551
Chapter 12 Value Analysis/Engineering ...........................................................553
Introduction to Value Control — The Environment .............................................553
History of Value Control .......................................................................................555
Value Concept........................................................................................................556
Definition ..........................................................................................................556
Planned Approach .............................................................................................556
Function ............................................................................................................557
Value..................................................................................................................557
Develop Alternatives.........................................................................................558
Evaluation, Planning, Reporting, and Implementation ....................................559
The Job Plan .....................................................................................................559
Application.............................................................................................................560
Value Control — The Job Plan .............................................................................561
Value Control — Techniques versus Job Plan ......................................................562
Techniques.........................................................................................................562
Information Phase..................................................................................................563
Define the Problem ...........................................................................................563
Information Development ............................................................................564
Information Collection ............................................................................564
Cost Visibility..........................................................................................564
Project Scope...........................................................................................565
Function Determination ...............................................................................567
Function Analysis and Evaluation ...............................................................567
Cost Visibility ...................................................................................................568
Definitions ....................................................................................................568
Sources of Cost Information........................................................................570
Cost Visibility Techniques ...........................................................................570
Technique 1 — Determine Manufacturing Cost.....................................571
Technique 2 — Determine Cost Element ...............................................571
Technique 3 — Determine Component or Process Costs ......................571
Technique 4 — Determine Quantitative Costs .......................................572
Technique 5 — Determine Functional Area Costs.................................573
Function Determination ....................................................................................573
What Is Function?........................................................................................574
Basic and Secondary Functions...................................................................574
Basic Functions .......................................................................................574
Secondary Functions ...............................................................................575
Function Analysis and Evaluation ....................................................................575
Technique 1 — Identify and Evaluate Function..........................................575
Technique 2 — Evaluate Principle of Operation ........................................576
Technique 3 — Evaluate Basic Function ....................................................576
Technique 4 — Theoretical Evaluation of Function ...................................576
Technique 5 — Input-Output Method .........................................577
Technique 6 — Function Analysis System Technique................................577
Cost Function Relationship..........................................................................580
Evaluate the Function ..................................................................................580
Creative Phase........................................................................................................582
Phase 1. Blast....................................................................................................584
Phase 2. Create .................................................................................................584
Phase 3. Refine .................................................................................................584
Evaluation Phase....................................................................................................585
Selection and Screening Techniques ................................................................585
Pareto Voting ................................................................................................585
Paired Comparisons .....................................................................................586
Evaluation Summary....................................................................................587
Matrix Analysis ............................................................................................587
Example...................................................................................................589
Rank and Weigh Criteria....................................................................589
Evaluate Each Alternative ..................................................................590
Analyze Results ..................................................................................591
Implementation Phase............................................................................................591
Goal for Achievement.......................................................................................592
Developing a Plan.............................................................................................592
Evaluation of the System..................................................................................593
Understanding the Principles............................................................................593
Organization ......................................................................................................594
Attitude..............................................................................................................596
Value Council....................................................................................................596
Audit Results.....................................................................................................597
Project Selection ....................................................................................................597
Concluding Comments ..........................................................................................598
References..............................................................................................................598
Selected Bibliography............................................................................................598
Chapter 13 Project Management (PM).............................................................599
What Is a Project? .................................................................................................599
The Process of Project Management.....................................................................601
Key Integrative Processes......................................................................................602
Project Management and Quality..........................................................................603
A Generic Seven-Step Approach to Project Management....................................603
Phase 1. Define the Project ..............................................................................603
Step 1. Describe the Project ........................................................................603
Step 2. Appoint the Planning Team.............................................................604
Step 3. Define the Work...............................................................................604
Phase 2. Plan the Project ..................................................................................604
Step 4. Estimate Tasks .................................................................................604
Step 5. Calculate the Schedule and Budgets...............................................604
Phase 3. Implement the Plan ............................................................................605
Step 6. Start the Project ...............................................................................605
Phase 4. Complete the Project..........................................................................605
Step 7. Track Progress and Finish the Project ............................................605
A Generic Application of Project Management in Implementing Six Sigma
and DFSS ...............................................................................................................605
The Value of Project Management in the Implementation Process ................607
Planning the Process ....................................................................................607
Goal Setting..................................................................................................608
PM and Six Sigma/DFSS .................................................................................608
Project Justification and Prioritization Techniques .....................................610
Benefit-Cost Analysis..............................................................................610
Return on Assets (ROA).....................................................................610
Return on Investment (ROI)...............................................................610
Net Present Value (NPV) Method......................................................611
Internal Rate of Return (IRR) Method ..............................................611
Payback Period Method .....................................................................612
Project Decision Analysis .......................................................................612
Why Project Management Succeeds .....................................................................613
References..............................................................................................................615
Selected Bibliography............................................................................................615
Chapter 14 Limited Mathematical Background for Design
for Six Sigma (DFSS) ...........................................................................................617
Exponential Distribution and Reliability...............................................................617
Exponential Distribution...................................................................................617
Probability Density Function and Cumulative Distribution Function ........618
Probability Density Function (Decay Time)...........................................618
Cumulative Distribution Function (Rise Time) ......................................618
Reliability Problems.....................................................................................618
Constant Rate Failure .......................................................................................619
Example........................................................................................................619
Probability of Reliability ..................................................................................621
Control Charts...................................................................................................621
Continuous Time Waveform ........................................................................621
Discrete Time Samples ................................................................................621
Digital Signal Processing ........................................................................622
Sample Space ....................................................................................................622
Assigning Probability to Sets ...........................................................................624
Gamma Distribution ..............................................................................................625
Gamma Distribution (pdf) ................................................................................625
Gamma Function...............................................................................................626
Properties of Gamma Functions ..................................................................626
Gamma Distribution and Reliability............................................................627
Example 1: Time to Total System Failure.......................................................627
Gamma Distribution and Reliability............................................................628
Reliability Relationships ..............................................................................632
Reliability Function......................................................................................632
Data Failure Distribution ..................................................................................633
Failure Rate or Density Function .....................................................................633
Hazard Rate Function .......................................................................................634
Relations between Reliability and Hazard Functions ......................................634
Poisson Process.................................................................................................635
Characteristics of Poisson Process ..............................................................636
Poisson Distribution .....................................................................................636
Example........................................................................................................639
Weibull Distribution...............................................................................................640
Three-Parameter Weibull Distribution..............................................................643
Taylor Series Expansion ........................................................................................644
Taylor Series Expansion ...................................................................................645
Partial Derivatives ........................................................................................649
Taylor Series in Two Dimensions ...................................649
Taylor Series of Random Variable (RV) Functions.....................................650
Variance and Covariance..............................................................................650
Functions of Random Variables...................................................................651
Division of Random Variables .....................................................................651
Powers of a Random Variable .....................................................................652
Exponential of a Random Variable..............................................................652
Constant Raised to RV Power .....................................................................653
Logarithm of Random Variable ...................................................................653
Example: Horizontal Beam Deflection........................................................654
Example: Difference between Two Means..................................................655
Miscellaneous ........................................................................................................656
Closing Remarks....................................................................................................658
Selected Bibliography............................................................................................658
Chapter 15 Fundamentals of Finance and Accounting for Champions,
Master Black Belts, and Black Belts ............................................................661
The Theory of the Firm.........................................................................................661
Budgets ..................................................................................................................662
Our Romance with Growth ...................................................................................663
The New Industrial State.......................................................................................663
Behavioral Theory .................................................................................................663
Accounting Fundamentals .....................................................................................664
Accounting’s Role in Business.........................................................................664
Financial Reports ..............................................................................................664
The Balance Sheet .......................................................................................664
Current Assets and Liabilities .................................................................665
Fixed Assets.............................................................................................665
Other Slow Assets ...................................................................................666
Current Liabilities ...................................................................................666
Working Capital Format..........................................................................666
Noncurrent Assets ...................................................................................667
Noncurrent Liabilities .............................................................................667
Shareholders’ Equity ...............................................................................667
The Income Statement ............................................................................667
Gross Profit..............................................................................................668
A Gaggle of Profits .................................................................................668
Earnings per Share ..................................................................................669
The Statement of Changes ......................................................................669
Sources of Funds or Cash .......................................................................669
Use of Funds ...........................................................................................670
Changes in Working Capital Items .........................................................670
The Footnotes ..........................................................................................670
Accountants’ Report.....................................................................................671
How to Look at an Annual Report ..............................................................671
Recording Business Transactions .....................................................................672
Debits and Credits........................................................................................673
Sources and Uses of Cash.......................................................................673
How Debits and Credits Are Used .........................................................673
The Balance Sheet Equations ......................................................................673
Classification of Accounts ................................................................................674
Recording Transactions................................................................................675
The Two Books of Account.........................................................................675
The Trial Balance ....................................................................................676
The Mirror Image....................................................................................676
Accrual Basis of Accounting............................................................................676
Accrual Basis versus Cash Basis.................................................................677
Details, Details .............................................................................................677
Birth of the Balance Sheet...........................................................................678
Profits versus Cash.......................................................................................678
Things Are Measured in Money..................................................................678
Values Are Based on Historical Costs.........................................................678
Understanding Financial Statements .....................................................................679
Assets ................................................................................................................679
The Inflation Effect...........................................................................................679
Summary of Valuation Methods .......................................................................679
Historical Cost..............................................................................................679
Liquidation Value .........................................................................................679
Investment or Intrinsic Value .......................................................................680
Psychic Value ...............................................................................................680
Current Value or Replacement Cost ............................................................680
Assets versus Expenses................................................................................680
Types of Assets .................................................................................................681
Financial Assets............................................................................................681
Physical Assets.............................................................................................681
Operating Leverage .................................................................................682
Determining the Value of Inventory .......................................................682
FIFO....................................................................................................682
LIFO ...................................................................................................682
Weighted Average...............................................................................683
Depreciation ............................................................................................683
Useful Life Concept ................................................................................683
Depreciation as an Expense ...............................................................684
Depreciation as a Valuation Reserve..................................................684
Depreciation as a Tax Strategy ..........................................................684
Depreciation as Part of Cash Flow ....................................................685
Straight Line .......................................................................................685
Sum-of-the-Years’ Digits (SYD)........................................................686
Double Declining Balance (DDB) .....................................................686
Unit of Production..............................................................................687
Replacement Cost...............................................................................687
Advantages of Accelerated Depreciation ...........................................687
Financial Statement Analysis ................................................................................688
Ratio Analysis ...................................................................................................688
Liquidity Ratios............................................................................................691
Financial Leverage .......................................................................................692
Coverage Ratios ...........................................................................................692
Earnings........................................................................................................692
Earnings Ratios ............................................................................................693
Le ROI .....................................................................................................693
ROE: Return on Equity ......................................................................694
ROA: Return on Assets ......................................................................694
ROS: Return on Sales ........................................................................694
Other Return Ratios............................................................................694
Financial Rating Systems ......................................................................................695
Bond Rating Companies...................................................................................695
Moody’s et al. ..............................................................................................695
Moody’s...................................................................................................696
Standard and Poor’s ................................................................................696
Ratings on Common Stocks .............................................................................696
The S&P Rating Method .............................................................................697
The Value Line Method ...............................................................................697
Good Ole Ben Graham ................................................................................697
Commercial Credit Ratings ..............................................................................698
Dun & Bradstreet .........................................................................................698
Other Systems ..............................................................................................699
Company and Product Life Cycle.........................................................................699
Cash Flow .........................................................................................................700
A Final Thought about Cash Flow...................................................................701
A Handy Guide to Cost Terms.........................................................................703
Useful Concepts for Financial Decisions..............................................................704
The Modified duPont Formula .........................................................................704
Breakeven Analysis...........................................................................................705
Contribution Margin Analysis ..........................................................................706
Price–Volume Variance Analysis ......................................................................707
Inventory’s EOQ Model....................................................................................707
Return on Investment Analysis.........................................................................708
Net Present Value (NPV) .............................................................................709
Internal Rate of Return (IRR)......................................................................709
Profit Planning .......................................................................................................710
The Nature of Sales Forecasting ......................................................................710
The Plans Up Form......................................................................................710
Statistical Analysis .......................................................................................711
Compound Growth Rates........................................................................711
Regression Analysis ................................................................................711
Revenues and Costs.................................................................................711
Departmental Budgets .............................................................................711
How to Budget ........................................................................................712
Zero-Growth Budgeting ..........................................................................712
Selected Bibliography............................................................................................712
Chapter 16 Closing Thoughts about Design for Six Sigma (DFSS) ...............715
Appendix The Four Stages of Quality Function Deployment ..........................725
Stage 1: Establish Targets......................................................................................725
Stage 2: Finalize Design Timetables and Prototype Plans ...................................725
Stage 3: Establish Conditions of Production ........................................................725
Stage 4: Begin Mass Production Startup ..............................................................726
Tangible Benefits ...................................................................................................726
Intangible Benefits .................................................................................................727
Summary Value......................................................................................................727
The QFD Process...................................................................................................727
Managing the Process............................................................................................728
Selected Bibliography ..........................................................................................731
Index ......................................................................................................................737
Introduction —
Understanding the
Six Sigma Philosophy
Much discussion in recent years has been devoted to the concept of “six sigma”
quality. The company most often associated with this philosophy is Motorola, Inc.,
whose definition of this principle is stated by Harry (1997, p. 3) as follows:
A product is said to have six sigma quality when it exhibits no more than 3.4 npmo
at the part and process step levels.
Confusion often exists about the relationship between six sigma and this definition of producing no more than 3.4 nonconformities per million opportunities.
From a typical normal distribution table, one may find that the area underneath the
normal curve beyond six sigma from the average is 1.248 × 10^-9, or .001248 ppm,
which is about 1 part per billion. Considering both tails of the process distribution,
this would be a total of .002 ppm. This process has the potential capability of fitting
two six sigma spreads within the tolerance, or equivalently, having 12 σ equal the
tolerance.
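These tail areas can be verified numerically from the standard normal survival function. The following Python sketch is illustrative only (the helper name tail_ppm is ours) and uses only the standard library; note that modern software gives a one-tail area of about 0.99 × 10^-9, slightly below the 1.248 × 10^-9 figure quoted from older tables:

```python
from math import erfc, sqrt

def tail_ppm(z):
    """One-tail area beyond z standard deviations, in parts per million."""
    return 0.5 * erfc(z / sqrt(2)) * 1e6

print(tail_ppm(6.0))      # about 0.001 ppm per tail (roughly 1 part per billion)
print(2 * tail_ppm(6.0))  # about 0.002 ppm for both tails combined
```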
However, the 3.4 ppm value corresponds to the area under the curve at a distance
of only 4.5 sigma from the process average. Why this apparent discrepancy? It is
due to the difference between a static and a dynamic process. (The reader is encouraged to review Volume I of this series.)
A STATIC VERSUS A DYNAMIC PROCESS
If a process is static, meaning the process average remains centered at the middle
of the tolerance, then approximately .002 ppm will be produced. But under the six
sigma concept, the process is considered to be dynamic, implying that over time,
the process average will move both higher and lower because of many small changes
in material, operators, environmental factors, tools, etc. Most small shifts in the
process average will go undetected by the control chart. For an n of 4, there is only
a 50 percent chance a 1.5-sigma shift in µ is detected by the next subgroup after
this change. By the time this next subgroup is collected, it may have returned to its
original position. Thus, this process change will never be noticed on the chart, which
means that no corrective action will be implemented. However, this movement has
caused the actual long-term process variation to increase somewhat because between-subgroup variation is greater than within-subgroup variation. Note that estimates of
short-term process variation are not impacted because they are determined only from
within-subgroup variation.
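The 50 percent detection figure can be checked directly: with n = 4, the standard error of the subgroup average is σ/2, so a 1.5-sigma shift places the shifted mean exactly on the 3-sigma control limit. A small illustrative sketch in Python (stdlib only; the helper name norm_sf is ours):

```python
from math import erfc, sqrt

def norm_sf(z):
    """Standard normal survival function P(Z > z)."""
    return 0.5 * erfc(z / sqrt(2))

n, shift = 4, 1.5          # subgroup size and mean shift, in sigma units
se = 1 / sqrt(n)           # standard error of the subgroup average
# Chance the next subgroup average falls outside the 3-sigma control limits
p_detect = norm_sf((3 * se - shift) / se) + norm_sf((3 * se + shift) / se)
print(round(p_detect, 3))  # 0.5
```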
Based on studies analyzing the effect of these changes on process variation
(Bender, 1962, 1968; Evans, 1970, 1974, 1975a and b; Gilson, 1951), the six sigma
principle acknowledges the likelihood of undetected shifts in the process average of
up to ±1.5 sigma. Because shifts in the average greater than 1.5 sigma are expected
to be caught, and σ is assumed not to change, the worst case for the production
of nonconforming parts happens when the process average has shifted either the full
1.5 sigma above the middle of the tolerance or the full 1.5 sigma below it. For this
worst case, there would be only 4.5 sigma (6 sigma minus 1.5 sigma) remaining
between the process average and the nearest specification limit.
This reduced Z value of 4.5 for the dynamic model corresponds to 3.4 ppm.
When this size of shift occurs, the Z value for the other specification limit becomes
7.5, which means essentially 0 ppm are outside this limit. Because the process
average can shift in only one direction at a time, the maximum number of nonconforming parts produced is 3.4 ppm. Notice that most of the time the average should
be closer to the middle of the tolerance, resulting in far fewer than 3.4 ppm actually
being manufactured.
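Under the same assumptions, the worst-case shifted process can be sketched as follows (stdlib-only Python; tail_ppm is our illustrative helper, not a standard function):

```python
from math import erfc, sqrt

def tail_ppm(z):
    """One-tail area beyond z standard deviations, in ppm."""
    return 0.5 * erfc(z / sqrt(2)) * 1e6

# A 1.5-sigma shift leaves 4.5 sigma to the near limit and 7.5 to the far one
near = tail_ppm(6.0 - 1.5)
far = tail_ppm(6.0 + 1.5)
print(round(near, 1))  # 3.4 ppm beyond the nearest specification limit
print(far)             # essentially 0 ppm beyond the other limit
```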
To achieve a goal of 3.4 ppm, the process average must be no closer than 4.5
sigma to a specification limit. Assuming the average could drift by as much as 1.5
sigma, potential capability must be at least 6.0 sigma (4.5 sigma plus 1.5 sigma) to
compensate for shifts in the process average of up to 1.5 sigma, yet still be able to
produce the desired quality level. The required 4.5 sigma plus this added buffer of
1.5 sigma create the 6 sigma requirement, and thereby generate the label "six sigma."
(Here it must be noted that the 1.5 sigma shift is allegedly an empirical value for
the electronics industry. In the automotive industry, for years the shift has been
identified as only 1 sigma: a shift from a Ppk of 1.33 to a Cpk of 1.67, i.e., from
4 sigma to 5 sigma. The point is that every industry should identify its own shift
and use it accordingly. It is unfortunate that the 1.5 sigma shift has become the
default value for everything. For a detailed explanation of the difference between
Ppk and Cpk, the reader is encouraged to review Volumes I and IV of this series.)
To counter the effect of shifts in µ, a buffer of 1.5 standard deviations can be
added to other capability goals as well. If no more than 32 ppm are desired outside
either specification, the goal would be to have ±4.0 sigma fit within the tolerance,
assuming no change in the process average. This target equates to a Cp of 1.33 (4.0/3).
Under the static model, this potential capability goal translates into 32 ppm outside
each specification when the average is centered at M. But with the inevitable 1.5 sigma
drifts in µ occurring with the dynamic process model, the average could move as
close as 2.5 sigma (4.0 sigma minus 1.5 sigma) to a specification limit before triggering
any type of corrective action. This change in centering would cause as many as 6210
ppm to be produced, quite a bit more than the desired maximum of 32 ppm.
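The 32 ppm and 6210 ppm figures follow from the same tail calculation. A brief illustrative check in Python (stdlib only; the helper name is ours):

```python
from math import erfc, sqrt

def tail_ppm(z):
    """One-tail area beyond z standard deviations, in ppm."""
    return 0.5 * erfc(z / sqrt(2)) * 1e6

print(round(tail_ppm(4.0)))        # 32 ppm outside one limit when centered
print(round(tail_ppm(4.0 - 1.5)))  # 6210 ppm after a full 1.5-sigma drift
```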
PRODUCTS WITH MULTIPLE CHARACTERISTICS
Extremely low ppm levels are imperative for producing high-quality products possessing many characteristics (or components). Table I.1 compares the probability of
manufacturing a product with all characteristics inside their respective specifications
when each is produced with ±4 sigma (Cp = 1.33) versus ±6 sigma (Cp = 2.00)
capability. The processes producing the features are assumed to be dynamic, with
up to a 1.5-sigma shift in average possible.

TABLE I.1
Probability of a Completely Conforming Product with 1.5 Sigma Shift

Number of          Cp = 1.33    Cp = 2.00
Characteristics    (±4 sigma)   (±6 sigma)
  1                99.3790      99.99966
  2                98.7618      99.99932
  5                96.9333      99.9983
 10                93.9607      99.9966
 25                85.5787      99.9915
 50                73.2371      99.9830
100                53.6367      99.9660
150                39.2820      99.9490
250                21.0696      99.9150
500                 4.4393      99.8301
Suppose a product has only one feature, which is produced on a process having
±4 sigma potential capability. We can then calculate that a maximum of .6210 percent
of these parts will be nonconforming under the dynamic model. Conversely, at least
99.3790 percent will be conforming, as is listed in the first line of Table I.1. If this
single characteristic is instead produced on a process with ±6 sigma potential capability, at most .00034 percent of the finished product will be out of specification,
with at least 99.99966 percent within specification.
If a product has two characteristics, the probability that both are within specification (assuming independence) is .993790 times .993790, or 98.7618 percent when
each is produced on a ±4 sigma process. If they are produced on a ±6 sigma process,
this probability increases to 99.99932 percent (.9999966 times .9999966). The
remainder of the table is computed in a similar manner.
When each characteristic is produced with ±4 sigma capability (and assuming
a maximum drift of 1.5 sigma), a product with 10 characteristics will average about
939 conforming parts out of every 1000 made, with the 61 nonconforming ones
having at least one characteristic out of specification. If all characteristics are manufactured with ±6 sigma capability, it would be very unlikely to see even one
nonconforming part out of these 1000.
For a product having 50 characteristics, 268 out of 1000 parts will have at least
one nonconforming characteristic when each is produced with ±4 sigma capability.
If these 50 characteristics were manufactured with ±6 sigma capability, it would still
be improbable to see one nonconforming part. In fact, with ±6 sigma capability, a
product must have 150 characteristics before you would expect to find even one
nonconforming part out of 1000. Contrast this to the ±4 sigma capability level, where
60.7 percent of these parts would be rejected, and the rationale for adopting the six
sigma philosophy becomes quite evident.
SL3151Ch00Frame Page 4 Thursday, September 12, 2002 6:15 PM
4
Six Sigma and Beyond
SHORT- AND LONG-TERM SIX SIGMA CAPABILITY
The six sigma approach also differentiates between short- and long-term process
variation. Just as in the past, the short-term standard deviation has been estimated
from within-subgroup variation, usually from the average range R̄, and the long-term standard deviation
incorporates both the short-term variation and any additional variation in the process
introduced by the small, undetected shifts in the process average that occur over
time. Although no exact relationship between these two types of variation applies
to every kind of process, the six sigma philosophy ties them together with this
general equation (Harry and Lawson, 1992, pp. 6–8).
σLT = c σST
As c is affected by shifts in the process average, it is related to the k factor,
which quantifies how far the process average is from the middle of the tolerance.
c = 1/(1 − k)

k = |µ − M| / [(USL − LSL)/2]
If a process has a Cp of 2.00 and is centered at the middle of the tolerance, then
there is a distance of 6σST from the average to the USL. When the process average
shifts up by 1.5σST, it has moved off target by 25 percent of one-half the tolerance
(1.5/6.0 = .25). For this k factor of .25, c is calculated as 1.33.
c = 1/(1 – .25) = 1/.75 = 1.33
The long-term standard deviation for this process would then be estimated from
σST, as:
σ̂LT = c σST = 1.33 σST
The value 1.33 is quite commonly adopted as the relationship between short- and long-term process variation (Koons, 1992). This factor implies that long-term
variation is approximately 33 percent greater than short-term variation. Other authors
are more conservative and assume a c factor between 1.40 and 1.60, which translates
to a k factor ranging from .286 to .375 (Harry and Lawson, 1992, pp. 6–12, 7–6).
For a c factor of 1.50, k is .333.
1.50 = 1/(1 – k)
1 – k = 1/1.50
k = 1 – (1/1.50) = .333
This assumption expects up to a 33.3 percent shift in the process average. With
six sigma capability, there is 6σST from M to the specification limit, a distance that
equals one-half the tolerance. A k factor of .333 represents a maximum shift in the
process average of 2.0σST, a number derived by multiplying one-half the tolerance,
or 6σST, by .333.
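The algebra relating c and k above can be captured in two one-line helpers (the function names are ours):

```python
def c_from_k(k):
    """Long-term inflation factor: c = 1/(1 - k)."""
    return 1.0 / (1.0 - k)

def k_from_c(c):
    """Inverse relationship: k = 1 - 1/c."""
    return 1.0 - 1.0 / c

print(round(c_from_k(0.25), 2))   # 1.33, the commonly adopted factor
print(round(k_from_c(1.50), 3))   # 0.333, the more conservative assumption
```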
DESIGN FOR SIX SIGMA AND THE SIX SIGMA PHILOSOPHY
The six sigma philosophy is becoming more and more popular in the quality field,
especially with companies in the electronics industry (de Treville et al., 1995).
Organizations striving to attain the quality levels required with the six sigma system
usually adopt the following three recommended strategies for accomplishing this
goal (Tomas (1991) offers a six-step approach). Improving an existing process to
the six sigma level of quality would be very difficult, if not impossible. That is why
Fan (1990) insists this type of thinking must already be incorporated into the original
design of new products and the processes that will manufacture them if there is to
be any chance of achieving six sigma quality.
The three recommended strategies are as follows:
DESIGN PHASE
1. Design in ±6σ tolerances for all critical product and process parameters.
For additional information on this topic, read Six Sigma Mechanical
Design Tolerancing by Harry and Stewart (1988).
2. Develop designs robust to unexpected changes in both manufacturing and
customer environments (see Harry and Lawson, 1992).
3. Minimize part count and number of processing steps.
4. Standardize parts and processes.
Knowing the process capability of current manufacturing operations will greatly
aid designers in accomplishing this first step. And of course, good designs will
positively influence the capability of future processes.
Once a new product is released for production, the designed-in quality levels
must be maintained, and even improved upon, by working to reduce (or eliminate)
both assignable and common causes of process variation. McFadden (1993) lists
several additional key components of a six sigma quality program specifically targeted at manufacturing.
INTERNAL MANUFACTURING
1. Standardize manufacturing practices.
2. Audit the manufacturing system. Pena (1990) provides a detailed audit
checklist for this purpose.
3. Use SPC to control, identify, and eliminate causes of variation in the
manufacturing process. Mader et al. (1993) have written a book entitled
Process Control Methods to help with this step. The reader may also
review Volume IV of this series.
4. Measure process capability and compare to goals. Koons’ (1992) and
Bothe’s (1997) books on capability indices are useful here.
5. Consider the effects of random sampling variation on all six sigma estimates and apply the proper confidence bounds. The reference by Tavormina and Buckley (1992) would be helpful here.
6. Kelly and Seymour (1993), Bothe (1993), and Delott and Gupta (1990)
reveal how the application of statistical techniques helped achieve six
sigma quality levels for copper plating ceramic substrates. Harry (1994)
provides several examples of applying design of experiments to improve
quality in the electronics industry.
A special warning here is appropriate. Even if the first two strategies are adopted,
a company will never achieve six sigma quality unless it has the full cooperation
and participation of all its suppliers.
EXTERNAL MANUFACTURING
1. Qualify suppliers.
2. Minimize the number of suppliers.
3. Develop long-term partnerships with remaining suppliers.
4. Require documented process control plans.
5. Insist on continuous process improvement.
Craig (1993) shows how Dupont Connector Systems utilized this set of strategies
to introduce new products into the data processing and telecommunications industries. Noguera (1992) discusses how the six sigma doctrine applies to chip connection
technology in electronics manufacturing, while Fontenot et al. (1994) explain how
these six sigma principles pertain to improving customer service. Daskalantonakis
et al. (1990–1991) describe how software measurement technology can identify areas
of improvement and help track progress toward attaining six sigma quality in software development.
As all these authors conclude, the rewards for achieving the six sigma quality
goals are shorter cycle times, shorter lead times, reduced costs, higher yields,
improved product reliability, increased profitability, and most important of all, highly
satisfied customers.
We have reviewed the principles of six sigma here to make sure the reader
understands the ramifications of poor quality and the significance of implementing
the six sigma philosophy. In Volume I of this series, we discussed this philosophy
in much more detail. However, it is imperative to summarize some of the inherent
advantages, as follows:
1. As quality improves to the six sigma level, profits will follow, with prices about 8% higher.
2. A six sigma company is roughly three times more profitable than a non–six sigma company. Most of that profitability comes from the elimination of variability, which is waste.
3. Companies with improved quality gain market share continuously at the
expense of companies that do not improve.
The focus of all these impressive results is manufacturing. However, most of the opportunity for cost reduction does not lie in manufacturing. We know from many studies and the
experience of management consultants that about 80% of quality problems are
actually designed into the product without any conscious attempt to do so. We also
know that about 70% of a product’s cost is determined by its design.
Yet, most of the “hoopla” about six sigma in the last several years has been
about the DMAIC model. To be sure, in the absence of anything else, the DMAIC
model is great. But it still focuses on after-the-fact problems, issues, and concerns.
As we keep on fixing problems, we continually generate problems to be fixed. That
is why Stamatis (2000) and Tavormina and Buckley (1994) and the first volume of
this series proclaimed that six sigma is not any different from any other tool already
in the tool box of the practitioner. We still believe that, but with a major caveat.
The benefit of the six sigma philosophy and its application is in the design phase
of the product or service. It is unconscionable to think that in this day and age there
are organizations that allow their people to chase their tails and give accolades to
so many for fixing problems. Never mind that the problems they are fixing are
repeatable problems. It is an abomination to think that the more we talk about quality,
the more it seems that we regress. We believe that a certification program will do
its magic when in fact nothing will lead to real improvement unless we focus on
the design.
This volume is dedicated to the Design for Six Sigma, and we are going to talk
about some of the most essential tools for improvement in “real” terms. Specifically,
we are going to focus on resource efficiency, robust designs, and production of
products and services that are directly correlated with customer needs, wants, and
expectations.
REFERENCES
Bender, A., Bendarizing Tolerances — A Simple Practical Probability Method of Handling
Tolerances for Limit-Stack-Ups, Graphic Science, Dec. 1962, pp. 17–21.
Bender, A., Statistical Tolerancing as It Relates to Quality Control and the Designer, SAE
Paper No. 680490, Society of Automotive Engineers, Southfield, MI, 1968.
Bothe, D.R., Reducing Process Variation, International Quality Institute, Inc., Sacramento,
CA, 1993.
Bothe, D.R., Measuring Process Capability, McGraw-Hill, New York, 1997.
Craig, R.J., Six-Sigma Quality, the Key to Customer Satisfaction, 47th ASQC Annual Quality
Congress Transactions, Boston, 1993, pp. 206–212.
Daskalantonakis, M.K., Yacobellis, R.H., and Basili, V.R., A method for assessing software
measurement technology, Quality Engineering 3, 27–40, 1990–1991.
Delott, C. and Gupta, P., Characterization of copperplating process for ceramic substrates,
Quality Engineering, 2, 269–284, 1990.
de Treville, S., Edelson, N.M., and Watson, R., Getting six sigma back to basics, Quality
Digest, 15, 42–47, 1995.
Evans, D.H., Statistical tolerancing formulation, Journal of Quality Technology, 2, 188–195,
1970.
Evans, D.H., Statistical tolerancing: the state of the art, Part I: Background, Journal of Quality
Technology, 6, 188–195, 1974.
Evans, D.H., Statistical tolerancing: the state of the art, Part II: Methods for estimating
moments, Journal of Quality Technology, 7, 1–12, 1975 (a).
Evans, D.H., Statistical tolerancing: the state of the art, Part III: Shifts and drifts, Journal of
Quality Technology, 7, 72–76, 1975 (b).
Fan, J.Y., Achieving Six Sigma in Design, 44th ASQC Annual Quality Congress Transactions,
San Francisco, May 1990, pp. 851–856.
Fontenot, G., Behara, R., and Gresham, A., Six sigma in customer satisfaction, Quality
Progress, 27, 73–76, 1994.
Gilson, J., A New Approach to Engineering Tolerances, Machinery Publishing Co., London,
1951.
Harry, M., The Nature of Six Sigma Quality, Motorola Univ. Press, Schaumburg, IL, 1997.
Harry, M. and Stewart, R., Six Sigma Mechanical Design Tolerancing, Motorola University
Press, Schaumburg, IL, 1988.
Harry, M., The Vision of Six Sigma: Case Studies and Applications, 2nd ed., Sigma Publishing
Co., Phoenix, 1994.
Harry, M. and Lawson, J.R., Six Sigma Producibility Analysis and Process Characterization,
Addison-Wesley Publishing Co., Reading, MA, 1992.
Kelly, H.W. and Seymour, L.A., Data Display. Addison-Wesley Publishing Co., Reading,
MA, 1993.
Koons, J., Indices of Capability: Classical and Six Sigma Tools, Addison-Wesley Publishing
Co., Reading, MA, 1992.
Mader, D.P., Seymour, L.A., Brauer, D.C., and Gallemore, M.L., Process Control Methods,
Addison-Wesley Publishing Co., Reading, MA, 1993.
McFadden, F.R., Six-sigma quality programs, Quality Progress, 26, 37–42, 1993.
Noguera, J., Implementing Six Sigma for Interconnect Technology, 46th ASQC Annual
Quality Congress Transactions, Nashville, TN, May 1992, pp. 538–544.
Pena, E., Motorola’s secret to total quality control, Quality Progress, 23, 43–45, 1990.
Stamatis, D.H., Six sigma: point/counterpoint: who needs six sigma anyway, Quality Digest,
33–38, May, 2000.
Tadikamalla, P.R., The confusion over six-sigma quality, Quality Progress, 21, 83–85, 1994.
Tavormina, J.J., and Buckley, S., SPC and six-sigma, Quality, 31, 47, 1992.
Tomas, S., Motorola’s Six Steps to Six Sigma, 34th International Conference Proceedings,
APICS, Seattle, WA, 1991, pp. 166–169.
1  Prerequisites to Design for Six Sigma (DFSS)
So far in this series we have presented an overview of the six sigma methodology
(DMAIC) and some of the tools and specific methodologies for addressing problems
in manufacturing. Although this is a commendable endeavor for anyone to pursue —
as mentioned in Volume I of this series — it is not an efficient way to use resources
to pursue improvement. The reason for this is the same as the reason you do not
apply an atomic bomb to demolish a two-story building. It can be done, but it is a
very expensive way to go.
As we proposed in Volume I, if an organization really means business and wants
quality improvement to go beyond six sigma constraints, it must focus on the design
phase of its products or services. It is the design that produces results. It is the design
that allows the organization to have flexibility. It is the design that convinces the
customer of the existence of quality in a product. Of course, in order for this design
to be appropriate and applicable for customer use, it must be perceived by the
customer as functional, not by the organization’s definition but by the customer’s
personal perceived understanding and application of that product or service.
Design for Six Sigma (DFSS) is an approach in which engineers interpret and
design the functionality of the customer need, want, and expectation into requirements that are based on a win-win proposition between customer and organization.
Why is this important? It is important because only through improved quality and
perceived value will the customer be satisfied. In turn, only if the customer is satisfied
will the competitive advantage of a given organization increase.
There are four prerequisites to DFSS and beyond. The first is the recognition
that improvement must be a collaboration between organization and supplier (partnering). The second is the recognition that true DFSS and beyond will only be
achieved if in a given organization there are “real” teams and those teams are really
“robust.” The third prerequisite is that improvement on such a large scale can only
be achieved by recognizing that systems engineering must be in place. Its function
has to be to make sure that the customer’s needs, wants, and expectations are
cascaded all the way to the component level. The fourth prerequisite is the implementation of at least a rudimentary system of Advanced Quality Planning (AQP).
In this chapter we will address each of these prerequisites in a cursory format.
(Here we must note that these prerequisites have also been called the “recognize”
phase of the six sigma methodology.) In the follow-up chapters, we will discuss
specific tools that we need in the pursuit of DFSS and beyond.
PARTNERING
Partnering and cooperation must be our watchwords. In any industry, better communication up and down the supply chain is mandatory. In the past, and in a few instances even today, U.S. companies have bought almost solely on the basis of
price through competitive bidding. We need to change our attitude. Price is important,
but it is not the only consideration. Partnering with both customers and suppliers is
just as important.
The Japanese have created a competitive edge through vertical integration. We
can learn from them by establishing “virtual” vertical integration through partnering
with customers and suppliers. Just as in a marriage, we need to give more than we
get and believe that it will all work out better in the end. We need to give preferential
treatment to local suppliers. We should take a long-term view, understanding their
need for profitability and looking beyond this year’s buy.
To begin our thinking in that direction we must change our current paradigm.
The first paradigm shift must be in the following definitions:
1. Vendors must be viewed as suppliers.
2. Procurement must be viewed as business strategy.
These are small changes indeed but they mean totally different things. For
example: “supplier” implies working together in a win-win situation, while “vendor”
implies a one-time benefit — usually price. “Procurement” implies price orientation
based on bidding of some sort, while “business strategy” takes into account the
concern(s) of the entire organization. We all know that price alone is not the sole
reason we buy. If we do buy on the basis of price alone, we pay the consequences
later on.
So, what is partnering? Partnering is a business culture that fosters open communication and mutually beneficial relationships in a supportive environment built
on trust. Partnering relationships stimulate continuous quality improvement and a
reduction in the total cost of ownership.
Partnering starts with:
1. An attitude and behavioral change at the top of the organization
2. Recognition of long-term mutual dependencies internal and external to
the organization
3. A commitment to this change being understood and valued at all levels
within the organization
At the core or basic level, partnering:
1. Fosters excellence throughout the organization
2. Encourages open communication in a beneficial, supportive, and nonadversarial environment of mutual trust and respect
3. Carries this positive environment outward from the organization to its
customers and suppliers
At an expanded level, partnering involves:
1. Teaming
2. Sharing resources
3. Melding of customer and supplier
4. Eliminating the we/they approach to conducting business
By the same token, partnering is not:
1. A negotiation or purchasing tool to be used as leverage against the supplier
2. A business guarantee
However, in all cases, partnering promotes:
1. Customer satisfaction
2. Mutual profitability
3. Improved product, service, and operational quality
4. A desire for and a commitment to excellence through continuous improvements in communication skills, quality, delivery, administration, and service performance
5. The factors that contribute to customer satisfaction and the lowest total
cost of ownership
6. A situation in which each partner enhances its own competitive position
through the knowledge and resources shared by the other
THE PRINCIPLES OF PARTNERING
Effective partnering has its foundation in the basic principles of economics, marketing, business, humanities, and sociology. The customer develops a set of business
and technical desires, needs, requirements, and expectations in a competitive global
market. The supplier most closely meeting those business and technical needs will
be successful.
The supplier asks the customer what is wanted rather than telling the customer
what is available. The customer recognizes and understands the supplier’s business
and technical requirements, allowing the supplier to be a viable and successful source
to the industry. All transactions are honorable and fair. The parties are not trying
to take advantage of each other.
Functioning interchangeably each day as customer and supplier, internally within
the organization and externally with customers and suppliers, every person in a
strong supply chain recognizes mutual dependencies. All transactions must be mutually beneficial, with each person encouraging open communication and operating
with integrity, mutual trust, cooperation, and respect.
VIEW OF BUYER/SUPPLIER RELATIONSHIP: A PARADIGM SHIFT
Partnering involves an expanded view of the buyer/supplier relationship, as shown
here:
Traditional                                    Expanded
Lowest price                                   Total cost of ownership
Specification-driven                           End customer–driven
Short-term, reacts to market                   Long-term
Trouble avoidance                              Opportunity maximization
Purchasing’s responsibility                    Cross-functional teams and top management involvement
Tactical                                       Strategic
Little sharing of information on both sides    Both supplier and buyer share short- and long-term plans
                                               Share risk and opportunity
                                               Standardization
                                               Joint venture
                                               Share data
How can this partnership develop? There are prerequisites. Some are listed here.
The prerequisites for basic partnering include:
1. Mutual respect
2. Honesty
3. Trust
4. Open and frequent communication
5. Understanding of each other’s needs
Additional prerequisites for expanded partnering include:
6. Long-term commitment
7. Recognition of continuing improvement — objective and factual
8. Passion to help each other succeed
9. High priority on relationship
10. Shared risk and opportunity
11. Shared strategies/technology road maps
12. Management commitment
CHARACTERISTICS OF EXPANDED PARTNERING
Expanded partnering promotes dedication, desire, and commitment to product and
service excellence through improvements in technology, skills, quality, delivery,
administration, responsiveness, and total cost of ownership. All these are imperative
requirements for DFSS. In other words, expanded partnering:
1. Builds on basic partnering
2. Is a long-term relationship process
3. Provides focus on mutual strategic and tactical goals
4. Includes customer/supplier team support to promote mutual success and profitability.
Of course, there are different levels of partnering just as there are different levels
of results. For example:
Stage    Partnering Focus         Results
1        Short term               Sale only
2        Product                  Loyalty/trust
3        Product and service      Secured volumes
4        Process or system        Mutual improvements
5        Continual improvement    Mutual breakthrough
Why is partnering so important in DFSS, even though it may mean different things to different people? It is because the goals of most customers who advocate “partnerships” are to reduce the time to get a new product to market by eliminating the bid cycle and to extend the customer’s capability without adding personnel.
Partnering is joining together to accomplish an objective that can best be met
by two individuals or corporations rather than one. For a partnership to work well,
it requires that both partners understand the objective, each partner complements
the other in skills necessary to meet the objective, and each recognizes the value of
the other in the relationship. A true partnership occurs when both partners make a
conscious decision to enter into a unique relationship. As the partnership develops,
trust and respect build to a degree that both share the joy and rewards of success
and, when things do not go so well, both work hard together to resolve the issues
to mutual satisfaction.
In a customer/supplier partnership, the customer must define the objective (or
the scope of the project) and identify the needs. The supplier must have the capability
to meet the customer’s needs and become an extension of the customer’s resources.
To be more specific, the customer must be able to quantify and share the desired
needs in terms of the quantity of services required, the timeline or critical path
desired, and targeted costs — including up-front engineering as well as unit cost
and capital investment. The supplier must determine whether it can commit the
resources required to meet those needs and whether it is capable of reaching the
targets. A mutual commitment must be made early in the program, and it must be
for the life of the program.
In a more practical sense, the customer in a customer/supplier partnership must
be the leader and be in a position to guide the partners to the objective — no different
than a project leader or a team leader of a program that is 100 percent internal to
the customer. The leader also must monitor the progress in terms of cost and time
with input from the supplier. Our experience would indicate that longer projects
should be broken into “phases” so that there are milestones that are mutually agreed
to in advance by the partners and that mark the points at which the supplier is paid
for its services.
For a partnership to work well, customer/supplier communications must be open
and frequent. With the availability of CAD, e-mail, Internet, Web sites, fax, and
voice mail, there should be no reason not to communicate within minutes of recognition of an issue critical to the program, but there is also a need for regular meetings
at predetermined intervals at either the customer’s or supplier’s location (probably
with some meetings at each location to expose both partners to as many of the team
players as possible).
I am sure there is more to be said as to why partnership and DFSS work in
tandem and why both strive for mutual benefits, but I hope these thoughts gave some
idea of the significance that both have for each other.
EVALUATING SUPPLIERS AND SELECTING SUPPLIER PARTNERS
There are many schemes to evaluate suppliers, and each of them has advantages and
disadvantages. We believe, however, that each organization should take the time to
generate its own criteria in at least two dimensions. The first should be the supplier’s
situation and the second the purchaser’s situation. Within each category, levels of
satisfaction may be assessed as total dissatisfaction, partial satisfaction, or total
satisfaction, or numerical values may be used. The higher the number, the more
qualified the supplier is. This may be done with either a questionnaire or a matrix.
In either case, this task should be performed by a team of people from various
functional areas, such as purchasing, engineering, finance, quality, and legal. The
important point is to evaluate key suppliers for a fit with your company’s needs.
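One way to operationalize such a matrix is a weighted score per dimension, with each criterion rated on a numerical scale. The criteria and weights below are purely illustrative, invented for this sketch rather than taken from the text:

```python
# Illustrative two-dimension supplier evaluation matrix; each criterion
# is rated 1 (total dissatisfaction) to 5 (total satisfaction).
CRITERIA = {
    "supplier situation": {
        "quality system maturity": 0.3,
        "process capability": 0.4,
        "financial stability": 0.3,
    },
    "purchaser situation": {
        "fit with company needs": 0.6,
        "total cost of ownership": 0.4,
    },
}

def score_supplier(ratings):
    """ratings: {dimension: {criterion: 1..5}} -> weighted score per dimension."""
    return {
        dim: sum(weight * ratings[dim][crit] for crit, weight in crits.items())
        for dim, crits in CRITERIA.items()
    }

example = {
    "supplier situation": {
        "quality system maturity": 4,
        "process capability": 5,
        "financial stability": 3,
    },
    "purchaser situation": {
        "fit with company needs": 4,
        "total cost of ownership": 3,
    },
}
print(score_supplier(example))  # higher scores indicate a better-qualified supplier
```

A cross-functional team (purchasing, engineering, finance, quality, legal) would agree on the criteria and weights before any supplier is rated.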
IMPLEMENTING PARTNERING
There are five steps to partnering. They are:
1. Establish Top Management Enrollment
(Role of Top Management — Leadership)
The senior management, in the role of an executive customer partner or executive
supplier partner (champion):
1.
2.
3.
4.
5.
6.
7.
8.
9.
Serves in a long-term assignment for each expanded partnering relationship
Is available to support prompt issue resolution
Establishes strong counterpart relationships with key customers and suppliers
Provides for and supports decision-making authority at the lowest practical levels
Provides partnering progress updates for executive management review
Encourages and supports prompt responsiveness to communications
affecting customer/supplier relationships
Maintains a rapid management approval cycle, providing an ombudsman
when required
Commits adequate time to the partnering process
Ensures that cohesive internal, cross-functional teams are in place to
support the partnering process
2. Establish Internal Organization
There are several options in this phase. However, the most common are:
Option 1: Supplier Partnering Manager
A staff supplier partnering manager is appointed to a full-time position (for a
minimum of two years). This manager will be responsible for:
1. Working with purchasing/commodity team management
2. Instilling the partnering principles into the company culture
3. Implementing the partnering process with company management and
suppliers
4. Reviewing progress during customer/supplier review sessions
5. Working the issues specific to the partnering process
Option 2: Supplier Council/Team
A supplier partnering council or team is established within the organizational and
operational structure that “owns” the resources required to support the partnering
process. The functions are the same as for the supplier partnering manager but are
assigned to several individuals.
Typically, the council or team is made up of purchasing, quality, product engineering, and manufacturing management with additional resources available from
finance, law, training, and other departments as required.
Option 3: Commodity Management Organization
A line organization consisting of a commodity manager and staff is created to
manage the commodity and the partnering activities described in Option 1. Support
is received from the operational groups as required.
3. Establish Supplier Involvement
Effective supplier involvement is of paramount importance to partnering. This involvement may be encouraged and nurtured through open communication.
Communication may be conducted in a variety of forums or as scheduled periodic
meetings — see Table 1.1.
4. Establish Responsibility for Implementation
Identify roles and responsibilities of the partnering process manager:
1. Serve as customer representative.
2. Serve as supplier advocate. (Avoid conflict of interest.)
3. Focus participants on long-term success.
4. Accelerate and route communications (good news, bad news).
5. Perform meeting planning (with supplier) and facilitation function.
Perhaps one of the most important functions in this step is to establish credibility
with each other as well as confidentiality requirements. The process of this exchange
must be truthful and full of integrity. Some characteristics of this exchange are:
1. Each party provides the other with the information needed to be successful.
2. The supplier needs to know the customer’s requirements and expectations
in order to meet them on a long-term basis.
TABLE 1.1
Customer/Supplier Expanded Partnering Interface Meetings

Internal Preparation Meeting
   Meeting Topics: Partner meeting (meeting purpose, objectives, issues, participant responsibilities)
   Participants: Customer team(a); Supplier team(a)

Kick-off Meeting
   Meeting Topics: Introduce program; obtain mutual agreement and commitment; identify teams; introduce/suggest executive partners; present/discuss customer objectives, supplier objectives, proposed objectives, business objectives, definition of responsibilities, expectations
   Participants: Customer team; Supplier team; Executive partners (if appointed)

Monthly Team Meeting
   Meeting Topics: Establish/update mutual key results, goals, objectives, action plans; discuss issues; review performance; review/discuss on-time deliveries; required actions of both parties; quality indicators; quality action plan; business issues
   Participants: Purchasing; Technical; Quality/reliability; (other team members)

Quarterly/Semiannual Management Meeting
   Meeting Topics: Major issues; performance review ("health check"); objectives; expectations; actual performance; technology trends; business trends; program direction
   Participants: Purchasing; Technical; Quality/reliability; (other team members); Executive partners(b)

Annual Management Review
   Meeting Topics: At supplier location and tour; maintain key contacts; major performance review
   Participants: Purchasing; Technical; Quality/reliability; (other team members); Executive partners

(a) Team includes personnel from Purchasing, Quality, Material Control, Engineering. When needed, also can include personnel from Sales, Safety, Manufacturing, Process Area Management, Planning, Training, Legal, Risk Management, Finance, Project Management.
(b) Optional as part of quarterly and semiannual meetings.
SL3151Ch01Frame Page 16 Thursday, September 12, 2002 6:15 PM
Six Sigma and Beyond
Prerequisites to Design for Six Sigma (DFSS)
To be successful in this exchange requires time, because building trust is a function of time: the longer you work with someone, the better you get to know that person. To expedite the process of gaining trust, suppliers and customers may want to share in:
1. Non-disclosure agreements
2. Quality improvement process
3. Technology development roadmaps
4. Specification development
5. Should-cost/total-cost model
6. Forecasts/frozen schedules
7. Executive partners
8. Job rotation with suppliers
Be aware of, adhere to, and respect the sensitive/confidential nature of proprietary information, both yours and your partner’s. Always remember: recognize the
differences in company cultures. Find ways to do things without imposing your
value system.
Compromise...
Find the common ground...
Work out the differences...
Move forward…
Negotiate...
COOPERATE!
5. Reevaluate the Partnering Process
People cannot improve unless they know where they are. Evaluation of the partnering
process is a way to benchmark the progress of the relationship and to set priorities
for future improvement. Questionnaires with five-point rating criteria provide a
means for this evaluation in which both customers and suppliers take an active role.
A typical questionnaire may look like Table 1.2.
Sometimes the questionnaires provide detailed definitions of certain words or
criteria that are being used in the instrument. The following is a brief supplement
to explain/define the rating categories and some of the terms used in Table 1.2:
Ratings
1. Does not meet — Failing to satisfy requirements, unacceptable performance
2. Marginally meets — Performance is not fully acceptable, needs improvement
3. Meets — Fulfills basic requirements, satisfactory
4. Exceeds — Surpasses normal requirements
5. Superior — Consistently excels above and beyond expectations, "world-class" performance
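The five-point scale lends itself to straightforward tabulation. The sketch below (a hypothetical illustration in Python; the category names and responses are assumptions, not part of the original instrument) averages each category's ratings across respondents and maps the result back to a rating label:

```python
# Hypothetical tabulation of a five-point partnering questionnaire.
# Category names and responses are illustrative assumptions only.

RATING_LABELS = {
    1: "Does not meet",
    2: "Marginally meets",
    3: "Meets",
    4: "Exceeds",
    5: "Superior",
}

def average_ratings(responses):
    """Average each category's 1-5 ratings across all respondents."""
    totals = {}
    counts = {}
    for response in responses:
        for category, rating in response.items():
            if not 1 <= rating <= 5:
                raise ValueError(f"rating out of range: {rating}")
            totals[category] = totals.get(category, 0) + rating
            counts[category] = counts.get(category, 0) + 1
    return {cat: totals[cat] / counts[cat] for cat in totals}

# Two hypothetical respondents rating two categories from Table 1.2.
responses = [
    {"Strategic": 4, "Tactical": 3},
    {"Strategic": 5, "Tactical": 3},
]
averages = average_ratings(responses)
print(averages)  # {'Strategic': 4.5, 'Tactical': 3.0}
print(RATING_LABELS[round(averages["Tactical"])])  # Meets
```

Averaged scores of this kind are what give the benchmark its trend value: both parties can see, category by category, where the relationship sits on the scale from "does not meet" to "superior."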
TABLE 1.2
A Typical Questionnaire

Please select one of the following ratings for each question:
Ratings: (1) Does not meet (2) Marginally meets (3) Meets (4) Exceeds (5) Superior

1. Rate the relationship's impact in focusing both parties on strategic and tactical goals to foster mutual success.
   Strategic 1 2 3 4 5
   Tactical 1 2 3 4 5
   Comments:

2. Have all established communication channels within Intel, from executive sponsor down, enabled the partners to improve their effectiveness/competitiveness as a company?
   Technical Issues 1 2 3 4 5
   Business Issues 1 2 3 4 5
   Comments:

3. Rate the effectiveness of the team structure.
   Management Team 1 2 3 4 5
   Working Team 1 2 3 4 5
   Performance Reviews (Both Parties) 1 2 3 4 5
   Follow-Up on Action Items 1 2 3 4 5
   Comments:

4. Rate the effectiveness of the Key Supplier Program team in generating high quality solutions.
   Time of Solutions 1 2 3 4 5
   Quality of Solutions 1 2 3 4 5
   Cost-Effective Solutions 1 2 3 4 5
   Comments:

5. Does the Executive Partner provide meaningful support?
   Customer 1 2 3 4 5
   Supplier 1 2 3 4 5
   Comments:

6. Is the Key Supplier Program process formally managed in an effective manner?
   Customer Resource Commitment 1 2 3 4 5
   Supplier Resource Commitment 1 2 3 4 5
   Formal Communication Tools 1 2 3 4 5
   Information Sharing 1 2 3 4 5
   Total Cost Focus 1 2 3 4 5
   Dealing with "The Best" 1 2 3 4 5
   Comments:
Terms Used in Specific Questions
Question 1
Strategic Goals — Long-range objectives (i.e., next-generation technology)
Tactical Goals — Operational, day-to-day problem solving, etc.
Question 3
Management Team — Executive sponsors plus upper/middle managers
Working Team — Commodity/product teams, task forces, user groups
Performance Reviews — Grading joint MBOs, other indicators (e.g.,
quality, customer satisfaction survey)
Question 4
Time of Solution — Meets or exceeds time requirements/expectations
Quality of Solution — Meets or exceeds quality requirements/expectations
Cost-Effective Solution — Improves total cost effectiveness/fosters mutual profitability
Question 5
Meaningful Support — Active participation and involvement during and
between business meetings
Question 6
Resource Commitment — Adequate support (people, tools, space...) to allow successful results
Formal Communication Tools — Meetings, reports, MBOs, technology
exchange; correct topics, timely, worthwhile
Information Sharing — Plans, technology, data; useful, timely, fosters
profitability
Total Cost Focus — Model in place and used to support decisions to apply
resources
Dealing with “The Best” — Process contributes to world-class performance
Another general questionnaire evaluating the partnering process is shown in
Table 1.3.
MAJOR ISSUES WITH SUPPLIER PARTNERING RELATIONSHIPS
In any relationship that one may think of, issues and concerns exist. Partnering is
no different. Some of the areas that might be of general concern include the following:
1. Issues or concerns within the customer's company
2. Issues or concerns within the supplier's company
3. Issues or concerns of a competitive nature
4. Issues or concerns of a political or legal nature
5. Issues or concerns of a technological nature
6. Other
TABLE 1.3
A General Questionnaire
Evaluate the following categories based on a rating of 1 to 5, with 1 being low and 5 being excellent.
(Yet another variation of the criteria may be 1 = Much improvement needed, 5 = Little or no improvement
needed.)
Executive commitment to the process
Recognition of mutual dependencies
Mutually defined and shared expectations/objectives
Executive partners/sponsors
Quick issue resolution (break down roadblocks)
Understanding and sharing of risks
Sharing of technical roadmaps/competitive analysis/business plans
Openness, honesty, respect
Formal and frequent communication/feedback process
Access to data
Establish clear definition of responsibility (project leadership)
Issues or concerns of specific nature may develop when any of the following
situations exist:
1. Support on either side is insufficient.
2. Something has caused one party to consider abandoning the partnering
relationship.
3. A “better deal” or innovation threatens the partnering relationship.
4. Unequal benefits or conflicting incentives exist.
5. There are forced requirements under the guise of a partnering relationship
and fear on the part of the supplier to decline or dissent, particularly if
the supplier is small.
6. Key players change or there is a change of ownership.
HOW CAN WE IMPROVE?
A fundamental question that needs to be answered from a customer’s perspective is
“How can we improve?” The answer is by establishing a process with strategic
importance of “key” relationships. Once this process is identified then it needs
recognition — the more the better. How do we do that? We can do it by:
1. Establishing upper management involvement
2. Sharing information: technology exchanges
3. Showing suppliers how to use the data
4. Educating suppliers in tools and methodologies
We can benefit from creating a “mentoring” attitude toward our suppliers. Traditionally we say, “Do this because we need it.” Start saying (and thinking), “Do
this because it will make you a stronger company, and that will in turn make us a
stronger company.” Become a mentor in the Partnering for Total Quality assessment
process with your suppliers.
Clearly define expectations by:
1. Mutually developing short- and long-term objectives for each relationship
2. Increasing the concentration on areas for mutual success; reducing the
concentration on terms and conditions
3. Making decisions based on total cost; increasing the involvement and
awareness of suppliers in this process
In the final analysis, in order for a successful partnership to flourish both
partners — customer and supplier — must recognize that change is imminent, at
least in the following areas:
1. The organization itself
2. Internal, interfunctional communication
3. Customer orientation
4. World-class definition
5. Skills development
Are there indicators of a successful partnering process? We believe that there
are. Typical indicators are the existence of:
1. Formal communication processes
2. Commitment to the suppliers' success
3. Stable relationships, not dependent on a few personalities
4. Consistent and specific feedback on supplier performance
5. Realistic expectations
6. Employee accountability for ethical business conduct
7. Meaningful information sharing
8. Guidance to supplier in defining improvement efforts
9. Non-adversarial negotiations and decisions based on total cost of ownership
10. Employees empowered to do the right thing
BASIC PARTNERING CHECKLIST
The basic partnering principles below may be applied to any customer/supplier
relationship, regardless of size of company and number of employees. The principles
also apply to relationships within the organization. The investment is primarily an
attitude and behavioral change to bring about six sigma quality and beyond.
1. Leadership
Our management:
1. Is personally committed to the principles of the partnering process
2. Has directed organization-wide commitment, adoption, and execution of
the partnering principles and philosophy
3. Is committed to generating accurate forecasts to improve delivery schedule
stability with our suppliers
4. Ensures that the partnering principles flourish even in stressful times
5. Seeks mutually profitable arrangements with our suppliers
6. Is involved in high-level review of the partnering process.
2. Information and Analysis
Our organization:
1. Has standardized measurements and performance for products, processes,
service, and administration
2. Respects the protection of intellectual property
3. Treats information gained in open exchanges with respect and confidentiality
4. Provides consistent and specific feedback on supplier performance
3. Strategic Quality Planning
Our organization:
1. Avoids short-term solutions at the expense of long-term viability
2. Places more emphasis on overall needs and mutual expectations, less on
legal or formal aspects of the relationship
3. Uses reasonable and realistic expectations and milestones with our customers and suppliers
4. Demonstrates a commitment to continuous improvement in all facets of
our business
4. Human Resource Development and Management
Our organization:
1. Promotes employee accountability for ethical business conduct through
performance reviews, holding supervisors accountable for promoting such
practices
2. Helps employees understand their roles as customer and supplier internal
and external to the organization
3. Trains employees on business practices that are ethical, open, professional,
and of high integrity
4. Provides position descriptions with a clear definition of responsibility
5. Supports decision-making authority at the lowest practical level
5. Management of Process Quality
Our organization:
1. Shares basic evaluation criteria with our customers and suppliers
2. Has methods for ensuring quality of components, processes, administration, service, and final product.
3. Checks periodically with our customers to verify that our quality meets
their expectations
6. Quality and Operational Results
Our organization:
1. Shares meaningful information and data with our customers and suppliers,
with frequent and timely feedback on problems as well as successes
2. Provides guidance to suppliers in defining improvement efforts that
address all problems
7. Customer Focus and Satisfaction
Our organization:
1. Recognizes mutual dependencies with our customers and the need to work
together; understands that partnering does not end with the signing of the
purchase order.
2. Engages in win/win, non-adversarial negotiations and purchasing decisions based on total cost of ownership
3. Provides prompt disclosure to customers of any inability of the organization to meet current or future requirements; makes realistic commitments
to customers
EXPANDED PARTNERING CHECKLIST
In addition to the basic partnering principles, expanded partnering recognizes the
need for mutual support based on such factors as cost, risk, criticalness, and actual
performance. The investment involves an application of resources from both the
customer and the supplier. Customer resource availability limits the number of
expanded partnering relationships in which any organization can be simultaneously
engaged.
1. Leadership
Our senior management, in the role of an executive customer partner or executive
supplier partner (champion):
1. Serves in a long-term assignment for each expanded partner relationship
2. Is available to support prompt issue resolution
3. Establishes strong counterpart relationships with our key customers and
suppliers
4. Provides for and supports decision-making authority at the lowest practical levels
5. Encourages and supports prompt responsiveness to communications
affecting customer/supplier relationships
6. Maintains a rapid management approval cycle, providing an ombudsman
when required
7. Commits adequate time to the partnering process
8. Ensures that cohesive, internal, cross-functional teams are in place to
support the partnering process
2. Information and Analysis
Our organization, with our suppliers:
1. Uses positive encouragement and support to improve performance and
total cost of ownership
2. Participates in joint information-sharing activities to develop value analysis models
3. Shares technical roadmaps, competitive analyses, and plans
4. Focuses on clearly defined, complete, achievable requirements, with less
emphasis on contractual terms and conditions
5. Ensures that suppliers understand our long-term procurement strategy
3. Strategic Quality Planning
Our organization:
1. Shares short- and long-term improvement plans and priorities with suppliers and customers
2. Works with customers and suppliers to understand their quality needs and
plans for continuous improvements
4. Human Resource Development and Management
Our company management:
1. Has established technical advisory boards to support supplier activities
2. Communicates regularly with customer and supplier management to
understand mutual needs and possible areas for cooperation
3. Encourages employees to submit suggestions for continuous quality
improvements
4. Offers the same quality training to supplier personnel as we provide to
our own employees
5. Management of Process Quality
Our organization works with customers and suppliers to:
1. Share mutual joint performance measures that are written, measured, and
tracked
2. Work toward standardization of quality and certification programs
3. Develop and implement valid quality assurance systems for products,
processes, service, and administration
6. Quality and Operational Results
Our organization works with customers and suppliers to:
1. Develop joint quality and yield improvement processes
2. Provide access to process data for tool and material development and
refinement
7. Customer Focus and Satisfaction
Our organization works with customers to:
1. Mutually define expectations, understand mutual requirements, and share
risks
2. Ensure that partnering survives lapses in missed generation orders
3. Establish formal, frequent communications as part of the management
process
THE ROBUST TEAM: A QUALITY ENGINEERING APPROACH
In general, the traditional approach to evaluating the performance of groups in
process has been twofold. The first has been to use a developmental model that
provides a summary of the different phases or stages in the life cycle of a group. A
popular example of this approach is the forming, storming, norming, performing
model of group development. Each phase corresponds to a stage in the group life
cycle — review Volume I, Part II of this series.
The second model has emphasized structural patterns of a group or team. These
may be construed in terms of gender, experience, length of service, or positional
roles (leader, secretary, or assistant, for example). Using the structural approach, the
team can also be analyzed in terms of process: the "peacemaker," the "aggressor,"
the “blocker,” or the “help-seeker,” for example, or Resource Investigator, Coordinator, and so on.
Both these models have proven to be useful when trying to describe some
aspects of group dynamics, and it may be possible to identify colleagues who
fulfill some of these roles or identify teams that have passed through these different
stages of development. Unfortunately, such a restricted approach to monitoring team
process does not provide any feedback as to whether the team is producing predictable
results, nor does it identify problems or opportunities for improvement — especially
breakthrough opportunities. Specifically, no opportunity exists to determine whether
team process is “in control” (capable) or whether the group is “out of control” (chaotic
and falling far short of what it could achieve). Some of these issues were addressed in
Volumes I and II of this series, and perhaps the reader may want to review them at this
time.
A further shortcoming in non-systems approaches to team building concerns
team process improvement. As long as the team is operating within “acceptable”
parameters, no opportunity or drive to improve or maximize the performance of the
team exists. Furthermore, the team usually does not have the ability or training to
self-regulate and, through self-regulation, to begin to change and adapt to the continual change taking place in the workplace. These and other considerations suggest
that a systems approach to team building may have considerable advantages.
The robust team involves an examination of teams as systems in conjunction
with more detailed parallels between a team systems approach and the model put
forward by Taguchi as part of his quality engineering methodology (see Volume V
of this series). Using this viewpoint, a system is considered as a means by which a
user’s intention is transformed into a perceived result. Therefore, if teams are considered in terms of how successfully they transfer energy when they function, it
should be evident that there will be parallels between their functioning and the
functioning of an engineered system — as in the P diagram for example. After all,
in many ways, a team shares similar features to the manufacturing process of a
particular product. Specifications are drawn up (objectives, time scales, etc. are
established); the production machinery is put in place (team members are selected);
the production process is designed and implemented (teams meet, establish norms,
set agendas, and engage in problem solving, decision making, and planning activities,
etc.) and the system is regulated by performance criteria (by the individual members’
expectations, assessments, performance appraisals, etc.).
In manufacturing, it is important not to separate the performance of the component from its interaction with other components and its integration into large subsystems of the whole process or product. In teams, it is important not to separate
the performance of the individual from his/her relationships to other team members,
their interactions, and their membership in sub-teams and the team as a whole —
rather it is of paramount importance to view them as a team system.
TEAM SYSTEMS
Many social psychologists only consider a collection of people to be a group if their
activities relate to one another in a systematic fashion. However, it is easier to define
a group as a collection of individuals. The word “team,” however, as mentioned in
Volume I, Part II, is reserved for those groups that constitute a system whose parts
interrelate and whose members share a common goal. Some groups can easily be
viewed according to this criterion. A soccer club, its manager, and its players
constitute a set of parts necessary to the functioning of the whole — the common
aim being to win soccer games. However, when does a newly established team
become a good or effective team? To answer this question, let us examine the team from a systems perspective.
Input
A team has an input or signal. The input is the information, energy, resources, etc.,
that enter into the system and are transformed through its structures and processes.
A broad spectrum of inputs into the system can exist and, depending on the perspective one chooses to take, the boundaries that are drawn around the system can
be more or less inclusive of these elements.
A system in which the boundary is closely defined will have only the fixed
structures and extant processes within it and will have a wide range of inputs, many
of which may enter the system simultaneously. A system that has a very broad
boundary might include people, materials, resources, and most information as a part
of the system, with the input defined very narrowly as a discrete piece of information
or energy.
Signal
The signal as developed in the Taguchi model has a more specific and limited
definition. It is an input into the system, but it is limited to the means by which the
user conveys to the system a deliberate intention to change (or adjust) the system
output. In more general terms, it is the variable to which the system must respond
in order to fulfill the user’s intent. From this perspective, most of what are traditionally considered inputs into the system, i.e., people, materials, information, and
so on, are already part of the system itself, and the signal is the discrete piece of
information that determines the amount of energy transformed by the system.
The System
The structure of a system comprises aspects of the system that are relatively static
or enduring. Process, on the other hand, refers to the behavior of the system.
Consequently, process refers to those relatively dynamic or transient aspects of a
system that are observable by virtue of change or instability. Traditional models of
a system are based upon an input-process-output model. The system acts to transform
the energy from the input into the output. This process, once established, is subject
to variation due to internal and external factors that produce “error states” or outputs
other than the desired output. These outputs can simply be wasted energy or may
actually reduce the functional ability of the system itself.
If a particular team has a task to perform, e.g., solving a problem, you can
consider the team to be a system that has inputs, output, and a process that allows
the team members to transform their energy into the desired outputs. Team process
can be defined as any activity (for example, meetings) that utilizes resources (the
team) to transform inputs (ideas, skills, and qualities of team members) into outputs
(discoveries, solutions to problems, proposals, actions, design ideas, products, etc.).
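Viewed through the quality engineering lens described above, the team system can be sketched as a simple energy-transfer accounting: the input energy, minus the losses to error states, scaled by the efficiency of the process, yields the desired output. The factor names and arithmetic below are illustrative assumptions, not a published model:

```python
# Illustrative energy-transfer accounting for a team-as-system (P-diagram view).
# The numbers and factor names are assumptions for the sketch, not measured data.

def team_response(input_energy, control_efficiency, noise_losses):
    """Split the team's input energy into desired output and error states.

    input_energy       -- total effort the team brings (arbitrary units)
    control_efficiency -- fraction (0..1) the process converts to useful output
    noise_losses       -- dict of error states, e.g. wasted meeting time
    """
    wasted = sum(noise_losses.values())
    if wasted > input_energy:
        raise ValueError("error states exceed available energy")
    useful = (input_energy - wasted) * control_efficiency
    return {"desired_output": useful, "error_states": wasted}

# A meeting with 100 units of effort; cross talk and reiterated points are
# the "error states" named in the text.
result = team_response(
    input_energy=100.0,
    control_efficiency=0.5,
    noise_losses={"cross_talk": 10.0, "reiterating_points": 15.0},
)
print(result)  # {'desired_output': 37.5, 'error_states': 25.0}
```

The point of the sketch is the bookkeeping itself: every unit of energy lost to an error state is a unit that never reaches the desired output, which is exactly the argument for making the team process robust rather than merely tolerable.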
Often the energy that the team brings to the process is not used to best effect.
For example, in a meeting, time may be wasted reiterating points because individuals
have not paid attention to what is being discussed or because there is cross talk.
This in turn leaves people annoyed and frustrated. These are examples of “error
states” or undesirable outputs from the team process.
Output/Response
In traditional systems models, the output is whatever the system transforms, produces, or expresses into the environment as a consequence of the impact its structures
and processes have on the input. An output can be anything from a newborn baby
to well-done barbecued ribs to a presentation to a tax return. This is very important
to understand because teams, by their nature, are complex and multifunctional. They
cannot and should not be configured to produce one kind of response. Most teams
will have a whole range of outputs with accompanying measures that will be used
to identify how successful they are and how effective they are in transferring energy.
The key is to identify appropriate measures that can be used to monitor the team’s
progress.
The Environment
It is important in attempting to maximize the performance of a team to identify
factors that may have an impact on the performance of the team and its ability to
maximize the transfer of its input into desired output and over which the team has
little or no control. (Remember, the output of the team will be a new design —
however defined — and it is up to the team to make that design “wanted” in the
preset environment. This is not a small feat.) These factors are designated as internal
or external to the system. It is these factors that cause energy to be wasted and
undesirable output (error states) to occur.
External Variation
In teams, external variation factors may include such things as change in team
membership, the environment in which the team is working, changing demands from
management, corporate cultural, racial, and gender factors, and so on. In developing
a group process, it is important to develop group systems and processes that are
robust to these factors. In addition, team goals exert a considerable influence on the
behavior of individual members, and goals can vary enormously. They could be
output targets that will vary in accordance with the team’s task — problem-solving
teams puzzling over the root cause of a problem; design teams considering the
optimization of a particular system design to achieve robustness; a marketing team
attempting to understand the exact details of customer requirements; or sports teams,
each of which will have an entirely different set of performance goals depending
upon the sport: soccer, football, tennis, golf, and so on.
Any analysis of working teams should take into account the objectives of the
team and the situation in which the team performs because both will have a profound
effect on the team functioning.
Internal Variation
Internal variation, on the other hand, relates to factors that are in the team system
and its members. People may bring predetermined ideas about the correct design
solution. They may have biases about other team members depending on their race,
gender, function, grade, and so on. Certain team members may not get along with
other team members and will regularly question, challenge, or contradict the others
for no apparent reason. The team may not manage its time well and consequently
may find itself chronically short of time at the end of meetings.
Team members may not know how to ask open questions that will open up fresh
avenues of information. Closed questions will result in familiar dead ends or nonproductive and previously rejected ideas. Team members may not know how to build
on the ideas of other team members and, consequently, good ideas may be regularly
lost. If the reader needs help in this area, we recommend a review of Volume I, Part II.
The Boundary
At the simplest level, boundaries can be put around almost anything, thereby defining
it as a system. In practice, the identification of the boundary is the key to successful
system analysis. The classification of factors (signal, control, and variation) that
impact on the system is dependent on the way in which the boundary is defined.
For example, by setting the boundary of the system fairly wide, to include the team
members, environment, resources, information, and so on, leaving only the directive
from the champion or the monthly output target outside, more factors would be
considered as control factors and fewer as variation. In this case, the directive from
the champion would be the signal factor. The team members, environment (or aspect
of it), and so on would be control factors.
External variations would then include disruptions to the team process from
sources outside the team boundary. Internal variations would include attitudinal,
cultural, and intellectual variations among and between team members and variations
in environmental conditions (e.g., temperature). By setting a narrower boundary,
many of the factors such as environment and resources would be considered external
to the system and therefore would become noise factors rather than control factors.
These issues are important because they determine the team’s strategy for dealing
with variations and establishing a means of becoming robust to them.
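The effect of the boundary decision can be made concrete. In the sketch below (an illustration only; the factor names follow the examples in the text), the same factors are classified as signal, control, or noise depending on where the boundary is drawn:

```python
# Classify system factors as signal / control / noise depending on where
# the team-system boundary is drawn (names follow the text's examples).

def classify_factors(factors, boundary, signal):
    """Return each factor's role given a chosen boundary.

    factors  -- iterable of factor names under consideration
    boundary -- set of factors placed inside the system (controllable)
    signal   -- the one input the user adjusts deliberately
    """
    roles = {}
    for f in factors:
        if f == signal:
            roles[f] = "signal"
        elif f in boundary:
            roles[f] = "control"
        else:
            roles[f] = "noise"
    return roles

factors = ["champion directive", "team members", "environment", "resources"]

# Wide boundary: team members, environment, and resources are control factors.
wide = classify_factors(
    factors,
    boundary={"team members", "environment", "resources"},
    signal="champion directive",
)

# Narrow boundary: environment and resources fall outside and become noise.
narrow = classify_factors(
    factors,
    boundary={"team members"},
    signal="champion directive",
)

print(wide["environment"])    # control
print(narrow["environment"])  # noise
```

The classification is not cosmetic: a factor labeled "control" is something the team plans to manage, while a factor labeled "noise" is something the team must be made robust against.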
CONTROLLING A TEAM PROCESS: CONFORMANCE IN TEAMS
A tale in Hellenic mythology describes the behavior of Procrustes, an innkeeper on the Corinthian peninsula. Procrustes took his clientele, people of definite natural shape and size, and either stretched or truncated their limbs so that they might fit the mattresses he provided. There are many echoes here of the original approach to quality, "We know what you want, we will design it, you will buy it, and you will like it." Or the now famous quality euphemism, "We are not sure what quality really is, but we sure know it when we see it."
Fortunately, this philosophy is being transformed into a “customer-driven
approach” and the pursuit of Total Quality Excellence through DFSS. It is not entirely
unreasonable, therefore, when it comes to monitoring groups or teams, to identify
an alternative to the current emphasis on fitting the behavior of team members into
behavioral roles through a “Procrustean” method, that is, by squeezing identity and
function into personality models like those of Belbin, Myers-Briggs, Bion, and so
on, through normalization and pressure to conform. Remember, one of the diversity
issues is the fact that everyone is different and we are all much better because of
that difference.
This is particularly the case when old norms are not questioned and challenged
regularly or when personality models are used to avoid genuine personal contact or
in place of a genuine understanding of the uniqueness of others.
STRATEGIES FOR DEALING WITH VARIATION
There are four basic strategies for dealing with variation and its effect on the
performance of a system: ignore the variation, attempt to control or eliminate the
variation, compensate for the variation, or minimize the effect of the variation by
making the system robust to it. Adopting the first of these strategies would mean
accepting that teams will never function efficiently, but hoping that they will “do
the best they can under the circumstances.” As with an engineering system, this
strategy would result in a lot of unhappy customers.
Generally, with engineering systems, you are encouraged to adopt strategy four
first, reverting only to strategies two and three as a last resort because they are
difficult and expensive to implement. While strategy four should also be chosen in
the case of the team system wherever possible, you have greater flexibility in many
cases to consider the other two options.
Controlling or Eliminating Variation
Procrustes’ behavior is an example of controlling inner variation. While this approach
to variation might be considered extreme, you may have some scope for selecting
team members with the right characteristics for effective teamwork as well as the
necessary technical expertise.
External variations are perhaps a little easier to deal with. For example, you
could ensure that meetings are held away from the shop floor to reduce distractions
due to noise (in the audible sense!) or hold them at an off-site location to minimize
interruption.
Compensating for Variation
The principal means of compensating for variation is by providing some feedback
on its effect on system output. The link between structure and process —the way
in which structure determines process, and for your purposes perhaps more importantly, the way that process determines structure — is found in the concept of
feedback loops. Feedback loops are so named because they are circular interrelationships that feed information from output back to input.
Information is transmitted within the system and is used to maintain stability,
to bring about structural changes, and to facilitate interaction with other systems.
Prerequisites to Design for Six Sigma (DFSS)
Even the simplest model of the effective team includes this concept of feedback
loops. By employing information feedback loops, systems may behave in ways that
can be described as “goal seeking” or “purposive.”
Negative feedback allows a system to maintain stability as in the case of the
most commonly quoted example, a thermostat. A thermostat is controlled by negative
feedback so that when the temperature increases above a certain level the heating
is switched off, but when the temperature decreases sufficiently the heating is
switched on. The process of maintaining stability is called “homeostasis.” The
capacity for such control is engineered into some mechanical systems and occurs
naturally in all biological and social systems. Threats to the stability of the system
will be countered in a powerful attempt to maintain homeostasis.
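The thermostat's negative feedback loop can be sketched in a few lines of Python; the temperatures, thresholds, and heating rates below are invented purely for illustration:

```python
def thermostat_step(temp, heating_on, low=19.0, high=21.0):
    """Return the heater state after comparing temp to the thresholds."""
    if temp > high:
        return False   # too warm: negative feedback switches heating off
    if temp < low:
        return True    # too cool: negative feedback switches heating on
    return heating_on  # within the band: leave the heater as it is

def simulate(hours=48, start_temp=15.0):
    temp, heating_on = start_temp, False
    history = []
    for _ in range(hours):
        heating_on = thermostat_step(temp, heating_on)
        # Heating adds warmth; the room always loses some heat outdoors.
        temp += (1.5 if heating_on else 0.0) - 0.5
        history.append(temp)
    return history

trace = simulate()
# After the initial warm-up, the loop holds the temperature near the band.
print(min(trace[20:]), max(trace[20:]))
```

This is homeostasis in miniature: the output (room temperature) feeds back to control the input (heating), so disturbances are countered automatically.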
System Feedback
One alternative approach is to monitor those aspects of team behavior that are
observable (i.e., gather “the voice of the process”). Descriptive Feedback offers a
non-judgmental method of monitoring what happens in working groups. It allows
team members to notice when team process is in control and meeting or exceeding
predetermined expectations or drifting out of control and reducing potential. Descriptive Feedback provides three basic functions:
1. It makes explicit what is happening during team process.
2. It describes those characteristics of team process behavior, relationships,
and feelings that may degrade or go out of control and inhibit the potential
of the team.
3. It determines what, if anything, needs to be changed in order to facilitate
continuous improvement in team process.
Feedback over time enables a team to establish performance-based control limits.
By using these data, specific characteristics or variables relating to team process can
be plotted over time. This will identify patterns that emerge and that can be used to
identify and capture the degree of variability of the team. Some patterns are related
to “in control” conditions, others to “out of control” conditions, just as the patterns
of points on a control chart can be used to establish whether a manufacturing process
is in control or out of control.
Based on feedback that describes what people notice and how they feel, the
team is able to regulate its process and identify opportunities for improvement.
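As an illustration only (the record structure and field names are our own, not part of any DFSS standard), Descriptive Feedback can be treated as neutral data that the team reviews over time:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    meeting: int
    behavior: str      # what was noticed, stated neutrally
    feeling: str       # how it felt to the observer, without judgment

def summarize(log):
    """Group observations by meeting so patterns can be reviewed over time."""
    by_meeting = {}
    for obs in log:
        by_meeting.setdefault(obs.meeting, []).append(obs.behavior)
    return by_meeting

log = [
    Observation(1, "agenda item 3 ran 20 minutes over", "rushed"),
    Observation(1, "two members did not speak", "uneasy"),
    Observation(2, "all agenda items closed on time", "satisfied"),
]
print(summarize(log))
```

The point of the sketch is that nothing in the record evaluates anyone; the data simply describe what happened, leaving the team free to regulate its own process.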
Minimizing the Effect of Variation
The Parameter Design approach used in quality engineering — see Volume V of this
series — is concerned with minimizing the effect of variation factors by making the
system robust. This involves identifying control factors — in this case, aspects of
the team process that are within the control of the team and that can be used to
reduce the impact of variation factors without eliminating or controlling the variation
factors themselves. An example of a “control factor” functioning in this way is the
use of Warm-Up and its consideration of “place” (layout, heating, lighting, ventilation) so that best use is made of the facility provided and distractions are minimized,
even though the place itself and many of its features cannot be changed.
The key to a successful team lies not only in identifying those parameters that
are critical for the efficient transformation of inputs to the team process into outputs
but also in doing this with minimal loss of energy in error states and maximum
robustness to variation factors in the environment. Different types of teams with
different outputs required of them would have different parameters established for
their most efficient performance. Many of the structures, processes and skills that
could be used as control factors in a team process have been identified in Volume I,
Part II of this series.
Through this process of observation, it is possible to establish control limits in
a wider area of team performance. A number of the factors that have an impact on
team performance can be observed and regulated through feedback, and “tolerance”
for them can be established depending upon the makeup and objective of the team.
These factors include warming up and down, place, task, maintenance, process
management, team roles, agenda management, communication skills, speaking
guidelines, meeting management, exploratory thinking guidelines, experimental
thinking guidelines, change management, action planning, and team parameters.
The traditional approach to engineering waits until the end of the design process
to address the optimization of a system’s performance — in other words, after
parameter values are selected and tolerances determined, often at the extremes of
conditions and often without considering interactions among different components
or subsystems. When the components and subsystems are integrated, if performance does not meet the target value or the customer’s requirements, parameter
values are altered. Consequently, though the system may be adjusted to operate
within tolerance, this process does not guarantee that the system is producing its
ideal performance.
Similarly, traditional approaches to building teams have selected team members
according to a number of factors: predetermined skills and knowledge, established
roles for team members, and implemented structured norms. They also have waited
until the end of the process of team design in order to optimize performance. If the
team does not perform within the accepted values of these parameters, then it is
adjusted: team members are changed, roles are redefined, norms are more strictly
enforced. This, however, is against performance criteria that do not necessarily
optimize the team’s performance nor add to the motivation or job satisfaction of the
team members.
The shift suggested in Parameter Design in engineering (and that may be applied
to teams as well) is to move from establishing parameter values to identifying those
parameters that are most important for the function of the process and then determining
through experimental design the correct values for those parameters. The key is to
establish the values that use the energy of the system most efficiently and that are
resistant to uncontrollable impact from other factors internal or external to the system
itself.
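The Parameter Design idea of choosing values that are robust to uncontrollable noise can be sketched as follows; the process model and noise distribution are invented purely for illustration:

```python
import random
import statistics

def response(control, noise):
    # Invented process model: the output is disturbed by noise, and
    # larger control-factor values damp the disturbance.
    return 10.0 + noise / (1.0 + control)

def robustness(control, trials=200):
    """Standard deviation of the output across random noise conditions."""
    rng = random.Random(0)  # fixed seed: same noise for every candidate
    outputs = [response(control, rng.gauss(0.0, 1.0)) for _ in range(trials)]
    return statistics.stdev(outputs)

candidates = [0.0, 1.0, 4.0]
best = min(candidates, key=robustness)
print(best)  # the setting that makes the output least sensitive to noise
```

In a real Parameter Design study the candidate settings would come from a designed experiment (see Volume V), but the selection logic is the same: prefer the setting whose output varies least under noise.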
MONITORING TEAM PERFORMANCE
One way of monitoring team performance has already been suggested, namely the
use of Descriptive Feedback. Gathering “the voice of the process” enables the team
to evaluate its performance and to continuously improve its efficiency and hence its
effectiveness, before completing the task. Preliminary work in using process-control
charting from Statistical Process Control suggests that there is opportunity for
application to group process. This provides a second means of monitoring and
continuously improving the team’s performance. Critical “control factors,” identified
using the Parameter Design approach, could be measured and monitored in this way.
Based upon further refinement, it may be possible to establish control limits, targets,
and tolerances for these factors.
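A minimal sketch of such control limits, in the spirit of an SPC chart applied to a team metric (the metric and all numbers are invented):

```python
import statistics

def control_limits(baseline):
    """Mean +/- 3 standard deviations from baseline observations."""
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return mean - 3 * sd, mean + 3 * sd

def out_of_control(values, limits):
    """Observations falling outside the control limits."""
    low, high = limits
    return [v for v in values if v < low or v > high]

# Minutes each of ten baseline meetings ran over schedule.
baseline = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5]
limits = control_limits(baseline)
print(out_of_control([5, 6, 25, 4], limits))  # 25 is an out-of-control signal
```

Just as on a manufacturing control chart, a point outside the limits is a signal to investigate the team process, not a judgment of any individual.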
System Interrelationships
A systems model of processes differs from traditional models in many ways, one
of which is the notion of circular causality. In the non-systems view, every event
has its cause or causes in preceding events and its effects on subsequent events: the
scientist seeks the cause or effect. Using the linear method of causality, ultimate
causes are sought by tracing back through proximate causes. However, many phenomena do not “fit” the linear model: the relationships between them — and the
relationships between the attributes or characteristics of the elements — do not
conform to this linear approach to causality.
In engineering systems, a direct cause and effect relationship often exists
between the component of the system and the transformation of the input into an
output. A steering wheel channels the input of the vehicle operator directly into the
output of the system. That is, turning the steering wheel to the right or left actually
turns the wheels of the vehicle to the right or the left. However, it is equally clear
that error states or phenomena are nowhere near as simple or linear in the causal
relationship. Feedback loops and circular causality create very complex interactions.
Similarly, the choice of lubricants may not affect the performance of the system
until months or years later, when early deterioration of a transmission would result
in difficulty shifting gears.
Similarly, in teams, some cause and effect relationships are clearly related in
time and others are not. Interventions by a timekeeper will affect the ability of the
team to stick to its agenda. But other factors have more circular relationships. In a
global problem-solving team, changing seating arrangements from the long-tabled
boardroom style to a circular arrangement will result in more universal eye contact
among team members, which may increase the team’s communication. This leads
to enhanced exchange of information, which may lead to a clearer identification of
the problem which will, in turn, lead to a more targeted search for relevant data,
which will finally lead to a root-cause identification for the problem. Changing the
seating arrangement may enhance finding a root cause more quickly than might have
been the case in boardroom seating, and the cause and effect chain may be quite
intricate.
SYSTEMS ENGINEERING
An emerging basis for unifying and relating the complexities of managerial problems
is the system concept and its methodology. This concept has been applied more to
the analysis of productive systems than to other fields, but it is clear that the value
of the concept in management is pervasive.
The word “system” has become so commonplace in the general literature as
well as in the field that one often wants to scream, for its common use almost
depreciates its value. Yet the word itself is so descriptive of the general interacting
nature of the myriad of elements that enter managerial problems that we can no
longer talk of complex problems without using the term “systems.” Indeed, we must
learn to distinguish the general use of the term from its specific use as a mode of
structuring and analyzing problems.
One of the great values of the system concept is that it helps us to take a very
complex situation and lend order and structure to it by using statistics, probability,
and mathematical modeling. A major contribution of the concept is the reduction of
complexity in managerial problems to a block diagram showing the relationship and
interacting effects of the various elements that affect the problem at hand. At its
present state of development and application, the systems concept is most useful in
helping us gain insight into problems. At a second and very powerful level of
contribution, however, systems analysis is gaining prominence as a basis for generating solutions to problems and evaluating their effects, and for designing alternate
systems.
“SYSTEMS” DEFINED
We have been using the term systems without defining it. Though nearly everyone
may have a general understanding of the term, it may be useful to be more precise.
Webster defines a system as a regularly interacting or interdependent group of items
forming a unified whole. Thus a system may have many components and objects,
but they are united in the pursuit of some common goal. They are in some sense
unified, organized, or coordinated. The components of a system contribute to the
production of a set of outputs from given inputs that may or may not be optimal or
best with respect to some appropriate measure of effectiveness. Systems are often
complex, although the definition does not specify that they need to be.
It is probably correct to say that some of the most interesting systems for study
are complex and that a change in one variable within the system will affect many
other variables of the system. Thus in productive systems, a change in production
rate may affect inventories, hours worked per week, overtime hours, facility layout,
and so on. Understanding and predicting these complex interactions among variables
is one of our main objectives in this section.
One of the elusive aspects of the systems concept is in the definition of a specific
system. The fact that we can define the system that we wish to consider and draw
boundaries around it is important. We can then look inside the defined system to
see what happens, but it is just as important to see how the system is affected by
its environment.
Thus, invariably, every system can be thought of as a part of an even larger
system. One of the dangers of defining systems that are too narrow in scope is that
we may fail to see broader implications. On the other hand, a broad definition runs
the risk of leaving out important details involved in the functioning of the system.
Obviously, there is a large element of “art” in the application of systems concepts.
Systems can be open or closed. An open system is one characterized by outputs
that respond to inputs but where the outputs are isolated from and have no influence
on the inputs. An open system is not aware of its own performance. In an open
system, past performance does not control future performance.
A closed system (sometimes called a feedback system), on the other hand, is
influenced by its own behavior. A feedback system has a closed loop structure that
brings results from past action of the system back to control future action. There
are two types of feedback systems: the negative feedback, which seeks a goal and
responds as a consequence of failure to achieve the goal, and the positive feedback,
which generates growth processes wherein action builds a result that generates still
greater action. Unfortunately most of the feedback systems in managerial problems
are of the negative feedback type where the objective is to control a process.
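The open/closed distinction can be illustrated with a toy sketch (the gains and goal values are arbitrary):

```python
def open_system(inputs, gain=0.5):
    # Open: output responds to input, but past output has no influence.
    return [gain * x for x in inputs]

def closed_system(goal, steps=20, gain=0.5, start=0.0):
    # Closed (negative feedback): each action responds to the failure
    # to reach the goal, so past performance controls future action.
    state, history = start, []
    for _ in range(steps):
        state += gain * (goal - state)  # act on the remaining error
        history.append(state)
    return history

print(open_system([10, 10, 10]))  # fixed response, no self-correction
print(closed_system(10.0)[-1])    # converges toward the goal of 10
```

The open system repeats the same response forever; the closed system is "aware" of its own performance and steers itself toward the goal.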
IMPLICATIONS OF THE SYSTEMS CONCEPT FOR THE MANAGER
Managers who put the systems concept to work are rewarded initially by the development of a deeper understanding of the systems that they manage. By developing
the structure of the interacting effects of system components and the various feedback
control loops in the system, managers can see better which “handles” to twist in
order to keep themselves in control. Indeed, with a knowledge of the system structure, a manager can see how it might be possible to restructure the system in order
to create the most effective feedback control mechanisms.
With the availability of large-scale system models (simulation, statistical, reliability, and mathematical models) a manager is better able to assess the effects of
changes in one division component on another and on the organization as a whole.
Furthermore, the managers of any of the productive operations are better able to see
how their units fit into the whole and to understand the kinds of trade-offs that are
often made by higher level management and that sometimes seemingly affect one
unit adversely.
Perhaps one of the most important contributions of systems thinking is in the
concept of suboptimization. Suboptimization often occurs when one views a problem
narrowly. For example, one can construct mathematical formulas to determine the
minimum cost (optimum) quantity of products or parts to manufacture at one time,
which results in a supposedly optimum inventory level. If one broadens the definition
of the system under study, however, and includes not just the inventory and reorder
subsystem but the production and warehousing subsystems as well, it may turn out
that the inventory-connected costs are a measure of only part of the problem. If the
product exhibits seasonal sales, the costs of changing production levels may be
significant enough to warrant carrying extra inventories to smooth production and
employment. In such a situation, the minimum cost inventory model would be a
suboptimal policy.
Organizational suboptimization often occurs when the production and distribution functions of an enterprise are operated as essentially two different businesses.
The factory manager will be faced with minimizing production cost while the
sales/distribution manager will be faced mainly with an inventory management,
shipping, and customer service problem. If each suborganization attempts to optimize separately, the likely result is a combined cost somewhat larger than if the
attempt were made to optimize the combined system. The reasons are fairly obvious,
since in minimizing the costs of inventories, the sales function transmits directly to the
factory most of the effects of sales fluctuations instead of absorbing these fluctuations
through buffer inventories. Suboptimization is the result. By coordinating the efforts of
the production and distribution managers, however, it may be possible to achieve some
balance between inventory costs and the costs of production fluctuation.
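The production/distribution example can be made concrete with invented numbers; the cost model below is a deliberate simplification:

```python
CHANGE_COST = 40   # cost per unit of production-rate change (invented)
HOLD_COST = 2      # cost per unit held in inventory per month (invented)

def total_cost(sales, plan):
    """Combined cost of production-rate changes plus inventory holding."""
    change = sum(abs(plan[i] - plan[i - 1]) for i in range(1, len(plan)))
    inventory, hold = 0, 0
    for made, sold in zip(plan, sales):
        inventory += made - sold
        hold += max(inventory, 0)
    return CHANGE_COST * change + HOLD_COST * hold

sales = [10, 10, 30, 30, 10, 10]   # seasonal demand
chase = sales                      # match sales exactly: zero inventory cost
level = [20] * 6                   # constant rate, buffered by inventory
print(total_cost(sales, chase), total_cost(sales, level))
```

Chasing sales minimizes the inventory subsystem's cost in isolation, yet the level plan, which deliberately carries inventory, is far cheaper for the combined system. That is suboptimization in miniature.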
Another way to view suboptimization is through the “hidden factory” — the
terminology of six sigma. If we take for example the issue of safety, let us examine
what is really at stake. No one will deny that the bottom line of all safety programs
is injury prevention, more often called “loss control.” To appreciate the concept of
“loss control,” however, we must look at the direct and indirect costs (often called
the hidden costs) associated with an on-the-job injury.
The direct costs are:
1. Medical
2. Compensation
The indirect costs are:
1. Time lost from work by injured
2. Loss in earning power
3. Economic loss to injured’s family
4. Lost time by fellow workers
5. Loss of efficiency due to breakup of crew
6. Lost time by supervision
7. Cost of breaking in new worker
8. Damage to tools and equipment
9. Time damaged equipment is out of service
10. Spoiled work
11. Loss of production
12. Spoilage — fire, water, chemical, explosives, and so on
13. Failure to fill orders
14. Overhead cost (while work was disrupted)
15. Miscellaneous (There are at least 100 other items of cost that appear one or more times with every incident in which a worker is injured.)
The point here is that with most injuries the focus becomes the direct cost,
thereby dismissing the indirect costs. It has been estimated time and again that the
cost relationship of direct to indirect cost is one to three, yet we continue to ignore
the real problems of injury. An appropriate system design for injury prevention would
minimize if not eliminate the hidden costs. Generally speaking, the system should
include (a) engineering, (b) education, and (c) enforcement considerations. Some
specific considerations should be:
1. Workers will not be injured or killed
2. Property and materials will not be destroyed
3. Production will flow more smoothly
4. You will have more time for the other management duties of your job
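The one-to-three direct-to-indirect cost relationship cited above translates into simple arithmetic (the direct cost figure is invented):

```python
DIRECT_TO_INDIRECT = 3  # hidden (indirect) cost is roughly 3x the direct cost

def total_injury_cost(direct):
    """Full loss from an injury once hidden costs are counted."""
    indirect = DIRECT_TO_INDIRECT * direct
    return direct + indirect

print(total_injury_cost(10_000))  # a $10,000 injury really costs $40,000
```

In other words, the visible medical and compensation bill is only about a quarter of the real loss, which is why focusing on direct costs alone understates the problem so badly.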
DEFINING SYSTEMS ENGINEERING
A simple definition of systems engineering is: A customer/requirements–driven
engineering and management process which transforms the voice of the customer(s)
into a feasible and verified product/process of appropriate configuration, capability,
and cost/price. A system is always greater than the sum of its parts and is no better
than the weakest link. The derivative of that, of course, is that optimizing a part
does not optimize the whole. This was brought out by Mayne et al. (2001), when
they reported that 37% of all automotive warranty claims for model year 2000 involved the interfaces between parts rather than individual components. The message of Mayne and
coworkers and most of us in the quality field has been and continues to be: interactions determine the performance of the system. We cannot, no matter how hard
we try, fully understand the whole by breaking down and analyzing parts — yet
design is historically done precisely that way.
Systems engineering builds on the fact that the whole is the most important
entity and that integration to meet cost, schedule, and technical performance is
dependent on both technical and management intervention. Ultimately, systems
engineering is a team-based activity. This is very important because as we move
into the future we see that:
1. Quality is becoming more customer dependent rather than definitional
from the provider’s point of view. In other words, we must specify what
the product or service must do and how well it must do it, then verify the
design to those requirements.
2. Products/services are becoming more sophisticated (complex).
3. Traditionally, product development has been very serial with designs thrown
over the imaginary wall to manufacturing — something that today is not
working very well. This has resulted in late changes and ultimately higher
costs. Systems engineering is based on the notion that design may proceed as a parallel development process, with strong consideration for the product's total life
cycle — manufacture, delivery, maintenance, decommission, and recycling.
For systems engineering to be effective in any organization, that organization
must be committed to integration of several items including timing of development
and specific delivery(ies) at specific milestones. A generic approach to facilitate this
is the following model, involving the steps of pre-feasibility analysis, requirement
analysis, design synthesis, and verification.
PRE-FEASIBILITY ANALYSIS
Before the actual analysis takes place there is a preliminary trade-off analysis as to
what the customer needs and wants and what the organization is willing to provide.
This is done under the rubric of preliminary feasibility. When the feasibility is
complete, then the actual requirement analysis takes place.
REQUIREMENT ANALYSIS
The requirement analysis involves the following steps:
1. Collect the requirements — the customer’s needs, wants, and expectations
are collected at every level.
2. Organize requirements — group the information in such a way that requirements are easy to address. Determine if the requirements are complete.
3. Translate into more precise terms — cascade the definitions to precise
terms, honing their definition to the best possible correlation of real world
usage.
4. Develop verification requirements — preliminary verification tests are
discussed and proposed here to make sure that they are in fact doable.
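The four steps above can be sketched as a data structure; all field names and example requirements are our own illustration, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    raw: str                  # the customer's own words (collect)
    group: str                # organized category (organize)
    precise: str = ""         # translated, measurable statement (translate)
    verification: str = ""    # proposed, doable test (verify)

def incomplete(requirements):
    """Requirements still lacking a precise form or a verification plan."""
    return [r for r in requirements if not r.precise or not r.verification]

reqs = [
    Requirement("door should close with a nice sound", "perceived quality",
                precise="closing sound level 55-60 dB at 1 m",
                verification="acoustic rig test"),
    Requirement("easy to park", "handling"),
]
print([r.raw for r in incomplete(reqs)])  # ['easy to park'] still needs work
```

Carrying every requirement through all four fields is one way to check completeness before moving on to design synthesis.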
At the end of the requirement analysis the results are moved to the second stage
of the system engineering model, which is design synthesis. However, before the
synthesis actually takes place, another feasibility analysis is completed to find out
whether the organization is capable of designing the requirements of the customer.
This feasibility analysis takes into consideration the organization’s knowledge from
previous or similar designs and incorporates it into the new design. The idea of this feasibility analysis is to make sure the designers optimize reusability and carry over parts
and/or complete designs.
DESIGN SYNTHESIS
Design synthesis involves the following steps:
1. Generate alternatives — the more alternatives the better. The alternatives
are generated with functionality in mind from the customer’s perspective
as reflected in the system specifications. Remember that the ultimate
design is indeed a trade-off design.
2. Evaluate alternatives — the generated alternatives are evaluated with
appropriate benchmarking data and integrated into the design based on
the customer’s requirements.
3. Generate sub element requirements — big chunks or sub elements are
chosen and requirements cascaded to each sub element. As the cascading
process continues, verification requirements are also developed to test the
overall system integrity as more and more sub elements are integrated
into the total system.
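Evaluating alternatives against weighted customer requirements might look like the following sketch; the weights, criteria, and scores are invented for illustration:

```python
# Weights reflect the customer's priorities and must sum to 1.0.
WEIGHTS = {"cost": 0.4, "performance": 0.4, "reusability": 0.2}

def weighted_score(scores):
    """Trade-off score: weighted sum across the customer's criteria."""
    return sum(WEIGHTS[k] * v for k, v in scores.items())

alternatives = {
    "carryover design": {"cost": 9, "performance": 6, "reusability": 9},
    "clean-sheet design": {"cost": 4, "performance": 9, "reusability": 3},
}
best = max(alternatives, key=lambda name: weighted_score(alternatives[name]))
print(best)  # the carryover design wins this particular trade-off
```

Note that the winner is not best on every axis; the ultimate design is indeed a trade-off design, as the text says.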
At the end of the design synthesis a very important analysis takes place. This
analysis tests the integrity of the design against the customer’s requirements. If it is
found that the requirements are not addressed (design gap), a redesign or a review
takes place and a fix is issued. If, on the other hand, everything is as planned, the
process moves to the third link of the model — verification.
VERIFICATION
The final stage is verification. It involves the following:
1. Verify that requirements are complete — a review of all requirements
from both design and the customer takes place, with tests correlated to real-world usage.
2. Verify that design meets customer’s requirements — CAE tools, labs, rigs,
simulations, and key life testing are some of the verification methodologies used at this stage. The intent here is to verify that the selected system
and cascaded requirements will meet the customer’s requirements and
provide a balanced optimum design from the customer’s perspective.
At the end of this stage, if problems are found, they revert back
to the design; if there are no problems, the design goes to manufacturing, with a
design ready to fulfill the customer’s expectations. This final stage in essence tests
the integrity of the design against the actual hardware. In other words, the questions
often heard in verification are: Does the design work? Can you prove it?
The beauty of this model is that it is an iterative model, meaning that the
process — no matter where you are in the model — iterates until a balanced optimum
design is achieved. This is because the goal is a customer-friendly design
with compatibility, carryover, reusability, and low complexity requirements compared to other, similar designs. Iterations happen because of human oversights,
poorly defined requirements, or an increase in knowledge.
Another special characteristic of systems engineering is the notion of traceability.
Traceability is reverse cascading and is used throughout the design process to make
sure that the voices of the customer, regulator, and corporate or lower-level design
are heard and accounted for in the overall design. With traceability, extra caution is
given to the trade-off analysis. This is because by definition trade-off analysis
accounts for designs with certain priority levels among the needs and wants of the
customer. In a trade-off analysis, we choose among stated design alternatives.
However, a trade-off analysis is also an iterative process, and usually none of
the alternatives is perfect [R(t) = 1 – F(t)]. This is important to remember because
all trade-off analyses assess risk, both external and internal, of the given alternatives
so as to make robust designs.
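The bracketed identity R(t) = 1 − F(t) can be computed directly; the exponential failure model and the rate below are assumptions chosen only for illustration:

```python
import math

def reliability(t, rate=0.01):
    """R(t) = 1 - F(t), assuming exponential time-to-failure."""
    failure_cdf = 1.0 - math.exp(-rate * t)   # F(t): probability of failure by t
    return 1.0 - failure_cdf                  # equals exp(-rate * t)

print(round(reliability(100), 3))  # about 0.368 survival probability at t = 100
```

Since F(t) is never zero for t greater than zero, R(t) is never 1: some failure risk always remains, which is exactly why trade-off analyses must assess the risk of every alternative.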
A final word about verification and systems engineering: As we already mentioned, the intent of verification is to make sure that the hardware meets the requirements of the design. The process for conducting this verification is done —
generally — in five steps, which are:
1. Plan — Review all requirements and make a preliminary assessment as
to their impact. At the end of this evaluation, take ownership of important
requirements and begin the assessment of specific tests and methods. It
is not unusual at this stage to review the plan again and perhaps combine,
consolidate, or even adjust the plan completely. In this stage we also select
attribute data, monitor the “unselected” requirements, schedule
preliminary tests, and approve the testing schedule. As you begin to zero
in on specific targets, you may want to take into consideration features
of the proposed design and benchmarking data so that your targets become
of value to the customer. If this plan is rich in information, it is possible
to begin predicting and formulating prototype(s).
2. Execute — In this stage, the engineer in charge will determine which
test(s) to run, when to run them, what the data should look like, and what
to expect. Proper test execution is of importance here.
3. Analyze/revise — Analyze the test results, and see if the design has
changed in any way. Determine whether to redo the test if the design
changed during the test for any reason. At this stage you expect no design
changes, only testing revisions and modifications.
4. Sign-off — This is the most common ending for a verification process. In
this stage final approvals are given, usually several months before production begins.
5. Archive — This is a step that most engineers do not do, yet it is a very
important step in the process. The idea of archiving or documenting is to
make sure that key events are appropriately documented for future use. You
may want to document unusual tests, time frames of specific tests, or any
specific requirements that you had the intention of verifying but could not
verify using the planned method. In essence, this phase of verification consists of lessons learned that need to be carried forward to the next design.
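The five steps can be sketched as a simple status progression; the step names come from the text, but the tracking structure is our own illustration:

```python
STEPS = ["plan", "execute", "analyze/revise", "sign-off", "archive"]

def advance(status):
    """Move a requirement to the next verification step, if any remain."""
    i = STEPS.index(status)
    return STEPS[min(i + 1, len(STEPS) - 1)]

status = "plan"
for _ in range(4):
    status = advance(status)
print(status)  # a fully verified requirement ends at "archive"
```

Making "archive" an explicit, final step in the tracker is one way to keep it from being skipped, which the text notes is the most commonly neglected part of verification.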
The focus of this process is to make sure that the requirements, not the tests, are driving the process, regardless of how sophisticated the tests are. To be sure, tests
are an integral part of verification, but they are the means not the end. The intent
of the tests is to verify each requirement, and there is no wrong way as long as the
testing method is linked to real world usage. The reason for doing all this is to:
1. Reduce workload in design verification
2. Improve prototype and testing efficiency by avoiding duplication
3. Improve testing quality, resulting in higher sign-off confidence
4. Improve communication and strengthen relationships between customers and suppliers
ADVANCED QUALITY PLANNING
Before we address the “why” of planning, we assume that things do go wrong. But
why do they go wrong? Obviously, many specific answers address this question.
Often the answer falls into one of these four categories:
Prerequisites to Design for Six Sigma (DFSS)
1. We never have enough time, so things are omitted.
2. We have done this, this way, in order to minimize the effort.
3. We assume that we know what has been requested, so we do not listen
carefully.
4. We assume that because we finish a project, improvement will indeed
follow, so we bypass the improvement steps.
In essence then, the customer appears satisfied, but a product, service, or process
is not improved at all. This is precisely why it is imperative for organizations to
look at quality planning as a totally integrated activity that involves the entire
organization. The organization must expect changes in its operations by employing
cross-functional and multidisciplinary teams to exceed customer desires — not just
meet requirements. A quality plan includes, but is not limited to:
• A team to manage the plan
• Timing to monitor progress
• Procedures to define operating policies
• Standards to clarify requirements
• Controls to stay on course
• Data and feedback to verify and to provide direction
• An action plan to initiate change
Advanced quality planning (AQP), then, is a methodology that yields a quality
plan for the creation of a process, product, or service consistent with customer
requirements. It allows for maximum quality in the workplace by planning and
documenting the process of improvement. AQP is the essential discipline that offers
both the customer and the supplier a systematic approach to quality planning, to
defect prevention, and to continual improvement. Some specific uses are:
1. In the auto industry, demand is so high that Chrysler, Ford, and General
Motors have developed a standardized approach to AQP. That standardized
approach is a requirement for QS-9000 and/or ISO/TS 16949 certification.
In addition, each company has its own way of measuring success
in the implementation and reporting phases of AQP tasks.
2. Auto suppliers are expected to demonstrate the ability to participate in
early design activities from concept through prototype and on to production.
3. Quality planning is initiated as early as possible, well before print release.
4. Planning for quality is needed particularly when a company’s management
establishes a policy of “prevention” as opposed to “detection.”
5. When you use AQP, you provide for the organization and resources needed
to accomplish the quality improvement task.
6. Early planning prevents waste (scrap, rework, and repair), identifies
required engineering changes, improves timing for new product introduction, and lowers costs.
7. AQP is used to facilitate communication with all individuals involved in
a program and to ensure that all required steps are completed on time at
acceptable cost and quality levels.
8. AQP is used to provide a structured tool for management that enforces
the inclusion of quality principles in program planning.
WHEN DO WE USE AQP?
We use AQP when we need to meet or exceed expectations in the following situations:
1. During the development of new processes and products
2. Prior to changes in processes and products
3. When reacting to processes or products with reported quality concerns
4. Before tooling is transferred to new producers or new plants
5. Prior to process or product changes affecting product safety or compliance
to regulations
The supplier — as in the case of certification programs such as ISO 9000, QS-9000, ISO/TS 16949, and so on — is to maintain evidence of the use of defect prevention
techniques prior to production launch. The defect prevention methods used are to
be implemented as soon as possible in the new product development cycle. It follows
then, that the basic requirements for appropriate and complete AQP are:
1. Team approach
2. Systematic development of products/services and processes
3. Reduction in variation (this must be done, even before the customer
requests improvement of any kind)
4. Development of a control plan
As AQP is continuously used in a given organization, the obvious need for its
implementation becomes stronger and stronger. That need may be demonstrated
through:
1. Minimizing the present level of problems and errors
2. Yielding a methodology that integrates customer and supplier development activities as well as concerns
3. Exceeding present reliability/durability levels to surpass the competition’s
and customer’s expectations
4. Reinforcing the integration of quality tools with the latest management
techniques for total improvement
5. Exceeding the limits set for cycle time and delivery time
6. Developing new and improving existing methods of communicating the
results of quality processes for a positive impact throughout the organization
WHAT IS THE DIFFERENCE BETWEEN AQP AND APQP?
AQP is the generic methodology for all quality planning activities in all industries.
APQP is AQP; however, it emphasizes the product orientation of quality. APQP is
used specifically in the automotive industry. In this book, both terms are used
interchangeably.
HOW DO WE MAKE AQP WORK?
There are no guarantees for making AQP work. However, three basic characteristics
are essential and must be adhered to for AQP to work. They are:
1. Activities must be measured based on who, what, where, and when.
2. Activities must be tracked based on shared information (how and why),
as well as work schedules and objectives.
3. Activities must be focused on the goal of quality-cost-delivery, using
information and consensus to improve quality.
As long as our focus is on the triad of quality-cost-delivery, AQP can produce
positive results. After all, we all need to reduce cost while we increase quality and
reduce lead time. That is the focus of an AQP program, and the more we understand
it, the more likely we are to have a workable plan.
ARE THERE PITFALLS IN PLANNING?
Just like everything else, planning has pitfalls. However, if one considers the alternatives, there is no doubt that planning will win out by far. To be sure, perhaps one
of the greatest pitfalls in planning is the lack of support by management and a hostile
climate for its practice. So, the question is not really whether any pitfalls exist, but
why such support is quite often withheld and why such climates arise in organizations
that claim to be “quality oriented.”
Some specific pitfalls in any planning environment may have to do with commitment, time allocation, objective interpretations, tendency toward conservatism,
and an obsession with control. All these elements breed a climate of conformity and
inflexibility that favors incremental changes for the short term but ignores the
potential of large changes in the long run. Of these, the most misunderstood element
is commitment.
The assumption is that with the support of management, all will be well. This
assumption is based on the axiom of F. Taylor at the turn of the 20th century that
“there is one best way.” Planning is assumed to generate the one best way not
only to formulate, but to implement, a particular idea, product, and so on. Sometimes,
this notion is not correct. In today’s “agile world,” we must be prepared to evaluate
several alternatives of equal value. (See the section on systems engineering.)
As a consequence, the issue is not simply whether management is committed to
planning. It is also, as Mintzberg (1994) has observed, (1) whether planning is committed to management, (2) whether commitment to planning engenders commitment
to the process of strategy making, to the strategies that result from that process, and
ultimately to the taking of effective actions by the organization, and (3) whether the
very nature of planning actually fosters managerial commitment to itself.
Another pitfall, of equal importance, is the cultural attitude of “fighting fires.”
In most organizations, we reward problem solvers rather than planners. As a consequence, in most organizations the emphasis is on low-risk “fire fighting,” when in
fact it should be on planning a course of action that will be realistic, productive,
and effective. Planning may be tedious in the early stages of conceptual design, but
it is certainly less expensive and much more effective than corrective action in the
implementation stage of any product or service development.
DO WE REALLY NEED ANOTHER QUALITATIVE TOOL TO GAUGE QUALITY?
While quantitative methods are excellent ways to address the “who,” “what,” “when,”
and “where,” qualitative study focuses on the “why.” It is in this “why” that the
focus of advanced quality planning contributes the most results, especially in the
exploratory feasibility phase of our projects.
So, the answer to the question is a categorical “yes” because the aim of qualitative
study is to understand rather than to measure. It is used to increase knowledge,
clarify issues, define problems, formulate hypotheses, and generate ideas. Using
qualitative methodology in advanced quality planning endeavors will indeed lead to
a more holistic, empathetic customer portrait than can be achieved through quantitative study, which in turn can lead to enlightened engineering and production
decisions as well as advertising campaigns.
HOW DO WE USE THE QUALITATIVE METHODOLOGY IN AN AQP SETTING?
Since this volume focuses on the applicability of tools rather than on the details of
the tools, the methodology is summarized in seven steps:
1. Begin with the end in mind. This may be obvious; however, it is how most
goals are achieved. This is the stage where the experimenter determines
how the study results will be implemented. What courses of action can
the customer take and how will they be influenced by the study results?
Clearly understanding the goal defines the study problem and report
structure. To ensure implementation, determine what the report should
look like and what it should contain.
2. Determine what is important. All resources are limited and therefore we
cannot do everything. However, we can do the most important things. We
must learn to use the Pareto principle (the vital few as opposed to the
trivial many). To identify what is important, we have many methods,
including asking about advantages and disadvantages, benefits desired,
likes and dislikes, importance ratings, preference regression, key driver
analysis, conjoint and discrete choice analysis, force field analysis, value
analysis, and many others. The focus of these approaches is to improve
performance in areas in which a competitor is ahead or in areas where
your organization is determined to hold the lead in a particular product
or service.
3. Use segmentation strategies. Not everyone wants the same thing. Learn
to segment markets for specific products or services that deliver value to
your customer. By segmenting based on wants, engineering and product
development can develop action-oriented recommendations for specific
markets and therefore contribute to customer satisfaction.
4. Use action standards. To be successful, standards must be used, but with
diagnostics. Standards must be defined at the outset. They are always
considered the minimum requirements. Then, when the results come
in, there will be an identified action to be taken, even if it is to do nothing.
List the possible results and the corresponding actions that could be taken
for each. Diagnostics, on the other hand, provide the “what if” questions
that one considers in pursuing the standards. Usually, they provide alternatives
through a set of questions specific to the standard. If you cannot
list actions, you have not designed an actionable study; design it again.
5. Develop optimals. Everyone wants to be the best. The problem with this
statement is that there is only room for one best. All other choices are
second best. When an organization focuses on being the best in everything,
that organization is asking for failure. No one can be the best in everything
and sustain it. What we can do is focus on the optimal combination of
choices. By doing so, we usually have a usable recommendation based
on a course of action that is reasonable and within the constraints of the
organization.
6. Give grasp-at-a-glance results. The focus of any study is to turn people
into numbers (wants into requirements), numbers into a story (requirements
into specifications), and that story into action (specifications into
products or services). But the story must be easy to understand. The results
must be clear and well organized so that they and their implications can
be grasped at a glance.
7. Recommend clearly. Once you have a basis for an action, recommend that
action clearly. You do not want a doctor to order tests and then hand you
the laboratory report. You want to be told what is wrong and how to fix
it. From an advanced quality planning perspective, we want the same.
That is, we want to know where the bottlenecks are, what kind of problems
we will encounter, and how we will overcome them for a successful
delivery.
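The Pareto principle invoked in step 2 (the vital few as opposed to the trivial many) can be sketched in a few lines of code. The concern categories and counts below are invented for illustration, and the 80% cutoff is the conventional rule of thumb rather than anything mandated by the method:

```python
from collections import Counter

# Hypothetical counts of customer-reported concerns (illustrative data only)
concerns = Counter({
    "wait time": 48,
    "paperwork errors": 21,
    "pricing": 12,
    "staff courtesy": 8,
    "parking": 5,
    "signage": 3,
    "hours": 3,
})

def vital_few(counts, cutoff=0.8):
    """Return the smallest set of categories that together account for
    at least `cutoff` of all occurrences: the Pareto "vital few"."""
    total = sum(counts.values())
    running = 0
    selected = []
    for category, n in counts.most_common():
        selected.append(category)
        running += n
        if running / total >= cutoff:
            break
    return selected

print(vital_few(concerns))  # → ['wait time', 'paperwork errors', 'pricing']
```

Here three of the seven categories cover just over 80% of all reported concerns, so planning effort would concentrate there first.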
APQP INITIATIVE AND RELATIONSHIP TO DFSS
The APQP initiative in any organization is important in that it demonstrates our
continuing effort to achieve the goal of becoming a quality leader in the given
industry. Inherent in the structure of APQP are the following underlying value-added
goals:
1. Reinforces the company’s focus on continuous improvement in quality,
cost, and delivery
2. Provides the ability to look at an entirely new program as a single unit
• Preparing for every step in the creation
• Identifying where the greatest amount of effort must be centered
• Creating a new product with efficiency and quality
3. Provides a better method for balancing the targets for quality, cost, and
timing
4. Allows for deployment of the targets using detailed practical deliverables
with specific timing schedule requirements
5. Provides a tool for program management to follow up all program planning processes.
Throughout all phases, the APQP initiative explicitly focuses on basic engineering
activities to prevent concerns rather than on inspecting results in the product.
Because the deliverables between departments (supplier/customer relationships) are
clearly defined, program concerns and issues can be resolved efficiently.
The APQP initiative also treats a review conducted only at the end of the cycle
as unacceptable. Rather, a review must be done at the end of each planning step.
This provides a critical step-by-step check of how well the organizations are
following best possible practices. The APQP initiative also has a serious impact on
stabilizing program timing and content. Stabilization results in cost improvement
opportunities, including reduction of special sample test trials. Understanding the
program requirements for each APQP element from the beginning provides the
following advantages:
following advantages:
•
•
•
•
•
Clarifies the program content
Controls the sourcing decision dates
Identifies customer-related significant/critical characteristics
Evaluates and avoids risks to quality, cost, and timing
Clarifies for all organizations product specifications using a common
control plan concept
Application of APQP in the DFSS process provides a company with the opportunity to achieve the following benefits:
1. It provides a value-added tool allowing program management to track and
follow up on all the program planning processes — focusing on engineering method and quality results.
2. It provides a critical review of how each organization is following best
possible practices by focusing on each planning step.
3. It identifies the complete program content upon program initiation, viewing all elements of the process as a whole (AIAG 1995; Stamatis 1998).
Once program content has been clarified, the following information can be
discerned:
1. Sourcing decision dates are identified.
2. Customer-related significant/critical characteristics are specified.
3. Quality, cost, and timing risks are evaluated and avoided.
4. Product specifications are established for all organizations using a common
control plan concept.
Using the APQP process to stabilize program timing and content dramatically
increases the opportunities for cost improvement. When we are aware of the timing
and of the concerns that may occur during the course of a program, we have the
opportunity to reduce costs in the following areas:
1. Product changes during the program development phase
2. Engineering tests
3. Special samples
4. Number of verification units to be built (prototypes, first preproduction
units, and so on)
5. Number of concerns identified and reduced
6. Fixture and tooling modification costs
7. Fixture and tooling trials
8. Number of meetings for concern resolution
9. Overtime
10. Program development time and deliverables (an essential aspect of both
APQP and DFSS)
For a very detailed discussion of APQP see Stamatis (1998).
REFERENCES
Automotive Industry Action Group (AIAG), Advanced Product Quality Planning and Control
Plan. Chrysler Co., Ford Motor Co., and General Motors. Distributed by AIAG,
Southfield, MI, 1995.
Mayne, E. et al., Quality Crunch, Ward’s AutoWorld, July 2001, pp. 14–18.
Mintzberg, H., The Rise and Fall of Strategic Planning, Free Press, New York, 1994.
Stamatis, D.H., Advanced Quality Planning: A Commonsense Guide to AQP and APQP,
Quality Resources, New York, 1998.
SELECTED BIBLIOGRAPHY
Bossert, J., Considerations for Global Supplier Quality, Quality Progress, Jan. 1998, pp.
29–34.
Brown, J.O., A Practical Approach to Service: Supplier Certification, Quality Progress, Jan.
1998, pp. 35–40.
Forcinio, H., Supply Chain Visibility: Is It Really Possible? Managing Automation, July 2001,
pp. 24–28.
Gurwitz, P.M., Six Questions to Ask Your Supplier About Multivariate Analysis, Quirk’s
Marketing Review, Feb. 1991, pp. 8–9, 23.
Mehta, P.V. and Scheffler, J.M., Getting Suppliers in on the Quality Act, Quality Progress,
Jan. 1998, pp. 21–28.
Schoenfeldt, T., Building Effective Supplier Relationships, Automotive Excellence, Winter
1999, pp. 17–25.
2 Customer Understanding
In Volume I of this series, we made a point to discuss the difference between
“customer satisfaction” and “loyalty.” We said that they are not the same and that
most organizations are interested in loyalty. We are going to pursue this discussion
in this chapter because, as we have been saying all along, understanding the difference between customer service and customer satisfaction can provide marketers with
the competitive advantage necessary to retain existing customers and attract new
ones. “Understanding” what satisfaction is and what the customer is looking for can
provide the engineer with a competitive advantage to design a product and/or service
second to none. At first glance, service and satisfaction may appear to mean the
same thing, but they do not; service is what the marketer provides and what the
customer gets, and satisfaction is the customer’s evaluation of the level of service
received based on preconceived assumptions and the customer’s own definition of
“functionalities.” The satisfaction level is determined by comparing expected service
to delivered service. Four outcomes are possible:
1. Delight — positive disconfirmation (a pleasant surprise)
2. Dissatisfaction — negative disconfirmation (an unpleasant surprise)
3. Satisfaction — positive confirmation (expected level of service)
4. Negative confirmation, which suggests that you are neither managing
expectations properly nor delivering good service
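These four outcomes follow mechanically from comparing expected and delivered service. The sketch below assumes both are scored on the same numeric scale (higher is better) and that an “acceptable” level separates satisfaction from negative confirmation; the scale and the threshold are illustrative assumptions, not definitions from the text:

```python
def satisfaction_outcome(expected, delivered, acceptable=3):
    """Classify a service encounter by comparing the delivered service
    level to the customer's expectation (same scale, higher is better).
    The `acceptable` threshold is a hypothetical cut-off."""
    if delivered > expected:
        return "delight"                 # positive disconfirmation
    if delivered < expected:
        return "dissatisfaction"         # negative disconfirmation
    # Expectation met exactly: a confirmation, positive or negative
    if delivered >= acceptable:
        return "satisfaction"            # positive confirmation
    return "negative confirmation"       # a low expectation met with poor service

print(satisfaction_outcome(expected=3, delivered=5))  # delight
print(satisfaction_outcome(expected=4, delivered=2))  # dissatisfaction
print(satisfaction_outcome(expected=4, delivered=4))  # satisfaction
print(satisfaction_outcome(expected=2, delivered=2))  # negative confirmation
```

The last case is the subtle one: the customer got exactly the poor service they expected, which is a sign that neither expectations nor delivery are being managed.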
In managing service delivery, relying solely on the objective aspects of service
is a mistake. Customers base future behavior on their evaluation of the experience
they actually had, which is in effect their degree of satisfaction or dissatisfaction.
In addition to determining that satisfaction degree, marketers should seek to learn
the reasons underlying customers’ feelings (the insight) in order to tell the engineers
what, how and when to make changes and maintain high satisfaction levels when
they are achieved. In researching these areas, marketers should note that the why is
not the what; nor is it the how. That is, what happened and how it made customers
feel does not tell us why they felt as they did. And not knowing that, managing not
only the service that customers experience but also their expectations becomes
difficult, if not impossible.
At times, service providers and customers tend to think differently. Consider
this dealership example: After conducting 10 focus groups for an automotive company in a medium-size Midwestern city, the researchers discussed the findings in a
review meeting with the head of marketing for the company. The researchers noted
that, after having spoken with more than 100 recent customers, they had learned
49
SL3151Ch02Frame Page 50 Thursday, September 12, 2002 6:13 PM
50
Six Sigma and Beyond
that the vast majority were frustrated and unhappy about having to wait more than
15 minutes before getting attended to. The marketing executive interrupted, saying,
“Those customers should consider themselves lucky; if they were in the dealerships
of one of our competitors, they would have to wait 20 to 30 minutes before they
were seen by the service manager.”
This example includes all the information needed to explain the difference between
customer service and customer satisfaction. The customers in this example defined their
personal expectations about the service — their waiting time experience — and clearly,
a conflict existed between their service expectation (a short wait before being seen
by a service manager) and their service experience (waits of more than 15 minutes).
Customers then were dissatisfied with the waiting rooms and the dealerships in
general. The marketing manager’s response to customer dissatisfaction was to note
that the waits could have been worse: He knew that competitors’ dealerships were
worse. He also knew customer waits of more than 30 minutes were not uncommon.
In light of these data, he judged the 15- to 20-minute waits in the waiting room
acceptable.
Herein lies the conflict between service delivery and customer satisfaction. The
important concept for this marketing executive — and for all marketers — is that
customers define their own satisfaction standards. The customers in this example
did not go to the competitors’ dealerships; instead, they came to the marketer’s
dealership with a set of their own expectations in a preconceived environment. When
the marketer used his service delivery criteria to defend the waiting time, he simply
missed the point.
Unfortunately, this illustration is typical of how many marketers think about
customer satisfaction. They tend to relate customer satisfaction directly to their own
service standards and goals rather than to their customers’ expectations, whether or
not those expectations are realistic. To assess satisfaction, marketers must look
beyond their own assessments, tapping into the customers’ evaluations of their
service experience.
Consider, for example, a bank that thought it was doing a good job of measuring
service satisfaction but really was too focused on service delivery. This bank had
developed a policy that time spent in the lobby room should be less than 15 minutes
for all customers. A customer came into the office and waited 12 minutes in the
reception area for a mortgage application. Then she waited another five minutes for
the loan officer to clear all the papers from his desk from the previous customer and
an additional three minutes for him to get the file and all the pertinent information
for the current application. As this customer was leaving, she was asked to fill out
a customer satisfaction questionnaire. Under the category for reception area waiting
time, she checked off that she had waited less than 15 minutes.
Based on this response, the bank’s marketing director assumed the customer
was satisfied, but she was not; the customer had been told that if she came in for
the mortgage loan during her lunch hour, she would be taken care of right away.
Instead, she waited a total of 20 minutes for her application process to begin. She
did not have time to shop for the gift her son needed that night for a birthday party,
and her entire schedule was in disarray. She left dissatisfied.
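The measurement gap in this bank example is easy to make concrete. The wait segments below come from the story (12, 5, and 3 minutes); the dictionary labels are descriptive names introduced for illustration:

```python
# Wait segments, in minutes, from the mortgage-application visit
waits = {"reception area": 12, "desk clearing": 5, "file retrieval": 3}

total_wait = sum(waits.values())

# The survey category only captures the reception-area wait...
survey_under_15 = waits["reception area"] < 15
# ...while the customer experienced the total elapsed wait.
experienced_under_15 = total_wait < 15

print(total_wait)            # 20 minutes in all
print(survey_under_15)       # True  -- the bank reads this as "satisfied"
print(experienced_under_15)  # False -- the customer left dissatisfied
```

The survey instrument measured service delivery against the bank’s own 15-minute standard, not against the experience the customer actually had.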
Understanding the difference between service and satisfaction is the first step
in developing a successful customer satisfaction program, and all marketers must
share the same understanding. Only customers can define what satisfaction means
to them. Here are some practical ways to understand customers’ expectations:
• Ask customers to reflect on their experiences with your services and their
needs, wants, and expectations.
• Talk to customers face to face through focus groups, as well as through
questionnaires. A wealth of information can be collected this way.
• Talk with your staff about what they hear from customers about their
expectations and experience with service delivery.
• Review warranty data.
Remember the three words that can help you learn from your customers: what,
how, and why. That is, what service did you experience, how did it make you feel,
and why did you feel that way? Continual probing from these three perspectives will
deliver the answers you need to manage service better and generate customer satisfaction.
As Harry (1997 p. 2.20) has pointed out:
• We do not know what we do not know
• We cannot act on what we do not know
• We do not know until we search
• We will not search for what we do not question
• We do not question what we do not measure
• Hence, we just do not know
Therefore, part of this understanding is to identify a transfer function. That is,
you need a bridge (quantitative or qualitative) that will define and explain the
dependent variable (the customer’s needs, wishes, and excitements) in terms of the
independent variable(s) (the actual requirements that are needed from an engineering
perspective to satisfy the dependent variable). The transfer function may be linear
(the simplest form) or polynomial (a very complex form).
Typical equations expressing transfer functions may look something like:
Y = a + bx
Y = f(x1, x2…xn)
df/dθ = r(sin θ + cos θ) + a sin θ/(g cos²θ)
They can be derived from:
• Known equations that describe the function
• Finite element analysis and other analytical models
• Simulation and modeling
• Drawings of parts and systems
• Design of experiments
• Regression and correlation analysis
In DFSS, the transfer function is used to estimate both mean (sensitivity) and
variance (variability) of Ys and ys. When we do not know what the Y is, it is
acceptable to use surrogate metrics. However, it must be recognized from the beginning that not all variables should be included in a transfer function. Priorities should
be set based on appropriate trade-off analysis. This is because DFSS is meant to
emphasize only what is critical, and that means we must understand the mathematical
concept of measurement.
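How a transfer function is used to estimate both the mean (sensitivity) and variance (variability) of Y can be sketched by simulation. The example below assumes the simple linear form Y = a + bx shown above, with invented values for a, b, and the distribution of x, and checks a Monte Carlo estimate against the analytic result:

```python
import random
import statistics

A, B = 5.0, 2.0  # illustrative coefficients for Y = a + b*x

def transfer_function(x):
    """Linear transfer function Y = a + b*x."""
    return A + B * x

# Assume the engineering variable x varies normally about its nominal value.
x_mean, x_sd = 10.0, 0.5

random.seed(42)
ys = [transfer_function(random.gauss(x_mean, x_sd)) for _ in range(100_000)]

# Monte Carlo estimates of the mean (sensitivity) and spread (variability) of Y
y_mean = statistics.fmean(ys)
y_sd = statistics.stdev(ys)

# For a linear transfer function: mean(Y) = a + b*mean(x), sd(Y) = |b|*sd(x)
print(f"mean(Y) ~ {y_mean:.2f}  (analytic {A + B * x_mean:.2f})")
print(f"sd(Y)   ~ {y_sd:.3f}  (analytic {B * x_sd:.3f})")
```

For a nonlinear transfer function the same simulation works unchanged, which is one reason simulation and modeling appear among the derivation sources listed earlier.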
The focus of understanding customer satisfaction has been captured by Rechtin
and Hair (1998), when they wrote that “an insight is worth a thousand market
surveys.” It is that insight that DFSS is looking for before the requirements are
discussed and ultimately set. This will help us in identifying what is really going
on with the customer. Let us look at the function first.
THE CONCEPT OF FUNCTION
In any business environment, there may be no more powerful concept than that of
function. To understand why this is a potent notion, we need to consider what we
mean by “function.”
What is “function”? Let us start with a common definition: “The natural, proper,
or characteristic action of any thing...” This is the Webster’s New Collegiate Dictionary
definition, and it is quite representative of what you will find in most dictionaries.
It is actually a powerful and insightful definition.
Think about any product or service that you purchase. What is it about the
product or service (I will use the term “product” from here on, although every issue
that will be discussed will be equally valid for services) that causes you to exchange
money, goods, or some other scarce resource for it? Ultimately, it is because you
want the “characteristic actions” that the product provides. These “actions” may be
simple or complex, utilitarian or capricious, Spartan or gilded — but in each transaction, you enter with a set of unfulfilled wants and needs that you attempt to satisfy.
Moreover, if the product you purchase actually manages to fulfill the wants and
needs that you perceive, you are more likely to be satisfied with your purchase than
when the product fails to satisfy your desires. Within these few short sentences, we
have the fundamental principles that underpin three of the most powerful tools in
the modern pantheon of quality, productivity, and profitability: Quality Function
Deployment, Value Analysis, and Failure Modes & Effects Analysis.
To put the concept of function into action, we need to refine our definition. The
expanded definition we would like you to consider is “The characteristic actions
that a system, part, service, or manufacturing process generates to satisfy customers.”
In this expanded definition, you can not only see the concept of function at work,
but you may also be able to recognize the essential abstraction of a process.
In a process, some type of input is transformed into an output. As a simple
equation, we might say that
Input(s) + Transformation = Output(s)
In the case of function, the inputs are the unfulfilled wants and needs that a
customer or a prospective customer has. These can be and often are intricate; this
is why the discipline of marketing is still more art than science. (We will have more
to say about this issue in just a moment.) Nevertheless, there exist multiple sets of
unfulfilled wants and needs that are open to the lures and attractions provided by
the marketplace.
In this very broad model, the transformation is provided by the producer. With
one, ten, or hundreds of internal processes (within any discussion of process, there
is always the “Russian doll” image: processes within processes within processes),
business organizations attempt to determine the unmet wants and needs that customers have. The producer then must design and develop products and delivery
processes that will provide tangible and/or intangible media of exchange that will
assuage the unmet needs or need sets.
Finally, the external processes that involve exchange of the producer’s goods for
money or other barter provide the customer with varying degrees of satisfaction.
The gratification (or lack of satisfaction) that results can then be viewed as the output
of the general process.
In business, the inputs are not within the control of producers. As a result,
producers need powerful tools to understand, delineate, and plan for ways to meet
these needs. This can be thought of as the domain of the Kano model or Quality
Function Deployment.
The transformational activities, however, are within the control of the producer.
These “controlled” activities include planning efforts to deliver “function” at a
satisfactory price; the nuances and subtleties of this activity can be strongly influenced or even controlled by the discipline of Value Analysis.
In addition, fulfillment of marketplace “need sets” also implies that this fulfillment will occur without unpleasant surprises. Unwanted, incomplete, or otherwise
unacceptable attempts to produce “function” often result in failure. This implies that
producers have a need to systematically analyze and plan for a reduction in the
propensity to deliver unpleasant surprises. This planning activity can be greatly aided
by the application of Failure Modes and Effects Analysis techniques.
To see how these ideas mesh, we need to consider how “function” can be
comprehensively mapped. This will require several steps. To apply what will be
discussed in the rest of this section, we need to emphasize the importance of choosing
the proper scale for any analysis. The probability is that you will choose too broad
a view or too much detail; we will try to provide guidance on this issue during our
discussion of methods.
UNDERSTANDING CUSTOMER WANTS AND NEEDS
The nature of customer wants and needs is complex, deceptive, and difficult to
discern. Nevertheless, the prediction of future wants and needs in the marketplace
is perhaps the most important precursor to financial success that exists. Knowing
and doing something that is profitable are two very separate (but not completely
independent) aspects of this challenge.
The first task that must be undertaken is to list the customers that we are
interested in. Virtually no business is universal in terms of target market. Moreover,
in today’s highly differentiated world, it is likely an act of folly to suggest that any
product would have universal appeal. (Even an idealized product such as a capsule
that, when ingested, yields immortality would have its detractors and would be
rejected by some elements of humanity.) So, we need to start by cataloging the
customers that we might wish to serve.
In this effort, however, we need to recognize that there is a chain of customers.
This is often seen in discussions of the “value chain,” a concept explored in detail
by Porter (1985). For example, Porter discusses the concept of “channel value,”
wherein channels of distribution “perform additional activities that affect the buyer,
as well as influence the firm’s own activities.”
This means that there are several dimensions on which we will discover important customers. First, there are market segments and niches. These can be geographic,
demographic, or even psychographic in nature. Second, there are many intermediary
customers, who have an important influence on ultimate purchases in the marketplace. Finally, there are what might be called “overarching” customers — persons
or entities that must be satisfied even in the absence of any purchasing power — to
enable or permit the sale of goods and services.
This is readily visible in the auto industry. From the standpoint of a major parts
manufacturer, say United Technologies, Johnson Controls, Dana Corporation, or
Federal-Mogul, there are legions of important customers. In the market segment
category, there are the vehicle manufacturers, including GM, Ford, Toyota, and all
of the others. Contained within this category of customers are many sub-customers,
including purchasing agents, engineers, and quality system specialists.
As far as intermediary customers are concerned, we can consider perhaps a dozen
or more important players. We need to consider the transportation firms that carry the
parts from the parts plant to the assembly plant. We also need to think about the people
and the equipment within the assembly plant that facilitate the assembly of the part into
a vehicle. (If anyone doubts this is important, they have never tried to sell a part to an
assembly plant where the assembly workers truly dislike some aspect of the part.) The
auto dealer is yet another step in this array of hurdles, and mechanics and service
technicians constitute still one more customer who must, in some way, be reasonably
satisfied if commercial success is to spring forth.
In addition, the auto industry has a web of regulatory and statutory requirements
that govern its operation. These include safety regulations, emission standards, fleet
mileage laws, and the general requirements of contract law. Behind these government
requirements are still more governmental prerequisites, including occupational safety
law, environmental law as applied to manufacturing, and labor law. This means that
the governmental agencies and political constituencies that administer these laws
can be seen as the “overarching” customers described previously.
Ultimately, vehicle purchasers themselves are the critical endpoint in this chain of
evaluation. And within this category of customers are the many segments and niches
that car makers discuss, such as entry level, luxury, sport utility, and the many other
differentiation patterns that auto marketers employ. Only when a product passes through
the entire sequence will it have a reasonable chance of successfully and repeatedly
generating revenue for the producer. This provides a critical insight about function.
Function is only meaningful through some transactional event involving one or
more customers. Only customers can judge whether a product delivers desired or
unanticipated-yet-delightful function.
In many cases (in fact, most), firms simply do not consider all of these customers.
As a result, they are often surprised when problems arise. Moreover, they suffer
financial impediments as a result — even though they may simply budget some
degree of failure expectation into overhead calculations.
A rational assessment of this situation means that the first requirement for understanding function is a comprehensive listing of customers. Frankly, this is very hard
work, and it requires time, dedication, and effort. Regardless, understanding the customers that you wish to serve is an essential prerequisite to comprehension of function.
CREATING A FUNCTION DIAGRAM
If you want to understand function, the first requirement is the use of a special language.
Function must be described using an active verb and a measurable noun. Fowler
(1990) calls this linguistic construction a “functive” — a function described in direct
terms that are, to the greatest degree possible, unambiguous.
In a functive, the verb should be active and direct. How can you tell if the verb
meets this test? Can you subject the action described by the verb to reasonable
verification? One of the difficulties with this approach is the widespread affinity for
ambiguity, the evil spawn of corporate life. To reduce ambiguity, you must avoid
“nerd” verbs such as provide, be, supply, facilitate, and allow.
Since most people pepper their business speech with these verbs, how can you
avoid using them? If you cannot avoid “nerd verbs,” then you might try to convert
the noun to a verb. Instead of “allow adjustment” think about what it is that you are
adjusting. For example, you could easily restate this “nerd verb” functive with “adjust
clearance.” Whenever a “nerd verb” comes up, try converting the noun that goes
with the nerd verb to a verb, and then select the appropriate measurable noun. Most
of the time, this will reduce the ambiguity.
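As a small illustrative sketch (the verb list comes from the discussion above; the helper name and data layout are our own), a team could keep a shared list of “nerd” verbs and flag candidate functives that use them:

```python
# Hypothetical screen for "nerd" verbs in candidate functives.
# Each functive is a (verb, noun) pair, as described in the text.
NERD_VERBS = {"provide", "be", "supply", "facilitate", "allow"}

def flag_nerd_verbs(functives):
    """Return the functives whose verb is too ambiguous to verify."""
    flagged = []
    for verb, noun in functives:
        if verb.lower() in NERD_VERBS:
            flagged.append((verb, noun))
    return flagged

candidates = [("allow", "adjustment"), ("adjust", "clearance"), ("make", "marks")]
print(flag_nerd_verbs(candidates))  # [('allow', 'adjustment')]
```

A flagged pair such as (“allow”, “adjustment”) is then reworked by converting its noun into a verb, yielding the sharper functive “adjust clearance.”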
The measurable noun also must be reasonably precise. In particular, it should
be relatively unchanging in usage and should rarely be the name of a part, operation,
or activity used to generate the product or service under consideration. The test for
a measurable noun is very simple: can you measure the noun? Bear in mind, however,
that the measurement may be as simple as counting — or it can be a detailed
statement of a technical or engineering expectation of the degree to which a function
can be fulfilled. Ultimately, the combination of an active verb and measurable noun
will give rise to an extent — the degree to which the functive is executed.
For example, let us consider a simple mechanical pencil. The mechanism of the
pencil must feed lead at a controlled rate. This also means that there must be a
specific position for the lead. If the lead is fed too far, the lead will break. If the
feed is not far enough, the pencil may not be able to make marks. As a result, one
function that we can consider is “position lead.” The measurement is the length of
exposed lead, and the desired extent of the positioning function may be 5 mm from
the barrel end of the pencil. If there are limits on the extent in the form of tolerance,
this is a good time to think about these limits as well.*
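A functive thus bundles an active verb, a measurable noun, and an extent (a target plus its tolerance). As a minimal sketch, the “position lead” example could be recorded as follows; note that the field names are our own, and the ±0.5 mm tolerance is an assumed value for illustration, not one given in the text:

```python
from dataclasses import dataclass

@dataclass
class Functive:
    verb: str         # active, verifiable verb
    noun: str         # measurable noun
    target: float     # desired extent
    tolerance: float  # allowed deviation; with the target, this forms a "specification"
    unit: str = "mm"

    def within_extent(self, measured: float) -> bool:
        """Does a measurement fall within the specified extent?"""
        return abs(measured - self.target) <= self.tolerance

# "Position lead": 5 mm of exposed lead.
# The +/-0.5 mm tolerance is our own assumption for illustration.
position_lead = Functive("position", "lead", target=5.0, tolerance=0.5)
print(position_lead.within_extent(5.3))  # True
print(position_lead.within_extent(6.0))  # False
```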
While you are describing function in terms of an active verb and measurable
noun, it is very important to maintain a customer frame of reference. Do not forget
that function is only meaningful in terms of customer perception. No matter how
much you may be enamored of a product feature or service issue, you must decide
if the target customer will perceive your product in the same way.**
THE PRODUCT FLOW DIAGRAM AND THE CONCEPT OF FUNCTIVES
Now that we understand the essential issues involved in describing function, we can
learn more about techniques for understanding the many complex functions that
exist in a product. If products had just one or two functions, it would be easy to
understand the issues that motivate purchase behavior. In today’s complex world,
though, products seem to have more features (and hence more functions) nearly
every day. How can we understand this complexity? Fortunately, there are common
patterns that exist in the functionality of any product. We can see this through the
creation of a product flow diagram.
A product flow diagram uses simple, direct language to delineate function. This
is a valuable aid to help you understand what your product provides to customers.
We can start our efforts to develop this diagram by identifying functions.
In practice, this is best done by a group or team, and it should be done after all
participants have become familiar with the list of customers at whom the product
is aimed. A general list of functions can then be developed using brainstorming
techniques or other group-based creativity tools.
There are a few issues that you should keep in mind while simply listing
functions. Functions must describe customer wants and needs from the viewpoint
of the customer. A common problem is to confuse product functions with functions
being performed by the customer, the designer, the engineer who created the product,
or the manufacturer who produces the product. Again, think about a mechanical
pencil. Many people will start by describing the function of the pencil as “write
notes.” However, the pencil, by itself, cannot write anything. (If you can invent a
* When you do this, you have created a “specification” for this function.
** One of the most common and debilitating errors in market analysis is to assume that others will
respond the same way that you do. This is a simple but profound delusion. Most of us think that we are
normal, typical people. When we awaken in the morning, we look in the mirror and see a normal (although
perhaps disheveled if we look before the second cup of coffee) person. Thus, we think, “I like this widget.
Since I am normal, most other people will like this widget, too. Therefore, my tastes are likely to be a
good guideline to what my customers will want.” In most cases, even if you really are “normal” and
even “typical,” this easy generalization is dangerously false.
pencil that will write notes without a writer attached, you will probably become rich.)
The function that is more appropriate for the pencil is “make marks.”
The best way to start is to simply brainstorm as many functions as you can using
active verbs and measurable nouns. There are many ways to brainstorm; in this case,
it is usually easiest to have everyone involved use index cards or sticky notes to record
their ideas. Remember that brainstorming should not be interrupted by criticism; just
let the ideas flow. You will get things that do not apply, and, until you gain experience,
you will not always use the “functive” structure that is ultimately important. Do not
worry about these issues during the idea-generation phase of this process.
Once you have a nice pile of cards or notes, start by sifting and sorting the ideas
into categories. In any pile of ideas, there will be natural “groupings” of the cards.
Determine these categories and then sort the cards. This can be thought of as “affinity
diagramming” of the ideas. You will find some duplicates and some weird things
that probably do not belong in the pile.* Discard the excess baggage and look at
the categories. Are there any important functions you have missed? Do not hesitate
to add new ideas to the categories, either.
Finally, you are ready to bear down on the linguistic issues. Make sure that all
of the ideas are expressed in terms of active verbs and measurable nouns. Change
the idea to a “functive” construction, and then look for the “nerd” verb cards. Convert
all of the “nerd verb” functions into true functives, with fully active verbs and
measurable nouns. When you are done, you will have an interesting and important
preliminary output.
Now, count the cards again. If you have more than 20 to 30 cards, you have
probably tackled too complex a subject or viewpoint. For example, a commercial
airline has thousands — even hundreds of thousands — of functions. If you wanted
to analyze function on the widest scale, you would probably be guilty of too much
detail if you listed more than 30 functives. On the broadest scales of view, you may
only list a handful of functions. Nothing is wrong with a short list, especially for
the broadest view.
If you have trouble, we can suggest some “function questions” that can assist
you in your brainstorming. Try these questions:
• What does it do?
• If a product feature is deleted, what functions disappear?
• If you were this element, what are you supposed to accomplish? Why do
you exist?
Ask the function questions in this order:
• The entire scope of the project
• A “system” view
• Each element of the project
* Do not automatically toss out strange ideas — see if the team can reword or express more clearly the
idea that underlies the oddball cards or notes. Some percentage of these cards will have important
information. Many will be eventual discards, but do not jump to conclusions.
• A “part” view
• Each sub-element of the project
• A “component” view
Finally, we can start our next task, which consists of arranging functions into
logical groups that show interrelationships. In addition, this next “arranging” step
will allow us to test for completeness of function identification and improve team
communication.
We start by asking “What is the reason for the existence of the product or
service?” This function represents the fundamental need of the customer. Example:
a vacuum cleaner “sucks air” but the customer really needs “remove debris.” Whatever this reason for being is, we need to identify this particular function, which we
call the task function. You must identify the task function from all of the functions
you have listed.
If you happen to find more than one task function, it is quite likely that you
have actually taken on two products. For example, a clock-radio has two task
functions: tell time and play music. However, you would be far better served by
breaking your analysis into two components — one for telling time, the other for
playing music. Alternatively, this product could be considered on a broader basis,
as a system — in which case the task function might be “inform user,” with subordinate functions of “tell time” and “play music.”
In any event, once you have identified the task function, you will realize that
there are many functions other than the task function. Divide the remaining functions
by asking:
“Is the function required for the performance of the task function?”
If the answer to this question is yes, then the function can be termed essential.
If the answer is no, then the function can be considered enhancing. All functions
other than the task function must be either essential (necessary to the task function)
or enhancing. So, your next task is to divide all of the remaining functions into these
two general categories.
You can further divide the enhancing functions — the functions that are not
essential to the task function. Enhancing functions influence customer satisfaction
and purchase decisions. Enhancing functions always divide into four categories:
1. Ensure dependability
2. Ensure convenience
3. Please the senses
4. Delight the customer*
* “Delight the customer” is actually quite rare — most enhancing functions fit one of the other three
categories. If you do find a “delight the customer” function, try comparing this with an “excitement”
feature in a Kano analysis; you should find that the function fits both descriptions.
None of these categories is needed to accomplish the task function. In fact, if
you do not have a task function (and the associated essential functions), you probably
do not have a product. The enhancing functions are those issues that purchasers
weigh once they have determined that the task function will likely be fulfilled by
your product. So, divide all of the enhancing functions into these four categories.
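The sorting rule above can be sketched as a small classifier. This is a hypothetical illustration: the category names and the dividing question come from the text, while the data structures and function names are our own:

```python
# Classify every non-task function as essential or enhancing, then
# bucket the enhancing functions into the four fixed categories.
ENHANCING_CATEGORIES = (
    "ensure dependability",
    "ensure convenience",
    "please the senses",
    "delight the customer",
)

def classify(functions, task_function, required_for_task, category_of):
    """functions: list of functive strings (the task function is excluded).
    required_for_task: set of functions the task function needs.
    category_of: map from each enhancing function to one of the four categories."""
    essential = []
    enhancing = {c: [] for c in ENHANCING_CATEGORIES}
    for f in functions:
        if f == task_function:
            continue
        if f in required_for_task:   # "Is it required for the task function?"
            essential.append(f)
        else:
            enhancing[category_of[f]].append(f)
    return essential, enhancing

# Mechanical-pencil sketch: "make marks" is the task function.
essential, enhancing = classify(
    ["make marks", "support lead", "erase marks", "display advertising"],
    task_function="make marks",
    required_for_task={"support lead"},
    category_of={"erase marks": "ensure convenience",
                 "display advertising": "delight the customer"},
)
print(essential)                        # ['support lead']
print(enhancing["ensure convenience"])  # ['erase marks']
```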
Your next challenge is to create a function hierarchy that will, in finished form,
be a function diagram. Start by asking this question: how does the product perform
the task function? Primary essential functions provide a direct answer to this question
without conditions or ambiguity. Secondary functions explain how primary functions
are performed. Continue until the answer to “how” requires using a part name, labor
operation, or activity, or you deplete your reserve of essential function cards.
Now, you must reverse this process. Ask “why” in the reverse direction. For
example, for a mechanical pencil, the task function is “make marks.” One of the
functions you must perform to make marks is “support lead.” How do you support
the lead? You do it by supporting the internal barrel tube (support tube) that carries
the lead and by positioning this tube (position tube). Why do you support the tube
and position the tube? You do this to support the lead. Why do you support the lead?
You support the lead in order to make marks. The “chain” of function is driven by
the how questions from the task function to primary then secondary functions —
while this same chain is driven in reverse by why questions from secondary to
primary to task function.
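The how/why chain can be sketched as a simple tree, where following child links answers “how” and following the parent link answers “why.” The pencil functions come from the text; the class and method names are our own illustration:

```python
# Function hierarchy: children answer "how?", the parent answers "why?".
class FunctionNode:
    def __init__(self, functive, parent=None):
        self.functive = functive
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def how(self):
        """How is this function performed? -> its child functions."""
        return [c.functive for c in self.children]

    def why(self):
        """Why does this function exist? -> its parent function."""
        return self.parent.functive if self.parent else None

task = FunctionNode("make marks")              # task function
support = FunctionNode("support lead", task)   # primary essential function
tube = FunctionNode("support tube", support)   # secondary function
pos = FunctionNode("position tube", support)   # secondary function

print(support.how())  # ['support tube', 'position tube']
print(tube.why())     # 'support lead'
print(support.why())  # 'make marks'
```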
As you progress, you will notice that you may be missing functions. If you find
that you are, add additional functions as needed. After you have completed building
“trees” of functions with the essential functions, repeat this process with the enhancing functions. The only difference is that the primary enhancing functions — ensure
convenience, ensure dependability, please the senses, and delight the customer —
have already been chosen.
When you have finished, you will have a completed product flow diagram. At
this point, try to delineate the extent of each function (range, target, specification,
etc.) for each of the functions. Do not forget: Extent also tests “measurability” of
each active verb–measurable noun combination.
For example, for the mechanical pencil, the assembly may look like Figure 2.1.
The sorted brainstorm list of functives may look like this:
Entire project scope:
• Make marks
• Erase marks
• Fit hand
• Fit pocket
• Show name
• Display advertising
• Convey message
• Maintain point
Tube assembly:
• Store lead
• Position lead
FIGURE 2.1 Mechanical pencil assembly, with the eraser, tube assembly, lead, barrel, and clip labeled; the barrel carries an “EAT AT JOE’S” advertising imprint and a “0.7 mm” lead-size marking.
• Feed lead
• Reposition lead
• Support lead
• Locate eraser
• Position tube
• Generate force
• Hold eraser
Lead:
• Make marks
• Maintain point
Eraser:
• Erase marks
• Locate eraser
Barrel:
• Support tube
• Support lead
• Position lead
• Protect lead
• Position tube
• Position eraser
• Show name
• Display advertising
• Convey message
• Fit hand
• Enhance feel
• Provide instructions
Clip:
• Generate force
• Position clip
• Retain clip
And, finally, the function diagram (only one possibility among many, many
different results) may look like Figure 2.2.
FIGURE 2.2 Function diagram for a mechanical pencil. Basic functions branch from the task function “make marks”: maintain point; support lead (via support tube and position tube); position lead (via store lead, feed lead, and reposition lead). Supporting functions branch from the four enhancing categories: ensure convenience (erase marks, hold eraser, position eraser, locate eraser, generate force, retain clip, position clip, fit pocket); ensure dependability (provide instructions); please the senses (enhance feel, fit hand); delight the customer (display advertising, convey message, show name).
THE PROCESS FLOW DIAGRAM
If you are working with a process rather than a product, you need to create a broad
viewpoint “map” that shows how the activities in the process are accomplished. This
can be done quickly and easily with a process flow diagram. The difficulty with
most process flow diagrams is that they quickly bog down in too much detail.
Whenever the detail gets too extensive, people lose interest (except for those who
created the chart, but they are only part of the audience). Even though we need
detail, we must avoid placing all of the details into one flow chart — at least if we
want people to use the resulting charts. So, we will employ a “10 × 10” method that
will aid in both communicating and managing the level of detail in a flow chart.
If you keep the number of boxes in a flow chart to ten or fewer, most people
will find your chart easy to read and understand. You can also use a “standard”
symbol set for flow charting. After a great deal of trial and error,
we have found that a simple set of ten symbols will explain almost any business
process and provide enough options so that any team can easily illustrate what is
going on — see Figure 2.3. By using some of the American National Standards
Institute (ANSI) symbols and judiciously mixing in some easy-to-remember shapes,
anyone can learn to flow chart a process in just a few minutes. The first step is to
select a simple basis or point of view for your flow charts. This could be the view
of the process operator, the work piece, or the process owner. (Be careful — if you
FIGURE 2.3 Ten symbols for process flow charting: significant move, incidental move, input, output, process, decision, inspect, delay, store, and document.
confuse your viewpoint while developing a flow chart, you will quickly become
confused about the process functions.)
Inputs and outputs are the easiest steps to understand. You start with an input
and you end with an output. A document may be a special kind of input or output —
it can appear at the beginning, at the end, or during the overall process. The process
box is the most common box; it describes transformations that occur within the
process. Decisions are represented by a diamond shape, and an inspection step (in
the shape of a stop sign) is just a special kind of decision. If you delay a process,
you use a yield sign. If you store information, you use an inverted yield sign — a pile.
Movement is also important. If a move is incidental, you tie the associated boxes
together with a simple arrow. However, if a movement is complex (say, sending a
courier package to Hong Kong as opposed to handing it to your next-cubicle neighbor), then you may have a special transformation or process step that we call a
“significant” move, i.e., a large horizontal arrow.
Let us look at a simple process for handling complaints. Your office deals with
customer complaints, but you have a local factory (where your office is) and a factory
in Japan. How you handle a complaint might look like Figure 2.4.
This flow chart shows many of the symbols noted above, but it is not the only
way that the process could be flow charted. However, if the team that developed the
chart (once again, a team approach is likely to be the most effective technique) can
reach a high level of consensus, then the communication of these ideas to others
will be powerful and comprehensive.
Now that the basics of 10 × 10 (ten steps or fewer using ten or fewer symbols)
are apparent, it becomes possible to construct a “hierarchy” of flow charts that will
fill in missing details that may have been skirted with the “10 step limit.”
The next step is to create a new 10 × 10 flow chart for each box in the top level
flow chart that requires additional explanation to reach the desired level of detail.
These next flow charts (typically three to five of the boxes require additional detail)
make up the second level flow charts. Wherever necessary, go to another level of
flow charts; continue creating 10 × 10 flow charts until you have a hierarchy of flow
charts that directly addresses all of the details that you feel are important.
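The 10 × 10 discipline can be sketched as nested charts, each capped at ten steps, with any step optionally expanding into its own child chart. The class, symbol strings, and step names below are our own illustration of the idea:

```python
# A "10 x 10" flow chart: at most ten steps per chart; any step may
# expand into a child chart that supplies the next level of detail.
class FlowChart:
    MAX_STEPS = 10

    def __init__(self, name):
        self.name = name
        self.steps = []  # list of (symbol, description, child_chart_or_None)

    def add_step(self, symbol, description, detail=None):
        if len(self.steps) >= self.MAX_STEPS:
            raise ValueError(f"{self.name}: over {self.MAX_STEPS} steps; "
                             "push detail down into a child chart instead")
        self.steps.append((symbol, description, detail))

top = FlowChart("complaint handling")
log = FlowChart("log complaint")  # second-level chart expanding one step
log.add_step("process", "open database record")
log.add_step("process", "assign complaint number")
top.add_step("input", "phone notice of customer complaint")
top.add_step("process", "log complaint into database", detail=log)
top.add_step("decision", "local or overseas factory?")
print(len(top.steps))  # 3
```

Whenever `add_step` hits the ten-step cap, the excess detail belongs in a new child chart, which is exactly how the hierarchy of flow charts grows.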
Finally, for each process box on each flow chart, you will have a process purpose.
Why did you do this step? Simple — you had one (or possibly two) purposes in
FIGURE 2.4 Process flow for complaint handling. A phone notice of a customer complaint is logged into the database and information for the notice is compiled; the complaint notice goes to a pending file, and a decision step (“local or overseas factory?”) routes the notice either to the local factory, which is notified of the complaint, or by courier to Japan, where the overseas factory is notified of the complaint.
mind when you designed this step into your process. Process purpose can be easily
described using the language of function. Once again, you must use an active verb
and a measurable noun.
Often, a team can move directly to listing process functions from the flow charts.
However, especially in manufacturing, it is common for the level of detail hidden
in flow charts to be large, especially with intricate or subtle fabrication procedures.
You may need an additional tool, called a “characteristic matrix,” for teasing the
“function” information out of a flow chart.
A characteristic matrix is a reasonably simple analysis tool. The purpose of the
matrix is to show the relationships between product characteristics and manufacturing steps. The importance of product characteristics in this matrix is significant; by
considering the impact of a manufacturing step on product characteristics, we again
focus our attention on customer requirements. Too often, manufacturing emphasis
turns inward; it is critical that the focus be constantly directed at customers. Of
course, there are “internal” customers as well. It is certainly important that intermediate characteristics, necessary for facilitating additional fabrication or assembly
activities, be included in the analysis of function.
For example, a simple machining process could have the characteristic matrix
shown in Table 2.1.
In this example, a simple machining step could be shown on a process flow
chart with a process box that describes the machining operation as “CNC Lathe” or
something similar. However, the lathe operation creates several important dimensions, or product characteristics, that are needed to meet customer expectations.
These characteristics are sufficiently varied and complex that an additional level of
detail is necessary. Some of these characteristics are important to the end customer;
some are important to internal or “next step” process stations.
TABLE 2.1
Characteristic Matrix for a Machining Process

Product           Target     Tolerance    Process Operations
Characteristic    Value                   Lathe     Lathe     Face     Deburr   Cut
                                          Turn 10   Turn 20   Cut 30   40       Radius 50
Diameter “A”      6.22 mm    ±0.25 mm     X         C         L
Diameter “B”      3.25 mm    ±0.1 mm                X         C        L
Shoulder “C”      12.2 mm    ±0.5 mm                          X        C        L
Radius “D”        0.5 mm     ±0.05 mm                                           X

X = Characteristic created by this operation
C = Characteristic used for clamp-down in this operation
L = Characteristic used as locating datum in this operation

For this example, the three left-hand columns establish important functional
information. The product characteristic is essentially the “measurable noun” (an
occasional adjective is acceptable in a functive if there are several identical nouns,
such as diameter in this case). The extent is shown in the target value and
tolerance columns, and the “active verbs” can be constructed or deduced from the
code letters inserted in the matrix cells in the “Process Operations” columns.
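As a sketch of that deduction (the verb mappings and function names here are our own illustrative assumptions, not part of the method as stated), each code letter in a matrix cell can be translated into a functive:

```python
# Hypothetical translation of characteristic-matrix code letters into
# functives (active verb + measurable noun). "X" takes a creation verb
# that depends on the operation; "C" and "L" map to clamping and locating.
CODE_VERBS = {"C": "clamp", "L": "locate"}
CREATE_VERBS = {"Lathe Turn": "turn", "Face Cut": "cut", "Cut Radius": "cut"}

def functive_for(code, operation, characteristic):
    """Build an active verb + measurable noun string from one matrix cell."""
    if code == "X":  # characteristic is created by this operation
        verb = CREATE_VERBS.get(operation, "create")
    else:
        verb = CODE_VERBS[code]
    return f"{verb} {characteristic}"

print(functive_for("X", "Lathe Turn", 'diameter "A"'))  # turn diameter "A"
print(functive_for("C", "Lathe Turn", 'diameter "A"'))  # clamp diameter "A"
```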
In any event, whether you are able to determine functions directly from a process
flow chart or whether you find the use of characteristic matrices important, you need
to end with a comprehensive listing of function. The important aspect of process
function is to use a flow charting technique of some type to assist in reaching the
comprehensive assessment of function that is similar to the point-by-point listing
that can be achieved by the product flow diagram technique.
USING FUNCTION CONCEPTS WITH PRODUCTIVITY AND QUALITY METHODOLOGIES
Earlier, we suggested that function concepts form a powerful fundamental basis for
three major productivity and quality methodologies:
• Quality Function Deployment (QFD)
• Failure Modes and Effects Analysis (FMEA)
• Value Analysis (VA)
While we do not intend to explain these techniques fully in this context (however,
they will be explained later), we would like to address the usefulness of function
concepts in these methodologies. In these discussions, we are assuming that you
have a passing or even detailed familiarity with these tools. If not, you may wish
to pass over to the discussion of QFD later in this chapter or to Chapters 6 and 12
for lengthy discussions of FMEA and VA.
For Quality Function Deployment, the most challenging issue is the one that we
have just explored: how can one determine the functions that must be analyzed for
deployment? In other forms, this is the same question facing practitioners in FMEA
and VA. Clearly, the product flow diagram provides several instrumental techniques
for improving these activities.
A major difficulty in QFD is the often overwhelming complexity of the “House
of Quality” approach. Constructing the first house, using conventional QFD techniques, is often the start of the complexity. Many different customer “wants” are
listed. This is occasionally done as a “pre-planning” matrix. Moreover, the linguistic
construction for these “wants” is undisciplined and subjective.
Similarly, in FMEA, the initial list of failure modes is difficult to obtain. In VA,
determining the “baseline” value assessment can also be difficult.*
The techniques for developing a function diagram, especially the informal suggestions about “sizing” a project, can be very helpful in this regard. QFD, like FMEA
and VA, typically fails to deliver the results expected because the project selected
is too complex. A QFD study on a car or truck, for example, could easily contain
hundreds of thousands of pages of information. That is not to say that the information
in this study would not be valuable or that it should not be done; the issue is how
complexity of this type should be dealt with.
If you start with a systemwide view and construct a function diagram of the
limited size previously discussed (20–30 functions maximum, even fewer are better),
then this will provide a first level in a “hierarchy” of function diagrams. Subsequent
analysis of various subsystems, then components and parts, and finally processes
will complete the analysis. While the end result (for a car) would conceivably be of
the same magnitude, the belief that all of the work must be done within the same
team or by the same organization would be quickly abandoned. Moreover, the
knowledge and understanding that is developed is generated at the hierarchical level
(in the supply chain) of greatest importance, utility, and impact.
Moreover, using the “functive” combination of active verbs and measurable
nouns will assist in making QFD a useful tool. The vague, imprecise, or even
confusing descriptions of function that are often used in QFD contribute to the
difficulty in usage.
A vehicle planning team may carry out a QFD study on the overall vehicle,
assessing the major issues regarding the vehicle; these could include size, styling
motifs, performance themes, and target markets. Subsequently, a study of the powertrain (engine, transmission, and axles) could be completed by another team. The
engine itself could then be divided into major components: block, pistons, electronic
controls, and so forth. Ultimately, suppliers of major and minor components alike
would be asked to carry out QFD studies on each element. The multiplicity of
information is still present, but it is no longer generated in some centralized form.
This means that accessibility, usefulness, and the likelihood of beneficial deployment
of the findings are much greater.**
* In Value Analysis, the Function Analysis System Technique or “FAST,” a close cousin of the function
diagram, is typically used to establish the initial functional baseline for value calculations.
** If the reader sees an “echo” of the hierarchy of flow charts, this is not coincidental.
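The hierarchy of function diagrams described above can be sketched as a small data structure with a size check. The subsystem names and function lists below are hypothetical illustrations; only the 20–30 function limit comes from the text.

```python
# Sketch: a hierarchy of function diagrams, one per level of the supply
# chain, each kept to the limited size suggested in the text (20-30
# functions maximum). Names here are hypothetical illustrations.

MAX_FUNCTIONS = 30  # upper bound suggested in the text; fewer is better

def check_diagram_sizes(node, path=""):
    """Recursively list any diagram in the hierarchy that is oversized."""
    oversized = []
    name = f"{path}/{node['name']}" if path else node["name"]
    if len(node.get("functions", [])) > MAX_FUNCTIONS:
        oversized.append(name)
    for child in node.get("subsystems", []):
        oversized.extend(check_diagram_sizes(child, name))
    return oversized

vehicle = {
    "name": "vehicle",
    "functions": ["transport occupants", "protect occupants"],
    "subsystems": [
        {"name": "powertrain",
         "functions": ["convert fuel", "transmit torque"],
         "subsystems": [
             {"name": "engine",
              "functions": ["convert fuel", "control emissions"]},
         ]},
    ],
}

print(check_diagram_sizes(vehicle))  # [] -- every diagram is within bounds
```

Each level of the hierarchy stays small enough for one team to own, which is the point the text makes about deploying the work across the supply chain.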
SL3151Ch02Frame Page 66 Thursday, September 12, 2002 6:13 PM
66
Six Sigma and Beyond
As an added benefit, starting QFD using this approach provides benefits in the
completion of FMEA and VA studies, since a consistent set of functions will be used
as a basis for each technique. We will next consider each of these in turn. We will
start with FMEA, because the importance of function in this methodology is not
widely understood or appreciated.
In FMEA, determining all of the appropriate failure modes is usually a great
challenge. This obstacle is reflected in the widespread difficulty in understanding
what is a failure mode and what is an effect. For example, the effect “customer is
dissatisfied” is often found in FMEA studies. While this is likely to be true, it is an
effect of little or no worth in developing and improving products and processes.
Similarly, failure modes are often confused with effects. This can be illustrated
with another common product, a disposable razor. How can we determine a comprehensive list of failure modes? Simply start with an appropriate function diagram.
For each function, we need to consider how these functions can go astray. There are
a limited number of ways that this can occur, all related to function. If you consider
the completion of a function (at the desired extent) to be the absence of failure, then
pose these questions about each function in the function diagram:
• What would constitute an absence of function?
• What would occur if the function were incomplete?
• What would demonstrate a partial function?
• What would be observed if there was excess function?
• What would a decayed function consist of?
• What would happen if a function occurs too soon or too late (out of desired sequence)?
• Could there be an additional unwanted function?
Each of these conditions establishes a possible failure mode. For the disposable
razor, the task function is generally understood to be “cut hair” (not, of course, to
shave). The failure mode that is most obvious is an additional unwanted function,
namely “cut skin.” Notice that the mode of failure is not “feel pain” or “bleed;”
these are failure effects.
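The checklist of failure conditions above can be applied mechanically: cross every function in the diagram with every condition to produce candidate failure modes for team review. A minimal sketch follows; the razor's task function comes from the text, while the code itself is only an illustration.

```python
# Sketch: cross each function with the failure conditions from the text
# to produce candidate failure-mode prompts for team review.

FAILURE_CONDITIONS = [
    "absence of function",
    "incomplete function",
    "partial function",
    "excess function",
    "decayed function",
    "function out of desired sequence",
    "additional unwanted function",
]

def candidate_failure_modes(functions):
    """Return one prompt per (function, condition) pair."""
    return [f"{func}: {cond}"
            for func in functions
            for cond in FAILURE_CONDITIONS]

# For the disposable razor, the task function is "cut hair"; the mode
# "cut skin" would surface under "additional unwanted function".
prompts = candidate_failure_modes(["cut hair"])
print(len(prompts))  # 7 -- one prompt per condition
```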
To make use of these ideas in the context of the function diagram, we must next
define “terminus” functions. Terminus functions are simply those functions at the
right hand (or “how”) end of any function chain in the function diagram. In the
mechanical pencil example, two terminus functions would be “position eraser” and
“locate eraser.” Why do you position and locate the eraser? To hold the eraser. Why
do you hold the eraser? To erase marks. Why do you erase marks? To ensure
correctness. Since this chain is one of enhancing functions, we do not directly modify
the task function.
Start your analysis of failure modes by testing each of the possible conditions
listed above against the terminus functions. After you have completed the terminus
functions, move one step in the “why” direction. However, as you move to the left,
you will find that you frequently discover the same modes for the other functions.
Since the function chain shows the interrelated nature of the functions, this should
not be surprising. As a rule, you will get most (if not all) of the relevant failure
modes from the terminus functions.* So, starting with the terminus functions will
speed your work and reduce redundancy.
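The terminus-first procedure can be sketched as a walk along a function chain. The mechanical-pencil chain follows the text; the abbreviated condition labels and the de-duplication step are illustrative assumptions.

```python
# Sketch: analyze failure conditions terminus-first along a function
# chain. The chain is ordered "why" (left) to "how" (right), so the
# terminus function sits at the right-hand end.

CONDITIONS = ["absent", "incomplete", "partial", "excess",
              "decayed", "out of sequence", "unwanted extra"]

def terminus_first_modes(chain):
    """Iterate from the terminus (right) end toward the 'why' end."""
    seen, ordered = set(), []
    for func in reversed(chain):          # start at the terminus function
        for cond in CONDITIONS:
            mode = (func, cond)
            if mode not in seen:          # skip (function, condition) pairs already captured
                seen.add(mode)
                ordered.append(mode)
    return ordered

# mechanical-pencil chain from the text, ordered why -> how
chain = ["ensure correctness", "erase marks", "hold eraser", "position eraser"]
modes = terminus_first_modes(chain)
print(modes[0])  # ('position eraser', 'absent') -- terminus comes first
```

In practice, as the text notes, most of the relevant modes are already in hand after the terminus functions, so the leftward steps go quickly.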
By working through each function chain in the function diagram, a comprehensive list of failure modes can be developed. This listing of failure modes then alters
the approach to FMEA substantially; modes are clear, and cause-effect relationships
are easier to understand. Moreover, developing FMEA studies using function diagrams that were originally constructed as part of the QFD discipline assures that
product development activities continue to reflect the initial assumptions incorporated in the conceptual planning phase of the development process.**
Once you have identified failure modes in association with functions, the remainder of the FMEA study — though still involved — is rather mechanical. For each
failure mode, you must examine the likely effects that will result from this mode.
With a clear mode statement, this is much simpler, and you are much less likely to
confuse mode and effect issues. The effects can then be rated for severity using an
appropriate table. With the effects in hand, causes can next be established and the
occurrence rating estimated. Notice that this sequence of events makes the confusion
of cause and effect much more difficult; in many cases, the logical improbability of
reversal of cause and effect statements is so obvious that you simply cannot reverse
these two issues.
Finally, you can conclude the fundamental analysis with an evaluation of controls
and detection. Once again, starting with a statement of function makes this clearer
and less subject to ambiguity. Understanding the progression from function to mode
to cause to effect sets the stage. What is it that you expect to detect? Is it a mode?
In practice, detecting modes is extremely unlikely. You are more likely to detect
effects. However, are effects what you want to detect? Once an effect is seen the
failure has already occurred, and costs associated with the failure must already be
absorbed.
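Conventional FMEA practice, assumed here rather than spelled out in the text, combines the three ratings into a Risk Priority Number (RPN = severity × occurrence × detection, each on a 1–10 scale). The razor ratings below are hypothetical.

```python
# Sketch: conventional FMEA risk prioritization. Severity, occurrence,
# and detection are each rated 1-10; their product is the Risk Priority
# Number (RPN). All ratings below are hypothetical.

def rpn(severity, occurrence, detection):
    """Risk Priority Number for one failure-mode/cause line."""
    for rating in (severity, occurrence, detection):
        assert 1 <= rating <= 10, "ratings are on a 1-10 scale"
    return severity * occurrence * detection

# failure mode "cut skin": the effect is rated for severity, each cause
# for occurrence, and the current controls for detection
design_cause = rpn(severity=7, occurrence=3, detection=4)   # wrong blade angle in design
process_cause = rpn(severity=7, occurrence=5, detection=6)  # wrong angle in assembly

print(design_cause, process_cause)  # 84 210 -- address the process cause first
```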
Let us return to the disposable razor to understand this. If the failure mode is
“cut skin,” we must recognize that detecting “cut skin” is extremely difficult. You
are much more likely to detect an effect — namely, pain or bleeding. Now, we
recognize that we really do not want to detect failures at this point. Instead, we need
to ask what are the possible causes of this failure mode. In this simple example, two
different causes are readily apparent. From a design standpoint, the blades of the
razor could be designed at the wrong angle to the shaver head. Even if the manufacturing were 100% accurate, a design that sets the blade angle incorrectly would
have terrible consequences. On the other hand, the design could be correct; the blade
angle could be specified at the optimum angle, but it could be assembled at an
incorrect angle. Detection would best be aimed at testing the design angle*** and
* This is even more true for a system FMEA than for a design FMEA study.
** Of course, any change that is made in concept during development activities requires a continuous
updating of the function diagrams under consideration.
*** In the ISO and QS-9000 systems, we can think of this in terms of design verification.
at controlling the manufacturing process so that the optimum design angle would
be repeatable (within limits) in production.*
Finally, the Value Analysis process can also make use of the function diagrams
that serve in the QFD and FMEA processes. In VA, the essence of the technique is
the association of cost with function. Once this is accomplished, the method of
functional realization can be considered in a variety of “what if” conditions. If there
is a comprehensive statement of function, VA teams can be reasonably sure that
ongoing value assessments, based on the ratio of function to cost, have a consistent
and rational foundation. Moreover, the teams have a much higher confidence that
these “what if” questions take customer issues into proper account.
Too often, VA activities are carried out as if function is well understood and
only cost matters. In too many cases, no function analysis is even performed. Despite
the long-standing cautions against this, this alluring shortcut is often taken to save
time, money, or both. The shortcomings of skipping function analysis in VA are not
trivial. More disappointing results in usage of the VA methodology have probably
been obtained because function was not fully and comprehensively understood.
At a very fundamental level, how can a value ratio analysis be performed without
a full statement of function? This is like calculating a return on investment without
knowing the investment. Moreover, the analysis of value ratio can be misleading if the
function issue is not well defined. It is easy to reduce cost. You simply eliminate features
and functions from a product. Soon, you will not even be able to accomplish the task
function. (In practice, “functionless” VA studies typically eliminate important enhancing
functions that make a critical difference in the marketplace, and customers consequently
pronounce unfavorable judgments on “decontented” products. VA then gets the blame.)
Since value studies typically occur subsequent to QFD and FMEA in product
development activities, the difficulty of understanding function is eliminated if
function is fully defined and even specified during these earlier activities. By using
function as the basis for product and manufacturing activities, a degree of focus and
understanding of customer wants and needs is preserved not only during VA activities
but throughout the product life cycle.
KANO MODEL
The preferred tool for understanding the “function” is the Kano
model. A typical framework of the model is shown in Figure 2.5. The Kano model
identifies three aspects of quality, each having a different effect on customer satisfaction. They are:
1. Basic quality — taken for granted; the customer assumes it exists
2. Performance quality — the “more is better” principle
3. Excitement quality — the “wow” factor
* This is the issue of “process control” in the ISO and QS-9000 systems — in QS-9000, it goes to the
heart of the control plan itself. Also, this is a simplified example. In more detail, the failure mode of
“cut skin” can even occur when the blade angle is correct both in design and execution. A deeper
examination of these issues quickly leads to the consideration of “robustness” in the design itself.
FIGURE 2.5 Kano model framework. (Y axis: customer satisfaction; X axis: product functionality.)
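The three quality aspects can be stylized as satisfaction curves over product functionality. The exponential shapes below are a common textbook stylization, not formulas from the text.

```python
# Sketch: stylized Kano curves -- satisfaction as a function of product
# functionality x (negative = absent/poor, positive = fully delivered).
# The exponential forms are a common stylization, chosen here for
# illustration only.
import math

def basic(x):
    """Steep dissatisfaction when missing; approaches zero, never positive."""
    return -math.exp(-2.0 * x)

def performance(x):
    """Linear: more functionality, more satisfaction (and vice versa)."""
    return x

def excitement(x):
    """Zero when absent (no penalty); rises steeply when delivered."""
    return math.exp(2.0 * x) - math.exp(-2.0)

for x in (-1.0, 0.0, 1.0):
    print(f"x={x:+.0f}  basic={basic(x):+6.2f}  "
          f"performance={performance(x):+6.2f}  excitement={excitement(x):+6.2f}")
# basic quality can only hurt, excitement quality can only help, and
# performance quality works in both directions
```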
The more we find out about these three aspects from the customer, the more
successful we are going to be in our DFSS venture. (Caution: It is imperative to
understand that the customer talks in everyday language, and that this language may
or may not be acceptable from a design perspective. It is the engineer’s responsibility
to translate the language data into a form that may prove worthwhile in requirements
as well as verification. A good source for more detailed information is the 1993
book by Shoji.)
BASIC QUALITY
“Basic” quality refers to items that the customer is dissatisfied with when the product
performs poorly but is not more satisfied with when the product performs well.
Fixing these items will not raise satisfaction beyond a minimum point. These items
may be identified in the Kano model as in Figure 2.6.
Some sources for the basic quality characteristics are: things gone right, things
gone wrong, surrogate data, surveys, warranty, and market research.
PERFORMANCE QUALITY
“Performance” quality refers to items for which more is better: the better the
product performs, the more satisfied the customer, and the worse the product
performs, the less satisfied the customer. Attributes that can
be classified as linear satisfiers fall into this category. A typical depiction is shown
in Figure 2.7.
Some sources for performance quality characteristics are: internal satisfaction analysis, customer interviews, corporate targets/goals, competition, and benchmarking.
EXCITEMENT QUALITY
“Excitement” quality refers to items that the customer is more satisfied with when
the product is more functional but is not less satisfied with when it is not. This is
the area where the customer can be really surprised and delighted. A typical depiction
of these attributes is shown in Figure 2.8.
Some sources for excitement quality characteristics are: customer insight, technology, and interview comments such as a high percentage of “better than expected” responses.
FIGURE 2.6 Basic quality depicted in the Kano model. (Examples plotted: brakes, horn, windshield wipers.)
FIGURE 2.7 Performance quality depicted in the Kano model. (Examples plotted: quiet gear shift, wind noise, power, fuel economy.)
FIGURE 2.8 Excitement quality depicted in the Kano model. (Examples plotted: style, ride, features.)
Items that are identified as surprise/delight candidates are very fickle in the sense
that they may change without warning. Indeed, they become expectations. The
engineer must be very cautious here because items that are identified as excitement
items now may not predict excitement at some future date. In fact, we already know
that over time the surprise/delight items become performance items, the performance items become basic, and the basic items become inherent attributes of the
product. A classic example is the brakes of an automobile. The traditional brakes
were the default item. However, when disc brakes came in as a new technology,
they were indeed the excitement item of the hour. They were replaced, however,
with the ABS brake system, and now even this is about to be displaced by the
electronic brake system. This evolution may be seen in the Kano model in Figure 2.9.
FIGURE 2.9 Excitement quality depicted over time in the Kano model. (Surprise/delight items drift toward performance, and performance toward basic, over time.)
Developing these “surprise and delight” items requires activities that gain
insight into the customers’ emotions and motivations. It requires an investment of
time to talk with and observe customers in their own setting, using the
potential product. Above all, it requires the ability to read the customer’s
latent needs and unspoken requirements.
Is there a way to sustain the delight of the customer? We believe that there is.
Once the attributes have been identified, a robust design must be initiated with two
objectives in mind.
1. Minimize the degradation of these items.
2. Preserve the basic quality beyond expectations.
These two steps will create an outstanding reliability and durability reputation.
QUALITY FUNCTION DEPLOYMENT (QFD)
Now that we have finished the Kano analysis, and we know pretty much what the
customer sees as functional and value added items, we are ready to organize all
these attributes and then prioritize them. The methodology used is that of QFD.
QFD is a planning tool that incorporates the voice of the customer into features that
satisfy the customer. It does this by portraying the relationships between product or
process whats and hows in a matrix form. The matrix form in its entirety is called
the House of Quality — see Figure 2.10.
One reason QFD is used is that it allows us to organize the Ys, ys, and xs
into a workable framework of understanding. QFD does not generate the Ys, ys,
or xs. Ultimately, however, QFD will help in identifying the transfer
function in the form Y = f(x, n).
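Standard House of Quality practice, assumed here rather than detailed in the text, prioritizes technical metrics by summing customer-want importance weighted by relationship strength (commonly on a 9/3/1 scale). The wants and metrics below are hypothetical illustrations.

```python
# Sketch: relationship-matrix prioritization. Each technical metric's
# importance is the sum over customer wants of (want importance x
# relationship strength). The 9/3/1 weighting is a common convention;
# all wants, metrics, and ratings here are hypothetical.

wants = {"quiet ride": 5, "good fuel economy": 4, "easy gear shift": 3}

# relationship strengths: 9 = strong, 3 = moderate, 1 = weak, 0 = none
relationships = {
    "cabin noise (dB)":    {"quiet ride": 9, "good fuel economy": 0, "easy gear shift": 1},
    "fuel use (L/100 km)": {"quiet ride": 0, "good fuel economy": 9, "easy gear shift": 0},
    "shift effort (N)":    {"quiet ride": 1, "good fuel economy": 0, "easy gear shift": 9},
}

def technical_importance(wants, relationships):
    """Weighted column totals along the bottom of the House of Quality."""
    return {metric: sum(wants[w] * strength for w, strength in row.items())
            for metric, row in relationships.items()}

scores = technical_importance(wants, relationships)
for metric, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(metric, score)
```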
QFD was developed in Japan, with the intent to achieve competitive advantage
in quality, cost, and timing. To understand this need, one must comprehend what
quality control is all about from Japan’s point of view. Japan’s industrial standards
define Quality Control (QC) as a system of means to economically produce goods
and/or services that satisfy customer requirements. It is this definition of QC that
FIGURE 2.10 A typical House of Quality matrix. (Regions: the correlation matrix “roof”; the HOWs across the top; the WHATs and their importance down the left; the central relationship matrix; customer competitive assessment and importance at the right; and technical difficulty, how much, competitive assessment, important control items, and importance along the bottom.)
propelled the Japanese to seek not just any tool but a planning tool that implements
business objectives, the right application of which is product development. QFD
is defined as a systematic approach for translating customer wants/requirements
into company-wide requirements. This translation takes place at each stage,
from research and development to engineering and manufacturing to marketing,
sales, and distribution. The QFD system concept is based on four key documents:
1. Overall customer requirement planning matrix. This document provides
a way of turning general customer requirements into specified final product control characteristics.
2. Final product characteristic deployment matrix. This document translates
the output of the planning matrix into critical component characteristics.
3. Process plan and quality control charts. These documents identify critical
product and process parameters as well as benchmarks for each of those
parameters.
4. Operating instructions. These documents identify operations to be performed by plant personnel to assure that the important parameters are
achieved.
TERMS ASSOCIATED WITH QFD
There are six key terms associated with QFD:
Quality function deployment — An overall concept that provides a means
of translating customer requirements into the appropriate technical requirements for each stage of product development and production (i.e., marketing
strategies, planning, product design and engineering, prototype evaluation,
production process development, production, sales). This concept is further
broken down into “product quality deployment” and “deployment of the
quality function” (described below).
Voice of the customer — The customers’ requirements expressed in their own
terms.
Counterpart characteristics — An expression of the voice of the customer
in technical language that specifies customer-required quality; counterpart
characteristics are critical final product control characteristics.
Product quality deployment — Activities needed to translate the voice of
the customer into counterpart characteristics.
Deployment of the quality function — Activities needed to ensure that customer-required quality is achieved; the assignment of specific quality
responsibilities to specific departments. (The phrase “quality function” does
not refer to the quality department, but rather to any activity needed to ensure
that quality is achieved, no matter which department performs the activity.)
Quality tables — A series of matrices used to translate the voice of the
customer into final product control characteristics.
BENEFITS OF QFD
QFD certainly appears to be a sensible approach to defining and executing the myriad
of details embodied in the product development process, but it also appears to be a
great deal of extra work. What is it really worth? Setting the logical arguments aside,
there are a number of demonstrated benefits resulting from the use of QFD:
• Demonstrated results
• Preservation of knowledge
• Fewer startup problems
• Lower startup cost
• Shorter lead time
• Warranty reduction
• Customer satisfaction
• Marketing advantage
Preservation of knowledge — The QFD charts form a repository of knowledge,
which may (and should) be used in future design efforts. For example: Toyota is
convinced that the QFD process will make good engineers into excellent engineers.
An American engineering expert once commented, “There isn’t anything in the QFD
chart I don’t already know.” Upon reflection, he realized that few other engineers
knew everything on that chart. The QFD charts can be a knowledge base from which
to train engineers.
Fewer startup problems/lower startup cost — Toyota and other Japanese
automobile manufacturers have found that the use of QFD more effectively “front
loads” the engineering effort. This has substantially reduced the number of costly
engineering changes by markedly reducing problems at startup. QFD’s disciplined
approach has helped to identify potential problems early in design and to avoid
oversights.
Shorter lead time — Toyota has reduced its product development cycle to less
than 24 months.
Warranty reduction — The corrosion problems with Japanese cars of the 1960s
and 1970s led to enormous warranty expenses, significantly impacting profitability.
The Toyota rust QFD study resulted in virtually eliminating corrosion and the
resulting warranty expense.
Customer satisfaction — The Japanese automobile manufacturers tend to focus
on products that satisfy customers (as opposed to eliminating problems). The QFD
approach has greatly facilitated the satisfying of customer wants. Domestic customer
satisfaction surveys show that Japanese products have consistently scored higher
than many American products.
Marketing advantage — A Japanese manufacturer of earth moving equipment
introduced a series of five new models that offered substantial advantages over their
Caterpillar corporation counterparts, resulting in redistribution of market share.
QFD brings several benefits to companies willing to undertake the study and
training required to put the system in place. Some of these benefits as they relate to
marketing advantage are:
• Product objectives based on customers’ requirements are not misinterpreted at subsequent stages.
• Particular marketing strategies’ “sales points” do not become lost or
blurred during the translation process from marketing through planning
and on to execution.
• Important production control points are not overlooked. Everything necessary to achieve the desired outcome is understood and in place.
• Tremendous efficiency is achieved because misinterpretation of program
objectives, marketing strategy, and critical control points is minimized.
See Figure 2.2.
All of the above translate into significant marketing advantages, that is, speedy
introduction of products that satisfy customers without problems.
In addition to all the benefits already mentioned, Table 2.2 shows some of the
benefits from the total development process perspective, which is a synergistic result
starting with QFD.
TABLE 2.2
Benefits of Improved Total Development Process

Cash Drain | Old Process | Improved Process
Technology push, but where’s the pull? | Concepts with no needs, needs with no concept | Technology strategy and technology transfer bring right technology to the product
Disregard for voice of the customer | The voice of the engineer and other corporate specialists is emphasized | House of Quality and all steps of QFD deploy the voice of the customer throughout the process
Eureka concept | Mad dash with singular concept, usually vulnerable | Pugh process converges on consensus and commitment to invulnerable concept
Pretend designs | Initial design is not production intent and emphasizes newness rather than superior design | Two-step design and design competitive benchmarking lead to superior design
Pampered product | Make it look good for demonstration | Taguchi optimization positions product as far as possible away from potential problems
Hardware swamps | Large number of highly overlapped prototype iterations leaves little time for improvement | Only four iterations, each planned to make maximum contribution to optimization
Here is the product; where is the factory? | Product is developed, then factory reacts to it | One total development process, product, and production capability
We have always made it this way | Old process parameters used repetitiously without design improvement | Taguchi process parameter design improves quality, reduces cycle times
Inspection | Inspection creates scrap, rework, adjustments, and field quality loss | Taguchi’s optimal checking and adjusting minimizes costs of inspection
Give me my targets, let me do my thing | Lack of teamwork | Teamwork and competitive benchmarking beat contracts, and targets lead the process, do not manage problems

ISSUES WITH TRADITIONAL QFD
The use of traditional QFD raises several issues for business people, including the
following:
1. Change is uncomfortable.
Counterpoint: There is an old saying, “If we do what we have done, we
will get what we have.” To truly improve, we must explore new patterns of logical thinking and let go of outdated ways. We must be willing to change.
2. Success is not realized until the product is released.
Counterpoint: The truest measure of customer satisfaction comes after the
product or service is introduced. It is easy to lose sight of improvements
that do not materialize until years after the improvement effort. We
would be remiss not to seek ways to achieve the end goal of customer
satisfaction in our design and development process.
3. QFD is a long process.
Counterpoint: QFD saves the team’s time and resources with new approaches and tools. Avoiding multiple redesigns and multiple prototype
levels in response to customer input recovers the time spent on QFD.
The upstream time saves multiples of downstream time.
4. It is not as much fun as “fire fighting.”
Counterpoint: Finding and fixing problems may be personally gratifying.
It is the stuff from which heroes/heroines are made. But emergencies
are not in the company’s best interest and certainly not in the customer’s interest. Management must provide a system that rewards problem
prevention as well as problem solving.
5. The relation to the traditional product development process is not understood.
Counterpoint: QFD replaces some traditional product design and development events, i.e., target setting and functional assumptions, and thereby
does not add time.
6. It is difficult to accept customer input when the “voice of the engineer”
contradicts.
Counterpoint: Engineering has delivered about 80% customer satisfaction; getting to 90–95% is a tough challenge requiring enhancements to
current methods for achieving quality.
PROCESS OVERVIEW
The easiest way to think of QFD is as a process of linked spreadsheets
arranged along a horizontal (Customer) axis and an intersecting vertical
(Technical) axis. Important details include the following:
• From a macro perspective, the horizontal arrangement is referred to as
the Customer Axis because it organizes the Customer Wants.
• Customers are the people external to the organization who purchase,
operate, and service your products. Customers can also be internal, i.e.,
the end users of your work within the organization.
• The vertical arrangement is referred to as the Technical Axis because it
translates Customer Wants into technical metrics.
• The intersection of the axes (referred to as the Relationship Matrix)
identifies how well engineering metrics correlate to customer satisfaction.
• A closer look reveals that the interrelated matrices build upon one another
beginning with a validated list of Customer Wants.
DEVELOPING A “QFD” PROJECT PLAN
Perhaps one of the most important issues in QFD is the selection of appropriate
teams. Teams must share a common vision and mission to accomplish their objectives. Some of the reasons are:
• Building a project plan is the first critical team-building exercise
• The project plan has been standardized in QFD, so all teams follow a
basic strategy that includes the following steps:
• Develop Project Plan to include safety standards and any governmental
regulations, as well as timing.
• Review Project Plan with program management for buy-in.
• Complete the Customer Axis.
• Review Customer Axis interim report with program management.
• Complete Technical Axis.
• Develop corporate strategy.
• Develop final report.
• Develop Deployment Plan for integrating into business cycle.
• Communicate results to all programs and affected activities.
The Customer Axis
The steps necessary for completion of the customer axis include the following:
Determining Customer Wants
a. Obtain Customer Wants.
b. Select relevant Customer Wants — about 30% of total Wants.
c. Add applicable Wants.
d. Set up focus groups, interviews, surveys, etc.
e. Refine Customer Wants list.
f. Enter Customer Wants into QFD net.
g. Give Customer Wants to strategic standardization organization (SSO).
Obtaining customer competitive evaluations
a. Submit Customer Wants to market research (team).
b. Develop mail-out questionnaire and/or clinic (market research).
c. Send mail-out questionnaire and/or conduct clinic (market research).
d. Report results to project team (market research).
e. Enter customer competitive evaluation data into the internal team base.
Setting customer targets
a. Identify Customer Want (team).
b. Review its Customer Desirability Index (CDI) rating and rank (team).
c. Identify baseline product (team).
d. Review customer competitive evaluations (team).
e. Identify corporate strategy (team).
Calculate image ratio for each Customer Want: customer target/baseline product.
Calculate strategic CDI for each Customer Want: CDI × image ratio ×
sales point.
f. Enter corporate strategy into customer targets matrix (team).
g. Set customer targets — either opportunity to copy or sales point.
h. If opportunity to copy, enter symbol into customer targets matrix.
i. If sales point, enter values into customer targets matrix (team).
j. End.
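The two calculations named in the steps above, the image ratio (customer target / baseline product) and the strategic CDI (CDI × image ratio × sales point), can be sketched directly. All ratings below are hypothetical.

```python
# Sketch of the two customer-target calculations from the steps above.
# All ratings are hypothetical illustrations.

def image_ratio(customer_target, baseline):
    """Image ratio = customer target / baseline product rating."""
    return customer_target / baseline

def strategic_cdi(cdi, image_ratio_value, sales_point):
    """Strategic CDI = CDI x image ratio x sales point."""
    return cdi * image_ratio_value * sales_point

# hypothetical Customer Want: rated 8.0 on the baseline product, with a
# target of 9.2, a CDI of 4.5, and a sales point of 1.2
ratio = image_ratio(customer_target=9.2, baseline=8.0)
score = strategic_cdi(cdi=4.5, image_ratio_value=ratio, sales_point=1.2)
print(round(ratio, 2), round(score, 2))  # 1.15 6.21
```

Ranking the Customer Wants by strategic CDI gives the final rank-ordered strategic index listed among the deliverables below.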
Determining Technical System Expectations (TSE)
a. Review and adapt TSE template (team).
b. Review past and current projects for additional TSEs (team).
c. Identify and define new TSEs (team).
d. Organize adapted list of TSEs (team).
e. Enter TSEs into internal base (team).
Determining relationships
a. Review the relationship (team).
b. Confirm/establish relationships (team and subject matter experts
[SMEs]).
c. Seek team consensus (team).
d. Collect data and/or conduct experiments (team and SMEs) to find out
whether disagreements exist.
e. Check that each Want is satisfied by at least one TSE (team and SMEs).
f. Enter into internal base.
Technical competitive benchmarking
• Buy, rent, lease or borrow competitive products (team).
• Select TSEs to be benchmarked (team).
• Establish inventory of benchmarking tests and data (team and SMEs).
• Identify additional benchmarking tests required (team and SMEs).
• Develop new tests (team and SMEs).
• Conduct benchmark tests (team and SMEs).
• Enter data into QFDNET (team).
• Establish customer/engineer correlations (team and SMEs).
Setting technical targets
a. Develop technical targets (team and SMEs).
b. Review existing program targets for existing TSEs (team).
c. Recommend technical targets to program office (team and SMEs).
d. Reconcile program targets and technical targets for existing TSEs (program office).
e. Enter technical targets into QFDNET (team).
The steps listed above will result in the following QFD deliverables for the Customer
Axis:
• Validated list of Customer Wants for the product, system, subsystem, or
component
• Customer Wants prioritized to focus engineering attention
a. Customer Desirability Index of the most to least desirable Customer
Wants
b. Customer satisfaction targets for all Customer Wants, expressed as a
percent over/under satisfaction of base product, system, subsystem, or
component
c. A final rank ordered strategic index of Customer Wants based on
corporate strategies and competitive opportunities
Technical Axis
On the Technical Axis, the following items will need to be produced:
• Rank ordered list of key Technical System Expectations that, when correctly
targeted, will satisfy Customer Wants at a strategically competitive level
• Target values for key TSEs derived from technical competitive benchmarking
that correlate with customers’ competitive evaluations. These target values
aid program management in two ways:
a. By driving the product and engineering program toward integrated
business and technical propositions that program management can
approve
b. By helping to manage the program team’s performance at program
completion
Internal Standards and Tests
• New or modified tests or other verification methods that ensure that basic
and product performance wants are achieved
• Institutionalizing revised tests and standards into real world usage — customer
dependent, of course — in customer requirements, corporate engineering test
procedures, and other documents, both generic and program specific, that
support the organization’s design verification system
THE QFD APPROACH
The first concern of QFD is the customer. Therefore, in planning a new product we
start with customer requirements, defined through market research. Generally, we
call this the product development process, and it includes the program planning,
conceptualization, optimization, development, prototyping, testing, and manufacturing functions.
One can see that this development process is indeed very complex. Quite often,
it cannot be performed by one individual, because it involves several trade-offs,
such as:
• Shared responsibilities
• Interpretations
• Priorities
• Technical knowledge
• Long time experience
• Resource changes
• Communication
• Lots of work
It is precisely this complexity that all too often causes the product development
process to create a product that fails to meet the customer requirements. For example:
Customer requirement →
Design requirements →
Part characteristics →
Manufacturing operations →
Production requirements
Note: It is of paramount importance that the communication process within an
organization does not fall victim to the use of jargon.
QFD METHODOLOGY
QFD is accomplished through a series of charts that appear to be very complex.
They do, however, contain a great deal of information, and that information is both
an asset and a liability.
All the charts are interconnected in what is called the House of Quality — so
named because of the roof-like structure at its top. Since this house is made up of
distinct parts or “rooms,” let us find the function of each part, so that we can
comprehend what QFD is all about — see Figure 2.10.
QFD begins with a list of objectives or the “what” that we want to accomplish —
see Figure 2.11. This is usually the voice of the customer and as such is very general,
vague, and difficult to implement directly. It is given to us in raw form, that is, in
the everyday language of the customer. (Example: “I don’t want a leaky window
when it rains.”)
For each what, we refine the list into the next level of detail by listing one or
more “hows.” The hows are an engineering task. Figure 2.11 shows the relationship
between the what and the how. Figure 2.12 shows that it is possible to have an
iterative process between the what and the how, with a possible refinement of the
“old how” into the “new what,” ultimately generating a very good “new how.”
Even though this step shows greater detail than the original what list, it is by
itself often not directly actionable and requires further definition. This is
accomplished by further refinement until every item on the list is actionable. This
level is important because there is no way of ensuring successful realization of a
requirement that no one knows how to accomplish. (Note: Remember that a
refinement within the how list may affect more than one how or what, and the
items can in fact adversely affect one another. That is why the arrows in Figure 2.11
are going in multiple directions.)
To reduce possible confusion, we represent the what and how in matrix form.
The enclosed matrix holds the relationships, which are shown at the intersections
of the whats and the hows. Some common symbols are:
□ Medium relationship
△ Weak relationship
• Very strong relationship
[Figure: a “what” list mapped to a “how” list.]
FIGURE 2.11 The initial “what” of the customer.
[Figure: a “what” list refined into a “how/what” list and then into a “how” list.]
FIGURE 2.12 The iterative process of “what” to “how.”
The method of using symbols allows very complex relationships to be shown,
and the interpretation is easy and is not dependent on experience. There are many
variations of this, and readers are encouraged to use what is comfortable for them.
Figure 2.13 presents a typical matrix.
Once the what, how, and relationships have been identified, the next step is to
establish a “how much” for each how — see Figure 2.14. The intent here is to provide
specific objectives that guide the subsequent design and provide a means of objectivity to the process. The result is minimum interference from opinion. (Note: This
how much is another cross check on our thinking process. It forces us to think in a
very detailed, measurable fashion.)
To summarize:
The what identifies the customer’s requirements in everyday language.
The how refines the customer’s requirements (from an engineering perspective).
The relationship defines the relationship between what and how via a symbolic
language.
The how much provides an objective means of assessing that requirements
have been met and provides targets for further detail development. Pictorially, the flow is shown in Figure 2.14.
[Figure: a relationship matrix. The whats, with customer importance ratings of 4,
5, 1, 3, and 2, are related to five hows; the computed importance ratings of the
hows are 42, 21, 33, 28, and 24, with a “how much” row beneath. Symbol weights:
□ = 3, • = 9, △ = 1. Therefore: (4 × 9) + (2 × 3) = 42, and so on. Make sure that
the ratings differentiate to the point of discrimination between each other. Remember,
you are interested in great differentiation rather than a simple priority.]
FIGURE 2.13 The relationship matrix.
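The weighted-sum calculation behind Figure 2.13 can be sketched in Python. The relationship matrix below is a hypothetical reconstruction chosen to reproduce the importance ratings 42, 21, 33, 28, and 24; the actual matrix entries are not given in the text:

```python
# Weighted-sum importance ratings for the "hows" (Figure 2.13 method).
# Symbol weights: very strong = 9, medium = 3, weak = 1, blank = 0.

what_importance = [4, 5, 1, 3, 2]   # customer importance of each "what"

# Hypothetical relationship matrix: rows = whats, columns = hows.
relationships = [
    [9, 3, 0, 0, 3],
    [0, 0, 3, 0, 0],
    [0, 9, 0, 1, 3],
    [0, 0, 0, 9, 3],
    [3, 0, 9, 0, 0],
]

# Importance of each how = sum over whats of (what importance x symbol weight).
how_importance = [
    sum(w * row[j] for w, row in zip(what_importance, relationships))
    for j in range(len(relationships[0]))
]
print(how_importance)   # [42, 21, 33, 28, 24]
```

The first entry reproduces the worked example in the figure: (4 × 9) + (2 × 3) = 42.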
[Figure: each “how” in the what/how matrix is assigned a “how much” target.]
FIGURE 2.14 The conversion of “how” to “how much.”
At this point, even though a lot of information is at hand, it is not unusual to
refine the hows even further until an actionable level of detail is achieved. This is
done by creating a new chart in which the hows of the previous chart become the
whats of the new chart. The “how much” information as a general rule is carried
along to the next chart to facilitate communication. This is done to ensure that the
objectives are not lost.
The process is repeated as necessary. In the product development process, this
means taking the customer requirements and defining design requirements that are
carried on to the next chart to establish the basis for the part characteristic. This is
continued to define the manufacturing operations and the production requirements —
see Figure 2.15. (Note: The greatest gains using QFD can be realized only when
[Figure: flow among the following elements: VOC (voice of the customer),
requirements analysis, functional spec, system design, design, methods/tools/
procedures, technical assessment, resource plan, and implementation plan.]
FIGURE 2.15 The flow of information in the process of developing the final “House of
Quality.”
taken down to the work detail level of production requirements. The QFD process
is well suited to simultaneous engineering in which product and process engineers
participate in a team effort.) For more information on the cascading process of the
QFD methodology, see the Appendix.
So far, we have talked about the basic charts in the House of Quality, and as a
result we have gained much information about the problem at hand. However, there
are several useful extensions to the basic QFD charts that enhance their usefulness.
These are used as required based on the content and purpose of each particular chart.
One such extension is the correlation matrix.
The correlation matrix — see Figure 2.10 — is a triangular table often attached
to the “hows.” The purpose of such placement is to establish the correlation between
each “how” item, i.e., to indicate the strength of the relationship and to describe the
direction of the relationship. To do that, symbols are used, most commonly:
○ Positive
◎ Strong positive
X Negative
# Strong negative
A second extension is the competitive assessment — see Figure 2.10. This is a
pair of graphs that shows item for item how competitive products compare with
current company products. Its strength is the fact that it can be done for the whats,
hows, and how muchs.
The competitive assessment may also be used to uncover gaps in engineering
judgment. What and how items that are strongly related should also exhibit a
relationship in the competitive assessment. For example, if we believe superior
dampening will result in an improved ride, the competitive assessment would be
expected to show that products with superior dampening also have a superior ride.
If this does not occur, it calls attention to the possibility that something significant
may have been overlooked. If not acted upon, we may achieve superior performance
to our “in house” tests and standards but fail to achieve expected results in the hands
of our customers.
Why are we doing this? Basically, for two reasons:
1. To establish the values of the objectives to be achieved
2. To uncover engineering judgment errors
Remember that the correlation must be related to real world usage from the
customer’s perspective. What and how items that are strongly related should also be
shown to relate to one another in the competitive assessment. If the correlation does
not agree, it may mean that we overlooked something very significant.
A third extension is the importance rating — see Figure 2.10. This is a mechanism
for prioritizing efforts and making trade-off decisions for each of the whats and
hows. It is important to keep in mind that the values by themselves have no direct
meaning; rather, their meaning surfaces only when they are interpreted by comparing
their magnitudes. (Some of the trade-offs may require high level decisions because
they cross engineering group, department, divisional, or company lines. Early
resolution of trade-offs is essential to shorten program timing and avoid
non-productive internal iterations while seeking a nonexistent solution.) The rating
itself may take the form of numerical tables or graphs that depict the relative
importance of each what or how to the desired end result. Any rating scale will
work, provided that the scale is a weighted one. A common method is to assign
weights to each relationship matrix symbol and sum the weights, just as we did in
Figure 2.13. Another more technical way is the following:
w′function i = Σj (wyj × rij)

wfunction i = 5(w′function i) / maxi(w′function i)

where w′function i = unnormalized function importance; wyj = importance rating;
rij = individual rating of functions; and wfunction i = weighted function importance.
Applying this methodology to Figure 2.13 yields Figure 2.16.
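As a minimal sketch of this normalization, using the importance ratings computed in Figure 2.13:

```python
# Normalize unnormalized importances to a 5-point scale (Figure 2.16 method):
# w_function_i = 5 * w'_function_i / max_i(w'_function_i)
w_unnormalized = [42, 21, 33, 28, 24]   # importance ratings from Figure 2.13
w_max = max(w_unnormalized)
w_function = [5 * w / w_max for w in w_unnormalized]
print([round(w, 1) for w in w_function])   # [5.0, 2.5, 3.9, 3.3, 2.9]
```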
QFD AND PLANNING
Contrary to what the name implies, quality function deployment (QFD) is not just a
quality tool. QFD was developed in Japan, growing out of the need to simultaneously
achieve a competitive advantage in quality, cost, and timing. To better comprehend
QFD, it is important to understand what the Japanese mean by the word “quality.”
The word “quality,” which we generally define as conformance to requirements,
fitness for use, or some other measure of goodness, takes on a much broader meaning
[Figure: the relationship matrix of Figure 2.13 with a row added for the normalized
importance of each how. Unnormalized importance ratings: 42, 21, 33, 28, 24;
importance of each how on a 5-point scale: 5, 2.5 or 3, 3.9 or 4, 3.3 or 3, and
2.9 or 3. Here W′function i = (4 × 9) + (2 × 3) = 42 and so on, and
Wfunction i = 5(42)/42 = 5 and so on.]
Keep in mind that when you are addressing the “hows,” you are in essence
dealing with customer functionalities. Therefore, it is recommended to design for
the average, based on each function’s importance according to its capability to
supply each original Y.
FIGURE 2.16 Alternative method of calculating importance.
in Japan (there is probably no exact English translation of the Japanese version).
However, according to Japanese industrial standard Z8101–1981, “quality control”
is “a system of means to economically produce goods or services which satisfy
customer requirements.” (Italics added.)
Thus to the Japanese, “quality” means conducting the business effectively, not
just producing a good product. In this context, QFD really becomes a planning tool
for implementing business objectives, of which the most widely known application
is to product development.
In planning a new product, we start with customer needs, wants, and expectations,
often defined through market research. We wish to design and manufacture a product
that satisfies the customer’s perception of intended function, as well as or better than
our competitors (subject to certain internal company constraints). In other words:
CUSTOMER REQUIREMENTS
⇓
PRODUCT
Let us call the process of translating these requirements into a viable product
the “product development process.” This process includes program planning,
conceptualization, optimization, development, prototyping, and testing, as well as
the corresponding manufacturing functions. Thus:
CUSTOMER REQUIREMENTS
⇓
PRODUCT DEVELOPMENT PROCESS
⇓
PRODUCT
In a large organization, the product development process is so detailed that often
no one individual can comprehend it all. For some, the process looks like a maze
or a mysterious “black box.” For others the process is an intricate network of
activities. Regardless of how it is represented, the product development process is
exceedingly complex, consisting of numerous trade-offs.
Shared responsibilities and interpretation differences often result in conflicting
priorities. That is the reason the team must have ownership of the projects and must
have a substantial body of technical knowledge over a relatively long time frame
while enduring resource changes. This, of course, requires a great deal of communication and a substantial work effort.
PRODUCT DEVELOPMENT PROCESS
The complexity of the product development process makes it a natural haven for
Murphy’s law, with nearly an infinite number of opportunities for problems to occur.
Despite the best of intentions and efforts, all too often the product development
process creates a product that fails to meet the customer requirements. Such failures
may occur due to:
• Trade-offs
• Shared responsibilities
• Interpretations
• Priorities
• Technical knowledge
• Long time frame
• Resource changes
• Communication — lots of work
The QFD approach focuses on customer requirements in a manner that directs
efforts toward achieving those requirements — see Figure 2.17. In Figure 2.17, for
[Figure: the four linked QFD charts — design requirements (product planning),
part characteristics (part deployment), manufacturing operations (process planning),
and production requirements (production planning).]
FIGURE 2.17 The development of QFD.
each of the customer requirements, a set of design requirements is determined, which
if satisfied will result in achieving the customer requirements. In like manner, each
design requirement is evolved into part characteristics, which in turn are used to
determine manufacturing operations and specific production requirements. The flow
is as follows:
CUSTOMER REQUIREMENTS
⇓
DESIGN REQUIREMENTS
⇓
PART CHARACTERISTICS
⇓
MANUFACTURING OPERATIONS
⇓
PRODUCTION REQUIREMENTS
So, for example: The customer requirement of “years of durability” may be
achieved in part by the design requirement of no visible rust in three years. This in
turn may be achieved in part by ensuring part characteristics that include a minimum
paint film build and maximum surface treatment crystal size. The manufacturing
process that provides these part characteristics consists of a three-coat process that
includes a dip tank. The production requirements are the process parameters within
the manufacturing process that must be controlled in order to achieve the required
part characteristics (and ultimately the customer requirements). Therefore, we can
present this in a summary form as:
CUSTOMER REQUIREMENT: Years of durability
DESIGN REQUIREMENT: No visible exterior rust in 3 years
PART CHARACTERISTICS: Paint weight — 2–2.5 g/m²; Crystal size — 3 max
MANUFACTURING OPERATIONS: Dip tank; 3 coats
PRODUCTION REQUIREMENTS: Time = 2.0 minutes; Acidity = 1.5 to 2.0;
Temperature = 45–55°C
CONJOINT ANALYSIS
WHAT IS CONJOINT ANALYSIS?
We introduced conjoint analysis in Volume III of this series. Recall that conjoint
analysis is a multivariate technique used specifically to understand how respondents
develop preferences for products or services. It is based on the simple premise that
consumers evaluate the value of a product/service/idea (real or hypothetical) by
combining the separate amounts of value provided by each attribute.
It is this characteristic that is of interest in the DFSS methodology. After all, we
want to know the bundle of utility from the customer’s perspective. (The reader is
encouraged to review Volume III, Chapter 11.) So in this section, rather than dwelling
on theoretical statistical explanations, we will apply conjoint analysis in a couple
of hypothetical examples. The examples are based on the work of Hair et al. (1998)
and are used here with the publisher’s permission.
A HYPOTHETICAL EXAMPLE OF CONJOINT ANALYSIS
As an illustration of conjoint analysis, let us assume that HATCO is trying to develop
a new industrial cleanser. After discussion with sales representatives and focus
groups, management decides that three attributes are important: cleaning ingredients,
convenience of use, and brand name. To operationalize these attributes, the researchers create three factors with two levels each:
Factor          Levels
Ingredients     Phosphate-free; Phosphate-based
Form            Liquid; Powder
Brand name      HATCO; Generic brand
A hypothetical cleaning product can be constructed by selecting one level of
each attribute. For the three attributes (factors) with two levels each, eight
(2 × 2 × 2) combinations can be formed. Three examples of the eight possible
combinations (stimuli) are:
• HATCO phosphate-free powder
• Generic phosphate-based liquid
• Generic phosphate-free liquid
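The full set of eight stimuli can be enumerated programmatically; a minimal sketch:

```python
from itertools import product

# Full-factorial design for the hypothetical cleanser study: 2 x 2 x 2 = 8 stimuli.
factors = {
    "Ingredients": ["Phosphate-free", "Phosphate-based"],
    "Form": ["Liquid", "Powder"],
    "Brand": ["HATCO", "Generic"],
}

# Each stimulus picks one level per factor.
stimuli = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(stimuli))   # 8
print(stimuli[0])     # {'Ingredients': 'Phosphate-free', 'Form': 'Liquid', 'Brand': 'HATCO'}
```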
HATCO customers are then asked either to rank-order the eight stimuli in terms
of preference or to rate each combination on a preference scale (perhaps a 1-to-10
scale). We can see why conjoint analysis is also called “trade-off analysis,” because
in making a judgment on a hypothetical product, respondents must consider both
the “good” and “bad” characteristics of the product in forming a preference. Thus,
respondents must weigh all attributes simultaneously in making their judgments.
By constructing specific combinations (stimuli), the researcher is attempting to
understand a respondent’s preference structure. The preference structure “explains”
not only how important each factor is in the overall decision, but also how the
differing levels within a factor influence the formation of an overall preference
(utility). In our example, conjoint analysis would assess the relative impact of each
brand name (HATCO versus generic), each form (powder versus liquid), and the
different cleaning ingredients (phosphate-free versus phosphate-based) in determining the utility to a person. This utility, which represents the total “worth” or
overall preference of an object, can be thought of as based on the part-worths for
each level. The general form of a conjoint model can be shown as
(Total worth for product)ij…n = Part-worth of level i for factor 1
+ Part-worth of level j for factor 2 + …
+ Part-worth of level n for factor m
where the product or service has m attributes, each having n levels. The product
consists of level i of factor 1, level j of factor 2, and so forth, up to level n for factor m.
TABLE 2.3
Stimuli Descriptions and Respondent Rankings for Conjoint
Analysis of Industrial Cleanser

         Stimuli Descriptions                     Respondent Rankings
Stimulus Form    Ingredients      Brand      Respondent 1   Respondent 2
1        Liquid  Phosphate-free   HATCO           1              1
2        Liquid  Phosphate-free   Generic         2              2
3        Liquid  Phosphate-based  HATCO           5              3
4        Liquid  Phosphate-based  Generic         6              4
5        Powder  Phosphate-free   HATCO           3              7
6        Powder  Phosphate-free   Generic         4              5
7        Powder  Phosphate-based  HATCO           7              8
8        Powder  Phosphate-based  Generic         8              6
In our example, a simple additive model would represent the preference structure
for the industrial cleanser as based on the three factors (utility = brand effect +
ingredient effect + form effect). The preference for a specific cleanser product can
be directly calculated from the part-worth values. For example, the preference for
HATCO phosphate-free powder is:
Utility = Part-worth of HATCO brand
+ Part-worth of phosphate-free cleaning ingredient
+ Part-worth of powder
With the part-worth estimates, the preference of an individual can be estimated
for any combination of factors. Moreover, the preference structure would reveal the
factor(s) most important in determining overall utility and product choice. The
choices of multiple respondents could also be combined to represent the competitive
environment faced in the “real world.”
AN EMPIRICAL EXAMPLE
To illustrate a simple conjoint analysis, assume that the industrial cleanser
experiment was conducted with respondents who purchased industrial supplies. Each
respondent was presented with eight descriptions of cleanser products (stimuli) and
asked to rank them in order of preference for purchase (1 = most preferred; 8 = least
preferred). The eight stimuli are described in Table 2.3, along with the rank orders
given by two respondents.
As we examine the responses for respondent 1, we see that the ranks for the
stimuli with the phosphate-free ingredients are the highest possible (1, 2, 3, and 4),
whereas the phosphate-based product has the four lowest ranks (5, 6, 7, and 8).
Thus, the phosphate-free product is much more preferred than the phosphate-based
cleanser. This can be contrasted to the ranks for the two brands, which show a
mixture of high and low ranks for each brand. Assuming that the basic model (an
additive model) applies, we can calculate the impact of each level as differences
(deviations) from the overall mean ranking. (Readers may note that this is analogous
to multiple regression with dummy variables or ANOVA.) For example, the average
ranks for the two cleanser ingredients (phosphate-free versus phosphate-based) for
respondent 1 are:
Phosphate-free: (1 + 2 + 3 + 4)/4 = 2.5
Phosphate-based: (5 + 6 + 7 + 8)/4 = 6.5
With the average rank of the eight stimuli of 4.5 [(1 + 2 + 3 + 4 + 5 + 6 + 7 +
8)/8 = 36/8 = 4.5], the phosphate-free level would then have a deviation of –2.0 (2.5
– 4.5) from the overall average, whereas the phosphate-based level would have a
deviation of +2.0 (6.5 – 4.5). The average ranks and deviations for each factor from
the overall average rank (4.5) for respondents 1 and 2 are given in Table 2.4. In our
example, we use smaller numbers to indicate higher ranks and a more preferred
stimulus (e.g., 1 = most preferred). When the preference measure is inversely related
to preference, such as here, we reverse the signs of the deviations in the part-worth
calculations so that positive deviations will be associated with part-worths indicating
greater preference. Deviation is calculated as: deviation = average rank of level – overall
average rank (4.5). Note that negative deviations imply more preferred rankings.
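The level averages and deviations can be computed directly from respondent 1's rankings in Table 2.3; a minimal sketch:

```python
# Respondent 1's rank for each stimulus, keyed by (form, ingredients, brand).
ranks = {
    ("Liquid", "Phosphate-free",  "HATCO"):   1,
    ("Liquid", "Phosphate-free",  "Generic"): 2,
    ("Liquid", "Phosphate-based", "HATCO"):   5,
    ("Liquid", "Phosphate-based", "Generic"): 6,
    ("Powder", "Phosphate-free",  "HATCO"):   3,
    ("Powder", "Phosphate-free",  "Generic"): 4,
    ("Powder", "Phosphate-based", "HATCO"):   7,
    ("Powder", "Phosphate-based", "Generic"): 8,
}

overall_avg = sum(ranks.values()) / len(ranks)   # (1 + 2 + ... + 8) / 8 = 4.5

def level_deviation(position, level):
    """Average rank of all stimuli containing the level, minus the overall average."""
    r = [rank for key, rank in ranks.items() if key[position] == level]
    return sum(r) / len(r) - overall_avg

print(level_deviation(1, "Phosphate-free"))    # -2.0
print(level_deviation(1, "Phosphate-based"))   # 2.0
```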
The part-worths of each level are calculated in four steps:
• Step 1: Square the deviations and find their sum across all levels.
• Step 2: Calculate a standardizing value that is equal to the total number
of levels divided by the sum of squared deviations.
• Step 3: Standardize each squared deviation by multiplying it by the standardizing value.
• Step 4: Estimate the part-worth by taking the square root of the standardized squared deviation.
Let us examine how we would calculate the part-worth of the first level of
ingredients (phosphate-free) for respondent 1. The deviations (from the overall
average rank of 4.5) are squared. The squared deviations are summed (10.5). The
number of levels is six (three factors with two levels apiece). Thus, the standardizing
value is calculated as .571 (6/10.5 = .571). The squared deviation for phosphate-free
(2.0² = 4.0; remember that we reverse signs) is then multiplied by .571 to get 2.284
(4.0 × .571 = 2.284). Finally, to calculate the part-worth for this level, we then take
the square root of 2.284, for a value of 1.511. This process yields part-worths for
each level for respondents 1 and 2, as shown in Table 2.5.
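The four steps can be sketched for respondent 1, starting from the deviations in Table 2.4:

```python
import math

# Four-step part-worth estimation for respondent 1 (deviations from Table 2.4).
# Deviations are later reversed in sign so positive part-worths mean "preferred."
deviations = {
    "Liquid": -1.0, "Powder": +1.0,
    "Phosphate-free": -2.0, "Phosphate-based": +2.0,
    "HATCO": -0.5, "Generic": +0.5,
}

# Step 1: square the deviations and sum across all levels.
sum_sq = sum(d ** 2 for d in deviations.values())      # 10.5
# Step 2: standardizing value = number of levels / sum of squared deviations.
std_value = len(deviations) / sum_sq                   # 6 / 10.5 = .571
# Steps 3 and 4: standardize each squared deviation, take its square root,
# and attach the reversed sign of the deviation.
part_worths = {
    level: math.copysign(math.sqrt(d ** 2 * std_value), -d)
    for level, d in deviations.items()
}
print(round(part_worths["Phosphate-free"], 2))   # 1.51
```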
Because the part-worth estimates are on a common scale, we can compute the
relative importance of each factor directly. The importance of a factor is represented
by the range of its levels (i.e., the difference between the highest and lowest values)
divided by the sum of the ranges across all factors. For example, for respondent 1,
the ranges are 1.512 [.756 – (–.756)], 3.022 [1.511 – (–1.511)], and .756 [.378 –
(–.378)]. The sum total of ranges is 5.290. The relative importance for form, ingredients,
TABLE 2.4
Average Ranks and Deviations for Respondents 1 and 2

                                                           Deviation from
Factor Level      Ranks Across Stimuli  Average Rank of Level  Overall Average Rank

Respondent 1
Form
 Liquid           1, 2, 5, 6            3.5                    –1.0
 Powder           3, 4, 7, 8            5.5                    +1.0
Ingredients
 Phosphate-free   1, 2, 3, 4            2.5                    –2.0
 Phosphate-based  5, 6, 7, 8            6.5                    +2.0
Brand
 HATCO            1, 3, 5, 7            4.0                    –.5
 Generic          2, 4, 6, 8            5.0                    +.5

Respondent 2
Form
 Liquid           1, 2, 3, 4            2.5                    –2.0
 Powder           5, 6, 7, 8            6.5                    +2.0
Ingredients
 Phosphate-free   1, 2, 5, 7            3.75                   –.75
 Phosphate-based  3, 4, 6, 8            5.25                   +.75
Brand
 HATCO            1, 3, 7, 8            4.75                   +.25
 Generic          2, 4, 5, 6            4.25                   –.25
and brand is calculated as 1.512/5.290, 3.022/5.290, and .756/5.290, or 28.6, 57.1,
and 14.3 percent, respectively. We can follow the same procedure for the second
respondent and calculate the importance of each factor, with the results of form
(66.7 percent), ingredients (25 percent), and brand (8.3 percent). These calculations
for respondents 1 and 2 are also shown in Table 2.5.
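The factor importance computation for respondent 1 can be sketched from the part-worths in Table 2.5:

```python
# Factor importance for respondent 1 from part-worth ranges (Table 2.5 values).
part_worths = {
    "Form":        {"Liquid": .756, "Powder": -.756},
    "Ingredients": {"Phosphate-free": 1.511, "Phosphate-based": -1.511},
    "Brand":       {"HATCO": .378, "Generic": -.378},
}

# Range of each factor = highest part-worth minus lowest part-worth.
ranges = {f: max(v.values()) - min(v.values()) for f, v in part_worths.items()}
total = sum(ranges.values())                       # 5.290
importance = {f: 100 * r / total for f, r in ranges.items()}
print({f: round(i, 1) for f, i in importance.items()})
# {'Form': 28.6, 'Ingredients': 57.1, 'Brand': 14.3}
```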
To examine the ability of this model to predict the actual choices of the
respondents, we predict preference order by summing the part-worths for the
different combinations of factor levels and then rank ordering the resulting scores.
The calculations for both respondents for all eight stimuli are shown in Table 2.6.
Comparing the predicted preference order to the respondent’s actual preference
order assesses predictive accuracy. Note that the total part-worth values have no
real meaning except as a means of developing the preference order and, as such,
are not compared across respondents. The predicted and actual preference orders
for both respondents are given in Table 2.6.
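The prediction for respondent 1 can be sketched by summing part-worths per stimulus and ranking the totals:

```python
from itertools import product

# Predict respondent 1's preference order from the part-worths in Table 2.5.
pw = {
    "Liquid": .756, "Powder": -.756,
    "Phosphate-free": 1.511, "Phosphate-based": -1.511,
    "HATCO": .378, "Generic": -.378,
}

stimuli = list(product(["Liquid", "Powder"],
                       ["Phosphate-free", "Phosphate-based"],
                       ["HATCO", "Generic"]))
totals = {s: sum(pw[level] for level in s) for s in stimuli}

# Rank 1 = most preferred (highest total part-worth).
ordered = sorted(totals, key=totals.get, reverse=True)
predicted_rank = {s: i + 1 for i, s in enumerate(ordered)}
print(predicted_rank[("Liquid", "Phosphate-free", "HATCO")])   # 1
```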
TABLE 2.5
Estimated Part-Worths and Factor Importance for Respondents 1 and 2

                  Estimated Part-Worths                            Calculating Factor Importance
                  Reversed     Squared    Standardized  Estimated    Range of     Factor
Factor Level      Deviation^a  Deviation  Deviation^b   Part-Worth^c Part-Worths  Importance^d

Respondent 1
Form
 Liquid           +1.0         1.0        +.571         +.756
 Powder           –1.0         1.0        –.571         –.756        1.512        28.6%
Ingredients
 Phosphate-free   +2.0         4.0        +2.284        +1.511
 Phosphate-based  –2.0         4.0        –2.284        –1.511       3.022        57.1%
Brand
 HATCO            +.5          .25        +.143         +.378
 Generic          –.5          .25        –.143         –.378        .756         14.3%
Sum of squared deviations      10.5
Standardizing value^e          .571
Sum of part-worth ranges                                             5.290

Respondent 2
Form
 Liquid           +2.0         4.0        +2.60         +1.612
 Powder           –2.0         4.0        –2.60         –1.612       3.224        66.7%
Ingredients
 Phosphate-free   +.75         .5625      +.365         +.604
 Phosphate-based  –.75         .5625      –.365         –.604        1.208        25.0%
Brand
 HATCO            –.25         .0625      –.04          –.20
 Generic          +.25         .0625      +.04          +.20         .400         8.3%
Sum of squared deviations      9.25
Standardizing value            .649
Sum of part-worth ranges                                             4.832

a Deviations are reversed to indicate higher preference for lower ranks. Sign of deviation used to
indicate sign of estimated part-worth.
b Standardized deviation equal to the squared deviation times the standardizing value.
c Estimated part-worth equal to the square root of the standardized deviation.
d Factor importance equal to the range of a factor divided by the sum of the ranges across all
factors, multiplied by 100 to yield a percentage.
e Standardizing value equal to the number of levels (2 + 2 + 2 = 6) divided by the sum of the
squared deviations.
TABLE 2.6
Predicted Part-Worth Totals and Comparison of Actual
and Estimated Preference Rankings

Stimuli Description                         Part-Worth Estimates              Preference Rankings
Form    Ingredients      Brand     Form    Ingredients  Brand    Total    Estimated  Actual

Respondent 1
Liquid  Phosphate-free   HATCO     .756     1.511        .378    2.645        1         1
Liquid  Phosphate-free   Generic   .756     1.511       –.378    1.889        2         2
Liquid  Phosphate-based  HATCO     .756    –1.511        .378    –.377        5         5
Liquid  Phosphate-based  Generic   .756    –1.511       –.378   –1.133        6         6
Powder  Phosphate-free   HATCO    –.756     1.511        .378    1.133        3         3
Powder  Phosphate-free   Generic  –.756     1.511       –.378     .377        4         4
Powder  Phosphate-based  HATCO    –.756    –1.511        .378   –1.889        7         7
Powder  Phosphate-based  Generic  –.756    –1.511       –.378   –2.645        8         8

Respondent 2
Liquid  Phosphate-free   HATCO    1.612      .604       –.20     2.016        2         1
Liquid  Phosphate-free   Generic  1.612      .604        .20     2.416        1         2
Liquid  Phosphate-based  HATCO    1.612     –.604       –.20      .808        4         3
Liquid  Phosphate-based  Generic  1.612     –.604        .20     1.208        3         4
Powder  Phosphate-free   HATCO   –1.612      .604       –.20    –1.208        6         7
Powder  Phosphate-free   Generic –1.612      .604        .20     –.808        5         5
Powder  Phosphate-based  HATCO   –1.612     –.604       –.20    –2.416        8         8
Powder  Phosphate-based  Generic –1.612     –.604        .20    –2.016        7         6
The estimated part-worths predict the preference order perfectly for respondent
1. This indicates that the preference structure was successfully represented in the
part-worth estimates and that the respondent made choices consistent with the
preference structure. The need for consistency is seen when the rankings for
respondent 2 are examined. For example, the average rank for the generic brand is
lower than that for the HATCO brand (refer to Table 2.4), meaning that, all things
being equal, the stimuli with the generic brand will be more preferred. Yet, examining
the actual rank orders, this is not always seen. Stimuli 1 and 2 are equal except for
brand name, yet HATCO is preferred. This also occurs for stimuli 3 and 4. However,
the correct ordering (generic preferred over HATCO) is seen for the stimuli pairs
of 5–6 and 7–8. Thus, the part-worth preference structure will have difficulty predicting this choice pattern. When we compare the actual and predicted rank orders (see Table 2.6), we see that respondent 2's choices are often mispredicted, though usually by only one position, owing to the brand effect. We would therefore conclude that the preference structure adequately represents the choice process for the more important factors but does not predict choice perfectly for respondent 2, as it does for respondent 1.
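The prediction logic just described is a simple additive rule: the predicted utility of a stimulus is the sum of the part-worths of its levels, and the predicted rank order follows from sorting those totals. A minimal Python sketch, using respondent 2's part-worths from Table 2.5 and the stated rankings from Table 2.6:

```python
from itertools import product

# Part-worths for respondent 2 (Table 2.5)
pw = {
    "Form":        {"Liquid": 1.612, "Powder": -1.612},
    "Ingredients": {"Phosphate-free": 0.604, "Phosphate-based": -0.604},
    "Brand":       {"HATCO": -0.20, "Generic": 0.20},
}

# Full-profile stimuli in the order of Table 2.6
stimuli = list(product(pw["Form"], pw["Ingredients"], pw["Brand"]))

# Predicted utility = sum of the part-worths of a stimulus's levels
totals = [pw["Form"][f] + pw["Ingredients"][i] + pw["Brand"][b]
          for f, i, b in stimuli]

# Rank 1 = highest predicted utility
order = sorted(range(len(totals)), key=lambda k: -totals[k])
predicted_rank = [0] * len(totals)
for rank, k in enumerate(order, start=1):
    predicted_rank[k] = rank

actual_rank = [1, 2, 3, 4, 7, 5, 8, 6]   # respondent 2's stated ranking (Table 2.6)
for (f, i, b), t, pr, ar in zip(stimuli, totals, predicted_rank, actual_rank):
    print(f"{f:6s} {i:15s} {b:7s} total={t:+.3f} predicted={pr} actual={ar}")
```

The output reproduces the respondent 2 panel of Table 2.6, including the one-position misses on the brand-only pairs.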
THE MANAGERIAL USES OF CONJOINT ANALYSIS
It is beyond the scope of this section to discuss the statistical basis of conjoint
analysis. However, in DFSS, we should understand the technique in terms of its role
in decision making and strategy development. The simple example we have just
discussed presents some of the basic benefits of conjoint analysis. The flexibility of
conjoint analysis gives rise to its application in almost any area in which decisions
are studied. Conjoint analysis assumes that any set of objects (e.g., brands, companies) or concepts (e.g., positioning, benefits, images) is evaluated as a bundle of
attributes. Having determined the contribution of each factor to the consumer’s
overall evaluation, the marketing researcher could then:
1. Define the object or concept with the optimum combination of features
2. Show the relative contributions of each attribute and each level to the
overall evaluation of the object
3. Use estimates of purchaser or customer judgments to predict preferences
among objects with differing sets of features (other things held constant)
4. Isolate groups of potential customers who place differing importance on
the features to define high and low potential segments
5. Identify marketing opportunities by exploring the market potential for
feature combinations not currently available
The knowledge of the preference structure for each individual allows the
researcher almost unlimited flexibility in examining both individual and aggregate
reactions to a wide range of product- or service-related issues.
REFERENCES
Fowler, T.C., Value Analysis in Design, Van Nostrand Reinhold, New York, 1990.
Hair, J.F., Multivariate Data Analysis, 5th ed., Prentice Hall, Upper Saddle River, NJ, 1998.
Harry, M., The Vision of Six Sigma: A Roadmap for Breakthrough, 5th ed., Vol. 1, TriStar Publishing, Phoenix, 1997.
Porter, M., Competitive Advantage, Free Press, New York, 1985.
Rechtin, E. and Maier M., The Art of Systems Architecting, CRC, Boca Raton, FL, 1997.
Shoji, S., A New American TQM, Productivity Press, Portland, OR, 1993.
SELECTED BIBLIOGRAPHY
Afors, C. and Michaels, M.Z., A Quick, Accurate Way to Determine Customer Needs, Quality
Progress, July 2001, pp. 82–88.
Anon., Quality Function Deployment, American Supplier Institute, Inc., Dearborn, MI, 1988.
Bialowas, P. and Tabaszewska E., How to Evaluate the Internal Customer Supplier Relationship, Quality Progress, July 2001, pp. 63–67.
Carlzon, J., Moments of Truth, HarperCollins, New York, 1989.
Fredericks, J. O. and Salter, J.M., What Does Your Customer Really Want? Quality Progress,
Jan. 1998, pp. 63–70.
Gale, B.T., Managing Customer Value: Creating Quality and Service that Customers Can See,
Free Press, New York, 1994.
Gobits, R., The Measurement of Insight, unpublished paper presented at the 2nd International
Symposium on Educational Testing, Montreux, 1975.
Goncalves, K.P. and Goncalves, M.P., Use of the Kano Method Keeps Honeywell Attuned to
the Voice of the Customer, Quirk’s Marketing Research Review, Apr. 2001, pp. 18–25.
Gutman, J. and Miaoulis, G., Past Experience Drives Future CS Behavior, Marketing News,
Oct. 22, 2001, pp. 45–46.
Harry, M., The Vision of Six Sigma: A Roadmap for Breakthrough, 5th ed., Vol. 2, TriStar
Publishing, Phoenix, 1997.
Heskett, J.L., Sasser, W.E., and Schlesinger, L.A., The Service Profit Chain: How Leading Companies Link Profit and Growth to Loyalty, Satisfaction and Value, Free Press, New York, 1997.
Mariampolski, H., Qualitative Market Research, Sage Publications, Newbury Park, CA, 2001.
Morais, R., The End of Focus Groups, Quirk's Marketing Research Review, May 2001, pp. 153–154.
Mudge, A.E., Numerical Evaluation of Functional Relationships, Proceedings, Society of
American Value Engineers, 1967.
Murphy, B., Methodological Pitfalls in Linking Customer Satisfaction with Profitability,
Quirk’s Marketing Research Review, Oct. 2001, pp. 22–27.
Murphy, B., Qualitatively Speaking: Of Bullies, Friends and Mice, Quirk’s Marketing
Research Review, Oct. 2001, pp. 16, 61.
Saliba, M.T. and Fisher, C.M., Managing Customer Value, Quality Progress, June 2000, pp.
63–70.
Shillito, M.L., Pareto Voting. Proceedings, Society of American Value Engineers, 1973.
Stamatis, D.H., Total Quality Management: Engineering Handbook, Marcel Dekker, New
York, 1997.
Stamatis, D.H., Total Quality Service, St. Lucie Press, Delray Beach, FL, 1996.
Sullivan, L.P., The Seven Stages in Company Wide Quality Control, Quality Progress, May
1986, pp. 77–83.
Sullivan, L.P., Quality Function Deployment, Quality Progress, June 1986, pp. 39–50.
Jones, T.O. and Sasser, W.E., Why Satisfied Customers Defect, Harvard Business Review, Nov.-Dec. 1995, pp. 88–89.
VanVierah, S. and Olosky, M., Achieving Customer Satisfaction: Registrar Satisfaction Survey
Counterbalances the Myth About Registrars, Automotive Excellence, Winter 1999,
pp. 10–15.
Vriens, M., Wedel, M., and Wilms, T., Metric conjoint segmentation methods: a Monte Carlo comparison, Journal of Marketing Research, 33, 73–85, 1996.
Wittink, D.R. et al., Commercial use of conjoint analysis: an update, Journal of Marketing,
53, 91–96, 1989.
3
Benchmarking
Benchmarking is a tool, a technique or process, a philosophy, and a new name for
old practices. It involves operations research and management science for determining (a) what to do or “goal setting” and (b) how to do it or “action plan identification.”
Benchmarking can be applied (a) systematically and comprehensively or (b) ad
hoc project by project. In both cases it can require (a) sophisticated statistical
analysis, (b) utilization of a wide variety of analytical tools, and (c) a wide range
of data sources. The basic requirements for success are:
• Time, effort, and resources
• A willingness to learn and to change
• Continuing, long-term top management support
• An external focus on customers and competitors
• A common-sense approach and active listening
• The ability to look at the old in a new way
GENERAL INTRODUCTION TO BENCHMARKING
A BRIEF HISTORY OF BENCHMARKING
The term “benchmarking” was coined by Xerox in 1979. Xerox has now performed
over 400 benchmark studies, and the process is totally integrated at all levels as part
of the business planning process. The approach has actually been in use for a number
of years — although it was often called by different names. (Reverse engineering
is an approach used to study the design and manufacturing characteristics of competitive products. Benchmarking of computer hardware and software is a very
common practice.)
Benchmarking extends the concept to consider administrative and all management processes. There is a conscious attempt to compare with the “best of the best”
even — especially — if that is not a direct competitor.
The fundamental process in starting benchmarking is to think about the area to
be benchmarked, which can be just about anything, and ask yourself, “Who is
especially good at that? What can I creatively imitate?" A typical benchmarking process is shown in Figure 3.1.
POTENTIAL AREAS OF APPLICATION OF BENCHMARKING
Benchmarking is a methodology that can be used along with other systematic,
comprehensive management approaches to improve performance. It is not an end
unto itself. Some examples of applications of benchmarking include:
FIGURE 3.1 The benchmarking continuum process. (The figure depicts a continuum of approaches: an informal search for the best; preparing and responding to surveys; two-way site visits; forming a consortium group; following a defined model. For each, it notes benefits, such as building customer goodwill and a network of benchmarking partners, and caveats, such as the time required, the need for legal-department clearance and process-owner involvement, and the mentalities to avoid: "we are unique," "we know it all," "it was not invented here," "it is too complex," "we already tried it and it does not work here.")
Broad management focus
• Cost reduction
• Profit improvement
• Business strategy development
• Total quality management
Individual management processes
• Improving customer service
• Reducing product development time
• Market planning
• Product distribution
Highly specific focus
• Invoice design
• Sales force compensation
• Fork lift truck maintenance
The critical questions to ask are:
• What are the areas that potentially could be benchmarked?
• How do you prioritize and focus the efforts?
BENCHMARKING AND BUSINESS STRATEGY DEVELOPMENT
Hall (1980) observed that certain industry leaders had exceptional performance even
in the bad times of 1979–1980. For example:
                        Company ROE    Industry ROE
Goodyear                     9.2            7.4
Inland Steel                10.9            7.1
Paccar                      22.8           15.4
Caterpillar                 23.5           15.4
General Motors              19.8           15.4
Maytag                      27.8           10.1
G. Heilman Brewing          25.8           14.1
Philip Morris               22.7           18.2
Average                     20.2           12.9
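Hall's point can be checked directly from these figures: every leader's return on equity exceeds its industry average, often by a wide margin. A small Python sketch of the comparison:

```python
# Hall's 1979-1980 data: each industry leader's ROE vs. its industry average
roe = {
    "Goodyear": (9.2, 7.4),
    "Inland Steel": (10.9, 7.1),
    "Paccar": (22.8, 15.4),
    "Caterpillar": (23.5, 15.4),
    "General Motors": (19.8, 15.4),
    "Maytag": (27.8, 10.1),
    "G. Heilman Brewing": (25.8, 14.1),
    "Philip Morris": (22.7, 18.2),
}

# How far each leader outperforms its industry, largest edge first
edges = {name: company - industry for name, (company, industry) in roe.items()}
avg_industry = sum(ind for _, ind in roe.values()) / len(roe)   # about 12.9

for name, edge in sorted(edges.items(), key=lambda kv: -kv[1]):
    print(f"{name:20s} +{edge:.1f} ROE points over its industry")
```

Maytag shows the largest edge (about 17.7 points), and even the smallest edge is positive, which is what motivates the question that follows.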
How can this be so? What strategy did the more successful competitors follow?
LEAST COST AND DIFFERENTIATION
Hall’s study itself is an early example of successful benchmarking. By extensive
interviewing and data analysis, Hall reached conclusions based on the performance
and the experience of a group of highly successful companies.
As determined by Hall and also described in the book Competitive Strategy by
Michael Porter, the successful competitor tends to follow one of two strategies:
• Least cost
• Differentiation
Those competitors who do not explicitly follow one strategy or the other tend
to get “stuck in the middle” and do not have the highest return on investment. Hall’s
findings do, however, indicate that some firms can successfully manage both strategy
options. The generic strategies identified by Hall and Porter have been supported by
a number of research studies (see Higgins and Vincze, 1989).
For a successful business strategy to be developed, a company must decide what
course it will follow. It must also be certain that it is, in fact, realistically able to
pursue that alternative. Some questions to be asked include the following:
• Does a company really have the least cost? How do they know? What is
the basis for the claim?
• Is the company really differentiated in the eyes of the customer? How do
they know? What is the basis for the claim?
• How might competitive conditions change in the future?
Benchmarking can provide — in part — the information necessary to answer
these questions by providing focus and insight on what the best companies are doing.
In addition to making a choice relative to least cost versus differentiation, an
important strategy choice is that of being a mass marketer versus supplying the
needs of a specific market segment. Therefore, when benchmarking is performed
the following must always be present:
• Build a relationship with your benchmarking partner.
• Establish trust and mutual interest.
• Be worthy of trust.
• Make it last.
• Be open to reciprocity.
• Follow a code of conduct.
  • Principle of confidentiality
  • Principle of first party contact
  • Principle of preparation
  • Principle of third party contact
CHARACTERISTICS OF A LEAST COST STRATEGY
A firm following the least cost strategy must be able to deliver a product or service
with acceptable quality at a lower total cost than any of its competitors. Note that
total cost is the critical concern. The company does not have to be least cost in every
aspect of the business. The fact that the total cost is the lowest does not necessarily
mean that the price that is charged is the lowest. To determine if the least cost
strategy is viable, it is necessary to perform competitive benchmarking and gain
information relative to the following:
• What is the relative market share of the company? Does the experience
curve have a significant effect on cost reduction?
• Is the industry one that can be affected by automation possibilities, conveyorized assembly, or new production technology? Is the capital available
for investment in efficient scale facilities and product and process engineering innovation?
• Do competitors have a different mix of fixed and variable costs?
• What is the percent capacity utilization by competitive firms?
• Are the competitive firms using activity-based accounting?
• How critical is raw material supply? Does the firm have preemptive
sources of supply?
• Does the firm have a tight system of budgeting and cost control for all
functions?
• Are products designed for low-cost production? Are products simplified
and product lines reduced in number? Are bills of material standardized?
• What is the level of product/service quality versus competition?
• How labor intensive is the process? How effective are labor/management
relations?
• Are marginal accounts minimized?
Improved quality through benchmarking can lead to lower costs. The cost of
quality — really the cost of non-quality — consists of the costs of prevention,
appraisal (inspection), internal quality failures, and external quality failures. This
cost can amount to as much as 30–40% of the cost of goods sold. Costs include the
following:
Costs of prevention
Training
Equipment
Costs of appraisal (inspection)
Inspectors
Equipment
Cost of internal quality failures
Scrap
Rework
Machine downtime
Missed schedules
Excess inventory
Cost of external quality failures
Warranty expense
Customer dissatisfaction
Studies have shown that the average quality improvement project results in
$100,000 of cost reduction. The associated cost to diagnose and remedy the problem
has averaged $15,000. Consequently, the payout from benchmarking in this area can
be significant. Velcro reported a 50% reduction in waste as a percentage of total
manufacturing cost in the first year and an additional 45% decrease in the second
year of its quality program.
Motorola achieved a quality level in 1991 that was 100 times better than it was
in 1987. By 1992, this company was striving for six sigma quality. That means 3.4
defects per million opportunities or 99.9997 percent perfection. Motorola believes that super
quality is the lowest cost way of doing things, if you do things right the first time.
Their director of manufacturing — at that time — pointed out that each piece of
equipment has 17,000 parts and 144,000 opportunities for mistakes. A 99 percent
quality rate is equivalent to 1,440 mistakes per piece. The cost to hire and train
people to fix those mistakes would put the company out of business.
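The arithmetic behind Motorola's claim is easy to verify: with 144,000 opportunities for error per piece of equipment, a 99 percent quality rate leaves 1 percent of those opportunities as defects. A quick sketch (the six sigma line applies the commonly cited 3.4 defects per million opportunities):

```python
# Opportunities for error per piece of equipment, from the text
opportunities = 144_000

# A 99% quality rate leaves 1% of opportunities as defects
defects_at_99_pct = opportunities * (1 - 0.99)            # 1,440 mistakes per piece

# Six sigma quality: 3.4 defects per million opportunities
defects_at_six_sigma = opportunities * 3.4 / 1_000_000    # roughly 0.49 per piece

print(f"99% quality: {defects_at_99_pct:.0f} defects per piece")
print(f"six sigma:   {defects_at_six_sigma:.2f} defects per piece")
```

The contrast, 1,440 mistakes per piece versus roughly half a mistake per piece, is the cost argument the text is making.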
CHARACTERISTICS OF A DIFFERENTIATED STRATEGY
A firm following the differentiation strategy must be able to provide a unique product
or service to meet the customer’s expectations. The challenge of being unique is
that of providing a sustainable source of differentiation. It is very difficult to create
something that is totally sustainable. This may depend upon a corporate culture
producing a positive attitude toward quality and customer service or perhaps the
value of information or computer-to-computer linkages.
Following a differentiation strategy does not mean that a company can be
inefficient relative to costs. Although cost is not the primary driving force, costs still
must be minimized for the degree of differentiation provided. To determine if the
differentiation strategy is viable it is necessary to perform competitive benchmarking
and gain information relative to segmentation.
When developing corporate or marketing strategy, it is important to identify the
different market segments that make up the total market. A market segment is a
group of customers with similar or related buying motives. The members of the
segment have similar needs, wants, and expectations. A focus on market segments
allows a company to tailor its products, services, pricing, distribution, and communication message to meet the specific needs of a market. The opposite of market
segmentation is mass marketing.
Segmentation allows a smaller company to successfully compete with a larger
company by concentrating resources at the specific point of competition. Any market
can be segmented. The toothpaste market, for example, can be segmented into the
sensory segment (principal benefit sought is flavor or product appearance), the
sociable segment (brightness of teeth), the worriers (decay prevention), and the least
cost buyer. To segment a market you need to know who the customers are, what
they buy, how they buy, when they buy, why they buy, and where they buy. Some
typical questions in this area are:
• How do you segment your market?
• What do you do differently for each of these segments?
• How does the competition segment the market?
• What new segments are likely to develop due to changes in sociological factors, technology, legislation, economic conditions, or growing internationalism?
BENCHMARKING AND STRATEGIC QUALITY MANAGEMENT
Strategic Quality Management (SQM) or Total Quality Management (TQM), as defined
by J.M. Juran, W. Edwards Deming and others, consists of a systematic approach for
setting and meeting quality goals throughout a company. Just as companies have set
out to achieve financial goals through a process of corporate business planning, so also
can companies achieve quality goals by SQM or TQM or six sigma.
An overly simplified definition of TQM is “Doing the right thing, right the first
time, on time, all the time; always striving for improvement, and always satisfying the
customer.” This requires a focus on customer needs, people, systems and process, and
a supportive cultural environment. But this really is not any different from what the six
sigma methodology proposes. The essential steps of the quality management process
consist of:
Quality planning
• Identifying target market segments
• Determining specific customers’ needs, wants, and expectations
• Translating the customer needs into product and process requirements
• Designing products and processes with the required characteristics
(Competitive benchmarking can assist in this part of the process.)
Quality control
• Measuring actual quality performance versus the design goals
• Diagnosing the causes of poor quality and initiating the required corrective steps
• Establishing controls to maintain the gains
Quality improvement
• Establishing a benchmarking process
• Providing the necessary resources
It is important to note that the process:
• Is strategic in nature, proactive
• Is competitively focused on meeting customer needs as opposed to techniques of analysis
• Is goal oriented
• Is comprehensive in terms of level and functions
• Manages in quality, not simply defect reduction
The following are very closely linked:
• Six sigma
• Business strategic planning
• Strategy development (least cost versus differentiation)
• TQM
• Pricing strategy
• Benchmarking
The classical approach to benchmarking viewed as process — which has become
the de facto process — has the following characteristics:
• Inspection to control defects is primary tool.
• Better quality means higher costs.
• Significant scrap and rework activity takes place.
• Quality control is found only in manufacturing.
• SPC is used as an example; other tools are used occasionally.
Top management commitment
• Level 5: Continuous improvement is a natural behavior even for routine
tasks.
• Level 4: Focus is on improving the system.
• Level 3: Adequate money and time are allocated to continuous
improvement and training.
• Level 2: There is a balance of long-term goals with short-term objectives.
• Level 1: The traditional approach is in place.
Note that the Level 1 commitment is the status quo, and not much is
happening. It is the least effective way of demonstrating to the organization at large that management commitment is a way of life. On
the other hand, Level 5 is the most effective and demands change of
some kind.
Obsession with excellence
• Level 5: Constant improvement in quality, cost, and productivity
• Level 4: Use of cross-functional improvement teams
• Level 3: TQM and six sigma support system set up and in use
• Level 2: Executive steering committee set up
• Level 1: Traditional approach
Organization is customer satisfaction driven
• Level 5: Customer satisfaction is the primary goal. More customers
desire a long-term relationship.
• Level 4: Striving to improve value to customers is a routine behavior.
• Level 3: Customer feedback is used in decision making.
• Level 2: Customer rating of company is known.
• Level 1: The traditional approach is in place.
Supplier involvement
• Level 5: Suppliers fully qualified in all benchmark areas
• Level 4: Suppliers actively implementing TQM and aware of the six
sigma demands
• Level 3: Direct involvement in supplier awareness training; supplier
criteria in place
• Level 2: Suppliers knowledgeable about your TQM as well as the six
sigma direction; supplier number reduction started
• Level 1: Traditional approach
Continuous learning
• Level 5: Training in TQM and six sigma tools is common among all
employees.
• Level 4: Top management understands and applies TQM and the six
sigma philosophy.
• Level 3: Ongoing training programs are in place.
• Level 2: A training plan has been developed.
• Level 1: The traditional approach is in place.
Employee involvement
• Level 5: People involvement; self-directed work groups.
• Level 4: Manager defines limits, asks group to make decisions.
• Level 3: Manager presents problem, gets suggestions, makes decision.
• Level 2: Manager presents ideas and invites questions, makes decision.
• Level 1: The traditional approach is used.
Use of incentives
• Level 5: Gainsharing
• Level 4: More team than individual incentives and rewards
• Level 3: Quality-related employee selection and promotion criteria
• Level 2: Effective employee suggestion program used
• Level 1: Traditional approach
Use of tools
• Level 5: Statistics a common language among all employees
• Level 4: More team than individual incentives and rewards
• Level 3: SPC used for variation reduction
• Level 2: SPC used in manufacturing
• Level 1: Traditional approach
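One informal way to use the maturity levels above is as a self-assessment rubric: score each dimension from 1 (traditional approach) to 5, then let the average and the weakest dimension suggest where to focus benchmarking effort. A sketch with purely illustrative scores; the numbers are assumptions, not data from the text:

```python
# Hypothetical self-assessment: current level (1 = traditional, 5 = best)
# on each maturity dimension. The scores below are illustrative only.
profile = {
    "Top management commitment": 3,
    "Obsession with excellence": 2,
    "Customer satisfaction driven": 4,
    "Supplier involvement": 2,
    "Continuous learning": 3,
    "Employee involvement": 3,
    "Use of incentives": 2,
    "Use of tools": 3,
}

average = sum(profile.values()) / len(profile)
weakest = min(profile, key=profile.get)   # first dimension at the lowest level

print(f"Average maturity: {average:.2f} of 5")
print(f"Start benchmarking with: {weakest}")
```

A profile like this makes the gap visible at a glance: a dimension stuck at Level 1 or 2 is a natural candidate for a first benchmarking project.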
The Malcolm Baldrige National Quality Award encapsulates the essential elements of Strategic Quality Management. The key attributes considered when making
this award are listed below. Many agree that the criteria provide the blueprint for a
better company. The urgency to win the award can accelerate change within an
organization. Some companies have told their suppliers to compete or else. These
are the criteria:
• Quality is defined by the customer.
• The senior management of a business needs to have clear quality values and
build the values into the way the company operates on a day-to-day basis.
• Quality excellence derives from well-designed and well-executed systems
and processes.
• Continuous improvement must be part of the management of all systems
and processes.
• Companies need to develop goals, as well as strategic and operational
plans, to achieve quality leadership.
• Shortening the response time of all company operations and processes
needs to be part of the quality improvement effort.
• Operations and decisions of the company need to be based on facts and
data.
• All employees must be suitably trained and involved in quality activities.
• Design quality and defect and error prevention should be major elements
of the quality system.
• Companies need to communicate quality requirements to suppliers and
work with suppliers to elevate supplier quality performance.
Achievement of the award requires extensive top management effort and support.
All of the Quality Award winners have been in highly competitive industries and
either had to improve or get out of the business. On a scale of 10 (best) to 1 (poor),
how would you rate your company on each of these attributes? If you find yourself
on the low end, there may be a need for benchmarking.
BENCHMARKING AND SIX SIGMA
Within the information and analysis part of the examination or survey, the practitioners of benchmarking look specifically at competitive comparisons and benchmarks. It has been reported in the literature that many companies do not do enough
in the way of benchmarking. They compare themselves against other manufacturers
but do not make comparisons with outside businesses or even “true” best-in-class
companies.
A six sigma company is expected to describe the company’s approach to selecting quality-related competitive comparisons and world-class benchmarks to support
quality planning, evaluation, and improvement. The specific areas to address are:
• Criteria and rationale the company uses for making competitive comparisons and benchmarks. These include:
• The relationship to company goals and priorities for the improvement
of product and service quality and/or company operations
• The companies for comparison within or outside the industry
• Current scope of competitive and benchmark involvement and data collection relative to:
• Product and service quality
• Customer satisfaction and other customer needs
• Supplier performance
• Employee data
• Internal operations, business processes, and support services
• Other
• For each, the company is directed to list sources of comparisons and
benchmarks, including companies benchmarked and independent testing
or evaluation, and:
• How each type of data is used
• How the company evaluates and improves the scope, sources, and uses
of competitive and benchmark data
• The company must also indicate how this data is used to support:
• Company planning
• Setting of priorities
• Quality performance review
• Improvement of internal operations
• Determination of product or service features that best predict customer
satisfaction
• Quality improvement projects
Specific uses of benchmarking are to assist in:
• Developing plans
• Goal setting
• Continuous process improvement
• Determining trends and levels of product and service quality, the effectiveness of business practices, and supplier quality
• Determining customer satisfaction levels
A closer review of the criteria indicates several factors that are essential for effective
quality excellence and benchmarking activities within a company, including:
Customer-driven quality
• Quality is judged by the customer. The customer’s expectations of
quality dictate product design and this, in turn, drives manufacturing.
• All product and service attributes that lead to customer satisfaction and
preference must be taken into consideration.
• Customer driven quality is a strategic concept. Why do people buy
your product? How do you know?
• Leadership is crucial. A company’s senior management must create
clear quality values, specific goals, and well-defined systems and methods for achieving the goals.
• Ongoing personal involvement is essential. The attitude must be
changed from a “management control” focus to a “management committed to help you” focus.
Continual improvement
• Constant improvement in many directions is required: improved products and services, reduced errors and defects, improved responsiveness,
and improved efficiency and effectiveness in the use of resources. All
of this takes time. If you do not have the time, do not start.
Fast response
• An increasing need exists for shorter new product and service development and introduction cycles and a more rapid response to customers.
Actions based on facts, data and analysis
• A wide range of facts and data is required, e.g., customer satisfaction,
competitive evaluations, supplier data, and data relative to internal
operations.
• Performance indicators to track operational and competitive performance are critical. These performance indicators or goals can act as
the cohesive or unifying force within an organization. They can also
provide the basis for recognition and reward.
• Participation by all employees is important. Reward and recognition systems need to reinforce total participation and the emphasis on quality.
• Factors bearing on the safety, health, and well being of employees need
to be included in the improvement objectives.
• Effective training is required. The emphasis must be on preventing
mistakes, not merely correcting them. Employees must be trained to
inspect their own work on a continuous basis.
• Participation with suppliers is essential. It is important to get suppliers
to improve their quality standards.
NATIONAL QUALITY AWARD WINNERS AND BENCHMARKING
Example — Cadillac
To show the strong relationship between National Quality Award winners and benchmarking, we provide a historical perspective. The first example comes to us from
Cadillac’s approach to excellence. (Cadillac was the 1990 winner of the National
Quality Award.) The brief case study that follows indicates the integration of business
planning, excellent quality management, and benchmarking.
The Business Plan was the Quality Plan. The plan was designed to ensure that
Cadillac is the “Standard of the World” in all measurable areas of quality and
customer service.
The major components of the plan were:
• Mission
• Objectives
• Quality — Emphasis on six major vehicle systems:
• Exterior component and body mechanical
• Chassis/powertrain
• Seats and interior trim
• Electrical/electronics
• Body in white
• Instrument panel
• Competitiveness
• Disciplined planning and execution
• Leadership and people
• Goals
For each objective, the following issues were addressed:
• What are the measurable performance indicators of quality and customer
service? When answering, consider both the product itself and the management process that led to the improved product or service.
• What does the customer need or want?
• What levels are achieved by the best-of-class companies considering both
direct competitors and any other company?
• What are the time-phased quality improvement goals?
Action plans
• Took appropriate and applicable action to fulfill all the requirements so
that the customer could be satisfied.
A Second Example — Xerox
In the early 1980s, Xerox realized that Japanese competitors were selling products for less than Xerox's cost. Many of the required reforms focused on Xerox suppliers because the cost of purchases amounted to 80% of a copier's cost of goods sold. Xerox asked suppliers to restate their company performance data so that each supplier could be compared with the best of class Xerox could find anywhere in the world.
Some of the benchmarks Xerox used to measure operations proficiency included:
• Ratio of functional cost to revenue (percent)
• Headcount per unit of output
• Overhead rate (dollars/hour)
• Cost per order entered
• Cost per engineering drawing
• Customer satisfaction rating (index value)
• Internal and external defect rates (parts per million)
• Service response time (hours)
• Billing error rates
• Days inventory on hand
• Total manufacturing lead time (days)
• New product development time (weeks)
• Percent of parts delivered on time
• New ideas per employee
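Several of the operations measures above are simple ratios. As a minimal sketch (the figures are hypothetical, not Xerox data), two of them might be computed as:

```python
def functional_cost_ratio(functional_cost, revenue):
    """Ratio of functional cost to revenue, expressed as a percent."""
    return 100.0 * functional_cost / revenue

def headcount_per_unit(headcount, units_of_output):
    """Headcount per unit of output."""
    return headcount / units_of_output

# Hypothetical figures, for illustration only:
print(f"{functional_cost_ratio(4.2e6, 60e6):.1f}%")  # functional cost as % of revenue
print(headcount_per_unit(120, 24_000))               # headcount per unit of output
```

Tracked period over period alongside a best-in-class value, such ratios turn the list above into a comparable scorecard.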
Xerox reduced its number of suppliers from 5,000 in 1980 to 300 by 1986 based
on performance data and attitude. Suppliers were classified as (a) those that did not think improvement was necessary, (b) those slow to accept or manage change, and (c) those willing to
go for it and strong enough to be a survivor. Xerox reallocated its internal efforts
to concentrate on the companies in the third group. Xerox provided extensive training
to these companies, and defect rates in incoming materials dropped 90 percent in
three years.
In addition to performance improvement, the suppliers were asked to participate
in copier design, as early in the concept phase as possible, and to make suggestions
so that overall quality could be improved and costs reduced. When this information
was used, the cost of purchased material dropped by 50 percent.
Third Example — IBM Rochester
IBM Rochester describes its quality journey as follows:
1981
  Vision: Product reliability
  Goal: Zero defects
1984
  Vision: Process effectiveness and efficiency
  Goal: All processes rated
1986
  Vision: Customer and supplier partnerships
  Goals: Competitive and functional benchmarks; best of competition
  (Over 350 benchmarking teams are in place; scores of benchmarking studies have been completed; strategic targets are derived from the comprehensive benchmarking process.)
1989
  Vision: Market-driven customer satisfaction; total business process focus; closed-loop quality management system
  Goal: Total customer satisfaction
1990–1994
  Vision: Customer — the final arbiter; Quality — excellence in execution; Products and services — first with the best; People — enabled, empowered, excited, rewarded
  Goal: Undisputed leadership in customer satisfaction
Results:
A 30 percent improvement in productivity occurred between 1986 and 1989.
This was a period of extensive benchmarking activity.
Product development time has been reduced by more than half, and manufacturing cycle time has been trimmed by 60 percent since 1983.
Fourth Example — Motorola
Each of the firm's six major groups and sectors has a "benchmarking" program that analyzes all aspects of a competitor's products to assess their manufacturability, reliability, manufacturing cost, and performance. Motorola has measured the products of some 125 companies against its own standards, verifying that many Motorola products rank as "best in their class." (It is imperative for the reader to understand that the result of a benchmarking study may indeed provide the researcher with data to support the assertion that an organization's current practices are the "best in class.")
BENCHMARKING AND THE DEMING MANAGEMENT METHOD
There is a very close relationship between the approach of W. Edwards Deming and
that specified by the requirements of the National Quality Award. The potential role
of benchmarking to implement certain aspects of the Deming approach is apparent.
Deming’s fourteen points are summarized below:
1. Create constancy of purpose for the improvement of product and services.
2. Adopt the new philosophy that quality is critical for the competitive
survival of a company.
3. Cease dependence on mass inspection, and create the processes that build
a quality product from the start.
4. End the practice of awarding business based on price alone, and take into
consideration the quality of products and services received.
5. Improve constantly and forever the system of production and service. This
begins with product design and goes through every phase of business
operations.
6. Institute training and retraining.
7. Provide leadership and the resources required to get the job done.
8. Drive out the fear of admitting problems and suggesting new and different ways of doing things. Get around the "not invented here" syndrome.
9. Break down interdepartmental barriers so that all departments can work
toward the common objective of satisfying the customer.
10. Eliminate slogans, exhortations, and targets for the workforce without
providing the ways and means for accomplishment. Do not tell people
what to do without telling them how to do it and providing the systems
and support necessary.
11. Eliminate numerical quotas. These often promote poor quality. Instead
analyze the process to determine the systemic changes required to enable
superior performance.
12. Remove barriers to pride in workmanship by providing the training, communication, and facilities required.
13. Institute a vigorous program of education and retraining. Help people to
improve every day.
14. Take action to accomplish the transformation required.
BENCHMARKING AND THE SHEWHART CYCLE OR DEMING WHEEL
Plan
Study a process to determine what changes might be made to improve it. What type
of performance is achieved by the best of the best? What do they do that we are not
doing? What results do they achieve? What changes would we have to make? What
does the customer expect? What is the customer level of satisfaction? Is the change
economically justified?
Do
Determine the specific plan for improvement and implement it. This involves the
development of creative alternatives by work teams and the conscious choice of a
strategy to be followed. This may require internal or external benchmarking.
Study — Observe the Effects
Was the root cause of the problem identified and corrected? Will the problem recur?
Are the expected results being achieved?
Act
Study the results and repeat the process. Was the plan a good one? What was learned?
This approach amounts to the application of the scientific method to the solution
of business problems. It is the basis of organizational learning.
WHY DO PEOPLE BUY?
Differentiation and quality management both focus on the need to meet customer
needs, wants, and expectations. Why does a person buy a particular product?
• One view: (marketing based)
• A second view: (psychologically based)
How can we define quality? This is a very critical question and may indeed
prove the most important question in pursuing benchmarking. The importance of
this question is that it will focus the research on “best” in a very customized fashion
from the organization’s perspective. This is a question that must be addressed as
early as possible.
ALTERNATIVE DEFINITIONS OF QUALITY
People buy a combination of products and services for a price that depends upon
the perception of the value received. In order to conduct benchmarking studies
relative to quality, it is important to define the elusive term “quality.” Garvin (1988)
and Stamatis (1996, 1997) provide various definitions of quality as follows:
Garvin’s eight dimensions of product quality are:
Performance — Performance refers to the ability of the product to perform
up to expectations relative to its primary operating characteristics. For
example, a camera can be self-focusing and automatically adjust the lens
opening. Products can often be ranked in terms of levels of performance,
i.e., good, better, best. People’s expectations differ depending upon the task
to be performed. Products are designed for different uses. Therefore, a
failure to perform might simply indicate another product class or market
segment focus and not inferior quality.
Features — Features are secondary attributes that affect a product’s performance. For example, the camera mentioned above can weigh less than two
pounds. A car can have power steering as a feature. Features can often be
bundled or unbundled. The distinction between performance and features
is arbitrary. One person’s performance characteristics can be another person’s features.
Reliability — Reliability reflects the ability of a product to perform properly
over a period of time. A car, for example, might perform without major
repairs for 50,000 miles. Measures used to evaluate reliability are factors
such as the mean time between failures, the mean time to first failure, and
the failure rate per 1000 items.
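The reliability measures named here follow directly from failure counts and operating time. A minimal sketch, with illustrative numbers not taken from the text:

```python
def mean_time_between_failures(total_operating_hours, failure_count):
    """MTBF for a repairable product: operating hours divided by failures observed."""
    return total_operating_hours / failure_count

def failure_rate_per_1000(failure_count, units_fielded):
    """Failures per 1000 items fielded over the observation period."""
    return 1000.0 * failure_count / units_fielded

# Illustrative figures:
print(mean_time_between_failures(50_000, 4))  # 12500.0 hours between failures
print(failure_rate_per_1000(12, 8_000))       # 1.5 failures per 1000 items
```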
Conformance — Conformance measures whether product quality specifications have been met. Is a shaft the required diameter? Are the parts per million of impurity within the specified limits? Individual parts can be
within tolerance; however, there can be a problem of tolerance stackup.
Four parts, each 1.000 inch wide plus or minus .0005 inch, when stacked
up will not be 4.000 inches tall plus or minus .0005 inch.
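The stackup arithmetic can be made explicit: under a worst-case analysis the four ±.0005 inch tolerances add linearly to ±.002 inch, while a statistical (root-sum-square) stack, assuming independent variation, gives a tighter figure. A minimal sketch:

```python
import math

def worst_case_stack(nominals, tolerances):
    """Worst-case stackup: tolerances add linearly."""
    return sum(nominals), sum(tolerances)

def rss_stack(nominals, tolerances):
    """Statistical (root-sum-square) stackup, assuming independent variation."""
    return sum(nominals), math.sqrt(sum(t * t for t in tolerances))

nominals = [1.000] * 4     # four parts, each 1.000 inch wide
tolerances = [0.0005] * 4  # each toleranced at plus or minus .0005 inch

nom, wc = worst_case_stack(nominals, tolerances)
print(f"worst case: {nom:.3f} +/- {wc:.4f} in")  # 4.000 +/- 0.0020 in
nom, rss = rss_stack(nominals, tolerances)
print(f"RSS:        {nom:.3f} +/- {rss:.4f} in")  # 4.000 +/- 0.0010 in
```

The worst-case figure is what the text warns about: the stack cannot be held to the single-part tolerance of ±.0005 inch.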
Durability — Durability measures a product’s expected operating life. Product
life can be limited due to technical failure (mechanical, electrical, hydraulic,
pneumatic), technical obsolescence, or the economics of continued repair.
For example, a light bulb has technical failure when the filament burns out.
An automobile has economic failure when the owner decides it is no longer
economically advantageous to repair it.
Serviceability — Serviceability refers to the speed, ease, cost, certainty, and
effectiveness of repair. Of critical concern are the courtesy of the repair
people, the speed of getting the product back, and whether or not it is really
fixed.
Aesthetics — Aesthetics are concerned with the look, taste, feel, sound, and
smell of an item. This can be critical for products such as food, paint, works
of art, fashion, and decorative items.
Perceived quality — Perceived quality is determined by factors such as image,
advertising, brand identity, and word of mouth reputation.
Stamatis, on the other hand, has introduced a modified version of the above
points with some additional points — especially for service organizations. They are:
Function — The primary required performance of the service
Features — The expected performance (bells and whistles) of the service
Conformance — The satisfaction based on requirements that have been set
Reliability — The confidence of the service in relationship to time
Serviceability — The ability to service if something goes wrong
Aesthetics — The experience itself as it relates to the senses
Perception — The reputation of the quality
To be effective and efficient, the following characteristics must be present:
• Be accessible
• Provide prompt personal attention
• Offer expertise
• Provide leading technology
• Depend — quite often — on subjective satisfaction
• Provide for cost effectiveness
What is interesting about these two lists is the fact that both Garvin and Stamatis
recognize that design for optimum customer satisfaction is a design issue. Design,
indeed, is the integrating factor. The designer has to make the tough trade-offs.
Concurrent engineering and Quality Function Deployment suggest that the product
designer, the manufacturing engineer, and the purchasing specialist work jointly
during the product design phase to build quality in from the start. The focus, of
course, is to design all the above characteristics as a bundle of utility for the customer.
That bundle must address, in a holistic approach, the following:
Image
Transcendent view — This view defines quality as that property that you
will innately recognize as such once you have been exposed to it.
Something about the product or service or the way it has been promoted/communicated to you causes you to recognize it as a quality
offering — perhaps an excellent one.
Performance
Product-based view — This view defines quality in terms of a desirable
attribute or series of attributes that a product contains. A high-quality
fuel product could have a high BTU content and a low percentage of
sulfur.
User-based view — This view defines quality in terms of how well a
product or service meets the expectation of the customer. If the product
meets expectations, it is considered to be of high quality. Expectations
vary widely, and meeting expectations may not lead to the best product.
For example, a bestseller may not be the best literature.
Manufacturing-based view — This view defines quality in terms of conformance to manufacturing specifications. This view may, however, promote manufacturing efficiencies at the expense of suitability to the user.
For example, problems of tolerance stackup are particularly noteworthy.
Value
Value-based view — This view, which is gaining in popularity, looks at value as the trade-off between quality and price. From this perspective, quality consists of all of the non-price reasons to buy a product or service.
To come up with reasonable definitions and actions for the above characteristics,
a team must be in place and team dynamics at work. A very good approach for this
portion of benchmarking that we recommend is the nominal group process:
The process features are as follows:
Group size: five to nine core individuals
Group composition: multidisciplinary and cross-functional
Reflection (20 minutes): all participants express their views on what the problem is and how the team should proceed
Sharing of ideas: discussion of the presented ideas
Voting: evaluation of ideas and selection of the "best"
Tabulation: final resolution of what is at stake and how to proceed so that success will result
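The voting and tabulation steps can be sketched as a rank-weighted tally. The point scheme and the ideas below are illustrative assumptions, since the text does not prescribe a scoring rule:

```python
from collections import Counter

def nominal_group_vote(ballots, top_n=3):
    """Tally nominal-group ballots: each ballot lists ideas best-first,
    and an idea earns (ballot length - rank) points per ballot."""
    points = Counter()
    for ballot in ballots:
        for rank, idea in enumerate(ballot):
            points[idea] += len(ballot) - rank
    return points.most_common(top_n)

# Hypothetical ballots from a small team:
ballots = [
    ["reduce setup time", "supplier audits", "error-proof fixtures"],
    ["error-proof fixtures", "reduce setup time", "supplier audits"],
    ["reduce setup time", "error-proof fixtures", "supplier audits"],
]
print(nominal_group_vote(ballots))
```

Tabulation then amounts to reading the sorted point totals back to the team as the agreed priority order.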
The discussion and direction of the nominal process must not focus on price
alone because that is a very narrow point of view. Some examples of non-price
reasons to buy are:
Product non-price reasons to buy
• Ease of product use
• Performance
• Features
• Reliability
• Conformance
• Durability
• Serviceability
• Aesthetics/style
• Perceived quality
• Ability to provide a bundled package
Service and image non-price reasons to buy
• Speed of delivery
• Dependability of delivery
• Fill rate
• Fun to deal with
• Number/location of stocking warehouses
• Repair facilities and location
• Technical assistance
• Service — before, during, and after sale
• Willingness to hold inventory
• Flexibility
• Access to salespeople
• Access to multiple supply sources
• Reputation
• Life cycle cost
• Financing terms
• Turnkey operations
• Consulting/training
• Warehousing
• Guarantees/warranty
• Services provided by salespeople
• Ease of resale
• Computer placement of orders
• Professional endorsement
• Packaging
• Up front engineering
• Vendor financial stability
• Confidence in salespeople
• Backup facilities
• Courtesy
• Credibility
• Understandability
• Responsiveness
• Accessibility of key players
• Flexibility
• Confidentiality
• Safety
• Delivery
• Ease of installation
• Ability to upgrade
• Customer training
• Provision of ancillary services
• Product briefing seminars
• Repair service and parts availability
• Warranty
• Image
• Brand recognition
• Atmosphere of facilities
• Sponsor of special events
The service and image features define the “augmented product.” They answer
the questions:
• What does your customer want in addition to the product itself? (the
unspoken requirements)
• What does your customer perceive to have value?
• What does your customer view as “quality”?
In order to focus benchmarking efforts, it is critical to define the unique selling
proposition or the product concept. A statement of product concept requires the definition of both attribute(s) and benefit(s). Attributes consist of both form and features
(specific product or service characteristics) and technology (how they are to be provided). For example, a new brewing technique brings a double-strength beer to add to
your enjoyment by capturing the taste of the 1800s (technology, form, benefit).
So what do you expect to get out of this team effort and integration? Simply
put, you should get the answers to some very fundamental questions about your
organization and the product/service you offer. Some typical questions are:
• What are the non-price reasons to buy your product? How do they compare
with the product and service attributes listed above?
• How do your customers define quality? How does your company define
quality?
• What is more important? Product or service?
• Can specific, measurable attributes be defined?
• How does your competitor define quality?
• How do you compare with your competitor?
• What other companies or industries influence your customer as to what
should be expected relative to each of these characteristics?
• What does this suggest in the way of benchmarking opportunities?
For example, here are some non-price reasons to buy that might apply to a
supermarket:
• Large parking lot
• Zoo in parking lot
• Lots of giveaways
• Makes shopping fun for the entire family
• Clowns
• Disneyland figures
• Well-stocked, attractive displays
• Rock hard containers of ice cream
• Complaint box (policy to respond the next day to the customer)
• Fast cash out
• “Forget your checkbook? Pay next time.”
• Trains all associates
• Uses Dale Carnegie courses
• Walt Disney people management
• One aisle that rambles through the store
• No question return policy
• Bus for senior citizens
• Customer focus groups every three weeks
• Associates who take the initiative to please customers
• In-store dairy and bakery
None of these, in itself, is earth-shaking. But they could make the difference in
an industry that operates with a profit margin of less than 1%.
We cannot pass up the opportunity to address non-price issues for the Wal-Mart corporation, which reportedly attends to details such as the following items:
• Aggressive hospitality
• People greeters
• Associates not employees
• Tough people to sell to
• Weekly top management meetings
• Low cost, no frills environment
• Good computerized database
• Rapid communications by phone
• Managers in the field Monday through Thursday
• High-efficiency distribution centers
• Emphasis on training of people
• Department managers having cost and margin data
• Profit sharing if store meets goals
• Bonus if shrinkage goal is met
• Open door policy
• Grass-roots meetings
• Constant improvements
• Competitive ads shown in store
DETERMINING THE CUSTOMER’S PERCEPTION OF QUALITY
Differentiation is uniqueness in the eyes of the customer. Quality is meeting the
unique needs, wants, and expectations of the customer in terms of the non-price
reasons to buy. But who is the customer? Depending on (a) defining the customer
for multiple channels of distribution or (b) identifying the multiple buying influences
in a business-to-business sale, the customer may be:
• User
• Technical buyer
• Economic buyer
• Corporate general interest buyer
Who is the competitor? Assume for example a recreation environment. Here are
some questions you might ask that would help you to determine who the competitors are:
What is the desire I want to satisfy? (Desire competitor)
• Recreation
• Education
What kind of recreation do I enjoy? (Generic competitor)
• Baseball
• Boating
What kind of boating? (Form competitor)
• Power boat
• Sailboat
What brand boat? (Brand competitor)
• Bayliner
• Boston Whaler
Once these questions have been addressed, we are ready to do the competitive evaluation in the following stages:
Survey design
• Attributes considered
• Relative weight given to each
• Direct competitors
• Performance versus competition
Approaches to making the survey
Internal
• Sales force
• Sales management (Remember, the more accurate data you have,
the better the survey. For example: Colgate Palmolive audits 75,000
customers for all products. “People know what they want and will
not settle for happy mediums.”)
External
• Market research firms/universities
• Attribution/non-attribution
• Use of customer service hot line — GE progressed from receiving
1000 calls per week in 1982 to receiving 65,000 calls per week with
the installation of an 800 number answer center. The 150 phone
reps need a college degree and sales experience. They have been
effective in spotting trends in complaints as well as increasing sales.
The increase in sales has been estimated at more than twice the
operating cost of the center. (Did this trigger off a benchmarking
candidate for you?)
Groups to be surveyed
• Current customers
• Lost customers
• Prospects
Survey frequency
Comparison of company internal view versus the customer view
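The survey-design elements above (attributes considered, relative weight given to each, performance versus competition) combine naturally into a weighted competitive score. A minimal sketch, with hypothetical attributes, weights, and ratings:

```python
def weighted_score(weights, ratings):
    """Weighted competitive score: sum of attribute weight times rating.
    Weights should sum to 1.0; ratings here are on a 1-10 scale."""
    return sum(w * r for w, r in zip(weights, ratings))

# Hypothetical survey design (attributes, weights, and 1-10 ratings):
attributes = ["reliability", "delivery", "service", "price"]
weights    = [0.40, 0.25, 0.20, 0.15]
ours       = [8, 6, 7, 9]
competitor = [9, 8, 6, 7]

print("our score:       ", weighted_score(weights, ours))
print("competitor score:", weighted_score(weights, competitor))
```

The same scoring, run once with internal ratings and once with customer ratings, gives the comparison of the company's internal view versus the customer view.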
QUALITY, PRICING, AND RETURN ON INVESTMENT (ROI) — THE PIMS RESULTS
Being perceived as the best or having the product with the highest quality can
have significant bottom line results. Buzzell and Gale (1987) introduced the PIMS
(Profit Impact of Marketing Strategies) system, which is an elaborate benchmarking
database developed by the Strategic Planning Institute in Cambridge, Mass. The database contains information for over 450 companies and over 3000 business experience
pools in a wide variety of industries, including manufacturers, raw material producers,
service companies, distributors, and durable and non-durable consumer products. Data
are collected for independent business units, each with a defined served market.
The objectives of the Strategic Planning Institute and benchmarking are to help
organizations in the process of becoming excellent organizations. How do they do
it? By:
1. Using the statistical analysis and modeling of business experience
2. Isolating the key factors that determine return on investment (ROI)
ROI equals net income before interest and taxes divided by the total of working
capital and fixed capital.
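As a worked example of that definition (the figures are hypothetical, in $ millions):

```python
def roi(net_income_before_interest_and_taxes, working_capital, fixed_capital):
    """ROI as defined for the PIMS database:
    NIBIT divided by the total of working and fixed capital."""
    return net_income_before_interest_and_taxes / (working_capital + fixed_capital)

# Hypothetical business unit: $12M NIBIT on $30M working + $50M fixed capital
print(f"ROI = {roi(12.0, 30.0, 50.0):.1%}")  # ROI = 15.0%
```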
As a result, the Institute can help organizations with the understandability and predictability of their own organization's behavior and their own products and services.
Of course, the choice of strategy depends upon several factors, including but
not limited to:
• Market growth rate and product life cycle
• Current market share
• Price/quality sensitivity by segment
• Competitive response profiles
• Current and planned capacity
• Cost and feasibility of quality improvements
• Market perception of quality improvements
• Financial and marketing goals — long and short term (The period described as “short term” and “long term” will differ widely among various strategies and organizations.)
BENCHMARKING AS A MANAGEMENT TOOL
So far we have talked about benchmarking but we really have not defined it. A
formal definition, then, is that benchmarking is a systematic, continual (ongoing)
management process used to improve products, services, or management processes by focusing on and analyzing the best practices of direct competitors or any other companies, to determine standards of performance and how to achieve those standards, and to provide least cost, quality, or differentiation in the eyes of the customer.
Key words in this definition are systematic and ongoing, which imply that in order to benchmark successfully one must be familiar with the Kano model, the Shewhart-Deming cycle, and the principle of Kaizen improvement. This systematic and ongoing pursuit of excellence is applicable to all aspects of business and to all methodologies, including six sigma. It is an integral part of the strategic, operational, and quality planning process. It is not an end in itself.
Benchmarking identifies the best of class and determines standards of excellence
based on the market — considering both customers and competitors. It is a challenge
with a solution. It provides the what and how. (A narrow focus on what you want
to get done — a results orientation that controls performance with a carrot and a
stick — is not effective without a broader focus on how best to do it — a process
orientation that identifies the process changes that need to be made in order for the
results to be achieved consistently.)
Benchmarking is creative imitation: its goal-setting process encourages the development of proactive plans and action to bring about change.
To do that, of course, analysis is required to determine all of the factors necessary
for a solution to work, as appropriate and applicable to a given organization. In
addition, it is necessary to project the future performance of the competition to set
improvement goals. Otherwise, a company is always playing catch-up. Some of the
key factors in this analysis are:
• People/culture/compensation
• Process/procedure
• Facilities/systems
• Material
WHAT BENCHMARKING IS AND IS NOT
Benchmarking is not:
• A way to cut costs or headcount, necessarily
• A quick fix or a panacea
• A cookbook approach
Rather, it is a methodology that is an integral part of the management process
and provides the organization with many benefits including but not limited to:
• Identifying the specific action plans required to achieve success in company growth and profitability
• Assessing objectively strengths and weaknesses versus competition and
the best in class
• Improving quality as perceived by the customer (The customer can be
external to the company or the next department in the company.)
• Determining goals objectively and realistically based on the actual
achievements of others
• Providing a vision of what can be accomplished in terms of both what
and how
• Providing hard, reliable data as a basis for performance improvement
• Causing people to think creatively and to look at proactive alternative
solutions to a problem
• Promoting an opportunity for personal and corporate growth, learning,
and development
• Raising the company level of awareness of the outside world and of
customer needs
• Stimulating change — “Others are doing this, why can’t/shouldn’t we?”
• Identifying all of the factors required to get a job done
• Promoting an in-depth analysis and quantification of operations and management processes
• Encouraging teamwork and communication within an organization
• Creating an awareness of problems and stimulating change
• Documenting the fact that a good job is being done and that you are the
best in class
• Allowing a company to leapfrog competition by looking outside of an
industry
• Changing the rules of the game by breaking with the traditions of an
industry
THE BENCHMARKING PROCESS
The benchmarking process can differ from company to company. However, the ten-step process below is generally followed.
I. Benchmark planning and prioritization
1. Identification of benchmarking alternatives
2. Prioritization of the benchmarking alternatives
II. Benchmark data collection
3. Identification of the benchmarking sources
4. Benchmarking performance and process analysis — company operations
• What do we do?
• What is the process?
• What are the resource inputs?
• What are the outputs?
• What is the resource cost per unit of output?
• What are the limitations?
• What are possible changes?
5. Benchmarking performance and process analysis — partner’s operations
III. Benchmark implementation
6. Gap analysis
7. Goal setting
8. Action plan identification and implementation
IV. Benchmark monitoring and control
9. Monitor company performance and action plan milestones
10. Identify the new “best in class”
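Step 6, gap analysis, is the quantitative heart of the process: each metric is compared with the best-in-class value, minding whether higher or lower is better. A minimal sketch with hypothetical metrics:

```python
def benchmark_gap(company, best_in_class, higher_is_better=True):
    """Gap between company performance and the best in class.
    A negative gap means the company trails the benchmark."""
    gap = company - best_in_class if higher_is_better else best_in_class - company
    return gap, 100.0 * gap / best_in_class  # absolute gap and percent of benchmark

# Hypothetical metrics: (company value, best-in-class value, higher_is_better)
metrics = {
    "percent of parts delivered on time": (87.0, 99.5, True),
    "cost per order entered ($)":         (14.2, 8.5, False),
}
for name, (ours, best, hib) in metrics.items():
    gap, pct = benchmark_gap(ours, best, hib)
    print(f"{name}: gap {gap:+.1f} ({pct:+.1f}% of benchmark)")
```

The computed gaps then feed step 7 (goal setting) and step 8 (action plan identification).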
TYPES OF BENCHMARKING
Benchmarking can be performed for any product, service, or process. Different
classification schemes have been suggested. For example, Xerox classifies benchmarking in the following categories:
• Internal benchmarking
• Direct product competitor benchmarks
• Functional benchmarking — This is a comparison with the best of the
best even if from a different industry.
• Generic benchmarking — This is an extension of functional benchmarking. It requires the ability to creatively imitate any process or activity to
meet a specific need. For example, the technique used for high speed
checking of paper currency (into the categories of good, mutilated, or
counterfeit) by a bank could be adapted for high speed identification and
sorting of packages in a warehouse.
ATT, on the other hand, uses the classification indicated below. Specific examples
of benchmarking studies for each are shown. These are not limited to ATT examples:
Task
• Invoicing
• Order entry
• Invoice design
• Customer satisfaction
• Supplier evaluation
• Flow charting
• Accounts payable
Functional
• Promotion by banks
• Purchasing
• Advertising by media type
• Pricing strategy
• Safety
• Security
Management process
• PIMS par report
• Profit margin/asset turnover
• Strategic planning
• Operational planning
• Capital project approval process
• Technology assessment
• Research and development (R and D) project selection
• Innovation
• Training
• Time-based competition
• Benchmarking
• Self-managed teams
Operations process
• Warehouse operations
• Make versus buy
Another classification of benchmarking projects is by:
• Function — sales and marketing
• Process — missionary selling
• Activity — cold calling
• Task — preparation of target list
Still another classification is in terms of:
• Overall financial performance
• Department or functional benchmarking
• Cost benchmarking
ORGANIZATION FOR BENCHMARKING
Ad hoc benchmarking studies can be helpful and productive. However, many companies are attempting to institutionalize benchmarking as part of the business planning and six sigma process.
The business planning process consists of strategic planning followed by operational planning. Both phases require the development of functional area plans.
However, the time periods considered, the alternatives of interest, and the level of
detail are very different. The general flow of the planning process is:
What should we do?
• Situation analysis performed to determine critical success factors,
strengths, weaknesses, opportunities, and threats
• Mission development
• Statement of objectives and goals
How should we do it?
• Strategy determination
• Tactics identified
• Action plans specified
What are the expected results?
• Budgets and financial projections
How did we do?
• Monitoring and control
Who should get rewarded?
• Performance evaluation and compensation
Benchmarking is often an integral part of the situation analysis. It can also have
a major impact on the mission statement, the goals, the strategy, the tactics, and the
identification and determination of action plans. Benchmarking can provide major
guidance when determining what to do, how to do it, and what can be expected.
Benchmarking for strategic planning might concentrate on the determination of
the critical success factors for an industry (based on customer and competitive inputs)
and identifying what has to be done to excel on those success factors. This then leads to the
development of a detailed action plan with effort and result goals. Benchmarking for
operational planning might concentrate on the cost and cost structure for each
functional area relative to the outputs produced.
All quality initiatives — including six sigma — have a significant influence on
the mission statement and the objectives and goals of an organization. As such, they
can provide an added impetus to do benchmarking to satisfy the quality goals.
Benchmarking can be centralized (AT&T) or decentralized (Xerox). Xerox has several
functional area benchmarking specialists, including specialists for finance, administration, marketing, and manufacturing. The big advantage of a decentralized
approach is a greater likelihood of organizational buy-in to the final results of the
benchmarking study. The effort required to perform a benchmarking study can vary
significantly. For example, the L.L. Bean study performed by Xerox took one person-year
of effort. Generally, three to six companies are included in the benchmark.
However, some companies use only one or two. Also, some studies are performed
in depth, while others are fairly casual. The “One Idea Club” was a simple approach
with a substantial reward.
REQUIREMENTS FOR SUCCESS
All initiatives have requirements for success. Benchmarking is no different. Some
of these requirements are:
• Management vision and support to ensure the conditions necessary for
the success of the strategy — people, money, time.
• Goal-focused management with a customer/competitive focus on continuously improved quality
• Performance- or results-based compensation
• Action plan prioritization and focus
• Defined roles and responsibilities for a multidisciplinary approach
• Defined organizational approach — central versus decentralized
• Integration with other management processes
• Ability to maintain focus on the continuous improvement of hundreds of
small items a little bit at a time
• Willingness to deal with the conflict caused by a lack of goal congruity
and the need to share scarce resources and to make tough decisions
• Tolerance to deal with the ambiguity of results as research is conducted
to determine when, where, and how to improve operations
• Openness to learn and to change; results can affect organization structure,
allocation of resources, corporate culture, and individual work assignments
• Use of the scientific method: hypothesis formation, data collection, testing, and learning
• Humility and the willingness to admit weakness and the possibility for
improvement
• Identification of the impediments to change and the development of a plan
for change
• Patience and resources to perform the analytical studies and to complete
the required documentation
• Long-term commitment to achieving results
• Flexibility and discipline to implement the required changes
• Communication of intent and approach, findings, concerns, and apprehensions
• Training and total employee involvement, empowerment, and teamwork
• A process that starts slow, showcases, and picks up speed as experience
and confidence are gained
• Market segmentation focus and a defined corporate strategy
It sounds good. But does benchmarking work? Let us see what Scandinavian Airlines
(SAS) did, as an example.
When Jan Carlzon took over as president of Scandinavian Airlines (SAS) in 1980,
the company was losing money. For several previous years, management had dealt with
this problem by cutting costs. After all, this was a commodity business. Carlzon saw
this as the wrong solution. In his view, the company needed to find new ways to compete
and build its revenue. SAS had been pursuing all travelers with no focus on superior
advantage to offer to anyone. In fact, it was seen as one of the least punctual carriers
in Europe. Competition had increased so much that Carlzon had to figure out:
• Who are the customers?
• What are their needs?
• What must we do to win their preference?
Carlzon decided that the answer was to focus SAS’s services on frequently flying
business people and their needs. He recognized that other airlines were thinking the
same way. They were offering business class and free drinks and other amenities.
SAS had to find a way to do this better if it was to be the preferred airline of the
frequent business traveler.
The starting point was market research to find out what frequent business
travelers wanted and expected in the way of airline service. Carlzon’s goal was to
be one percent better in 100 details rather than 100 percent better in only one detail.
The market research showed that the number one priority was on-time arrival.
Business travelers also wanted to check in fast and be able to retrieve their luggage
fast. Carlzon appointed dozens of task forces to come up with ideas for improving
these and other services. They came back with ideas for hundreds of projects, of
which 150 were selected with an implementation cost of $40 million.
One of the key projects was to train a total customer orientation into all of SAS’s
personnel. Carlzon figured that the average passenger came into contact with five
SAS employees on an average trip. Each interaction created a “moment of truth”
about SAS. At that point of contact, the person was SAS. Given the 5 million
passengers per year flying SAS, this amounted to 25 million moments of truth where
the company either satisfied or dissatisfied its customer.
To create the right attitudes toward customers within the company, SAS sent
10,000 front line staff to service seminars for two days and 25,000 managers to
three-week courses. Carlzon taught many of these courses himself. A major emphasis
was getting people to value their own self-worth so that they could, in turn, treat
the customer with respect and dignity. Every person was there to serve the customer
or to serve someone who was serving the customer.
The results: Within four months, SAS achieved the record as the most punctual
airline system in Europe, and it has maintained this record. Check-in systems are
much faster, and they include a service where travelers who are staying at SAS
hotels can have their luggage sent directly to the airport for loading on the plane.
SAS does a much faster job of unloading luggage after landings as well. Another
innovation is that SAS sells all tickets as business class unless the traveler wants
economy class.
The company’s improved reputation among business flyers led to an increase in
its full fare traffic in Europe of 8 percent and its full fare intercontinental travel of
16 percent, quite an accomplishment considering the price cutting that was taking
place and zero growth in the air travel market. Within a two-year period, the company
became a profitable operation.
Carlzon’s impact on SAS illustrates the customer satisfaction and profits that a
corporate leader can achieve by creating a vision and target for the company that
excites and gets all the personnel to swim in the same direction — namely, toward
satisfying the target customers. As a leader, Carlzon created the conditions necessary
to ensure the success of the strategy by implementing the projects required for the
front line people to do their jobs well.
BENCHMARKING AND CHANGE MANAGEMENT
Several behavioral models underscore the psychological requirements for change in
a person or an organization. The classic equation for change is:
D×V×F>R
where D = dissatisfaction with current situation; V = vision of a better future; F =
the first steps of a plan to convert D to V; and R = resistance to change.
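The inequality works as a gate: because the factors are multiplied, change stalls if any one of them is missing. A minimal sketch of this reading, with 0-to-10 scores that are invented for illustration and not part of the text:

```python
# Sketch of the classic change equation D x V x F > R.
# The 0-10 scores used below are illustrative assumptions only.

def change_likely(d: float, v: float, f: float, r: float) -> bool:
    """Return True when dissatisfaction (d), vision (v), and first steps (f)
    together outweigh resistance (r). Any factor at zero kills the product."""
    return d * v * f > r

# Strong dissatisfaction and vision, but no concrete first steps:
print(change_likely(8, 7, 0, 50))   # False - the product is zero
# Adding even modest first steps tips the balance:
print(change_likely(8, 7, 3, 50))   # True - 168 > 50
```

Multiplying rather than adding is what captures the point that a missing ingredient cannot be compensated for by the others.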
Typical attitudes/comments of resistance are:
• Perceived threat of loss — power, position
• Everything is OK. Why fix it?
• What should we change? How?
• What is management trying to tell me?
• Takes a long time to see results!
• We do not have time to do that “stuff.”
• If this is so good, why aren’t they doing it?
Benchmarking can accelerate the change process by offering the organization’s
managers facts that relate to their needs and expectations, and by addressing the
psychology of change. For example, while the previous mathematical formula is a
quantifiable entity on its own, it gives us little opportunity to explore change from
the individual’s perspective. Change begins with an individual. That individual must:
1. Believe that he or she has the skill necessitated by the change. Can I do it?
2. Perceive a reasonable likelihood of personal value fulfillment as a result
of making the change. What will I get out of it?
3. Perceive that the total personal cost of making the change is more than
offset by the expectation of personal gain. Is it worth making the change?
This model suggests that we manage change by education and communication
to influence what a person thinks and that this, in turn, causes a change in behavior.
Thought is affected by:
• Beliefs
• Facts
• Values
• Feelings
Benchmarking can help implement change by providing the required facts and
challenging beliefs, especially when supporting data from other organizations are available. Other models to manage change are:
• Facilitation and support
• Participation and involvement
• Negotiation
• Manipulation
• Explicit and implicit coercion
Corporate culture is important:
• Reward risk taking
• Encourage passionate champions
• Focus on base hits versus home runs
Sources of dissatisfaction that can drive change include:
• Financial pressure
• Quarterly earnings
• Cash flow (Need: to improve operational efficiency)
STRUCTURAL PRESSURE
• Cyclical business mix
• Customer mix
• Cash flow conflicts
• Product life cycle mix (Need: To improve business mix or effectiveness)
ASPIRATION FOR EXCELLENCE
The need to improve is an internal perception. “You do not have to be bad to
improve.” Organization positions can be viewed as having innovation and/or
maintenance responsibilities relative to change. How does the mix change for workers, supervisors, middle management, and top management in an organization that
strives for excellence?
Current success can mask underlying problems and can prevent or delay action
from taking place when it should, i.e., when the company has the time and resources
to do something. Consider the classic story of the “boiled frog” as an example. (If
you recall, the frog was boiled when the temperature was increased at a very slow
rate. The frog kept adapting; it could not detect the change and ultimately was boiled. On the other hand, the frog that was thrown into hot water
jumped out right away and saved its life.)
FORCE FIELD ANALYSIS
Force field analysis is a systematic way of identifying and portraying the forces (often
people) for or against change in an organization. The specific forces will differ depending upon the area where benchmarking is applied. Here is how the process works:
1. Define the current situation.
2. Define the desired position based on the results of the benchmarking study.
3. Define the worst possible situation.
4. What are the forces for change? What is their relative strength?
5. What are the forces against change? What is their relative strength?
6. What forces can you influence?
7. Define the specific action to be taken relative to each of those forces that you can influence.
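As a rough illustration, the steps above can be reduced to a tally of weighted forces followed by a focus on the ones you can influence. The force names and the 1-to-5 strengths below are hypothetical:

```python
# Illustrative force field tally; all names and strengths (1-5) are assumed.
forces_for = {"executive sponsorship": 5, "customer complaints": 4, "benchmark data": 3}
forces_against = {"fear of job loss": 4, "competing projects": 3, "past failed initiatives": 2}

# Net balance of the forces for and against change.
net = sum(forces_for.values()) - sum(forces_against.values())
print(f"Net force for change: {net:+d}")

# Per step 7, act only on the opposing forces you can actually influence,
# strongest first.
influence = ["competing projects", "past failed initiatives"]
for f in sorted(influence, key=forces_against.get, reverse=True):
    print(f"Act on: {f} (strength {forces_against[f]})")
```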
One effective way to start the benchmarking process is to select one high
visibility area of concern to the influence leaders in a company and produce results
that can showcase the benchmarking process. This might start with a library search
to highlight the results that are possible.
IDENTIFICATION OF BENCHMARKING ALTERNATIVES
As indicated earlier, benchmarking candidates can be identified in a wide variety of
ways. They can be detected, for example, during the business planning process, as
part of a quality initiative, during a six sigma project, or during a profit improvement
campaign. Both external and internal analysis can lead to potential candidates.
EXTERNALLY IDENTIFIED BENCHMARKING CANDIDATES
Industry Analysis and Critical Success Factors
Based on the structure of an industry and the dynamics of the customer/supplier
interface, certain factors are critical to the success of a business. An identification
of the critical success factors and an evaluation of the company’s current capabilities
can lead to benchmarking opportunities.
The competitive rivalry among firms in an industry has a significant impact on
total demand and the level and stability of prices. Competitive rivalry is a function
of several interrelated factors that affect the supply and demand for products and
services. The balance of supply and demand at a particular time affects the percent
capacity utilization in an industry. The percent capacity utilization directly affects
price levels and price elasticity.
The factors affecting demand are:
• The strategy of the buyer to be least cost or differentiated — How well
do you meet your customers’ specific needs?
• The availability of and knowledge about substitute products — What are
existing and new competitors likely to do?
• The ease of switching from one product to another — How can you
increase the cost of switching to another supplier?
• Governmental regulations — What can you do to influence these?
The factors affecting supply are:
• The ease of market entry — What can you do that will make it hard to
enter the business?
• The barriers to market exit — What can you do to make it easy for a
competitor to get out of the business?
• Governmental regulations — What can you do to influence these?
Based on an analysis of the industry as it exists now and might exist in the
future, what are the factors absolutely critical for success? Five or six critical success
factors can usually be identified for a company. Examples are:
• Customer service
• Distribution
• Technically superior product
• Styling
• Location
• Product mix
• Cost control
• Dealer system
• Product availability
• Supply source
• Production engineering
• Advertising and promotion
• Packaging
• Staff/skill availability
• Quality
• Convenience
• Personal attention
• Innovation
• Capital
Once the critical success factors have been identified, the company can assess its
current position to determine whether benchmarking is required. One technique for
performing this analysis is to make a tabulation showing how the major competitors in
an industry rank for each critical success factor. As a cross check, there should be a
correlation between the tabulated results, market share, and return on equity.
PIMS Par Report
The PIMS (Profit Impact of Market Strategy) par report indicates the financial results that companies in similar circumstances have been able to achieve. As such, it provides a quantitative benchmark.
The PIMS report also indicates those factors that should enable you to earn greater
than par and those factors that would cause you to earn less than par.
Financial Comparison
If PIMS data are not available, a comparison of the company’s financial performance
versus that of other companies in the same industry can suggest the value of
benchmarking in specific areas. Potential areas that might be identified are:
• Gross margin improvement
• Overhead cost reduction
• Fixed asset utilization
• Inventory or accounts receivable reduction
• Liquidity improvement
• Financial leverage
• Sales growth
Competitive Evaluations
As discussed earlier, a competitive evaluation is a periodic assessment made to
determine, objectively, what factors a buyer takes into consideration when deciding
to buy from one supplier versus another, the relative weight given to each of those
factors, the competing firms, and the relative performance of each firm with respect
to each buying motive.
Focus Groups
Focus groups are used to determine what a customer segment thinks about a product
or service and why it thinks that way. Participants are invited to join the group
usually with some type of personal compensation. A focus group starts with a series
of open-ended questions relative to a specific subject. Representatives of the sponsoring company view the entire process either through a one-way mirror or by closed
circuit TV. As a second phase, the company representatives ask specific follow-up
questions (through the facilitator), based on the open-ended probing.
Importance/Performance Analysis
The customer perception of performance versus importance can be used to identify
benchmark alternatives. A list of attributes can be prepared using either a nominal
group process or a focus group. The customer is asked to rate each attribute in terms
of both importance and company performance. A matrix is then prepared showing
high and low performance versus high and low importance. It has the following
implications:
• Continue with high importance, high performance.
• Reduce emphasis on low importance, low performance.
• Increase emphasis on high importance, low performance.
• Reduce emphasis on low importance, high performance (possible overkill).
In addition to determining the customer’s perception of performance versus
importance, it is also valuable to determine the customer’s versus the company’s
perception of importance. This can also be used to determine areas for intensification
and reduction of effort and benchmarking possibilities, including:
1. Customer-oriented goals
2. Service/quality goals
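The matrix logic above can be sketched as a simple quadrant classifier. The attribute names, the 1-to-10 ratings, and the midpoint of 5 are all invented for illustration:

```python
# Hedged sketch: classify surveyed attributes into the four
# importance/performance quadrants. All names and ratings are hypothetical.
ratings = {
    "on-time delivery":  {"importance": 9, "performance": 4},
    "invoice accuracy":  {"importance": 8, "performance": 8},
    "fancy packaging":   {"importance": 3, "performance": 9},
    "toll-free support": {"importance": 2, "performance": 3},
}

def quadrant(importance, performance, cut=5):
    """Map a (importance, performance) pair to the matrix implication."""
    hi_i, hi_p = importance > cut, performance > cut
    if hi_i and hi_p:
        return "continue"
    if hi_i:
        return "increase emphasis"                      # high importance, low performance
    if hi_p:
        return "reduce emphasis (possible overkill)"    # low importance, high performance
    return "reduce emphasis"                            # low importance, low performance

for name, r in ratings.items():
    print(f"{name}: {quadrant(r['importance'], r['performance'])}")
```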
INTERNALLY IDENTIFIED BENCHMARKING CANDIDATES — INTERNAL ASSESSMENT SURVEYS
An internal assessment of strengths and weaknesses can be used to identify benchmarking candidates. This determination can be made using a Business Assessment
Form followed by group discussion or by using the Nominal Group Process. The
assessment can be made by the owner of a product, process, or service and/or by
the department being served.
An internal assessment can also be approached from the viewpoint of the generic
value added chain. For each block of the chain, two questions can be asked:
1. What are the alternatives for least cost operation?
2. What are the alternatives to provide differentiation?
The value added chain can also provide customer perspective by suggesting the
questions:
1. How does our product or service help customers to minimize their cost?
2. How does our product or service help customers to differentiate their
product?
Nominal Group Process: General Areas in Greatest Need of Improvement
• Improving the precision of the sales forecast
• Reducing the cycle time to bring out new products
• Increasing the success rate in bidding for new business
• Reducing the time required to fill customer orders
• Reducing the errors in invoices
• Major problems or issues
• Areas of competitive disadvantage
Pareto Analysis
Pareto analysis is a form of data analysis that requires that each element or possible
cause of a problem be identified along with its frequency of occurrence. Items are
then displayed in order of decreasing frequency or probability of occurrence. This
can help to identify the most significant problem to attack first. A common expression
of the Pareto Law is the 80/20 rule, which states that 20% of the causes account for
80% of the difficulties. A Pareto analysis of setup delay might include factors such
as: necessary material not available, tooling not ready, lack of gages, setup personnel
not available, another setup has priority, material handling equipment not available,
and error — incorrect setup. Develop a Pareto analysis for the production of scrap. (There
is a tremendous difference between knowing the facts and guessing.)
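A Pareto tabulation of the setup-delay example can be sketched as follows. The delay counts are fabricated for illustration; the categories follow the text:

```python
from collections import Counter

# Hypothetical setup-delay log; the counts are invented for illustration.
delays = (["material not available"] * 42 + ["tooling not ready"] * 25 +
          ["another setup has priority"] * 14 + ["lack of gages"] * 10 +
          ["setup personnel not available"] * 6 + ["incorrect setup"] * 3)

counts = Counter(delays)
total = sum(counts.values())

# Display in decreasing order of frequency, with a cumulative percentage
# column so the "vital few" causes stand out.
cumulative = 0.0
for cause, n in counts.most_common():
    cumulative += 100 * n / total
    print(f"{cause:32s} {n:3d}  {cumulative:5.1f}% cumulative")
```

With these invented counts, the top three causes account for roughly 80% of the delays, which is exactly the 80/20 pattern the rule describes.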
Statistical Process Control
Statistical process control is a technique for identifying random (or common) causes
versus identifiable (or special) causes in a process. Both of these are potential sources
for improvement. The amount of random variation affects the capability of a machine
to produce within a desired range of dimensions. Hence, benchmarking could be
performed to determine machine processing capabilities and how to achieve those
levels. The determination and correction of recurring systematic changes is also a
benchmarking possibility.
The reduction of the random variation or the uncertainty of the process and the
identification and correction of special causes are critical aspects of the total quality
management process. Correction often requires a change in the total manufacturing
process, tooling, the equipment being used, and/or training in setup and operations.
The first step in process improvement is to control the environment and the
components of the system so that variations are within natural, predictable limits.
The second step is to reduce the underlying variation of the process. Both of these
are candidates for benchmarking.
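As one illustration of the first step, the natural, predictable limits for an individuals chart are often estimated from the average moving range. The measurements below are assumed, not from the text:

```python
# Minimal individuals/moving-range sketch; the sample data are assumed.
samples = [10.2, 10.4, 9.9, 10.1, 10.3, 10.0, 10.2, 9.8, 10.1, 10.5]

mean = sum(samples) / len(samples)
# Moving ranges between consecutive points estimate short-term variation.
mrs = [abs(b - a) for a, b in zip(samples, samples[1:])]
mr_bar = sum(mrs) / len(mrs)
sigma_hat = mr_bar / 1.128          # d2 constant for subgroups of size 2

ucl, lcl = mean + 3 * sigma_hat, mean - 3 * sigma_hat
print(f"UCL = {ucl:.3f}, mean = {mean:.3f}, LCL = {lcl:.3f}")

# Points outside the limits signal special (assignable) causes.
special = [x for x in samples if not lcl <= x <= ucl]
print("Special-cause points:", special or "none")
```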
Trend Charting
Historic data can be used to develop statistical forecasts and confidence intervals
that depict acceptable random variation. When data fall within the confidence intervals, you have no cause to suspect unusual behavior. However, data outside of the
confidence intervals could provide an opportunity for benchmarking. It might also
be informative to pursue benchmarking as a device to reduce the range of variation
or the size of the confidence interval.
A simple trend analysis of your own past data can also provide a basis for
improvement. The following data relative to the percent scrap and rework illustrate
the improvement made and could provide the basis for benchmarking:
1987 — 2.1%
1988 — 3.0%
1989 — 1.0%
1990 — 0.7%
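A least-squares trend line through these four data points quantifies the improvement. This is only a sketch; a fuller study would also compute the confidence intervals around the fit:

```python
# Fit a least-squares trend line to the scrap/rework data above.
years = [1987, 1988, 1989, 1990]
scrap = [2.1, 3.0, 1.0, 0.7]   # percent scrap and rework, from the table

n = len(years)
x_bar, y_bar = sum(years) / n, sum(scrap) / n
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(years, scrap))
         / sum((x - x_bar) ** 2 for x in years))
intercept = y_bar - slope * x_bar

print(f"Trend: {slope:+.2f} percentage points per year")
print(f"Naive 1991 projection: {slope * 1991 + intercept:.2f}%")
```

The fitted slope of about -0.62 points per year makes the improvement explicit, which is the kind of fact (rather than guess) that can anchor a benchmarking target.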
Product and Company Life Cycle Position
Products tend to go through a defined life cycle starting with an introductory phase
and proceeding through growth, maturity, and decline. The management style and
business tactics are very different at each stage. Anticipating and managing the
transitions can be important. This could lead to opportunities for benchmarking of
product life cycle management and product portfolio management. Product portfolio
management can lead to the need for new product identification and introduction.
These areas have both been the subjects of benchmarking studies.
In addition to the changes that products go through, companies tend to go through
various stages of development and crises. Again, the management of the transitions
can be an important benchmarking candidate.
Failure Mode and Effect Analysis
Failure Mode and Effect Analysis (FMEA) is a systematic way to study the operating
environment for equipment or products and to determine and characterize the ways
in which a product can fail. Benchmarking can be used to determine component and
system design goals and alternatives (see Chapter 6).
Cost/Time Analysis
To evaluate its new product introduction process, a company may plot cost per unit
produced versus elapsed time for each element of the process, e.g., design and
engineering, production, sub-assembly, and assembly. The area under the curve
represents money tied up (inventory), and smaller is better.
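The area under that curve can be estimated with the trapezoidal rule. The week/cost pairs below are hypothetical:

```python
# Illustrative trapezoidal estimate of money tied up (area under the
# cumulative-cost-vs-time curve). All numbers are assumed.
weeks = [0, 4, 8, 12]               # elapsed time at each process element boundary
cumulative_cost = [0, 20, 60, 100]  # cumulative cost per unit, in dollars

area = sum((t2 - t1) * (c1 + c2) / 2
           for t1, t2, c1, c2 in zip(weeks, weeks[1:],
                                     cumulative_cost, cumulative_cost[1:]))
print(f"Dollar-weeks tied up: {area:.0f}")   # smaller is better
```

Shortening any element's elapsed time, or deferring its cost, shrinks the area and therefore the inventory investment.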
NEED TO IDENTIFY UNDERLYING CAUSES
Problem, Causes, Solutions
When solving a problem, it is critical to attack the underlying cause of the problem
and not the symptoms. The underlying cause can be identified by listing all possible
causes and identifying the most probable cause based on data collection and a Pareto
analysis. This sometimes leads to multiple benchmarking opportunities. Failure to
diagnose a problem (ready, fire, aim) can lead to an inefficient use of resources and
frustration.
The Five Whys
When identifying underlying causes, it can also be useful to ask five sequential
“whys” to get to the heart of a problem. For example:
Problem: The milling machine is down.
Why? The chucking mechanism is broken.
Why? A piece got jammed when being loaded.
Why? There was excess flash from the stamping operation.
Why? The stamping die was not changed.
Why? The die usage control log was not updated daily.
Cause and Effect Diagram
The development of a Cause and Effect Diagram or Fishbone Diagram or Ishikawa
Diagram is another way to identify and display the underlying causes of a problem.
Causes are usually displayed in terms of major categories such as human or personnel, machines or technology, materials, and methods or procedures. Once causes are
identified, an analysis is made to determine actionable solutions.
The determination of cause and effect can require the use of designed experiments to measure effects and interaction. For example, to reduce conveyor belt
spillage, it was necessary to determine the effects of belt wipers, belt surface, dryness
of the belt and material, and particle size in various combinations of each factor.
BUSINESS ASSESSMENT — STRENGTHS AND WEAKNESSES
You will be asked to evaluate the organization relative to sales and marketing,
manufacturing and operations, R & D, and general management. A typical assessment is shown in Table 3.1.
TABLE 3.1
A Typical Assessment Instrument
Please indicate how you evaluate the organization using the following key:
(There are many ways to use a key. This is only one example.)
++ = Extremely strong, definite leaders
+ = Better than average
E = Average
– = Weak, should do better
• = Extremely weak, area of major concern
Sales and marketing
Customer base
Market share
Market research
Customer knowledge
Brand loyalty
Company business image
Response to customers
Breadth of product line
Product differentiation
Product quality
Distributors
Locations
Size
Warehousing
Transportation
Communication
Influencing customers
Sales force
People and skills
Size
Type
Location
Productivity
Morale
Advertising
National/regional/cooperative
Promotion devices
Prices/incentives
Customer communication
Service
Before sale
After sale
Credit
Long term
Short term
Trade allies
Costs
Selling
Distribution
Manufacturing/operations
Materials management
People and skills
Sourcing
Inventory P & C
Production P & C
Capability P & C
Computer system
Physical plant
Capacity
Utilization
Flexibility
Plant
Size location
Number
Age
Equipment
Automation
Maintenance
Flexibility
Processes
Uniqueness
Flexibility
Degree of integration
Engineering
Process
Tool design
Cost improve
Time standards
Quality control
People and skills
Workforce
Skills mix
Utilization
Availability
Turnover
Safety
Unionization
Costs
Productivity
Morale
Direct/indirect
Research and development
Basic research
Concepts and studies
Emphasis
People and skills
Conversions to applications
Patents
Applied research
Finding
Emphasis
People and skills
Conversion to prototype
Patents
Basic engineering
Prototypes
Emphasis
People and skills
Convert to product
Design engineering
Designs
Patents and copyrights
Emphasis
People and skills
Design for production
Funding
Amount
Consistency
Sources
Project selection
General management
Leadership
Vision
Risk/return profile
Clarity of purpose
Implementation skills
Turnover
Experience
Motivation skills
Leadership style
Delegation
Strategic emphasis
Organization
Type
Size
Location
Communication
Defined responsibility
Coordination
Speed of reaction
Fit with strategy
Commitment
Planning and control
Early alert system
Forecasting
Operational budget
Control
MBO program
Capital planning
Long range planning
Contingency planning
Cost analysis
Resource allocation
Accounting and finance
Financial public relations
Financial relations
Auditing
Decision making
Style
Techniques used
Responsiveness
Position in organization
Criteria used
Personnel
Effectiveness
Hourly labor
Clerical labor
Sales people
Scientists and engineers
Supervisors
Middle management
Top management
Comp. and reward
Management development
Management depth
Turnover
Morale
Information systems
Decision support system
Customer data
Product line data
Fixed/variable costs
Exception reporting
Culture
Shared values
Pluralism
Conflict resolution
Openness
Optional Information
Name:
Date:
Title:
Dept:
PRIORITIZATION OF BENCHMARKING ALTERNATIVES — PRIORITIZATION PROCESS
A variety of prioritization approaches are available. Use the one most appropriate
to a specific situation.
PRIORITIZATION MATRIX
The following steps are required to complete a prioritization matrix:
1. List all items to be prioritized.
2. List the goals or the prioritization criteria.
3. Specify the goal weights.
4. Indicate the impact score of each item relative to each goal.
5. Determine the value index for each item by totaling the cross product of each goal weight times the impact score.
6. Sort the items from highest to lowest value index.
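The steps above can be sketched as a small calculation. The candidate items, goals, weights, and 1-to-10 impact scores are all invented for illustration:

```python
# Prioritization matrix sketch; items, goals, weights, and scores are assumed.
goals = {"customer impact": 0.5, "cost savings": 0.3, "ease of study": 0.2}
items = {
    "order entry":   {"customer impact": 9, "cost savings": 3, "ease of study": 7},
    "invoicing":     {"customer impact": 4, "cost savings": 8, "ease of study": 8},
    "warehouse ops": {"customer impact": 5, "cost savings": 9, "ease of study": 5},
}

# Step 5: value index = sum of (goal weight x impact score) for each item.
value_index = {name: sum(goals[g] * score for g, score in scores.items())
               for name, scores in items.items()}

# Step 6: sort from highest to lowest value index.
for name, v in sorted(value_index.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:13s} {v:.1f}")
```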
QUALITY FUNCTION DEPLOYMENT (HOUSE OF QUALITY)
Quality function deployment is an extension of the prioritization matrix described
above. However, the rows and the columns are interchanged. The rows become the
evaluation criteria (or goals) and the columns represent the alternative solutions to
be prioritized. The following procedure is used to complete the Quality Function
Deployment analysis:
1. List the items indicating “what” you want to accomplish. These are the
evaluation criteria.
2. List “how” you will accomplish what you want to do. These are the
alternatives to be evaluated.
3. Indicate the degree of importance for each of the “whats.” This is a number
ranging from 1 to 10 (10 is most important).
4. Indicate the company and the competitive rating using a scale from 1 to
10 (10 is best). Plot the competitive comparison.
5. Specify the planned or desired future rating.
6. Calculate the improvement ratio by dividing the planned rating by the company’s current rating.
7. Select at most four items to indicate as “sales points.” Use a factor of 1.5
for major sales points and a factor of 1.2 for minor sales points.
8. Calculate the importance rate as the degree of importance times the
improvement ratio times the sales points.
9. Calculate the relative weight for each item by dividing its importance rate
by the total of the importance rates for all “whats.”
10. Indicate the relationship value between each “what” and “how.” Use values of 9, 3, and 1 to indicate a strong, moderate, or light interrelationship.
11. Calculate the importance weight for each “how.” This is the total of the
cross products of the relationship value and the relative weight of the
“what.”
12. Indicate the technical difficulty associated with the “how.” Use a scale of
5 to 1 (5 is the most difficult).
13. Indicate the company, competitive values, and benchmark values for the “how.”
14. Specify the plan for each of the “hows.”
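The arithmetic in steps 6 through 11 can be sketched as follows. The "whats," ratings, sales points, and relationship values below are invented examples, not data from the text; only the calculation chain follows the procedure above.

```python
# Minimal QFD calculation sketch with invented data.

whats = {
    # name: (importance 1-10, current rating, planned rating, sales point)
    "easy to open": (8, 4, 8, 1.5),
    "stays fresh":  (9, 6, 8, 1.0),
    "low cost":     (6, 5, 6, 1.2),
}
hows = ["new seal design", "thicker film"]

# Step 10: relationship values (9 strong, 3 moderate, 1 light; omitted = none).
relationship = {
    ("easy to open", "new seal design"): 9,
    ("easy to open", "thicker film"):    1,
    ("stays fresh",  "new seal design"): 3,
    ("stays fresh",  "thicker film"):    9,
    ("low cost",     "thicker film"):    3,
}

rate = {}
for w, (imp, current, planned, sales) in whats.items():
    ratio = planned / current        # step 6: improvement ratio
    rate[w] = imp * ratio * sales    # step 8: importance rate

total = sum(rate.values())
rel_weight = {w: r / total for w, r in rate.items()}   # step 9

how_weight = {  # step 11: importance weight of each "how"
    h: sum(relationship.get((w, h), 0) * rel_weight[w] for w in whats)
    for h in hows
}
for h in hows:
    print(h, round(how_weight[h], 3))
```

The "hows" with the largest importance weights are the technical requirements to carry forward to the next QFD level.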
Quality function deployment is usually applied at four different interrelated
levels:
1. Product planning
What — customer requirements
How — product technical requirements
2. Product design
What — product technical requirements
How — part characteristics
3. Process planning
What — part characteristics
How — process characteristics
4. Production planning
What — process characteristics
How — process control methods
IMPORTANCE/FEASIBILITY MATRIX
Importance is a function of urgency and potential impact on corporate goals. It is
expressed in terms of high, medium, and low. Feasibility takes into consideration
technical requirements, resources, and the cultural and political climate. It is also
expressed in terms of high, medium, and low.
Paired Comparisons
This approach is based on a pair-by-pair comparison of each set of alternatives to
determine the most important. Count the total number of times each alternative was
selected to determine the overall prioritization.
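The tally can be sketched directly. The alternatives and the preference table below (standing in for the team's pair-by-pair judgments) are hypothetical.

```python
# Paired-comparison tally with an invented preference table.

from itertools import combinations

alternatives = ["A", "B", "C", "D"]

# Winner of each pairwise comparison, as a team might record them.
prefer = {("A", "B"): "A", ("A", "C"): "C", ("A", "D"): "A",
          ("B", "C"): "C", ("B", "D"): "B", ("C", "D"): "C"}

# Count the total number of times each alternative was selected.
wins = {a: 0 for a in alternatives}
for pair in combinations(alternatives, 2):
    wins[prefer[pair]] += 1

ranking = sorted(alternatives, key=wins.get, reverse=True)
print(ranking)
```

Here C is selected in all three of its comparisons, so it ranks first overall.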
Improvement Potential
To determine how to prioritize cost improvement benchmarking alternatives, perform
the following analysis:
1. Make a Pareto analysis of cost components.
2. Assess the percent improvement possible for each of the most significant
cost components.
3. Multiply the cost times the percent improvements possible to determine
the improvement potential.
4. Prioritize the benchmark studies based on improvement potential.
This approach can be used to prioritize other areas as well.
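Under the assumption of a simple cost model, steps 1 through 4 might be sketched like this; the cost figures and improvement percentages are invented.

```python
# Improvement-potential prioritization with invented figures.

costs = {  # step 1: Pareto-significant cost components (annual cost)
    "raw material": 4_000_000,
    "direct labor": 1_500_000,
    "freight":        600_000,
}
pct_improvable = {"raw material": 0.05, "direct labor": 0.20, "freight": 0.10}

# Step 3: improvement potential = cost times percent improvement possible.
potential = {c: costs[c] * pct_improvable[c] for c in costs}

# Step 4: prioritize benchmark studies by improvement potential.
priority = sorted(potential, key=potential.get, reverse=True)
for c in priority:
    print(c, round(potential[c]))
```

Note that the largest cost component does not necessarily rank first; a smaller component with more room to improve can dominate.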
Prioritization Factors
When prioritizing benchmarking candidates, it is important to take into consideration
many factors. Some of these factors are listed below. It is important to narrow projects
down to the significant few and to choose a good starting project to showcase the
value of the approach.
The first project should be a winner. It should address a chronic problem, there
should be a high likelihood of completion in several weeks, and the results should
be (a) correlated to customer needs and wants, (b) significant to the company, and
(c) measurable. Factors to be used subsequently are:
• Importance of business need long term
• Basis for a sustainable competitive advantage
• Percent improvement possible
• Customer impact
• Realism of expectations
• Urgency
• Ease of implementation/degree of difficulty
• Time to implement
• Consistency with mission, values, and culture
• Organizational buy in
• Passionate champion identified
• Resource requirements and availability
  • Capital expenditures
  • Working capital
  • Time by skill category
• Synergy
• Risk versus return
• Measurability of result
• Modularity of approach
• Anticipated problems
• Potential resistance
ARE THERE ANY OTHER PROBLEMS? WHAT IS THE RELATIVE IMPORTANCE OF EACH OF THESE?
The Japanese approach to improvement is called “Kaizen.” This philosophy espouses an incremental, small-step-at-a-time approach that is implemented by creating an awareness of need and empowerment throughout an organization. This contrasts with the Western approach, which tends to be higher tech, capital intensive, and focused on major innovative changes. (Several studies have demonstrated that the U.S. is much better at discovery and invention than Japan, but that we lag in commercial development and implementation of the ideas.) Could the low tech, people-oriented focus work in your competitive situation? What does this suggest in terms of benchmarking prioritization?
IDENTIFICATION OF BENCHMARKING SOURCES
TYPES OF BENCHMARK SOURCES
The benchmarking process often starts with a library search to identify alternative
views, issues, approaches, and possible benchmarking sources. Benchmarking
sources can be internal best performers, competitive best performers, or best in class
worldwide.
Internal Best Performers
Xerox used internal benchmarking when it studied Fuji-Xerox’s manufacturing
methods (but not until Florida Power and Light began to emulate them). Different
divisions, plants, distribution outlets, and departments tend to do things differently.
Much can often be learned by looking at these company operations.
Competitive Best Performers
The advantage of making comparisons with direct competitors is obvious. However,
it can be difficult to get competitors to share their source of competitive advantage.
When working with direct competitors, it can also be difficult to get out of the
industry mind-set and come up with creative ideas. It could be that the competitors
in an industry are not particularly good at what they do and hence provide little
stimulus for improvement.
Xerox regularly benchmarks all direct competitors, all their suppliers, and all
major competitors to those suppliers. Updates are important. Knowing how fast
competitors are moving is just as important as knowing where they are.
Best of Class
There is, in general, no way to know the “best” of the best. Companies generally
pick the “best” based on reputation through publications, speeches, news releases,
etc. A company might start out with four to ten “best” candidates and narrow them
down based on initial discussions.
Xerox looked at IBM and Kodak but also L.L. Bean, the catalog sales company, known for effective and efficient warehousing and distribution of products. Additional benchmarking partners used by Xerox were:

Customer satisfaction, customer retention — USAA (Insurance Co.)
Financial stability and growth — A.G. Edwards & Sons
SPC and quality — Florida P&L
Customer care and training — Walt Disney
Milliken & Company, winner of the 1989 National Quality Award, provided the
following partial list of benchmarks:
Strategy
Safety — DuPont
Customer satisfaction — ATT, IBM
Innovation — 3M, KBC
Education — IBM, Motorola
Strategic planning — Frito-Lay, IBM, ATT
Time based competition — Lenscrafters
Benchmarking — Xerox
Self-managed teams — Goodyear, P&G
Continuous improvement — Japanese
Heroic goals concept — Motorola
Role model evaluation — Xerox
Environmental practice — DuPont, Mobay, Ciba-Geigy
Statistical methods — Motorola
Flow charting — Sara Lee
Quality process — FP&L, Westinghouse, Motorola
Miscellaneous
Security — DuPont
Accounts payable — Mobay
Order handling — L.L. Bean
SELECTION CRITERIA
How do you know who is the best? Here are some ways to get that information:
• Library search
• Reputation
• Consultants
• Networking
Characteristics to be examined when seeking partners include:
• Company size
• Customer non-price reasons to buy
• Industry critical success factors
• Availability of data
• Data collection costs
• Innovation
• Receptivity
One hundred percent accuracy of information is not required. You only need
enough to head you in the right direction.
SOURCES OF COMPETITIVE INFORMATION
Read everything and ask, “Has anyone faced this or a similar problem? What have
they done?”
Do not forget to ask people in your own organization, including:
• Past employees of benchmark company
• Family members
• Market researchers
• Sales and marketing
It is also helpful to make use of trade associations and consultants and to network.
Review studies in which people have identified the characteristics of best performers. Good sources here are Clifford and Cavanagh (1988), Smith and Brown
(1986), and Berle (1991).
Another good source is the Encyclopedia of Business Information Sources,
published frequently by Gale Research, Detroit, Michigan. This source contains
references by subject to the following:
• Abstracting and indexing services
• Almanacs and yearbooks
• Bibliography
• Biographical sources
• Directories
• Encyclopedias
• Financial ratios
• Handbooks and manuals
• Online databases
• Periodicals and newsletters
• Research centers and institutes
• Trade associations/professional associations
• Other
Additional sources may also be found in the John Wiley publication entitled
Where to Find Business Information, as well as the following:
Books and periodicals
• Trade journals
• Functional journals
• F.W. Dodge reports
• Technical abstracts
• Local newspapers, national newspapers
• Nielsen — Market Share
• Yellow Pages
• Textbooks
• Special interest books
• City, region, state business reviews
• Standard and Poor’s industry surveys
Directories
• Trade show directory
• Directory of Associations
• Brands and Their Companies
• Who Runs the Corporate 1000
• Corporate Technology Directory
• American Firms in Foreign Countries
• Corporate Affiliations
• Foreign Manufacturers in U.S.
• Directory of Company Histories
• International Trade Names
• Leading Private Companies
• Marketing Economics Key Plants
• Directory of Advertisers
• Books of business lists
• Thomas Register
• Wards Directory
• Lists of 9 Million Businesses — ABI
Computer databases — CD-ROM or online
Text databases
• Business dateline — articles
• BusinessWire — press releases
• Intelligence Tracking Service — consumer trends
• Dow Jones Business and Financial Report
• Newsearch
• Trade and Industry Index
Statistical business information
• BusinessLine
• Cendata
• Consumer Spending Forecast
• Disclosure Database
• CompuServe
• Retail Scan Data
• Moody’s 5000 Plus
Demographic data
• Census Projection 1989–1993
• Donnelley demographics
Directories
• Dun’s Million Dollar Directory
• Thomas Register
Company direct
• Advertising
• Benchmarking partner
• Company newsletters
• Minority interest partners
• Speeches
• Direct contact
Financial sources
• Annual reports, 10k, proxies, 13D
• Investment reports
• Prospectus
• Filings with regulatory agencies
• Dun and Bradstreet, Robert Morris
• Moody’s Manuals
• S&P Reports
Individuals
• Company employees
• Past employees/retirees
• Social events
• Construction contractors
• Landlords, leasing agents
• Salesmen
• Service personnel
• Focus groups
Professional societies
• Professional society members
• Trade shows/conventions
• National associations
• User groups
• Seminars
• Rating services
• Newsletters
Government
• Public bid openings
• Proposals
• National Technical Information Center
• Freedom of Information Act
• Occupational Safety and Health Administration (OSHA) filings
• Environmental Protection Agency (EPA) filings
• Commerce Business Daily
• Government Printing Office Index
• Federal depository libraries
• Court records
• Bank filings
• Chamber of Commerce
• Government Industrial Program reviews
• Uniform Commerce Code filings
• State corporate filings
• County courthouse
• U.S. Department of Commerce
• Federal Reserve banks
• Legislative summaries
• The Federal Database Finder
• Patents
Customers
• New customers
• Consumer groups
Industry members
• Suppliers
• Equipment manufacturers
• Distributors
• Buying groups
• Testing firms
Snooping
• Reverse engineering
• Hire past employees
• Interview current employees
• Dummy purchases
• Shopping
• Request a proposal
• Hire to do one job
• Apply for a job
• Mole
• Site inspections
• Trash
• Chatting in bars
• Surveillance equipment
Schools and universities
• Directories of case studies
• Industry studies
Consultants
• Business schools on a consulting basis
• Jointly sponsored studies
• Information brokers
• Industry studies
• Market research studies
• Seminars
GAINING THE COOPERATION OF THE BENCHMARK PARTNER
Without confidentiality, benchmarking will not work. Some items for consideration
in gaining this confidentiality and cooperation are:
• Use consultants or trade associations or universities to ensure confidentiality.
• Make sure that there is mutual sharing — could be different areas.
• Be prepared to give and receive.
• Focus on mutual learning and self improvement.
  • Benefit of probing questions and debate
  • Opening up of a vision
  • Confirmation of good practice
• Consider benchmarking a circuit of companies.
  • Important that all know in advance
• Consider all security and legal implications of sharing data.
MAKING THE CONTACT
When making the contact for benchmarking, follow these steps:
• Call and express interest in meeting.
• Send/receive a detailed list of questions.
• Make sure that you have prepared your questions carefully. (The quality
of the questions can be the signal for a worthwhile use of time.)
• Follow up by telephone.
• Visit — keep an open mind and document everything.
BENCHMARKING — PERFORMANCE AND PROCESS ANALYSIS
PREPARATION OF THE BENCHMARKING PROPOSAL
Factors to be considered in the preparation of the benchmarking proposal include:
• Mission
• Objective/scope
• Statement of importance
• Information available
• Critical questions
• Ethical and legal issues
• Partner selection
• Roles and responsibilities
• Visit schedule
• Data analysis requirements
• Form of recommendation
ACTIVITY BEFORE THE VISIT
The approach that follows is very comprehensive. It might not be economical to
follow all the steps in every study. Let practical common sense be the guide to action.
Understanding Your Own Operations
You need to understand your own operations very thoroughly before comparing
them with the operations of others. Here are some steps you should take to make
sure that you understand your current methods:
Ask open-ended questions. For example, for “who”:
• Who does it per the job description?
• Who is actually doing it?
• Who else could do it?
• Who should be doing it?
Ask similar questions for what, where, when, why, and how.
Activity Analysis
Activity analysis consists of the following steps:
1. Define the Activity
Activities can be defined through:
• Function — e.g., marketing and sales
• Process — e.g., sell products
Activity: These are the major action steps of a process. For example, make a
proposal.
Task: Prepare proposal draft
Operation: Type proposal
2. Determine the Triggering Event
Identify what happens to trigger the activity. Why does the activity get performed
at a specific time? What is the status of material or information before the activity
occurs? What documentation signifies that the activity is to start?
Example: Receive material
Material receipt document
3. Document the Activity
Document how to perform the activity. Indicate what has to be done and the order
in which it is done. This will define all business procedures, policies, and controls.
Questions to ask include:
• What are the key process variables?
• What controls these variables?
• What levels lead to optimum performance?
• What are the causes of variation?
• What are the limitations?
Activities should be classified in terms of repetitive versus non-repetitive, primary versus secondary, and required versus discretionary. It is important in this
analysis to determine limitations, sources of error, rejects, and delays.
Example: Raw material
Inspection process manual
4. Determine the Resource Requirements
Identify the resources to perform the activity. Include factors such as direct material,
direct labor (hours and grade), equipment requirements, information requirements,
and space requirements. The resources might come from more than one department.
It is crucial to trace all of the resources required to perform the activity.
The resources can be determined by a careful analysis of the chart of accounts.
When making the cost analysis, carefully choose among using actual, budgeted,
standard, or planned cost information.
Example: Inspector, material, handling equipment, inspection equipment, inspection
area, inspection manual
5. Determine the Activity Drivers
What are the factors external to the activity that cause more or less of the resources
to be used? What drives the need for the activity and the level of resources required?
Consider both efficiency and effectiveness, as follows:
1. Efficiency: Doing things right.
2. Effectiveness: Doing the right things.
Example: bad weather, poor product quality, automated equipment, workplace layout
6. Determine the Output of the Activity
What units can be used to measure the output of the activity? This will be a measure
of production level such as pieces produced, lots produced, invoices processed,
checks written, or standard hours earned.
Example: Lots of raw material inspected, pieces inspected, or material acceptance
forms completed
7. Determine the Activity Performance Measure
Identify the output measure that most closely controls the level of resources
required. For example, when looking at clerical activities, the number of invoices
is more significant than the dollar volume of the invoices. When moving material,
the tons moved is more significant than the number of invoices represented. In
general, the activity measure will be a resource input per unit of an output measure.
Examples:

• Cost/lot
• Pieces/hour
• Cost/unit
• Square foot per person
• Patents per engineer
• Drawings per engineer
• Lines of code per programmer
• Contract labor/company labor
• Sales dollars/sales manager
• Machine changeover — % of total
Model the Activity
Modeling an activity involves the following:
• Define the process
• Define the cost or resource requirements
• Define the output variables
• Determine the metric or resource per unit of output (This may require the use of regression analysis or the design of experiments.)
Critical considerations are:
• What is the relationship between fixed and variable costs?
• What determines the capacity limitations of the process?
• How much does overhead change with a change in the volume of business?
It is important to distinguish between the metric (resource per unit of output)
and the cost drivers.
The metric or activity measure for inserting pins might be cost per pin inserted.
However, the cost drivers might be the product design and the technology used. A
different design might require fewer insertions, and a different technology might
avoid the need for any insertions.
Examples of Modeling
The modeling of raw material cost per unit produced might consider the following
variables:
• The number of parts to be produced
• The standard raw material per part
• The percent scrap produced
• Raw material unit price
• Raw material quantity discounts
• Exchange rates
Note that a simple comparison of raw material cost as a percentage of sales
dollars provides little real basis for comparing costs and cost improvement.
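One way such a model might be sketched is shown below. The multiplicative structure and all the figures are assumptions for illustration, not a formula given in the text; a real model would be fitted to the company's own cost data.

```python
# Hypothetical raw-material-cost-per-unit model using the variables above.

def raw_material_cost_per_unit(std_material, scrap_pct, unit_price,
                               qty_discount_pct=0.0, exchange_rate=1.0):
    """Raw material cost per good unit produced."""
    material_used = std_material / (1 - scrap_pct)  # scrap inflates usage
    effective_price = unit_price * (1 - qty_discount_pct) * exchange_rate
    return material_used * effective_price

# Example: 2 lb standard material per part, 4% scrap, $3.00/lb, 5% discount.
print(round(raw_material_cost_per_unit(2.0, 0.04, 3.00, 0.05), 4))
```

Separating the drivers this way shows where a gap comes from: two companies with the same percentage-of-sales figure can differ sharply in scrap rate or purchase price.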
The number of units sold of an item could be modeled as the number of potential buyers, times the percentage who become aware of the product, times the percentage of potential buyers who can get the product, times the percentage who will try it, times the percentage of triers who will be repeat buyers, times the number of units purchased by a repeat buyer.
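This chained-percentage model can be sketched directly. The reading below makes the trial step explicit, and all the figures are invented for illustration.

```python
# Units-sold model as a chain of conversion percentages (invented data).

def units_sold(potential_buyers, pct_aware, pct_can_get,
               pct_try, pct_repeat, units_per_repeat_buyer):
    """Units sold as a product of successive conversion factors."""
    return (potential_buyers * pct_aware * pct_can_get
            * pct_try * pct_repeat * units_per_repeat_buyer)

# 1M potential buyers, 40% aware, 60% can get it, 50% of those try it,
# 25% of triers become repeat buyers, 3 units per repeat buyer.
print(round(units_sold(1_000_000, 0.40, 0.60, 0.50, 0.25, 3)))
```

Each factor in the chain is a separate benchmarking candidate: awareness, distribution, trial, and repeat rates can each be compared against the best in class.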
When working with salaries and wages, it is necessary to take into consideration
factors such as headcount, rate by grade, straight time/overtime ratios, benefits, skill
level, age, education, union vs. non-union, and incentives. Salary and wage ratios
that can be benchmarked are:
• Skilled/unskilled labor
• Direct/indirect labor
• Training cost per employee
• Overtime hours/straight time hours
Flow Chart the Process
To determine the sales dollars from a new account, start by flow charting the steps
required to sell a new account. Start with cold calls and work through to a close.
Use of symbols in flow charting:
• Start or stop
• Flow lines
• Activity
• Document
• Decision
• Connector
Then ask some key questions:
• What are the major activities?
• What are the ratios required to forecast sales?
• What factors affect the selling cost per rep or the revenue per rep? Does
looking at these ratios tell you very much? What would you benchmark?
Here is an example of activity performance measures for warehouse operations:
Picking operations
  Orders filled per person per day
  Line items per person per day
  Pieces per person per day
  Number of picks per order
  Standard hours earned per day
  Line items per order
Receiving operations
  Number of trucks unloaded per shift
  Number of pallets received per day
  Number of cases received per day
  Number of errors per day
  Direct labor hours unloading trucks
Incoming QC operations
  Number of inspections per period
  Number of rejects per period
  Direct inspection labor hours
Putaway/storage operations
  Number of full pallet putaways per period
  Number of loose case putaways per period
  Direct labor hours putaway or storage
  Cube utilization
Truck loading
  Number of units loaded per truck per period
  Number of trucks per period
  Time per trailer
Customer service operations
  Fill rate
  Elapsed time between order and shipment
  Error rate
  Customer calls taken per day
  Number of problems solved per call
  Number requiring multiple calls
  Number of credits issued
  Number of backorders
At this stage we are ready to identify information required when meeting with
the benchmark partner. The following information is typical and may be used to
focus the meeting with the benchmark partner and to highlight information requirements:
1. Description of company activity and results:
2. Alternative ways of performing the activity:
Alternative 1:
Alternative 2:
Alternative 3:
3. The pros and cons of the alternatives are:
Pros
Cons
Alternative 1:
Alternative 2:
Alternative 3:
4. Information required to reach a conclusion as to the best approach:
Review the assumptions for the study to make sure that the outcomes are
correlated to what you were studying. (At this stage, it is not unusual to find surprises.
That is, you may find items that you overlooked or you thought were unimportant
and so on.)
ACTIVITIES DURING THE VISIT
By far the most important characteristic of the visit is to:
Observe, question, analyze and learn
Make sure to notice:
• What are they doing?
• How is it different from what we are doing?
• Why are they doing it that way?
• How can the results be measured?
Ask open-ended questions, just as you did when observing your own operations.
For example, for “who”:
• Who does it per the job description?
• Who is actually doing it?
• Who else could do it?
• Who should be doing it?
Ask similar questions for what, where, when, why, and how.
Understand the Benchmark Partner’s Activities
Follow the procedures described above for analyzing the company activities. You
may encounter some analytical difficulties because of the following factors:

• Accounting differences — account definitions vary in terms of what is included in the account. For example, does the cost of raw material include the cost of freight in and insurance? Where is scrap accounted for?
• Cost allocations
• Identification of all multi-department costs
• Different economies of scale/learning curve
• Specialization
• Automation
• Time/unit
Identification of All of the Factors Required for Success
Factors to consider when trying to determine if you have identified all the factors
required for success include the following:
Analysis and intuition
  • MRP and inventory cycle count
  • Salary and wage comparisons — are the jobs really comparable?
  • The use of manufacturing work cells (This may require a change in socialization.)
  • Level of advertising per dollar of sales (Just knowing this may not be very helpful. The relevant question is, “How effectively are the advertising dollars spent?”)
Regression analysis
  • Warehouse study
Design of experiments
ACTIVITIES AFTER THE VISIT
Key activities after the visit include the following:
• Be sure to send a thank you note promptly.
• Document findings for each visit.
• Summarize all findings — analysis and synthesis.
• Compare current operations with findings.
• Gather more specific data if required.
• Identify opportunities for improvement — combine, eliminate, change order, etc.
• Develop team recommendation.
• Distribute benchmark report.
Benchmarking Examples
1. Functional Analysis
[Chart: hours per 1,000 pieces (scale .25 to .75) for the company versus the best company across the functions primary machining, heat treating, grinding, assembly, and packing, with the gap between the two indicated for each function.]
2. Cost Analysis

Cost Item       % Total Cost   Cum % Total   Company Cost per Unit   Best
Raw material         40             40              17.50           15.50
Direct labor         20             60               7.50            5.50
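The gap in the cost-analysis tabulation above is simply company cost per unit minus best-in-class cost per unit; a sketch using the figures from the table:

```python
# Cost gap per unit for each cost item, from the table above.

company = {"raw material": 17.50, "direct labor": 7.50}
best    = {"raw material": 15.50, "direct labor": 5.50}

gap = {item: company[item] - best[item] for item in company}
for item, g in gap.items():
    print(item, g)
```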
3. Technology Forecasting
The benchmarking of competitive technologies can be very critical. This is particularly true when the product or technical life cycle is very short. Keys are:
• Knowing the current technology and its limitations
• Identifying the emerging technologies that become the new benchmarks
• Knowing what customers really buy and relating this to the emerging technology
• Having the courage and foresight to change
4. Financial Benchmarking
Financial benchmarking compares a company (or the major segments of a company) relative to the financial performance of other companies. The modified Du
Pont chart provides a convenient way to do this. The idea of the modified Du Pont
chart is to start with the return on equity and progressively calculate the return on
assets, profit margin, gross margin, sales, cost of goods sold (COGS), sales per
day, cost of goods sold per day, days inventory (COGS), days receivable (COGS),
and days payable (COGS). Company results can be compared with data provided
by:
• Dun and Bradstreet Industry Norms and Key Business Ratios
• Robert Morris Associates Annual Statement Summary
• Prentice Hall Almanac of Business and Industrial Financial Ratios
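The calculation chain can be sketched as follows. All the financial figures are invented, and the ratio definitions follow common conventions, which may differ in detail from the chart referenced in the text.

```python
# Modified Du Pont calculation sketch with invented figures.

sales, cogs, net_income = 10_000_000, 6_500_000, 800_000
total_assets, equity = 8_000_000, 4_000_000
inventory, receivables, payables = 1_300_000, 1_100_000, 700_000

roe = net_income / equity            # return on equity
roa = net_income / total_assets      # return on assets
profit_margin = net_income / sales
gross_margin = (sales - cogs) / sales
sales_per_day = sales / 365
cogs_per_day = cogs / 365
days_inventory = inventory / cogs_per_day     # COGS basis
days_receivable = receivables / sales_per_day
days_payable = payables / cogs_per_day        # COGS basis

print(f"ROE {roe:.1%}, ROA {roa:.1%}, margin {profit_margin:.1%}, "
      f"days inventory {days_inventory:.0f}")
```

Working down the chain shows which lever (margin, asset use, or working-capital days) accounts for a gap in return on equity against the comparison companies.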
5. Sales Promotion and Advertising
The comparison of company strategy versus industry strategy can lead to the need
for more specific benchmarking studies.
6. Warehouse Operations
The performance of units engaged in essentially the same type of activity can be
compared using statistical regression analysis. This technique can be used to determine the significant independent variables and their impact on costs. Exceptionally
good and bad performance can be identified and this provides the basis for further
benchmarking studies.
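A minimal ordinary-least-squares sketch of this idea is shown below, with invented warehouse data relating labor hours to lines picked. A real study would include more independent variables and more units.

```python
# Simple OLS fit of labor hours vs. lines picked (invented data).

data = [(1000, 52), (1500, 74), (2000, 98), (2500, 121), (3000, 147)]

n = len(data)
sx = sum(x for x, _ in data)
sy = sum(y for _, y in data)
sxx = sum(x * x for x, _ in data)
sxy = sum(x * y for x, y in data)

slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

# Large residuals flag exceptionally good or bad performers, which can
# then become the subjects of further benchmarking study.
residuals = [y - (intercept + slope * x) for x, y in data]
print(round(slope, 4), round(intercept, 2))
```

The slope is the marginal labor hours per line picked; units well below the fitted line are candidates for the "exceptionally good" performers the text mentions.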
7. PIMS Analysis
The PIMS (Profit Impact of Market Strategy) analysis is a further application of multiple regression analysis. It can be
used to identify the major determinants of company profitability.
8. Purchasing Performance Benchmarks
The Center for Advanced Purchasing Studies (Tempe, Arizona) has benchmarked
the purchasing activity for the petroleum, banking, pharmaceutical, food service,
telecommunication services, computer, semiconductor, chemical, and transportation
industries. For a wide variety of activity measures, the reports provide the average
value, the maximum, the minimum, and the median value.
Motorola Example
Perhaps one of the most famous examples of benchmarking in recent history is the
Motorola example. Motorola, through “Operation Bandit,” was able to cut the product development time for a new pager in half to 18 months based on traveling the
world and looking for “islands of excellence.” These companies were in various
industries: cars, watches, cameras, and other technically intensive products. The
solution required a variety of actions:
• Automated factories
• Removing barriers in the workplace
• Training of 100,000 employees
• Technical sharing alliance with Toshiba
Motorola was particularly impressed by the P200 program of a Hitachi plant.
This stands for a 200% increase in productivity by year end. The plant set immutable
deadlines for the solutions to problems. All departments had six sigma goals.
GAP ANALYSIS
DEFINITION OF GAP ANALYSIS
There are at least two ways to view “gap.”
1. Result Gap — A result gap is the difference between the company performance and the performance of the best in class as determined by the
benchmarking process. This gap is defined in terms of the activity performance measure. The gap can be positive or negative.
2. Practice or Process Gap — A practice or process gap is the difference
between what the company does in carrying out an activity and what the
best in class does as determined by the benchmarking process. This gap
is measured in terms of factors such as organizational structure, methods
used, technology used, or material used. The gap can be positive or
negative.
The determination of a gap can be a strong motivator toward the improvement
of performance. It can create the tension necessary for change to occur.
CURRENT VERSUS FUTURE GAP
It is critical to distinguish between the current gap and the likely future gap. Remember that the benchmark partner’s performance will not remain at the current levels
nor will the expectations of the marketplace. More likely than not, performance will
improve as time goes on. A company that concentrates only on closing the current
gap will find itself in a constant game of catch-up.
The company that ignores likely improvements of the benchmark gets caught
in the Z trap. The Z trap, of course, is the step-wise improvement that is OK for
catching up but never good enough to be the best in class.
To summarize the benchmark findings, it is often helpful to make a tabulation
showing the current practice and metric and the expected future practice and metric
for the company, the competition, and the best in class. In order for the benchmarking
process to be effective, it is critical that management accept the validity of the gap
and provide the resources necessary to close the gap.
GOAL SETTING
GOAL DEFINITION
Two terms that are often used interchangeably are “objective” and “goal.” There is,
of course, no one correct definition. As long as the terms are used consistently within
an organization, it does not really matter. For our purposes, however, objectives are
broad areas where something is to be accomplished, such as sales and marketing or
customer service. Goals, on the other hand, are specific and measurable and have a
time frame. For example, “Answer all inquiries within 2 hours by the 3rd quarter
of 2002.”
GOAL CHARACTERISTICS
For best results, goals should be (a) tough (you need to stretch to attain them) and
(b) attainable (realistic).
When evaluating these two characteristics, always take into consideration the capabilities of the company versus the benchmark candidate, both current and projected. A good way to monitor progress towards attainment is through trend charting.
RESULT VERSUS EFFORT GOALS
Result goals define the specific performance measure to be achieved. For example,
“Sell $4 million of product x to company y in 2003.”
Effort goals define specific accomplishments that are completely under the
control of the goal setter. They are necessary to achieve the result goals. They can
be thought of as action plans. For example, an effort goal would be, “Make x cold
calls a week to new departments of x company”.
GOAL SETTING PHILOSOPHY
Best of the Best versus Optimization
There can be a clear difference between implementing an inventory control system
that ensures that a company never runs out of stock and an inventory control system
that optimizes the level of inventory. The optimum inventory balances the cost of carrying the inventory against the cost of running out of stock.
A similar consideration is that of determining the optimum feature set for a
product, taking into consideration what specific market segments value and will pay
for. Differentiation that is not valued by the market could result in an unnecessary
expenditure of funds. The determination of value has to be based on the underlying
need of the customer. Had value been judged only by what customers explicitly asked for, there would have been no reason to produce a ballpoint pen, only a better fountain pen. Who asked for electricity, the
camera, or the copy machine before they existed? No one by name, but many in
terms of desire and underlying need.
There is a fundamental difference between working within the constraints facing
a business and removing the constraints. For example, a company can either (a)
optimize production given the setup time for a job or (b) reduce the setup time.
Optimization within the constraints leads to larger lots, higher inventory, perhaps
poorer quality, and delays. It is much more effective to remove the constraint. The
key to manufacturing excellence is to remove the constraints that cause the tradeoffs between cost and customer satisfaction.
Kaizen versus Breakthrough Strategies
The Kaizen philosophy of management stresses making small, constant improvements as opposed to looking for the one magic silver bullet that will lead to success.
Which company is likely to be more innovative: (a) a company that is looking
for the one big idea or (b) a company that is constantly making small improvements?
Both are appropriate strategies depending on the specific situation. However, if a
company is in dire need of improvement, there is no better way than to look at
benchmarking. The benchmarking in this case will be a true breakthrough. On the
other hand, the Kaizen approach tells us that we should not relax in our effort to be
the best. There is always something that we can do better.
GUIDING PRINCIPLE IMPLICATIONS
The decisions made regarding goals can have a profound interaction with the mission
statement of the company and/or the values as defined in the statement of guiding
principles. The statement of guiding principles generally consists of:
1. Mission statement — a description of the product and markets served or
who, what, and how
2. Values — those human and ethical principles that guide the conduct of
the business
GOAL STRUCTURE
Cascading Goal Structure
A consistent goal structure can provide focus and direction to the entire organization.
In order to create this, start with the most important goal, as viewed by the president
or chief executive officer, and decompose each of these by functional area working
from one management level to the next. For example, starting with a return on equity
goal, what does this mean each department has to do? What does this suggest in the
way of specific benchmarking goals?
Interdepartmental Goals
One of the most elusive tasks of management is to get all departments to work
together toward a common set of goals. One way to manage this is to have each
department indicate its goals and what it requires in the way of performance from
other departments to reach those goals. A cross tabulation can then be used to develop
the total goals for a department or function.
ACTION PLAN IDENTIFICATION AND IMPLEMENTATION
The benchmarking process has been used to identify the present and projected result
and performance gap. The actual solution to closing the gap may be the synthesis
of several of the benchmark partner’s ideas. In order to creatively identify new
solutions, the following questions can be helpful:
Put to other uses?
New ways to use as is? Other uses if modified?
Adapt?
What else is this like? What other ideas does this suggest? Does the past
offer a parallel? What could I copy? Whom could I emulate?
Modify?
New twist? Change meaning, color, motion, sound, odor, form or shape?
Other changes?
Magnify?
What to add? More time? Greater frequency? Stronger? Higher? Longer?
Thicker? Extra value? Plus ingredients? Duplicate? Multiply? Exaggerate?
Minimize?
What to subtract? Smaller? Condensed? Miniature? Lower? Shorter?
Lighter? Omit? Streamline? Split up? Understate?
Substitute?
Who else instead? What else instead? Other ingredients? Other material?
Other process? Other power? Other place? Other approach? Other tone
of voice?
Rearrange?
Interchange components? Other pattern? Other layout? Other sequence?
Transpose cause and effect? Change place? Change schedule?
Reverse?
Transpose positive and negative? How about opposites? Turn it backwards? Turn it upside down? Reverse roles? Change shoes? Turn tables? Turn other cheek?
Combine?
How about a blend, an alloy, an assortment, an ensemble? Combine units?
Combine purpose? Combine appeals? Combine ideas?
A CREATIVE PLANNING PROCESS
It is highly desirable that more than one alternative way to achieve a goal be
identified. It is also critical that each viable alternative be fully evaluated on its own
merits and that a conscious choice be made to select the best alternative. For each
alternative, consider the following process:
1. Develop a vision or a dream of what you would like to have happen.
2. Identify the critical success factors for achieving the vision.
3. Determine the required action programs.
4. Match resource requirements and availability.
5. Determine if the vision is feasible and either implement the required action programs or consciously decide to drop or modify the vision.
6. Implement the plan by assigning action plan responsibility.
7. Monitor performance versus expectations and revise the plan as required.
ACTION PLAN PRIORITIZATION
If more action plans are identified than can be implemented, it will be necessary to
prioritize the action plans relative to the corporate goals and customers’ needs, wants,
and expectations. The process identified earlier in the discussion of prioritization of
benchmark alternatives may be used for this purpose.
One aspect of action plan prioritization is the determination of the most desirable
plan from a financial point of view. Evaluations of this type often involve the
comparison of cash flows that occur in different years. Consequently, the time value
of money has to be taken into consideration when deciding which plan is most
desirable.
ACTION PLAN DOCUMENTATION
The action plan must be documented and the person(s) responsible for individual
tasks must be identified:
• Use of Critical Path Scheduling Tools
• Action plan format
• Technique for sequencing activities using Post-it Notes
• Importance of identifying milestones, deliverables, and decision-making roles
MONITORING AND CONTROL
A good way to maintain monitoring and control is through formalized periodic
reporting of performance versus plan. Issues to keep in mind are:
• Need to assign responsibility for ongoing review and evaluation
• Use of a control chart for each variable, with the responsible person identified
Just because the official benchmarking study has been completed does not mean that you are done. On the contrary, you must remain vigilant in monitoring your competitors’ activities by tracking competitive performance versus plan. This is because things change, and modifications must be made to recalibrate the results.
Some items of interest are:
• Benchmarks may need to be recalibrated.
• Changes may occur in industry, customers, or competitors.
• How fast are things moving and in what direction?
• Critical success factors may change.
• New competitors may enter the field.
• Competition may be better or worse than expected.
FINANCIAL ANALYSIS OF BENCHMARKING ALTERNATIVES
When comparing benchmarking alternatives, it is often necessary to take into consideration the fact that cash is received and/or disbursed in different time periods
for each of the alternatives. Cash received in the future is not as valuable as cash
received today because cash received today can be reinvested and earn a return. In
order to compare the current value of cash received or disbursed in different periods,
it is necessary to convert a future dollar value to its present value.
For example, the present value of $1100 received a year from now is $1000 if
money can be invested at 10%. The alternative way to view this is to note that the
future value of $1000 invested for one year at 10% is equal to 1000 times 1.10 or
$1100.
The following table can be used to determine the present value of a future cash
flow depending upon the discount rate and the number of years from the present
that the investment is made. To relate to the discussion above, note that the Present
Value Factor for one year at 10% is .9091. Therefore, the present value of $1100
received a year from now is $1000, i.e., $1100 times .9091.
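The factor arithmetic is easy to reproduce in a few lines of code. The sketch below is ours, not the book’s (the function name is illustrative); it computes the factor 1/(1 + r)^n and applies it to the $1100 example:

```python
# Present value factor: the value today of $1 received n years from now
# when money can be invested at the given rate.

def present_value_factor(rate, years):
    return 1.0 / (1.0 + rate) ** years

# The $1100 example from the text: 10% rate, one year out.
factor = present_value_factor(0.10, 1)
print(round(factor, 4))         # 0.9091
print(round(1100 * factor))     # 1000
```

The same function regenerates any entry of the table that follows; five years at 20%, for instance, gives 0.4019.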
A typical capital project evaluation of a benchmark alternative is discussed in the
following pages. The projected net income after tax as well as a summary of the
investments made in the project, the after-tax salvage value, and the cash flow
associated with the project are indicated.
The assumptions used to generate the net income are indicated below the projection. Note the separation of fixed and variable cost and the relationship between
specific assumptions and the level of capacity utilization. In this case, the investment
is assumed to occur at the end of the first year. Also, there is no increase in working
capital associated with the construction of the plant.
The cash flow can be determined in one of two ways: (a) it is equal to the net
income after tax plus depreciation or (b) it is equal to revenue minus operating
Present Value Factors

                  Discount Rate
Year      10%       20%       30%       40%
  1     0.9091    0.8333    0.7692    0.7143
  2     0.8264    0.6944    0.5917    0.5102
  3     0.7513    0.5787    0.4552    0.3644
  4     0.6830    0.4823    0.3501    0.2603
  5     0.6209    0.4019    0.2693    0.1859
  6     0.5645    0.3349    0.2072    0.1328
  7     0.5132    0.2791    0.1594    0.0949
  8     0.4665    0.2326    0.1226    0.0678
  9     0.4241    0.1938    0.0943    0.0484
 10     0.3822    0.1615    0.0725    0.0346
expenses minus taxes. The net present value is indicated for several discount rates
(10 to 40%). The net present value at 10% is determined, for example, as in Table 3.2.
If the company cost of capital is 15%, then this project would be acceptable
because the net present value is positive at that discount rate. A similar analysis can
be used to determine a breakeven product price — see Table 3.3.
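As a sketch of how such an evaluation can be checked (our illustration; the cash flows are those of Table 3.2), the net present value is simply the sum of each year’s cash flow discounted back to the present:

```python
# Net present value of cash flows occurring at the ends of years 1, 2, 3, ...

def net_present_value(cash_flows, rate):
    return sum(cf / (1.0 + rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

# Table 3.2 cash flows: the investment at the end of year 1, then three inflows.
flows = [-1_000_000, 246_680, 597_764, 1_008_814]
print(round(net_present_value(flows, 0.10)))   # 432919, matching Table 3.2
```

Evaluating the same function at 15% gives a positive value, which is why the project is acceptable at a 15% cost of capital.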
MANAGING BENCHMARKING FOR PERFORMANCE
To summarize this chapter, here are some do’s and don’ts for successful benchmarking:
Requirements for success
• Use goal-oriented management — measure and monitor everything;
link to compensation plan.
• Start small and showcase.
• Recognize that conflict is inevitable because of the need to share
resources to reach conflicting goals. Management has to make tough
decisions to resolve the healthy conflict.
• Link goals to action plans.
• Understand that adequate resources are necessary to ensure the success
of the plan.
• Ensure continuing top management support with the recognition that
benchmarking does not necessarily supply a quick fix.
• Place emphasis both on the result (what to do) and the process (how
to do it).
• Accept the concept of constant, incremental change.
• Apply a blend of analytical and intuitive skills, including the ability to synthesize sometimes ambiguous data.
• Be willing to admit that change or improvement is possible and perhaps
desirable.
• Focus on the needs of specific target market segments and business
strategy when setting the priorities to benchmark.
TABLE 3.2
An Example of Cash Flow and Present Value

End of Year     Cash Flow     Present Value Factor     Present Value
     1         –1,000,000           0.9091               –909,091
     2            246,680           0.8264                203,868
     3            597,764           0.7513                449,109
     4          1,008,814           0.6830                689,034
Total Net Present Value                                   432,919
TABLE 3.3
Benchmark Project Evaluations

                            2001        2002        2003        2004
Sales (units)                         21,000      49,000      66,500
Unit price                             38.00       40.66       45.54
Revenue                              798,000   1,992,340   3,028,357
Operating expense                    420,200   1,029,400   1,397,000
Depreciation                          50,000      50,000      50,000
Net income before tax                327,800     912,940   1,581,357
Tax                                  131,120     365,176     632,543
Net income after tax                 196,680     547,764     948,814
Investment               1,000,000
Salvage value                                                 10,000
Cash flow               –1,000,000   246,680     597,764   1,008,814
Interest rate (%)              10         20          30          40
Net present value         432,919    170,404       2,030    –107,982

Assumptions
Plant capacity (units)     70,000
Unit price — start          38.00
Tax rate (%)                   40
Depreciation (%)                5
Capacity utilization (%)                  30          70          95
Price increase (%)                         7           7          12

Operating Expense
  Units      Fixed    Variable
 10,000       200       20.00
 20,000       200       20.00
 30,000       200       20.00
 40,000       400       21.00
 50,000       400       21.00
 60,000       500       21.00
 70,000       500       21.00
• Create a corporate culture that thrives on learning and self-improvement with constant, though gradual, change. Constantly apply the Plan,
Do, Check, Act cycle.
• Use Statistical Process Control to determine when events, results, or
processes are out of control.
• Change the role of middle management. The middle manager is no
longer “the boss.” Middle managers must encourage and enable workers to think.
Common mistakes
• Giving lip service to the process and not providing the resources to
get the job done properly
• Failure to effectively communicate the benchmark findings and drive
them to implementation: all analysis and no action
• Failure to precisely define the expected results of benchmark improvement and to monitor actual performance (In the absence of this, no
organizational learning occurs.)
• Lack of a comprehensive prioritization of the benchmarking projects
to ensure the best cost/benefit results
• The expectation of quick results and a short-term focus on quarterly
earnings
• Lack of constant purpose, focus, and direction
• Failure to implement results in small size, meaningful modules with
specific deliverables; looking for “the” big win
• Unwillingness to face the reality of a situation and recognize that
change is necessary and that hard choices have to be made
• Not drawing the correct balance between required accuracy and the
practical ability to achieve better results; 100 percent accuracy, certainty, or performance is not required
• Failure to recognize that the early follower is almost as profitable as
the pioneer and sometimes even more so
• Reliance on executive office analysis versus observation of the hands-on experience of others both within and outside the company
• Focus on problem reduction rather than problem avoidance
• Failure to realize that, in most cases, benchmarking follows strategy
• Failure to recognize the constantly rising level of expectations in the
marketplace
• Lack of contingency planning
• Failure to get participation at all levels and to break down interdepartmental barriers so that the total resources of the organization can be
focused on the solution to common problems
REFERENCES
Berle, G., Business Information Sourcebook, Wiley, New York, 1991.
Buzzell, R.D. and Gale B.T., The PIMS Principles, Free Press, New York, 1987.
Clifford, D.K. and Cavanagh, R.E., The Winning Performance: How America’s High Growth
Midsize Companies Succeed, Bantam Books, New York, 1988.
Garvin, D.A., Managing Quality, Free Press, New York, 1988.
Hall, W.K., Survival Strategies in a Hostile Environment, Harvard Business Review, Sept./Oct.
1980, pp. 34–38.
Higgins, H. and Vincze, A., Strategic Management, Dryden Press, New York, 1989.
Smith, G.N. and Brown, P.B., Sweat Equity, Simon and Schuster, New York, 1986.
Stamatis, D.H., Total Quality Service, St. Lucie Press, Boca Raton, FL, 1996.
Stamatis, D.H., TQM Engineering Handbook, Marcel Dekker, New York, 1997.
SELECTED BIBLIOGRAPHY
Balm, G.J., Benchmarking: A Practitioner’s Guide for Becoming and Staying Best of the Best,
Quality & Productivity Management Association, Schaumburg, IL, 1992.
Barnes, B., Squeeze Play: Satisfaction Programs Are Key for Manufacturers Caught Between
Declining and Increasing Raw Material Costs, Quirk’s Marketing Research Review,
Oct. 2001, pp. 44–47.
Bosomworth, C., The Executive Benchmarking Guidebook, Management Roundtable, Boston,
MA, 1993.
Boxwell, R.J., Jr., Benchmarking for Competitive Advantage, McGraw-Hill, New York, 1994.
Camp, R., Business Process Benchmarking: Finding and Implementing Best Practices, ASQC
Quality Press, Milwaukee, WI, 1995.
Chang, R.Y. and Kelly, P.K., Improving through Benchmarking, Richard Chang Associates,
Publications Division, Irvine, CA, 1994.
Karlof, B. and Ostblom, S., Benchmarking: A Signpost to Excellence in Quality and Productivity, John Wiley & Sons, New York, 1993.
Lewis, S., Cleaning Up: Ongoing Satisfaction Measurement Adds to Japanese Janitorial Firm’s
Bottom Line, Quirk’s Marketing Research Review, Oct. 2001, pp. 20–21, 68–70.
4
Simulation
As companies continue to look for more efficient ways to run their businesses,
improve work flow, and increase profits, they increasingly turn to simulation, which
is used by best-in-class operations to improve their processes, achieve their goals,
and gain a competitive edge. Simulation is used by some of the world’s most
successful companies, including Ford, Toyota, Honda, DaimlerChrysler, Volkswagen, Boeing, Delphi Automotive Systems, Dell Corp., Gorton Fish Co., and many
others. Both design and process simulations have become increasingly important
and integral tools as businesses look for ways to strip non-value-adding steps from
their processes and maximize human and equipment effectiveness, all parts of the
six sigma philosophy. The beauty of simulation is that, while it complements and
aids in the six sigma initiative, it can also stand alone to improve business processes.
In this chapter, we do not dwell on the mathematical justification of simulation;
rather, we attempt to explain the process and identify some of the key characteristics
in any simulation. Part of the reason we do not elaborate on the mathematical
formulas is the fact that in the real world, simulations are conducted via computers.
Also, readers who are interested in the theoretical concepts of simulation can refer
to the selected bibliography found both at the end of the chapter and at the end of
this volume.
WHAT IS SIMULATION?
Simulation is a technology that allows the analysis of complex systems through
statistically valid means. Through a software interface, the user creates a computerized version of a design or a process, otherwise known as a “model.” The model is constructed as a basic flow chart with substantial additional capabilities; it is the interface a company uses to build a representation of its business process.
Simulation technology has been around for a generation or more, with early
developments mostly in the area of programming languages. In the last 10 to 15
years, a number of off-the-shelf software packages have become available. More
recently, these tools have been simplified to the point that your average business
manager with no industrial engineering skills can effectively employ this technology
without requiring expert assistance. (Some companies have actually modified the
commercial versions to adapt them to their own environments.)
Simplicity is the key to today’s simulation software. The basic simulation structure is as follows: after flow charting the process, the user inputs information about
how the process operates by simply filling in blanks. While completing a model,
the user answers three questions at each step of the process: how long does the step
take, how often does it happen, and who is involved? After the model is built and
verified, it can be manipulated to do two critical things: analyze current operations
to identify problem areas and test various ideas for improvement.
The latest improvements in simulation software have made it an excellent tool
for enhancing the design for six sigma (DFSS) process, which strives to eliminate
eight wastes: overproduction, motion, inventory, waiting, transportation, defects,
underutilized people, and extra processing. DFSS targets non-value-added
activities — the same activities that contribute to poor product quality.
In this chapter we are not going to discuss commercial packages; rather we are
going to introduce three methodologies that facilitate simulation — Monte Carlo,
Finite Element analysis, and Excel’s Solver approach.
SIMULATED SAMPLING
The sampling method, known generally as Monte Carlo, is a simulation procedure
of considerable value.
Let us assume that a product is being assembled by a two-station assembly line.
There is one operator at each of the two stations. Operation A is the first of the two
operations. The operator completes approximately the first half of the assembly and
then sets the half-completed assembly on a section of conveyor where it rolls down
to operation B. It takes a constant time of 0.10 minute for the part to roll down the
conveyor section and be available to operator B. Operator B then completes the
assembly. The average time for operation A is 0.52 minute per assembly and the
average time for operation B is 0.48 minute per assembly. We wish to determine
the average inventory of assemblies that we may expect (average length of the
waiting line of assemblies) and the average output of the assembly line. This may
be done by simulated sampling as follows:
1. The distributions of assembly time for operations A and B must be known
or procured.
Usually this is done through historical data, sometimes with surrogate data. A
study was taken for both operations, and two frequency distributions
were constructed (not shown here). In the case of operation A, the value
0.25 minute occurred three times, 0.30 occurred twice, and so on. For
operation A, the mean was 0.52 min with N = 167 and for operation B
the mean was 0.48 with N = 115. The two distributions do not necessarily fit standard mathematical distributions, but this is not important.
2. Convert the frequency distributions to cumulative probability distributions.
This is done by summing the frequencies that are less than or equal to each
performance time and plotting them. The cumulative frequencies are
then converted to percents by assigning the number 100 to the maximum value. The cumulative frequency distribution (not shown here)
for operation A began at the lowest time, 0.25 minute; there were three
observations. Three is plotted on the cumulative chart for the time 0.25
minute. For the performance time 0.30 minute, there were two observations, but there were five observations that measured 0.30 minute or
less, so the value five is plotted for 0.30 minute. For the performance
time 0.35 minute, there were 10 observations recorded, but there were
15 observations that measured 0.35 minute or less. When the cumulative frequency distribution was completed, a cumulative percent scale
was constructed on the right, by assigning the number 100 to the maximum value, 167 in this case, and dividing the resulting scale into equal
parts. This results in a cumulative probability distribution. We can use
this distribution to say for example that 100 percent of the time values
were 0.85 minute or less, 55.1 percent were 0.50 minute or less, and so on.
3. Sample at random from the cumulative distributions to determine specific
performance times to use in simulating the operation of the assembly line.
We do this by selecting numbers between 0 and 100 at random (representing probabilities or percents). The random numbers could be selected
by any random process, such as drawing numbered chips from a box,
using a random number table, or using computer-generated random
numbers. For small studies, the easiest way is to use a table of random
numbers.
The random numbers are used to enter the cumulative distributions in order to obtain time values. In our example, we start with the random
number 10. A horizontal line is projected until it intersects the distribution curve; a vertical line projected to the horizontal axis gives the midpoint time value associated with the intersected point on the
distribution curve, which happens to be 0.40 minute for the random
number 10. Now we can see the purpose behind the conversion of the
original distribution to a cumulative distribution. Only one time value
can now be associated with a given random number. In the original distribution, two values would result because of the bell shape of the
curve.
Sampling from the cumulative distribution in this way gives time values in
random order, which will occur in proportion to the original distribution, just as if assemblies were actually being produced. Table 4.1 gives
a sample of 20 time values determined in this way from the two distributions.
4. Simulate the actual operation of the assembly line.
This is done in Table 4.2, which is very similar to waiting (queuing) line
problems. The time values for operation A (Table 4.1) are first used to
determine when the half-completed assemblies would be available to
operation B. The first assembly is completed by operator A in 0.40
minute. It takes 0.10 minute to roll down to operator B, so this point in
time is selected as zero. The next assembly is available 0.40 minute later, and so on. For the first assembly, operation B begins at time zero.
From the simulated sample, the first assembly requires 0.60 minute for
B. At this point, there is no idle time for B and no inventory. At time
0.40 the second assembly becomes available, but B is still working on
the first so the assembly must wait 0.20 minute. Operator B begins
TABLE 4.1
Simulated Samples of 20 Performance Time Values for Operations A and B

          Operation A                       Operation B
Random      Performance Time      Random      Performance Time
Number      from Cumulative       Number      from Cumulative
            Distribution                      Distribution
  10            0.40                79            0.60
  22            0.40                69            0.50
  24            0.45                33            0.40
  42            0.50                52            0.45
  37            0.45                13            0.35
  77            0.60                16            0.35
  99            0.85                19            0.35
  96            0.75                 4            0.30
  89            0.65                14            0.35
  85            0.65                 6            0.30
  28            0.45                30            0.40
  63            0.55                25            0.35
   9            0.40                38            0.40
  10            0.40                 0            0.25
   7            0.35                92            0.70
  51            0.50                82            0.60
   2            0.30                20            0.35
   1            0.25                40            0.40
  52            0.50                44            0.45
   7            0.35                25            0.35
Totals          9.75                              8.20
work on it at 0.60. From Table 4.1, the second assembly requires 0.50
minute for B. We continue the simulated operation of the line in this
way.
The sixth assembly becomes available to B at time 2.40, but B was ready
for it at time 2.30. He therefore was forced to remain idle for 0.10
minute because of lack of work. The completed sample of 20 assemblies is progressively worked out — see Table 4.2.
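The cumulative-distribution sampling of steps 2 and 3 can be sketched in a few lines of code. This is our illustration, not the book’s: the full 167-observation frequency table for operation A is not reproduced in the text, so the counts below use the few values it does mention (0.25 minute × 3, 0.30 × 2, 0.35 × 10) plus an invented remainder, purely to show how a random number between 0 and 100 is mapped through the cumulative distribution to a time value:

```python
import bisect

# Hypothetical (truncated) frequency table of performance times, in minutes.
freq = {0.25: 3, 0.30: 2, 0.35: 10, 0.40: 5}
times = sorted(freq)
total = sum(freq.values())

# Step 2: cumulative counts, rescaled so the maximum equals 100 percent.
cum_pct, running = [], 0
for t in times:
    running += freq[t]
    cum_pct.append(100.0 * running / total)

# Step 3: a random number between 0 and 100 selects the first time value
# whose cumulative percent is at least that large (inverse-transform sampling).
def sample_time(random_number):
    return times[bisect.bisect_left(cum_pct, random_number)]

print(sample_time(10), sample_time(90))   # 0.25 and 0.4 with this toy table
```

With the book’s full distribution a random number of 10 maps to 0.40 minute; the toy table here only illustrates the mechanics. In practice the random numbers come from a table or a generator such as `random.randint(0, 100)`.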
The summary at the bottom of Table 4.2 shows the result in terms of the idle
time in operation B, the waiting time of the parts, the average inventory between
the two operations, and the resulting production rates. From the average times given
by the original distributions, we would have guessed that A would limit the output
of the line since it was the slower of the two operations. Actually, however, the line
production rate is less than that dictated by A (116.5 pieces per hour compared to
123 pieces per hour for A as an individual operation). The reason is that the interplay
TABLE 4.2
Simulated Operation of the Two-Station Assembly Line
when Operation A Precedes Operation B

Assemblies    Operation B   Operation B   Idle Time in   Waiting Time    Parts in Line (Excluding
Available       Begins at     Ends at     Operation B    of Assemblies   Assembly in Operation B)
   0.00           0.00          0.60          0              0                     0
   0.40           0.60          1.10          0              0.20                  1
   0.85           1.10          1.50          0              0.25                  1
   1.35           1.50          1.95          0              0.15                  1
   1.80           1.95          2.30          0              0.15                  1
   2.40           2.40          2.75          0.10           0                     0
   3.25           3.25          3.60          0.50           0                     0
   4.00           4.00          4.30          0.40           0                     0
   4.65           4.65          5.00          0.35           0                     0
   5.30           5.30          5.60          0.30           0                     0
   5.75           5.75          6.15          0.15           0                     0
   6.30           6.30          6.65          0.15           0                     0
   6.70           6.70          7.10          0.05           0                     0
   7.10           7.10          7.35          0              0                     0
   7.45           7.45          8.15          0.10           0                     0
   7.95           8.15          8.75          0              0.20                  1
   8.25           8.75          9.10          0              0.50                  1
   8.50           9.10          9.50          0              0.60                  2
   9.00           9.50          9.95          0              0.50                  2
   9.35           9.95         10.30          0              0.60                  2

Idle time in operation B                        = 2.10 minutes
Waiting time of parts                           = 3.15 minutes
Average inventory of assemblies between A and B = 3.15/9.35 = 0.34 assemblies
Average production rate of A                    = [20 × 60]/9.75 = 123 pieces/hour
Average production rate of B (while working)    = [20 × 60]/8.20 = 146 pieces/hour
Average production rate of A and B together     = [20 × 60]/10.30 = 116.5 pieces/hour

Note: In the above computations, 20 is the total number of completed assemblies; 9.75 is the total work time of operation A for 20 assemblies from Table 4.1; 8.20 is the total work time, exclusive of idle time, for operation B for 20 assemblies from Table 4.1.
of performance times for A and B does not always match up very well, and sometimes
B has to wait for work. B’s enforced idle time plus B’s total work time actually
determine the maximum production rate of the line.
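The whole of step 4 can be reproduced programmatically. The sketch below is ours (function and variable names are illustrative); it feeds the 20 sampled times of Table 4.1 through the same bookkeeping as Table 4.2, with time zero taken as the moment the first half-finished assembly reaches operator B:

```python
# Sampled performance times from Table 4.1, in minutes.
a_times = [0.40, 0.40, 0.45, 0.50, 0.45, 0.60, 0.85, 0.75, 0.65, 0.65,
           0.45, 0.55, 0.40, 0.40, 0.35, 0.50, 0.30, 0.25, 0.50, 0.35]
b_times = [0.60, 0.50, 0.40, 0.45, 0.35, 0.35, 0.35, 0.30, 0.35, 0.30,
           0.40, 0.35, 0.40, 0.25, 0.70, 0.60, 0.35, 0.40, 0.45, 0.35]

def simulate_line(first_times, second_times):
    # Arrival times at the second station: the clock starts when the first
    # assembly arrives, so each assembly arrives one first-station cycle
    # after the previous one (the constant 0.10-minute conveyor time cancels).
    arrivals = [0.0]
    for t in first_times[1:]:
        arrivals.append(arrivals[-1] + t)

    end = idle = waiting = 0.0
    for arrive, service in zip(arrivals, second_times):
        if arrive > end:
            idle += arrive - end       # second operator starves
        else:
            waiting += end - arrive    # part waits in line
        end = max(arrive, end) + service
    return idle, waiting, end

idle, waiting, finish = simulate_line(a_times, b_times)
print(round(idle, 2), round(waiting, 2), round(finish, 2))
# matches Table 4.2: idle 2.10 min, waiting 3.15 min, last assembly at 10.30 min
```

Calling `simulate_line(b_times, a_times)` reproduces the reversed sequence of Table 4.3 (0.10 minute of idle time at the second station and a finish time of 9.85 minutes), and the average line rate follows as 20 × 60 divided by the finish time.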
A little thought should convince us that, if possible, it would have been better
to redistribute the assembly work so that A is the faster of the two operations. Then
the probability that B will run out of work is reduced. This is demonstrated by
Table 4.3, which assumes a simple reversal of the sequence of A and B. The same
TABLE 4.3
Simulated Operation of the Two-Station Assembly Line
when Operation B Precedes Operation A

Assemblies    Operation A   Operation A   Idle Time in   Waiting Time    Parts in Line (Excluding
Available       Begins at     Ends at     Operation A    of Assemblies   Assembly in Operation A)
   0.00           0.00          0.40          0              0                     0
   0.50           0.50          0.90          0.10           0                     0
   0.90           0.90          1.35          0              0                     0
   1.35           1.35          1.85          0              0                     0
   1.70           1.85          2.30          0              0.15                  1
   2.05           2.30          2.90          0              0.25                  1
   2.40           2.90          3.75          0              0.40                  1
   2.70           3.75          4.50          0              1.05                  2
   3.05           4.50          5.15          0              1.45                  2
   3.35           5.15          5.80          0              1.80                  3
   3.75           5.80          6.25          0              2.05                  3
   4.10           6.25          6.80          0              2.15                  4
   4.50           6.80          7.20          0              2.30                  4
   4.75           7.20          7.60          0              2.45                  5
   5.45           7.60          7.95          0              2.15                  5
   6.05           7.95          8.45          0              1.90                  5
   6.40           8.45          8.75          0              2.05                  5
   6.80           8.75          9.00          0              1.95                  6
   7.25           9.00          9.50          0              1.75                  5
   7.60           9.50          9.85          0              1.90                  6

Idle time in operation A                        = 0.10 minute
Waiting time of parts                           = 25.75 minutes
Average inventory of assemblies between A and B = 25.75/7.60 = 3.4 assemblies
Average production rate of A (while working)    = [20 × 60]/9.75 = 123 pieces/hour
Average production rate of B                    = [20 × 60]/8.20 = 146 pieces/hour
Average production rate of A and B together     = [20 × 60]/9.85 = 122 pieces/hour
sample times have been used and the simulated operation of the line has been
developed as before. With the faster of the two operations being first in the sequence,
the output rate of the line increases and approaches the rate of the limiting operation,
and the average inventory between the two operations increases. With the higher
average inventory there, the second operation in the sequence is almost never idle
owing to lack of work. Actually, this conclusion is a fairly general one with regard
to the balance of assembly lines; that is, the best labor balance will be achieved
when each succeeding operation in the sequence is slightly slower than the one
before it. This minimizes the idle time created when the operators run out of work
because of the variable performance times of the various operations. In practical
situations, it is common to find safety banks of assemblies between operations in
order to absorb these fluctuations in performance.
We may have wanted to build a more sophisticated model of the assembly line.
Our simple model assumed that the performance times were independent of other
events in the process. Perhaps in the actual situation, the second operation in the
sequence would tend to speed up when the inventory began to build up. This effect
could have been included if we had knowledge of how inventory affected performance time.
If we have followed this simulation example through carefully, we may be
convinced that it would work but that it would be very tedious for problems of
practical size. Even for our limited example, we would probably wish to have a
larger run on which to base conclusions, and there would probably be other alternatives to test. For example, there may be several alternative ways to distribute the
total assembly task between the two stations, or more than two stations could be
considered. Which of the several alternatives would yield the smallest incremental
cost of labor, inventory costs, etc.? To cope with the problem of tedium and excessive
person-hours to develop a solution, the computer may be used. If a computer were
programmed to simulate the operation of the assembly line, we would place the two
cumulative distributions in the memory unit of the computer. Through the program,
the computer would select a performance time value at random from the cumulative
distribution for A in much the same fashion as we did by hand. Then it would select
at random a time value from the cumulative distribution for B, make the necessary
computations, and hold the data in memory. The cycle would repeat, selecting new
time values at random, adding and subtracting to obtain the record that we produced
by hand. A large run could be made easily and with no more effort than a small run.
Various alternatives could be evaluated quickly and easily in the same manner.
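A computer version of this hand simulation takes only a few lines. The sketch below is a minimal illustration, not the text's program: the performance-time lists are assumed stand-ins for the cumulative distributions of Table 4.1, and random sampling from them replaces the hand selection of time values.

```python
import random

# Assumed performance times (minutes); a real study would sample the
# cumulative distributions for operations A and B from Table 4.1.
TIMES_A = [0.30, 0.40, 0.45, 0.50, 0.60]
TIMES_B = [0.35, 0.40, 0.50, 0.65, 0.85]

def simulate_line(times_first, times_second, n_assemblies, seed=1):
    """Simulate a two-station line where station 2 may wait for work."""
    rng = random.Random(seed)
    finish_first = 0.0   # clock time when station 1 releases its assembly
    free_second = 0.0    # clock time when station 2 becomes free
    idle_second = 0.0    # enforced idle (waiting) time at station 2
    for _ in range(n_assemblies):
        finish_first += rng.choice(times_first)
        idle_second += max(0.0, finish_first - free_second)
        free_second = max(finish_first, free_second) + rng.choice(times_second)
    return free_second, idle_second   # total elapsed time, station-2 idle time

# Compare the two orderings, as Tables 4.2 and 4.3 do by hand.
for label, first, second in (("faster first", TIMES_A, TIMES_B),
                             ("slower first", TIMES_B, TIMES_A)):
    total, idle = simulate_line(first, second, 1000)
    print(f"{label}: {1000 * 60 / total:.0f} pieces/hour, "
          f"second station idle {idle:.1f} min")
```

With the faster set of times placed first, the simulated idle time at the second station shrinks and the line rate approaches that of the limiting operation, mirroring the conclusion drawn above.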
FINITE ELEMENT ANALYSIS (FEA)
This technique is not thought of as being a reliability improvement method, yet it
can contribute significantly to its enhancement. Finite Element Analysis (FEA) is a
technique of modeling a complex structure into a collection of structural elements
that are interconnected at a given number of nodes. The model is subjected to known
loads, whereby the displacement of the structure can be determined through a set
of mathematical equations that account for the element interactions. The reader is
encouraged to read Buchanan (1994) and Cook (1995) for a more complete and
easy understanding of the theoretical aspects of FEA.
In commercial use, FEA is a computer-based procedure for analyzing a complex
structure by dividing it into a number of smaller, interconnected pieces (the “finite
elements”), each with easily definable load and deflection characteristics.
TYPES OF FINITE ELEMENTS
The library of finite elements available in general purpose codes can be subdivided
into the following categories:
1. Point elements: An example of a point element is a lumped mass element
or an element specifically created to represent a particular constraint or
loading present at that point.
2. Line elements: Truss links, rods, beams, pipes, cables, rigid links, springs,
and gaps are examples of line elements. This type of element is usually
characterized by two grid points or nodes at each end.
3. Surface elements: Membranes, plates, shells and certain types of fluid and
thermal elements fall into this category. The surface elements can be
triangular or quadrilateral, and thin or thick; accordingly they are characterized by a connectivity of three or more grid points or nodes.
4. Solid elements: Examples of solid elements include wedges, prisms, cubes,
parallelepipeds and three-dimensional fluid and thermal elements. Elements
in this category are usually defined using six or more grid points or nodes.
5. Special purpose elements: Combinations of springs, gaps, dampers, electrical conductors, acoustic, fluid, magnetic, mass, superelement, crack
tips, radiation links, etc., are included in this category.
For example, commonly used elements in the automotive industry (body engineering) are:
• Beams
• Rigid links
• Thin plates — triangular and quadrilateral
• Solid elements
• Springs
• Gaps (contact or interface elements)

TYPES OF ANALYSES
There are many combinations of analyses one may perform with FEA as the driving
tool. However, the two predominant types are nonlinear and dynamic. Using these types,
one may focus on a specific analysis of, for example, types of nonlinearities such as:
Geometric
• Stress less than yield strength
• Euler (elastic) buckling
• Examples: quarter panel under jacking and towing; hood following
front crash
Material
• Stress greater than yield strength or material is nonlinear elastic
• Plastic flow
• Examples: seat belt pull; door intrusion beam bending
Combination of geometric and material
• Stress is greater than yield strength and buckling takes place
• Crippling
• Examples: rails during crash; roof crush
The reader should also recognize that combinations of these types exist as well,
for example linear/static — the easiest and most economical. Most of the FEA
applications involve this kind of analysis. Examples include joint stiffness and door
sag. Nonlinear/static is less frequently used. Examples include door intrusion beam,
roof crush, and seat belt pull. Linear/dynamic is rarely used. Examples include
windshield wipers or latch mechanism. Nonlinear/dynamic is the most complex and
most expensive. Examples include knee bolster crash, front crash, and rear crash.
Let us look at these combinations a little more closely:
Linear static analysis: This is the simplest form of analytical application and
is used most frequently for a wide range of structures. The desired results
are usually the stress contours, deformed geometry, strain energy distribution, unknown reaction forces, and design optimization. Typical examples
are door sag simulation, margin/fit problems, joint stiffness evaluation, high
stress location search for all components, spot weld forces, and thermal
stresses.
Euler buckling analysis: This analysis is also relatively simple to perform and
is used to calculate critical buckling loads. Caution should be exercised
when performing this analysis because it produces analytical results that
are not conservative. In other words, the critical buckling load thus calculated is usually higher than the actual load that would be determined through
testing. A typical application is hood buckling.
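The elastic case above has a closed form; the sketch below applies the standard Euler column formula (the section and length values are invented for illustration, not taken from the text). Per the caution above, the computed load should be treated as a non-conservative upper bound relative to test.

```python
import math

def euler_critical_load(E, I, L, K=1.0):
    """Euler elastic buckling load: P_cr = pi^2 * E * I / (K * L)^2.

    E: Young's modulus (Pa), I: area moment of inertia (m^4),
    L: column length (m), K: effective-length factor (1.0 = pinned-pinned).
    """
    return math.pi ** 2 * E * I / (K * L) ** 2

# Illustrative steel member: E = 200 GPa, I = 8.0e-6 m^4, L = 3.0 m.
p_cr = euler_critical_load(200e9, 8.0e-6, 3.0)
print(f"Elastic critical load ~ {p_cr / 1e3:.0f} kN (non-conservative)")
```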
Normal modes analysis: This is an extremely useful technique for determining
the natural frequencies (eigenvalues) of components and also the corresponding eigenvectors which represent the modes of deformation. Strictly
speaking, this category does not fall under dynamic analysis since the
problem is not time dependent. Typical examples include instrument panels,
total vehicle or component NVH evaluation, door inner panel flutter, and
steering column shake.
Nonlinear static analysis: In general, all nonlinear analysis requires advanced
methodology and is not recommended for use by inexperienced analysts.
Usually, a graduate degree or several graduate level courses in the theory
of elasticity, plasticity, vibrations, and solid and fluid mechanics are
required to understand nonlinear behavior. Nonlinear FEA tends to be as
much an art as it is a science, and familiarity with the subject structure is
essential. Typical examples are seat back distortion, door beam bending
rigidity studies, underbody components such as front and rear rails and
wheel housings, bumper design, and crush analysis of several components.
Nonlinear dynamic analysis: This FEA category is the most advanced. It
involves very complex ideas and techniques and has become practicable
only due to the availability of super-high-speed computers. This class of
analysis involves all the complexities of nonlinear static analysis as well as
additional problems involved with iterative time step selection and contact
simulation at impact. Typical applications are related to crash evaluation
and energy management.
PROCEDURES INVOLVED IN FEA
The procedures involved in FEA include:
1. Problem definition: Specification of concerns and expected results
2. Planning of analysis: Making decisions regarding the applicability of
FEA, which code to use, and the size and the type of model to be
constructed
3. Digitizing: The translation of a drawing into line data that is available to
the modeler
4. Modeling: Creating the desired finite element model as planned (Many
sophisticated tools are available such as the PDGS-FAST system, PATRAN, and so on.)
5. Input of data: Creating, editing and storing a formatted data file that
includes a description of the model geometry, material properties, constraints, applied loading, and desired output
6. Execution: Processing the input data in either the batch or the interactive
mode through the finite element code residing on the computer system
and receiving the output in the form of a printout and/or post-processor
data
7. Interpretation of output: A study of the output to check the validity of the
input parameters as well as the solution of the structural problem
8. Feasibility considerations: Utilizing the output to make intelligent technical decisions about the acceptability of the structural design and the
scope for design enhancement
9. Parametric studies: Redesign using parametric variation (The easiest
changes to study are those involving different gages, materials, constraints,
and loading. Geometric changes require repetition of steps 3 through 8;
the same is true about remodeling of the existing geometry.)
10. Design optimization: An iterative process involving the repetition of steps
3 through 9 to optimize the design from considerations of weight, cost,
manufacturing feasibility, and durability
STEPS IN ANALYSIS PROCEDURE
The steps in the analysis procedure are:
1. Establish objective.
2. What type of analysis? What program?
   Statics
      Mechanical loads
         • Forces
         • Displacements
         • Pressure
         • Temperatures
      Heat transfer
         • Conduction
         • Convection
         • 1-D radiation
   Dynamics
      Mode frequency
      Mechanical load
         • Transient (direct or reduced) linear
         • Sinusoidal
      Shock spectra
      Heat transfer direct transient
   Special features
      Nonlinear
         • Buckling
         • Large displacement
         • Elasticity
         • Creep
         • Friction, gaps
      Substructuring
3. What is the minimum portion of the system or structure required?
   Known forces or displacements at a point
   Allows for separation
   Structural symmetry
   Isolation through test data
   Cyclic symmetry
4. What are the loading and boundary conditions?
   Loading known
   Loading can be calculated from simplistic analysis
   Loading to be determined from test data
   Support of excluded part of system established on modeled portion
   Test data taken to establish stiffness of partial constraints
5. Determine model grid.
   Choose element types.
   Establish grid size to satisfy cost versus accuracy criterion.
6. Develop bulk data.
   Establish coordinate systems.
   Number nodes or order elements to minimize cost.
   Develop node coordinates and element connectivity description.
   Code load and B.C. description.
   Check geometry description by plotting.
OVERVIEW OF FINITE ELEMENT ANALYSIS — SOLUTION PROCEDURE
The process of FEA may be summarized with a flow chart of linear static structural
analysis in seven steps. The steps are:
1. Represent continuous structure as a collection of discrete elements connected by node points.
2. Formulate element stiffness matrices from element properties, geometry,
and material.
3. Assemble all element stiffness matrices into global stiffness matrix.
4. Apply boundary conditions to constrain model (i.e., remove certain
degrees of freedom).
5. Apply loads to model (forces, moments, pressure, etc.).
6. Solve matrix equation {F} = [K]{u} for displacements.
7. Calculate element forces and stresses from displacement results.
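The seven steps can be walked through on the smallest possible model, a chain of spring (line) elements: the one-dimensional analogue of assembling [K], constraining a node, and solving {F} = [K]{u}. The stiffness and load values are invented, and a naive Gaussian elimination stands in for a production solver.

```python
def assemble_global_k(n_nodes, elements):
    """Steps 1-3: assemble element stiffness matrices into the global [K].

    elements: list of (node_i, node_j, k) spring elements.
    """
    K = [[0.0] * n_nodes for _ in range(n_nodes)]
    for i, j, k in elements:
        K[i][i] += k; K[j][j] += k
        K[i][j] -= k; K[j][i] -= k
    return K

def solve(K, F, fixed):
    """Steps 4-6: apply boundary conditions, then solve [K]{u} = {F}."""
    n = len(K)
    A = [row[:] + [F[r]] for r, row in enumerate(K)]
    for d in fixed:                      # constrain DOF d to u = 0
        for r in range(n):
            A[r][d] = 0.0
            A[d][r] = 0.0
        A[d][d] = 1.0
        A[d][n] = 0.0
    for c in range(n):                   # naive Gaussian elimination
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            m = A[r][c] / A[c][c]
            A[r] = [a - m * b for a, b in zip(A[r], A[c])]
    u = [0.0] * n
    for r in range(n - 1, -1, -1):
        u[r] = (A[r][n] - sum(A[r][c] * u[c] for c in range(r + 1, n))) / A[r][r]
    return u

# Two springs in series (k = 100 N/mm each), node 0 fixed, 50 N at node 2.
K = assemble_global_k(3, [(0, 1, 100.0), (1, 2, 100.0)])
u = solve(K, [0.0, 0.0, 50.0], fixed=[0])
forces = [100.0 * (u[1] - u[0]), 100.0 * (u[2] - u[1])]  # step 7: element forces
print(u, forces)   # u = [0, 0.5, 1.0] mm; each element carries 50 N
```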
INPUT TO THE FINITE ELEMENT MODEL
Once the user is satisfied with the model subdivision, the following classes of input
data must be prepared to provide a detailed description of the finite element model
to typical FEA software such as MSC/NASTRAN (1998):
Geometry: This refers to the locations of grid points and the orientations of
coordinate systems that will be used to record components of displacements
and forces at grid points.
Element connectivities: This refers to identification numbers of the grid points
to which each element is connected.
Element properties: Examples of element properties are the thickness of a
surface element and the cross-sectional area of a line element. Each element
type has a specific list of properties.
Material properties: Examples of material properties are Young’s modulus,
density, and thermal expansion coefficient. There are several material types
available in MSC/NASTRAN. Each has a specific list of properties.
Constraints: Constraints are used to specify boundary conditions, symmetry
conditions, and a variety of other useful relationships. Constraints are
essential because an unconstrained structure is capable of free-body motion,
which will cause the analysis to fail.
Loads and enforced displacements: Loads may be applied at grid points or
within elements.
OUTPUTS FROM THE FINITE ELEMENT ANALYSIS
Once the data describing the finite element model have been assembled and submitted to the computer, they will be processed by a software package such as MSC/NASTRAN to produce information requested by the user. The classes of output data are:
1. Components of displacements at grid points
2. Element data recovery: stresses, strains, strain energy, and internal forces
and moments
3. Grid point data recovery: applied loads, forces of constraint, and forces
due to elements
It is the responsibility of the user to verify the accuracy of the finite element
analysis results. Some suggested checks to perform are:
Generate plots to visually verify the geometry.
Verify overall model response for loadings applied.
Check input loads with reaction forces.
Perform hand checks of results whenever possible.
Review and check results.
Plot deformation and stress contour.
Check equilibrium and reaction forces.
Check concentration region for fineness of grid (compare calculated stress
distribution with assumed element distribution).
Check peak deflection and/or stress for ballpark accuracy.
Special note: How a structure actually behaves under loading is determined by
four characteristics: (a) the shape of the structure, (b) the location and type of
constraints that hold the structure in place, (c) the loads applied to the structure —
their magnitude, location and direction, and (d) the characteristics of the materials
that comprise the structure. For example, glass, steel, and rubber have significantly
different characteristics and different stiffnesses.
ANALYSIS OF REDESIGNS OF REFINED MODEL
At this stage, a correlation between analysis and test is generally attempted, even
though it is very difficult and presents many potential problems. Roughly 60% of
these problems are associated with the analysis and 40% with the actual testing.
Remember that correlations at this stage (based on experience with over 50 projects)
commonly run from 5 to 30%.
Obviously, the focus should be on testing and test-related correlation with real
world usage. Items of concern should be:
Loads:
• Isolation of single component of assembly
• Hard to put assumed load in controlled lab test (linear loads causing
moments)
Strain gages:
• Gage locations and orientation
• Single leg gages versus rosettes
• Improper gage lead hookup
Non-linearities:
• Plasticity
• Pin joint clearance
• Bolted joints
In a typical analysis, the related correlation issues/problems/concerns examples
are:
• Mesh size (for localized stress concentration, isolate concentration region
and refine mesh)
• Element type
• Load distribution and B.C. isolation
• Input error/bad data
• Weld details
Common problems that may be encountered in the FEA are:
• Part not to size
• Misunderstanding or interpretation of results
Therefore, to make sure that the FEA is worth the effort, the following steps are
recommended:
1. Initially, take simple, well-isolated components, with simple well-defined
loads.
2. Do not expect miracles.
3. Use a joint test/analysis program. It can improve the capabilities of each
step and serves as a check on techniques.
4. Work together. This is the key. The test results supplement weakness of
analysis and vice versa.
SUMMARY — FINITE ELEMENT TECHNIQUE: A DESIGN TOOL

• Proven tool — approximate but very accurate if applied properly.
• Fine enough grid to match true strain field.
• Need to know loads accurately.
• Are supports rigid? What spring stiffness?
• Do not let FEA become just a research tool searching for an absolute
answer. Use it in all stages of the design cycle as a relative comparison
tool in conjunction with test.
• FEA, if nothing else, forces someone to examine a component design in detail.
• A check on geometry itself.
• The experimenter must think in detail about loads and interaction with the
rest of the system.
EXCEL’S SOLVER
Yet another simple simulation tool is found in Excel’s Tools > Add-Ins category.
Its simplicity is astonishing, and the results may indeed be phenomenal. What is
required is the transformation function. Once that is identified, the experimenter
defines the constraints, and the rest is computed by Solver.
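The same recipe (a transformation function plus constraints) can be imitated outside Excel. The sketch below uses an invented transformation function and constraint, minimized by a brute-force grid search purely for illustration; Solver itself applies more efficient gradient-based methods.

```python
def transfer(x, y):
    """Assumed transformation function relating inputs to the response."""
    return (x - 3.0) ** 2 + (y - 2.0) ** 2 + 5.0

def grid_search(lo, hi, steps=200):
    """Minimize transfer(x, y) subject to x + y <= 4 (an assumed constraint),
    much as Solver minimizes a target cell subject to constraint cells."""
    best = None
    for i in range(steps + 1):
        for j in range(steps + 1):
            x = lo + (hi - lo) * i / steps
            y = lo + (hi - lo) * j / steps
            if x + y <= 4.0:                 # the constraint
                z = transfer(x, y)
                if best is None or z < best[0]:
                    best = (z, x, y)
    return best

print(grid_search(0.0, 4.0))   # minimum lies on the constraint boundary
```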
DESIGN OPTIMIZATION
In dealing with DFSS, a frequent euphemism is “design optimization.” What is
design optimization? Design optimization is a technique that seeks to determine an
optimum design. By “optimum design,” we mean one that meets all specified requirements but with a minimum expense of certain factors such as weight, surface area,
volume, stress, cost, and so on. In other words, the optimum design is one that is
as efficient and as effective as possible.
To calculate an optimum design, many methods can be followed. Here, however,
we focus on the ANSYS program, as defined by Moaveni (1999), which performs a
series of analysis-evaluation-modification cycles. That is, an analysis of the initial design
is performed, the results are evaluated against specified design criteria, and the design
is modified as necessary. This process is repeated until all specified criteria are met.
Design optimization can be used to optimize virtually any aspect of the design:
dimensions (such as thickness), shape (such as fillet radii), placement of supports,
cost of fabrication, natural frequency, material property, and so on. Actually, any
ANSYS item that can be expressed in terms of a parameter can be subjected to
design optimization. One example of optimization is the design of an aluminum
pipe with cooling fins where the objective is to find the optimum diameter, shape,
and spacing of the fins for maximum heat flow.
Before describing the procedure for design optimization, we will define some
of the terminology: design variables, state variables, objective function, feasible and
infeasible designs, loops, and design sets. We will start with a typical optimization
problem statement:
Find the minimum-weight design of a beam of rectangular cross section subject
to the following constraints:
Total stress σ should not exceed σmax [σ ≤ σmax]
Beam deflection δ should not exceed δmax [δ ≤ δmax]
Beam height h is limited to hmax [h ≤ hmax]
Design Variables (DVs) are independent quantities that can be varied in order
to achieve the optimum design. Upper and lower limits are specified on the design
variables to serve as “constraints.” In the above beam example, width and height
are obvious candidates for DVs. Neither can be zero or negative, so their lower
limits would be some value greater than zero.
State Variables (SVs) are quantities that constrain the design. They are also
known as “behavioral constraints” and are typically response quantities that are
functions of the design variables. Our beam example has two SVs: σ (the total stress)
and δ (the beam deflection). You may define up to 100 SVs in an ANSYS design
optimization problem.
The Objective Function is the quantity that you are attempting to minimize or
maximize. It should be a function of the DVs, i.e., changing the values of the DVs
should change the value of the objective function. In our beam example, the total
weight of the beam could be the objective function (to be minimized). Only one
objective function may be defined in a design optimization problem.
A design is simply a set of design variable values. A feasible design is one that
satisfies all specified constraints, including constraints on the SVs as well as constraints
on the DVs. If even one of the constraints is not satisfied, the design is considered
infeasible.
An optimization loop (or simply loop) is one pass through the analysis-evaluation-modification cycle. Each loop consists of the following steps:
1. Build the model with current values of DVs and analyze.
2. Evaluate the analysis results in terms of the SVs and objective function.
3. Modify the design by calculating new values of DVs. These new values are
calculated by ANSYS and are used to define the new version of the model.
At the end of each loop, new values of DVs, SVs, and the objective function
are available and are collectively referred to as a design set (or simply set).
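The loop idea can be sketched for the beam statement above. Everything numeric here is invented (load, span, material, limits), closed-form beam formulas stand in for the "build and analyze" step, and a crude random search stands in for the ANSYS DV-modification logic; each pass analyzes a design, evaluates the SVs against their limits, and keeps the best feasible design set.

```python
import random

# Invented data: simply supported steel beam, center point load.
P, L, E = 10_000.0, 2.0, 200e9                     # N, m, Pa
SIGMA_MAX, DELTA_MAX, H_MAX = 150e6, 0.005, 0.20   # Pa, m, m

def analyze(b, h):
    """'Build the model and analyze': closed-form SVs for DVs b and h."""
    I = b * h ** 3 / 12.0                  # second moment of area
    sigma = (P * L / 4.0) * (h / 2.0) / I  # max bending stress
    delta = P * L ** 3 / (48.0 * E * I)    # midspan deflection
    return sigma, delta

def feasible(b, h):
    """Evaluate the SVs and the DV limit against the constraints."""
    sigma, delta = analyze(b, h)
    return sigma <= SIGMA_MAX and delta <= DELTA_MAX and h <= H_MAX

def optimize(loops=20_000, seed=7):
    """Each loop: pick new DVs, analyze, keep the best feasible design set."""
    rng = random.Random(seed)
    best = None                            # (objective, b, h)
    for _ in range(loops):
        b = rng.uniform(0.005, 0.30)       # DV limits (m)
        h = rng.uniform(0.005, 0.30)
        if feasible(b, h):
            area = b * h                   # objective: cross-section ~ weight
            if best is None or area < best[0]:
                best = (area, b, h)
    return best

area, b, h = optimize()
print(f"best feasible design: b = {b:.3f} m, h = {h:.3f} m")
```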
HOW TO DO DESIGN OPTIMIZATION
Design optimization requires a thorough understanding of the concept of ANSYS
parameters, which are simply user-named variables to which you can assign numeric
values. The model must be defined in terms of parameters (which are usually the DVs),
and results data must be retrieved in terms of parameters (for SVs and the objective
function). The usual procedure for design optimization consists of six main steps:
1. Initialize the design variable parameters.
2. Build the model parametrically.
3. Obtain the solution.
4. Retrieve the results data parametrically and initialize the state variable
and objective function parameters.
5. Declare optimization variables and begin optimization.
6. Review and verify optimum results.
Details of these steps are beyond the scope of this volume. However, the reader
may find them in Moaveni (1999).
UNDERSTANDING THE OPTIMIZATION ALGORITHM
Understanding the algorithm used by a computer program is always helpful, and
this is particularly true in the case of design optimization. Perhaps one of the most
important issues is the notion of approximation.
For simple mathematical functions that are continuously differentiable, minima
can be found by analytical techniques such as solving for points of zero slope. The
mathematical relationship between an arbitrary objective function and the DVs,
however, is generally not known, so the program has to establish the relationship
by curve fitting. This is done by calculating the objective function for several sets
of DV values (i.e., for several designs) and performing a least squares fit among the
data points. The resulting curve (or surface) is called an approximation. Each optimization loop generates a new data point, and the objective function approximation
is updated. It is this approximation that is minimized, not the actual objective function.
State variables are handled in the same manner. An approximation is generated
for each state variable and updated at the end of each loop. (Because approximations
are used for the objective function and SVs, the optimum design will be only as
good as the approximations.)
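In one dimension the curve-fitting step looks like this: sample the objective at a few DV values, fit a quadratic by least squares, and minimize the fit rather than the function. The objective below is invented, and the normal equations are solved by hand; here the true function happens to be quadratic, so the approximation recovers its minimum exactly.

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y ~ a + b*x + c*x**2 via the normal equations."""
    S = [sum(x ** k for x in xs) for k in range(5)]           # power sums
    T = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[S[0], S[1], S[2], T[0]],
         [S[1], S[2], S[3], T[1]],
         [S[2], S[3], S[4], T[2]]]
    for c in range(3):                                        # elimination
        p = max(range(c, 3), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, 3):
            m = A[r][c] / A[c][c]
            A[r] = [u - m * v for u, v in zip(A[r], A[c])]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                                       # back-substitute
        coef[r] = (A[r][3] - sum(A[r][k] * coef[k] for k in range(r + 1, 3))) / A[r][r]
    return coef                                               # [a, b, c]

def objective(x):      # stand-in for an expensive analysis-based objective
    return (x - 2.0) ** 2 + 1.0

xs = [0.0, 1.0, 3.0, 4.0]                  # designs analyzed so far
a, b, c = fit_quadratic(xs, [objective(x) for x in xs])
x_min = -b / (2.0 * c)                     # minimize the approximation
print(round(x_min, 6))                     # prints 2.0, the true minimum here
```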
CONVERSION TO AN UNCONSTRAINED PROBLEM
State variables and limits on design variables are used to constrain the design and
make the optimization problem a constrained one. The ANSYS program converts
this problem to an unconstrained optimization problem because minimization techniques for the latter are more efficient. The conversion is done by adding penalties
to the objective function approximation to account for the imposed constraints. You
can think of penalties as causing an upturn of the objective function approximation
at the constraints. The ANSYS program uses extended interior penalty functions.
(For more information on penalty functions see sources in the selected bibliography
for this chapter.)
The search for a minimum is then performed on the unconstrained objective
function approximation using the Sequential Unconstrained Minimization Technique
(SUMT), which is explained in most texts on engineering design and optimization.
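A minimal version of the idea, using a simple exterior quadratic penalty rather than ANSYS's extended interior penalty functions (all values invented): the constrained problem of minimizing f(x) subject to g(x) ≤ 0 becomes a sequence of unconstrained minimizations with a growing penalty weight, which is the essence of SUMT.

```python
def f(x):      # objective (illustrative)
    return (x - 4.0) ** 2

def g(x):      # constraint g(x) <= 0, i.e. x <= 2
    return x - 2.0

def minimize_1d(phi, lo, hi, iters=100):
    """Crude golden-section search for each unconstrained subproblem."""
    r = 0.6180339887498949
    a, b = lo, hi
    for _ in range(iters):
        c1, c2 = b - r * (b - a), a + r * (b - a)
        if phi(c1) < phi(c2):
            b = c2
        else:
            a = c1
    return (a + b) / 2.0

x = 0.0
for mu in (1.0, 10.0, 100.0, 1000.0, 10000.0):   # SUMT: increasing penalties
    phi = lambda t, mu=mu: f(t) + mu * max(0.0, g(t)) ** 2
    x = minimize_1d(phi, -10.0, 10.0)
print(round(x, 3))    # prints 2.0: the constrained optimum x = 2
```

The penalty term is what "turns the objective upward" at the constraint, as described above; as mu grows, the unconstrained minimizer is pushed onto the feasible boundary.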
SIMULATION AND DFSS
In summary, simulation is of value in connection with DFSS because:
Design problems are discovered sooner.
• Shortens development time
• Provides better overall quality
• Permits early optimization of the design
Build-and-test is supplemented by computer simulations.
• Permits lower testing budgets
• Shortens development time
• Permits evaluation of alternative designs
• Minimizes overdesign by evaluating early in cycle
Therefore, with the aid of simulation we are capable of:
• Less time spent designing
• Less time spent testing
• Less time spent changing
Result: Better products … in less time … at a lower cost.
And that is what DFSS is all about.
REFERENCES
Buchanan, G.R., Schaum’s Outline of Finite Element Analysis, McGraw-Hill Professional
Publishing, New York, 1994.
Cook, R., Finite Element Modeling for Stress Analysis, Wiley, New York, 1995.
Moaveni, S., Finite Element Analysis: Theory and Applications with ANSYS, Prentice Hall,
Upper Saddle River, NJ, 1999.
Schaeffer, H.G., MSC/NASTRAN Primer: Static and Normal Modes Analysis, MSC, New
York, 1998.
SELECTED BIBLIOGRAPHY
Adams, V. and Askenazi, A., Building Better Products with Finite Element Analysis, OnWord
Press, New York, 1998.
Belytschko, T., Liu, W.K., and Moran, B., Nonlinear Finite Elements for Continua and
Structures, Wiley, New York, 2000.
Hughes, T.J.R., The Finite Element Method: Linear Static and Dynamic Finite Element
Analysis, Dover Publications, New York, 2000.
Malkus, D.S. et al., Concepts and Applications of Finite Element Analysis, 4th ed., Wiley,
New York, 2001.
Rieger, M. and Steele, J., Basic Course in FEA Modeling, Machine Design, June 6, 1981,
pp. 7–8.
Rieger, M. and Steele, J., Basic Course in FEA Modeling, Machine Design, July 9, 1981, pp.
8–10.
Rieger, M. and Steele, J., Advanced Techniques in FEA Modeling, Machine Design, July 23,
1981, pp. 7–12.
Shih, R., Introduction to Finite Element Analysis Using I-DEAS Master Series 7, Schroff
Development Corp. Publications, New York, 1999.
Zienkiewicz, O.C. and Taylor, R.L., Finite Element Method: Volume 1, The Basis, Butterworth-Heinemann, London, 2000.
Zienkiewicz, O.C. and Taylor, R.L., Finite Element Method: Volume 2, Solid Mechanics, Butterworth-Heinemann, London, 2000.
Zienkiewicz, O.C. and Taylor, R.L., Finite Element Method: Volume 3, Fluid Dynamics, Butterworth-Heinemann, London, 2000.
5 Design for Manufacturability/Assembly (DFM/DFA or DFMA)
When we talk about design for manufacturability/assembly (DFM/DFA or DFMA),
we describe a methodology that is concerned with reducing the cost of a product
through simplification of its design. In other words, we try to reduce the number of
individual parts that must be assembled and ultimately, increase the ease with which
these parts can be put together.
By focusing on these two items we are able to:
1. Design for a competitive advantage
2. Design for manufacturability, assembliability
3. Design for testability, serviceability, maintainability, quality, reliability,
work-in-process (wip), cost, profitability, and so on.
This, of course, brings us to the objectives of DFM/DFA, which are:
To maximize
a. Simplicity of design
b. Economy of materials, parts, and components
c. Economy of tooling/fixtures, process, and methods
d. Standardization
e. Assembliability
f. Testability
g. Serviceability
h. Integrity of product features
To minimize
a. Unique processes
b. Critical, precise processes
c. Material waste, or scrap due to process
d. Energy consumption
e. Generation of pollution, liquid or solid
f. Waste
g. Limited available materials, components, and parts
h. Limited available, proprietary, or long lead time equipment
i. Degree of ongoing product and production support
FIGURE 5.1 Trade-off relationships between program objectives (balanced design). Panel (a), the old design, shows trade-offs among producibility, reliability, and performance, with goals of reliability and better performance. Panel (b), the new design, adds life cycle costs to the trade-offs, with goals of a balanced design, low support cost, and low acquisition cost.
Therefore, one may describe the DFM/DFA process as a common-sense
approach consistent with the old maxim, “Get it done right the first time.” In
DFM/DFA, we strive to get it done right the first time with the most practical and
affordable methods in order to meet the customer’s expectations in terms of time,
process, costs, value, needs, and wants. This approach is quite different from the
old way of doing business. Figure 5.1 shows the old and new ways of design.
So, in a formal way, we can say that design for manufacturing and assembly focuses
on designing the product with manufacturability and assembliability in mind: to ensure
that the product can be produced with an affordable manufacturing effort and cost, and
that, after the manufacturing process, the originally designed product reliability can
be maintained, if not enhanced. This
approach may seem time-consuming and not value added, but if we consider the
possible alternatives available we can appreciate the benefit of any DFM/DFA
initiative. For example, consider the following:
• What good is the design, if nobody can produce it?
• What good is the design, if nobody can produce it with an affordable effort (in terms of manufacturing cost, scrap, rework, production cycle/turnaround time, work in process, and so on)?
Design for Manufacturability/Assembly (DFM/DFA or DFMA)
• What good is the product, if nobody can afford it?
• What good is the product, if we cannot market it in time?
• What good is the product, if it does not sell?
• What good is the product, if it is not profitable?
• What good is it, if it does not work?
By doing a DFM/DFA, we are able to take into consideration many inputs with
the intent of optimizing the design in terms of the following characteristics:
• Design/development lead time vs. marketing time
• Customer needs/wants vs. field application/performance vs. engineering
specifications
• Production launch efforts
• Manufacturing cost
• Flexibility and obsolescence of process and equipment
• Maintainability/serviceability of product
• Profitability
Specifically, we are looking for the:
1. DFA to minimize total product cost by targeting:
a. Part count — the major product cost driver
b. Assembly time
c. Part cost
d. Assembly process
2. DFM to minimize part cost by:
a. Optimizing manufacturing process
b. Optimizing material selection
c. Evaluating tooling and fabrication strategies
d. Estimating tooling costs
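The DFA targets above (part count, assembly time, part cost) can be tied together in a rough unit-cost roll-up, which makes the "part count is the major cost driver" claim concrete. A minimal sketch in Python; all part costs, assembly times, and the labor rate are hypothetical illustrations, not figures from the text:

```python
# Rough DFA cost roll-up: unit product cost as the sum of part costs
# plus assembly labor. Every number below is a hypothetical example.

def total_product_cost(part_costs, assembly_times_s, labor_rate_per_hr):
    """Sum part costs and assembly labor cost for one unit."""
    parts = sum(part_costs)
    labor = sum(assembly_times_s) / 3600.0 * labor_rate_per_hr
    return parts + labor

# Four parts versus two parts: combining mating parts cuts both the
# part-cost column and the assembly-time column at once.
before = total_product_cost([1.20, 0.80, 0.30, 0.30], [8, 6, 4, 4], 45.0)
after = total_product_cost([1.90, 0.30], [8, 4], 45.0)
print(round(before, 2), round(after, 2))  # 2.88 2.35
```

Even this toy model shows why part-count reduction dominates: each eliminated part removes its purchase cost, its handling time, and its insertion time together.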
BUSINESS EXPECTATIONS AND THE IMPACT FROM A SUCCESSFUL DFM/DFA
Perhaps one of the major reasons why we do a DFMA is that in the final analysis
we expect tremendous results with a measurable impact in the organization. Typical
expectations are:
• Product development time improvement by 50–75%
• Product design cost reduction by 25–50%
• Product reliability improvement by 10–25%
• Product field performance matched to customer's needs/wants
• Product production launch time reduction by 25–50%
• Total manufacturing cost reduction by 25–75%
• Reduction or even elimination of additional tooling/fixture cost
• Reduction, if not total elimination, of engineering change notices by 75–99%
• Increase in engineering and technical personnel's work morale, along with a sense of ownership
• Ability to be competitive, profitable, and successful
The impact, of course, becomes obvious. The entire organization is impacted
for the better — it becomes business focused. For example: marketing becomes
focused on the customer; engineering becomes focused on design; and manufacturing becomes focused on process. Specifically, the impact may be in the following
areas:
• Product closer to what the customer expects
• Reduction of time to market
• Enhanced product reliability, not just from the original product design point of view but also from a manufacturing process point of view
• Improved profit margins by reducing product cost
• Improved operating efficiency by reducing work in process
• Enhanced return on assets
• Reduced technical personnel turnover rate by improving group and individual satisfaction with the job/work
• Making the organization profitable
Traditional Approach — In the past, product design/development, manufacturing process design/development, and equipment selection/capability assessment
were typically discrete activities — a sequential and discrete approach. That
approach may be shown as in Figure 5.2.
New Way — In order to give the manufacturing process and equipment a head start, all three activities of design, process, and equipment occur simultaneously — a simultaneous engineering approach. This is where DFMA can help. This process may be shown as in Figure 5.3.
The business strategy here becomes a pursuit to articulate the:
Customer needs, wants and expectations →
product/process engineering specification
by asking a series of specific questions such as:
• What is the voice of the customer (VOC)?
• What regulations have to be met?
• What is the relative importance of each requirement?
• Which product characteristics impact the VOC?
• Which process characteristics impact the VOC?
• What price and profit margin impacts arise in meeting the VOC?
• Are there delivery schedule impacts?
FIGURE 5.2 Sequential approach. Product selection and development, design/development of the manufacturing process, and equipment selection and capability assessment follow one another over time: marketing specification and function confirmation, engineering product design, manufacturing process design, manufacturing production, quality inspection, and product to customer.
FIGURE 5.3 Simultaneous approach. Product design/development, manufacturing process design/development, and equipment design/capability assessment proceed in parallel.
• Any competition? Targeted competitor?
• Continuing improvement opportunity?
• Future cost reduction opportunity to meet future customer price reduction
demands?
Figure 5.4 shows the modern way of addressing these concerns. The arrows
between product and process indicate possible alternatives. For example, if we
FIGURE 5.4 Tomorrow's approach … if not today's. The voice of the customer drives product alternative(s) and process alternative(s), which feed a business decision (cost and investment), leading to manufacturing production and quality, and finally the product.
examine the producibility for a textile component, we could look at the following
material considerations:
• Natural
• Synthetic
• Properties
• Processes
• Applications
On the other hand, if we were to evaluate the manufacturing process we might
want to examine:
• Pattern layout
• Cutting
• Sewing assembly
• Types
• Processes
• Characteristics
THE ESSENTIAL ELEMENTS FOR SUCCESSFUL DFM/DFA
The very minimum requirements for a successful DFMA are:
1. Form a charter that includes all key functions.
2. Establish the product plan.
3. Define product performance requirements.
4. Develop a realistic, agreed upon engineering specification.
5. Establish the product's character/features.
6. Define the product architectural structure.
7. Develop a realistic, detailed project schedule.
8. Manage the project — schedule, performance, and results.
9. Make efforts to reduce costs.
10. Plan for continuing improvement.
The details of some of these elements are outlined below:
Form a DFMA charter
With any charter there are two primary responsibilities: (a) to identify the
roles and (b) to identify the functions.
i. Roles
A. Charter members — designer, manufacturing engineer, material/component engineer, product engineer, reliability/quality
engineer, and purchasing.
B. Team leader — program manager is a good candidate, but not
necessary. Any one of the charter members can be an adequate
team leader. Some companies/organizations assign an integrator
to be the DFMA leader.
ii. Charter’s functions
A. Determining the character of the product, to see what it is and
thus, what design and production methods are appropriate
B. Subjecting the product to a product function analysis, so that all
design decisions can be made with full knowledge of how the
item is supposed to work and so that all team members understand it well enough to contribute optimally
C. Carrying out a design for producibility, usability, and maintainability study to determine if these factors can be improved without impairing functionality
D. Designing an assembly process appropriate to the product’s particular character (This involves creating a suitable assembly
sequence, identifying subassemblies, control plan, and designing
each part so that its quality is compatible with the assembly/manufacturing method.)
E. Designing a factory system that fully involves workers in the
production strategy, operates on adequate inventory, and is integrated with suppliers’/vendors’ capabilities and manufacturing
processes
Establish product’s character/feature
• QFD approach
• Value analysis
• Effectiveness study on function and appearance/cosmetic
• Product character risk assessment
Define product architectural structure
a. Functional block approach
b. Hardware approach
c. Software approach
d. Component approach
Develop a project schedule
a. Agreed to by all functions on:
• Tasks
• Objectives
• Duration
• Responsibility
b. Specific performance test:
• Function
• Appearance
• Durability
c. Use project management techniques.
d. Concentrate on the concept of getting it done right the first time, not
only doing it right the first time.
e. Focus on the high leverage items — get some encouraging news first.
f. Locate and prioritize the resource.
g. Management commitment.
h. Individual commitment.
Manage the DFMA project
• Ensure regular and formal review of the status by charter members.
• Regularly prepare and formalize executive reports; get feedback.
• Ensure total team inputs and contributions, not only involvement.
• Utilize proven tools/methodologies.
• Make adjustment with team consensus.
• Ensure adequate resources with proper priorities.
• Control the progress of the project.
THE PRODUCT PLAN
It is imperative that the following considerations, all of which have a major impact on the manufacturing process, be discussed and resolved as early as possible in the design cycle:
1. Nature of program — crash program, perfect design, or some other alternative
2. Product design itself
3. Production volume
4. Product life cycle
5. Funding
6. Cost of goods sold
Product Design
The focuses of marketing, engineering, manufacturing, and business/finance are quite different, yet they all serve the same interest of the organization. Our task, then, is to make sure that we balance the different interests and priorities among the four functions of an organization. How do we do that?
To make a long story short: how do we decide between a crash program and a perfect product? When we talk about a perfect product, we mean it from a definitional perspective. There is no such thing as a perfect product, but because of the operating definition we choose, we can indeed call something a perfect product.
Criteria for Decision between Crash Program and Perfect Product
There are three issues here:
1. Opportunity cost
2. Development risk
3. Manufacturing risk
For a short life cycle product or a highly innovative product in a competitive
environment that changes rapidly, a company must react quickly to each new product
that enters the market. Getting the product to market fast is the name of the game.
However, being fast to the market is no advantage if the company chooses inadequate
technology, creates a product that cannot meet the potential customer’s
wants/needs/expectations, designs a product that cannot be manufactured, or must
set the price so high that nobody can afford the product.
The opportunity cost of missing a fast-moving market window, the risk of
entering a market with the wrong product, and the risk of introducing a product
nobody can produce pulls managers in opposite directions. So, the choice of a crash
program (CP) or a perfect product (PP) approach is a necessary step prior to any
product design taking place.
Two examples will make the point of a CP and a PP:
Case #1 — Crash Program
Company: IBM
Product: Personal computer
Environment: Forecasted annual growth rate of 60%. Competitors, i.e., Apple and Tandy, are controlling market developments and are beginning to cut into IBM's traditional office market.
Analysis: Opportunity cost is high. Development cost is low ($10 million compared to IBM's equity value of $18 billion). The technologies of design and process are stable and internally available.
Decision: Crash program approach — develop, design, manufacture, and market the product within 2 years.
Approach details: Deviate from the standard eight-phase design procedure. Give the development team complete freedom in product planning; keep interference to a minimum; and allow the use of a streamlined, relatively informal management system. Use a so-called zero procedure approach, focusing on development speed rather than risk reduction of product, manufacturing, and so on.
Results: Introduced the product within 2 years. Customer acceptance is good. Cost overrun by 15%. Cost of goods sold is about 5% unfavorable to the original estimate. Market share is questionable. Long term effects — ???
(Does this sound familiar? Quite a few organizations take this approach and, of course, they fail.)
Case #2 — Perfect Product Design
Company: Boeing
Product: Boeing 727 replacement aircraft (767)
Environment: Replacement within ten years is inevitable (it may be sped up to 5 years). Competitor Airbus has started its design. A new mid-range aircraft may take the 727 replacement market away due to operating/fuel inefficiency, comfort, and Environmental Protection Agency (EPA) restraints.
Analysis: Opportunity cost is high. (There is a need for the 200–300 seat market; the 727 is becoming obsolete.) Development cost is high (estimated $1.5 billion compared to entire company equity of $1.4 billion). Development and manufacturing risk is high. Technology and customer preferences are predictable but not yet crystallized. (Should it have two engines or three? Should its cockpit allow for two people or three? Cruise range? Fuel consumption? Pricing?)
Decision: Perfect product design approach. Complete the development of all new technologies of design and manufacturing processes in the early stages of research and development (R and D). Test everything in sight, and move the product to launch only when success is nearly guaranteed. Eight-year design lead time.
Approach details: Form an R and D team of 400 engineers/managers that includes design, manufacturing engineering, quality, purchasing, and marketing. (The team member count goes up to 1000 right before go-ahead.) Apply concurrent engineering and the DFMA process fully in the product R and D stage.
Results: Introduced the 767 on schedule (which compares to the Airbus 310, eight months behind schedule). Although Boeing missed the 300–350 seat market and lost some of the 727 replacement market to the Airbus 300, Boeing kept the 200–300 seat market with a successful 767. Development costs were within budget, and cost of goods sold was 4% favorable to the original estimates. No recall record so far. Long term effects — likely good.
Most likely, your situation is somewhere in between. The other approaches (see Figure 5.5) include:
• Quantum leap — parallel program
• Acquisition
• Joint venture
• Leapfrog (purchase a facility to maintain and manufacture current technology/design; focus R and D on next-generation technology/design)
The Product Plan — Product Design Itself
Product design has dictated (whether one wants to admit it or not) the future of the product. About 95% of the material costs and 85% of the design/labor and overhead costs are controlled by the design itself. Once the design is complete, about 85% of the manufacturing process has been locked in.
FIGURE 5.5 The product development map/guide (axes: opportunity cost vs. development and manufacturing risk; plotted options include crash program, acquisition, joint venture, step-by-step design approach, and leapfrog exit).
Design-related factors affecting the manufacturing process include:
• Product size/weight
• Reliability/quality requirement
• Architectural structure
• Fastener/joint methods
• Parts/components/materials
• Size, shape, and weight of parts/components
• Appearance/cosmetic requirement
Other factors affecting the manufacturing process include:
• Floor space
• Material flow and process flow
• Power, compressed air, a/c and heating, and facility
• Quality plan
• Manual operation mandatory
• Mechanized operation or automation operation mandatory
• System interfacing requirement
• Manufacturing process concepts/philosophy — cpf vs. in-line vs. batch vs. cellular approaches
• Management commitment
• Production volume
Volume requirements have the major influence on the choice of the manufacturing process.
• Product life cycle
As with volume requirements, product life has a significant influence on the manufacturing process.
• Funding
Since most mechanization and automation is heavily capitalized, funding plays a major role in determining the product plan, which has a significant influence on the manufacturing process.
• Cost of goods sold
What is the affordable capital/tooling/fixture amortization? What is the targeted cost of goods sold?
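The amortization question above comes down to simple arithmetic: spread the capital/tooling investment over the units produced during the product's life and add it to the other unit costs. A minimal sketch; all dollar figures and volumes are hypothetical:

```python
# Tooling/fixture amortization and its effect on unit cost of goods
# sold (COGS). All figures below are hypothetical examples.

def amortized_tooling_cost(tooling_cost, annual_volume, life_years):
    """Tooling cost spread over every unit built during the life cycle."""
    units = annual_volume * life_years
    return tooling_cost / units

def cogs_with_tooling(material, labor, overhead, tooling_cost,
                      annual_volume, life_years):
    """Unit COGS including the amortized tooling/fixture charge."""
    return material + labor + overhead + amortized_tooling_cost(
        tooling_cost, annual_volume, life_years)

unit_cogs = cogs_with_tooling(material=4.00, labor=1.50, overhead=2.00,
                              tooling_cost=250_000, annual_volume=100_000,
                              life_years=5)
print(round(unit_cogs, 2))  # 4.00 + 1.50 + 2.00 + 0.50 = 8.00
```

Comparing this computed unit COGS against the targeted COGS answers both questions in one pass; a short life cycle or low volume can make otherwise attractive automation unaffordable.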
Define Product Performance Requirement
Minimum requirements are the collection and understanding of the following information:
• Customer wants vs. customer needs vs. customer expectations
• Field condition and environment
• Performance standards
• Durability
The result of this understanding will facilitate the development of realistic and
agreed upon specification(s). Some of the specific items that will guide realistic
specifications are:
• Engineering interpretation of customer needs
• Correlation between engineering specification and product specification
• Reliability study in terms of MTBF
• Manufacturing process reliability assessment in terms of maintaining the original designed product standard
• Manufacturing cost assessment
• Option structure
• Control plan
• Qualification plan
AVAILABLE TOOLS AND METHODS FOR DFMA
Countless tools and methodologies may be used to accomplish the goal of a DFMA program. However, all of them fall into two categories: (a) approach alternatives and (b) mechanics. Some of the most important ones are listed below:
Approach alternatives:
1. Ongoing program/project manager approach
2. Manufacturing engineering sign-off approach
3. Design engineering use of a simulation software package approach
4. Simultaneous engineering approach
5. Concurrent engineering approach
6. Integrator approach
Mechanics:
1. Quality function deployment (QFD)
2. Design of experiments (DOE)
3. Potential failure mode and effects analysis (FMEA)
4. Value engineering and value analysis (VE/VA)
5. Group technology (GT)
6. Geometric dimensioning and tolerancing (GD&T)
7. Dimensional assembly analysis (DAA)
8. Process capability study (Cpk, Ppk, Cp, Cr, ppm indices)
9. Just-in-time (JIT)
10. Qualitative assembly analysis (QAA)
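The process capability indices in item 8 can be computed directly from sample data using the standard formulas Cp = (USL − LSL) / 6σ and Cpk = min(USL − mean, mean − LSL) / 3σ. A minimal sketch; the shaft-diameter sample and specification limits are hypothetical:

```python
import statistics

# Process capability study: Cp measures spread against the tolerance
# band; Cpk also penalizes off-center processes. Sample data below
# is a hypothetical illustration.

def capability(data, lsl, usl):
    """Return (Cp, Cpk) for a sample against spec limits LSL/USL."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)  # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

shaft_diameters = [9.98, 10.01, 10.00, 9.99, 10.02, 10.00, 9.97, 10.03]
cp, cpk = capability(shaft_diameters, lsl=9.90, usl=10.10)
print(round(cp, 2), round(cpk, 2))  # 1.67 1.67 (centered process)
```

When the process is perfectly centered, Cpk equals Cp; any shift of the mean toward either limit drags Cpk below Cp, which is why both are reported in a capability study.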
COOKBOOKS FOR DFM/DFA
There are no cookbooks for DFMA. However, three organized instruction manuals may come close to what most engineers would consider guidelines. They are:
1. Mitsubishi method
2. U-MASS method
3. MIL-HDBK-727, design guidance for producibility
All of the above methods utilize the principles of Taylor’s motion economy,
which have been proven to be quite helpful, especially in the DFA area. We identify
some of these principles here that may be profitably applied to shop and office work
alike. Although not all are applicable to every operation, they do form a basis or a
code for improving efficiency and reducing fatigue in manual work:
1. Smooth continuous curved motions of the hands are preferable to straight-line motions involving sudden and sharp changes in direction.
2. Ballistic movements are faster, easier, and more accurate than restricted
(fixation) or “controlled” movements.
3. Work should be arranged to permit easy and natural rhythm wherever
possible.
Use of the Human Body
4. The two hands should begin as well as complete their motions at the same
time.
5. The two hands should not be idle at the same time except during rest
periods.
6. Motions of the arms should be made in opposite and symmetrical directions and should be made simultaneously.
7. Hand and body motions should be confined to the lowest classification
with which it is possible to perform the work satisfactorily.
8. Momentum should be employed to assist the worker wherever possible,
and it should be reduced to a minimum if it must be overcome by muscular
effort.
9. Eye fixations should be as few and as close together as possible.
Arrangement of the Work Place
10. There should be a definite and fixed place for all tools and materials.
11. Tools, materials, and controls should be located close to the point of use.
12. Gravity feed bins and containers should be used to deliver material close
to the point of use.
13. Drop deliveries should be used wherever possible.
14. Materials and tools should be located to permit the best sequence of
motions.
15. Provisions should be made for adequate conditions for seeing. Good
illumination is the first requirement for satisfactory visual perception.
16. The height of the work place and the chair should preferably be arranged
so that alternate sitting and standing at work are easily possible.
17. A chair of the type and height to permit good posture should be provided
for every worker.
Design of Tools and Equipment
18. The hands should be relieved of all work that can be done more advantageously by a jig, a fixture, or a foot-operated device.
19. Two or more tools should be combined wherever possible.
20. Tools and materials should be pre-positioned whenever possible.
21. Where each finger performs some specific movement, such as in typewriting, the load should be distributed in accordance with the inherent
capacities of the fingers.
22. Levers, crossbars, and hand wheels should be located in such positions
that the operator can manipulate them with the least change in body
position and with the greatest mechanical advantage.
MITSUBISHI METHOD
The Mitsubishi method was developed and fine-tuned by Japanese engineers in
Mitsubishi’s Kobe shipyard. The primary principle is the combination of QFD and
Taylor’s motion economy. The Mitsubishi method is very popular in Japan’s heavy
industries, i.e., shipbuilding industry, steel industry, and heavy equipment industry.
There is also evidence of some application of this method in Japan’s automotive,
motorcycle, and office equipment industries. More efforts are needed to promote
and share these techniques, and some effort is needed to fine-tune the Mitsubishi
method and make it practical to fit U.S. manufacturing companies’ cultures and
traditions.
The process is based on the following principles:
• The Mitsubishi method focuses on the product design’s reflection of the
customer’s desires and tastes. Thus, marketing people, design engineers,
and manufacturing staff must work together from the time a product is
first conceived.
• The Mitsubishi method is a kind of conceptual map that provides the
means for inter-functional planning and communications. People with
different problems and responsibilities can thrash out design priority while
referring to patterns of evidence on the house’s grid.
• The method involves 12 steps for each part in design/manufacturing, as
follows:
1. Customer attributes (CA) analysis — also called voice of customer
(VOC) evaluation — is performed.
2. Relative-importance weights of CA are determined.
3. Data is collected on customer evaluations of competitive products.
4. Engineering characteristics tell how to change the product.
5. Relationship matrix shows how engineering decisions affect customer
perceptions.
6. Objective measures evaluate competitive products.
7. Roof matrix facilitates engineering creativity.
8. QFD is finalized.
9. Parts development is based on manufacturing process planning and
handling planning (i.e., start the basic manufacturing process with
materials in liquid state, feeding raw materials with elevator feeder,
handling the wip with center board hopper, and continuing the forthcoming sequential operation with carousel assembly machine).
10. Manufacturing process and handling operation are based on the principles of motion economy.
11. Process planning is guided by parts/component characteristics, which
are based on engineering characteristics, and the latter are based on
customer attributes (compare to step #9).
12. Integrator coordinates/controls the project.
• Analysis procedure.
• Continuing improvement:
Voice of customer, design alternative, and process alternative continue to interface with each other. It is a dynamic situation — never-ending improvement.
• Software package.
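The relative-importance weights (step 2) and competitive evaluations (step 3) that populate Tables 5.2 and 5.3 can be combined into one weighted score per competing design. A minimal sketch; the weights echo the car-door example, while the 1–5 perception ratings are hypothetical:

```python
# Weighted competitive evaluation in the QFD style: multiply each
# customer attribute's relative importance by its 1-5 perception
# rating and sum. Ratings below are hypothetical illustrations.

weights = {                      # relative importance (part of 100%)
    "easy to close from outside": 7,
    "stays open on a hill": 5,
    "does not leak in rain": 3,
    "no road noise": 2,
}

perceptions = {                  # 1 = worst, 5 = best
    "our door": {"easy to close from outside": 3, "stays open on a hill": 4,
                 "does not leak in rain": 5, "no road noise": 2},
    "competitor A": {"easy to close from outside": 4, "stays open on a hill": 3,
                     "does not leak in rain": 4, "no road noise": 3},
}

def weighted_score(ratings):
    """Importance-weighted sum of perception ratings."""
    return sum(weights[attr] * r for attr, r in ratings.items())

for door, ratings in perceptions.items():
    print(door, weighted_score(ratings))
```

Scores like these let people with different responsibilities thrash out design priorities against the same patterns of evidence, rather than arguing attribute by attribute.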
Table 5.1 shows an example of customer attributes (CAs) and bundles of CAs for a car door. An example of relative importance weights of customer attributes is shown in Table 5.2. An example of customer evaluations of competitive products is shown in Table 5.3.

TABLE 5.1
Customer Attributes for a Car Door
Primary: Good operation and use
  Secondary: Easy to open and close
    Tertiary: Easy to close from outside; stays open on a hill; easy to open from outside; does not kick back; easy to close from inside; easy to open from inside
  Secondary: Isolation
    Tertiary: Does not leak in rain; no road noise; does not leak in car wash; no wind noise; does not drip water or snow when open; does not rattle
  Secondary: Arm rest
    Tertiary: Soft, comfortable; in right position
Primary: Good appearance
  Secondary: Interior trim
    Tertiary: Material will not fade; attractive (non-plastic look)
  Secondary: Clean
    Tertiary: Easy to clean; no grease from door
  Secondary: Fit
    Tertiary: Uniform gaps between matching panels

TABLE 5.2
Relative Importance of Weights
Bundle: Easy to open and close door
  Easy to close from outside: relative importance 7
  Stays open on a hill: relative importance 5
Bundle: Isolation
  Does not leak in rain: relative importance 3
  No road noise: relative importance 2
(A complete list totals 100%.)

TABLE 5.3
Customer's Evaluations of Competitive Products
Bundle: Easy to open and close door
  Easy to close from outside (relative importance 7): customer perception rated 1 (worst) to 5 (best)
  Stays open on a hill (relative importance 5): customer perception rated 1 (worst) to 5 (best)
Bundle: Isolation
  Does not leak in rain (relative importance 3): customer perception rated 1 (worst) to 5 (best)
  No road noise (relative importance 2): customer perception rated 1 (worst) to 5 (best)
(A complete list totals 100%. Comparison is based on individual attributes, as compared to our car door, competitor A's, competitor B's, and so on.)

U-MASS METHOD
The U-MASS method is named for the University of Massachusetts, where it was developed by two professors, Geoffrey Boothroyd and Peter Dewhurst, and their graduate students. It is the most common DFM/DFA approach used in the U.S. The primary principle is the conventional motion and time study, while keeping in mind
the component counts and motion economy.
This method is heavily promoted in academic communities or institute-related
manufacturing companies located in the New England area, such as Digital Equipment
Corp. and Westinghouse Electric Company. Other companies are using it as well, such
as Ford Motor Co., DaimlerChrysler, and many others. Its appeal seems to be the
availability of the software that may be purchased from Boothroyd and Dewhurst. (Some
practitioners find the software very time-consuming in design efficiency calculation and
believe that more work is needed to fine tune its efficiency, as well as make it more
user friendly.) The process is based on the following principles:
1. Determine the theoretical minimum part count by applying minimum part
criteria.
2. Estimate actual assembly time using DFA database.
3. Determine DFA Index by comparing actual assembly time with theoretical
minimum assembly time.
4. Identify assembly difficulties and candidates for elimination that may lead
to manufacturing and quality problems.
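Steps 1 through 3 above reduce to a simple ratio. In the Boothroyd–Dewhurst convention, the DFA index (design efficiency) compares an ideal assembly time, roughly 3 seconds per theoretically necessary part, against the actual estimated assembly time. A minimal sketch; the part counts and times are hypothetical:

```python
# DFA index (design efficiency): ideal time for the theoretical
# minimum part count divided by the actual estimated assembly time.
# The 3-seconds-per-part ideal follows the Boothroyd-Dewhurst
# convention; the part counts and times here are hypothetical.

IDEAL_TIME_PER_PART_S = 3.0

def dfa_index(theoretical_min_parts, actual_assembly_time_s):
    """Return the design efficiency on a 0-1 scale."""
    return (theoretical_min_parts * IDEAL_TIME_PER_PART_S
            / actual_assembly_time_s)

# 19 parts assembled in 160 s, but only 5 are theoretically necessary:
index = dfa_index(theoretical_min_parts=5, actual_assembly_time_s=160.0)
print(round(index, 2))  # 0.09 -> plenty of candidates for elimination
```

A low index flags both the redundant parts and the awkward handling/insertion operations that inflate the actual time, which is exactly what step 4 then investigates.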
MIL-HDBK-727
This method was developed by the U.S. Army Materiel Command and published by the Naval Publications and Forms Center. The first edition was published in 1971, and the latest revision was published in April 1984. The primary principle is Taylor's
motion economy and some other design tools, i.e., DOE. This method is not too
popular. Not many people know about it, and it is not used very much outside of
the military. Some updates and revisions are needed to make it more practical to
general manufacturing companies.
FUNDAMENTAL DESIGN GUIDANCE
The core of the DFM/DFA process is to make sure that the design and assembly
are planned in terms of:
1. Simplicity (as opposed to complexity)
2. Standardization (commonality)
3. Flexibility
4. Capability
5. Suitability
6. Carryover
So, a designer designing a product should be cognizant of the effects on product
design. Some of these are:
• Materials selection is based on the targeted manufacturing process.
• The forms/shapes of parts are based on the targeted transportation, handling, and parts feeding system.
• Field environment can affect product durability, which contributes variation to the components/parts as well as the manufacturing process.
• Shelf life.
• Operating life.
• Product MTBF and MTBR.
In the development of the primary design, consideration must be given to whether to start with a basic process or with a secondary process using purchased raw or semi-raw materials. If the decision is to start with a basic process, then the next
question will be — what kind of materials to start with? There are three options:
1. Start with materials in liquid state, i.e., casting.
2. Start with materials in plastic state, i.e., forging.
3. Start with materials in solid state, i.e., roll forming (sheet), extrusion (rod,
sheet), electroforming (powder), automatic screw machine work (rod).
If a secondary process is needed, either as a sequential operation of a basic
process or a fresh starting point, consideration must be given to the selection of the
most favorable forming and sizing operations. A number of factors relating to a
given design that need to be considered include:
1. The shape desired
2. The characteristics of the materials
3. The tolerance required
4. The surface finish
5. The quantity to be produced
6. The average run size
7. The cost
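One common way to weigh the seven factors above against candidate forming and sizing operations is a weighted decision matrix. A minimal sketch; the candidate processes, factor weights, and 1–5 scores are all hypothetical:

```python
# Weighted decision matrix for selecting a forming process. Each
# factor gets an importance weight; each candidate process gets a
# 1-5 score per factor. All weights and scores are hypothetical.

factors = {"shape": 5, "material": 4, "tolerance": 4, "finish": 3,
           "quantity": 5, "run size": 3, "cost": 5}

scores = {
    "die casting": {"shape": 5, "material": 3, "tolerance": 4, "finish": 4,
                    "quantity": 5, "run size": 4, "cost": 3},
    "sand casting": {"shape": 4, "material": 4, "tolerance": 2, "finish": 2,
                     "quantity": 3, "run size": 3, "cost": 5},
}

def total(process):
    """Importance-weighted total score for one candidate process."""
    return sum(w * scores[process][f] for f, w in factors.items())

best = max(scores, key=total)
for p in scores:
    print(p, total(p))
print("choose:", best)
```

The matrix does not replace engineering judgment; it simply forces the trade-off between, say, tolerance capability and tooling cost to be made explicitly and consistently across candidates.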
The focus then of a product design is to:
1. Minimize parts/components: The fewer parts/components and the fewer
manufacturing/assembly operations, the better, i.e.,
• Combine mating parts, unless isolation is needed.
• Eliminate screws and loose pieces. Replace screws with snap-on parts or rivets, if practical. If screws are a necessary evil, try to make them all the same type and size.
• Do not use a screw to locate. Remember that a screw is a fastener.
2. Use common/popular components/parts: Off-the-shelf components/parts usually are user friendly and less expensive. Tooling/setup charges, along with pilot-run headaches, can also be avoided, i.e.,
• Use fasteners with common/popular/standard lengths and diameters.
• Use common values of resistors, capacitors, diodes, etc.
• Use standard color chips of paints and coatings, if possible.
3. Design the parts to be symmetrical: If you must use customized unique
parts, try to design the parts to be symmetrical, and use a jigless assembly
method, if at all possible, i.e.,
• Avoid internal orientations.
• Design an external accentuated locating feature, if it cannot be internally symmetrical.
4. Design the parts to be self-aligned, self-locating, and self-locking, i.e.,
• Design locating pins and small snap protrusions on mating parts.
• Chamfers and tapers.
• Use mechanical entrapments and snap-on approach.
• Connect necessary wires/harnesses directly and use locking connectors.
• Make sure that parts are easy to grip.
• Avoid flexible parts — the more rigid the part, the more easily handled
and assembled.
• Avoid cables, if practical.
• Avoid complicated fastening process, if practical.
(Special note: If screws must be used, remember these rules:
• Shank-to-head ratio: l ≥ 1.5, or l ≥ 1.8 if tube fed
• Head design
• Thread considerations: tapped holes? thread-cutting screws?
thread-forming screws?
• Quality screws)
5. Design for simple or no adjustment at all:
• Remember, adjustment is a non-value-added operation. If adjustment
is necessary, it should at most be a minimal, one-hand operation.
6. Modularize sub-assembly design:
• Modularize sub-assemblies. Assemble and test them prior to final
assembly.
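The part-count guidance above is often quantified with the Boothroyd–Dewhurst DFA design-efficiency index, which compares a theoretical minimum part count against the actual assembly time. The sketch below is a minimal illustration, not the full DFMA procedure; the 3-second ideal handling-and-insertion time per part is the commonly cited DFA convention, and the part data are invented:

```python
def dfa_efficiency(theoretical_min_parts: int,
                   total_assembly_time_s: float,
                   ideal_time_per_part_s: float = 3.0) -> float:
    """Boothroyd-Dewhurst DFA design efficiency:
    (Nmin * ideal time per part) / actual total assembly time."""
    if total_assembly_time_s <= 0:
        raise ValueError("assembly time must be positive")
    return (theoretical_min_parts * ideal_time_per_part_s) / total_assembly_time_s

# Hypothetical assembly: 12 parts installed, but only 5 are theoretically
# necessary; measured assembly time is 90 s.
score = dfa_efficiency(theoretical_min_parts=5, total_assembly_time_s=90.0)
print(f"DFA design efficiency: {score:.2f}")  # 0.17 -> much room to combine parts
```

A low index signals exactly the opportunities listed above: combine mating parts, eliminate fasteners, and cut the number of assembly operations.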
FIGURE 5.6 Manufacturing system schematic. (Inputs such as design drawings, specifications and standards, requirements, materials, constraints, and personnel policies feed the activities of manufacturing, planning, scheduling, controlling, quality control/assurance, and purchasing; the output is products.)
THE MANUFACTURING PROCESS
Figure 5.6 shows a schematic of a manufacturing system. There are four categories
of manufacturing processes. They are:
1. Fabrication process — which can be further categorized as basic process,
secondary process, or finishing process. Typical types are:
• Single station
• Continuous production flow
• Paced production line
• Manufacturing cell approach
2. Assembly process — which can be further categorized as manual assembly, mechanical assembly, automatic assembly, or computer-aided assembly. Typical types are:
• Continuous transfer
• Intermittent transfer
• Indexing mechanisms
• Operator-paced free-transfer machine
3. Inspection or quality control process
• Inspection check point(s)
4. Material handling process
• Conveyors
• Tractors
• Fork lifts
• Parts/component feeding system:
• Vibratory bowl feeder
• Reciprocating tube hopper feeder
• Centerboard hopper feeder
• Reciprocating fork hopper feeder
• External gate hopper feeder
• Rotary disk feeder
• Centrifugal hopper feeder
• Revolving hook hopper feeder
• Stationary hook hopper feeder
• Bladed wheel hopper feeder
• Tumbling barrel hopper feeder
• Rotary centerboard hopper feeder
• Magnetic disk feeder
• Elevating hopper feeder
• Magnetic elevating hopper feeder
Approaches to manufacturing processes include the job shop approach, the
assembly line approach, and the one in, one out approach. Details of these processes
are as follows:
Single station manufacturing process — job shop approach
Definition: Single fixture with one or more operations performed
Advantages:
• Capital investment — low
• Line balance — not needed
• Interference with other operations (downtime) — minimum, if any
• Flexibility — easy to expand or rearrange
• Employee fulfillment — high
Disadvantages:
• Multiple tooling/fixture investment — high
• Material handling — high
• Material flow — easy to congest at in/out
• Operation cycle time — long
• Operator skills — moderate
Continuous production flow manufacturing process — assembly line approach
Definition: Continuous, sequential motion assembly/manufacturing approach
Advantages:
• Work-in-process — low
• Manufacturing/assembly cycle time — low
• Material handling — very low, if not eliminated
• Material flow — good
• Operator skill/training — only in specialized areas
Disadvantages:
• Capital investment — high
• Preventative maintenance and corrective maintenance — absolute
necessity (If one part breaks down, the entire line is down.)
• Engineering, technician, and flow disciplines — absolute necessity
• Flexibility — low
• Production changeover — complicated
Paced production line — one in, one out
Definition: Same cycle time at all work stations; typically, all work pieces
transfer at the same time
Advantages:
• Work-in-process — very low and can be calculated
• Material handling — automatic
• Material flow — good
• Productivity — best
Disadvantages:
• Capital investment — high
• Preventative maintenance and corrective maintenance — absolute
necessity (If one part breaks down, the entire line is down.)
• Engineering, technician, and flow disciplines — absolute necessity
• Flexibility — very low
• Production changeover — difficult
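The note above that work-in-process on a paced line "can be calculated" follows directly from Little's Law (WIP = throughput × flow time): on a paced line each station holds one piece and every piece transfers on the takt, so WIP simply equals the station count. A minimal sketch with invented line parameters:

```python
def paced_line_wip(takt_time_s: float, num_stations: int) -> dict:
    """Little's Law for a paced line: each unit leaves every takt
    interval and visits every station, so WIP = throughput x flow time
    collapses to the number of stations."""
    throughput_per_s = 1.0 / takt_time_s        # one unit exits per takt
    flow_time_s = takt_time_s * num_stations    # each unit dwells one takt per station
    return {
        "wip_units": throughput_per_s * flow_time_s,   # Little's Law
        "throughput_per_hour": 3600.0 / takt_time_s,
    }

# Hypothetical line: 45 s takt time, 12 work stations.
result = paced_line_wip(takt_time_s=45.0, num_stations=12)
print(result)  # 12 units of WIP, 80 units per hour
```

The same arithmetic also shows why a single station failure stops everything: with exactly one piece per station there is no buffer stock to absorb the interruption.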
MISTAKE PROOFING
DEFINITION
Mistake proofing is a process improvement system that prevents personal injury,
promotes job safety, and prevents faulty products and machine damage. It is also
known as the Shingo method, poka-yoke, error proofing, fail-safe design, and by
many other names.
THE STRATEGY
Establish a team approach to mistake proof systems that will focus on both internal
and external customer concerns with the intention of maximizing value. This will
include quality indicators such as on-line inspection and probe studies.
The strategy involves:
• Concentrating on the things that can be changed rather than on the things
that are perceived as having to be changed to improve process performance
• Developing the training required to prepare team members
• Involving all the appropriate people in the mistake proof systems process
• Tracking quality improvements using in-plant and external data collection
systems (before/after data)
• Developing a “core team” to administer the mistake proof systems process.
This core team will be responsible for tracking the status of the mistake
proof systems throughout the implementation stages.
• Creating a communication system for keeping plant management, local
union committee, and the joint quality committee informed of all
progress — as applicable
• Developing a process for sharing the information with all other departments and/or plants — as applicable
• Establishing the mission statement for each team and objectives that will
identify the philosophy of mistake proof systems as a means to improve
quality
A typical mission statement may read: to protect our customers by developing mistake proofing systems that will detect or eliminate defects
while continuing to pursue variation reduction within the process.
• Developing timing for completion of each phase of the process
• Establishing cross-functional team involvement with your customer(s)
Typical objectives may be to:
• Become more aware of quality issues that affect our customer
• Focus our efforts on eliminating these quality issues from the production
process
• Expose the conditions that cause mistakes
• Understand source investigation and recognize its role in preventing
defects
• Understand the concepts and principles that drive mistake prevention
• Recognize the three functional levels of mistake proofing systems
• Be knowledgeable of the relationships between mistake proof system
devices and defects
• Recognize the key mistake proof system devices
• Share the mistake proof system knowledge with all other facilities within
the organization
DEFECTS
Many things can and often do go wrong in our ever-changing and increasingly
complex work environment. Opportunities for mistakes are plentiful and often lead
to defective products. Defects are not only wasteful but result in customer dissatisfaction if not detected before shipment.
The philosophy behind mistake proof systems suggests that if we are going to
be competitive and remain competitive in a world market we cannot accept any
number of defects as satisfactory.
In essence, not even one defect can be tolerated. Mistake proof systems are a
simple method for making this philosophy become a daily practice. Simple concepts
and methods are used to accomplish this objective.
Humans tend to be forgetful, and as a result, we make mistakes. In a system
where blame is practiced and people are held accountable for their mistakes and
for mistakes within the process, we discourage the worker and lower individual
morale, yet the problem continues and remains unsolved.
A MISTAKE PROOF SYSTEM IS A TECHNIQUE FOR AVOIDING ERRORS IN THE WORKPLACE
The concept of error proof systems has been in existence for a long time, only we
have not attempted to turn it into a formalized process. It has often been referred to
as idiot proofing, goof proofing, fool proofing, and so on. These terms often have a
negative connotation that appears to attack the intelligence of the individual involved
and therefore are not used in today’s work environment. For this reason we have
selected the term “mistake proof system.” The idea behind a mistake proof system
is to reduce the opportunity for human error by taking over tasks that are repetitive
or actions that depend solely upon memory or attention. With this approach, we
allow the worker to maintain dignity and self-esteem without the negative connotation that the individual is an idiot, goof, or fool.
TYPES OF HUMAN MISTAKES
Forgetfulness
There are times when we forget things, especially when we are not fully concentrating or focusing. An example that can result in serious consequences is the failure
to lock out a piece of equipment or machine we are working on. To preclude this,
precautionary measures can be taken: post lock out instructions at every piece of
equipment and/or machine; have an ongoing program to continuously alert operators
of the danger.
Mistakes of Misunderstanding
Jumping to conclusions before we are familiar with the situation often leads to
mistakes. For example, visual aids are often prepared by engineers who are thoroughly familiar with the operation or process. Since the aid is completely clear from
their perspective, they may make the assumption (and often do) that the operator
fully understands as well. This may not be true. To preclude this, we may test this
hypothesis before we create an aid; provide training/education; standardize work
methods and procedures.
Identification Mistakes
Situations are often misjudged because we view them too quickly or from too far
away to clearly see them. One example of this type of mistake is misreading the
identification code on a component of a piece of equipment and replacing that
component with the wrong part. To prevent these errors, we might improve legibility
of the data/information; provide training; improve the environment (lighting); reduce
boredom of the job, thus increasing vigilance and attentiveness.
Amateur Errors
Lack of experience often leads to mistakes. Newly hired workers will not know the
sequence of operations to perform their tasks and often, due to inadequate training,
will perform those tasks incorrectly. To prevent amateur errors, provide proper
training; utilize skill building techniques prior to job assignment; use work standardization.
Willful Mistakes
Willful errors result when we choose to ignore the rules. One example of this type
of error is placing a rack of material outside the lines painted on the floor that clearly
designate the proper location. The results can be damage to the vehicle or the material
or perhaps an unsafe work condition. To prevent this situation, provide basic education and/or training; require strict adherence to the rules.
Inadvertent Mistakes
Sometimes we make mistakes without even being aware of them. For example, a
wrong part might be installed because the operator was daydreaming. To minimize
this, we may standardize the work, through discipline if necessary.
Slowness Mistakes
When our actions are slowed by delays in judgment, mistakes are often the result.
For example, an operator unfamiliar with the operation of a fork lift might pull the
wrong lever and drop the load. Methods to prevent this might be: skill building;
work standardization.
Lack of Standards Mistakes
Mistakes will occur when there is a lack of suitable work standards or when workers
do not understand instructions. For example, two inspectors performing the same
inspection may have different views on what constitutes a reject. To prevent this,
develop operational definitions of what the product is expected to be that are clearly
understood by all; provide proper training and education.
Surprise Mistakes
When the function or operation of a piece of equipment suddenly changes without
warning, mistakes may occur. For example, power tools that are used to supply
specific torque to a fastener will malfunction if an adequate oil supply is not
maintained in the reservoir. Errors such as these can often be prevented by work
standardization; having a total productive maintenance system in place.
Intentional Mistakes
Mistakes are sometimes made deliberately by some people. These fall in the category
of sabotage. Disciplinary measures and basic education are the only deterrents to
these types of mistakes.
There are many reasons for mistakes to happen. However, almost all of these
can be prevented if we diligently expend the time and effort to identify the basic
conditions that allow them to occur, such as:
• When they happen
• Why they happen
and then determine what steps are needed to prevent these mistakes from recurring —
permanently.
The mistake proof system approach and the methods used give you an opportunity to prevent mistakes and errors from occurring.
DEFECTS AND ERRORS
Mistakes are generally the cause of defects. Can mistakes be avoided? To answer this
question requires us to realize that we have to look at errors from two perspectives:
1. Errors are inevitable: People will always make mistakes. Accepting this
premise makes one question the rationale of blaming people when mistakes are committed. Maintaining this “blame” attitude generally results
in defects. Also, quite often errors are overlooked when they occur in the
production process. To avoid blame, the discovery of defects is postponed
until the final inspection, or worse yet, until the product reaches the
customer.
2. Errors can be eliminated: If we utilize a system that supports (a) proper
training and education and (b) fostering the belief that mistakes can be
prevented, then people will make fewer mistakes. This being true, it is
then possible that mistakes by people can be eliminated.
Sources of mistakes may be any one of the six basic elements of a process:
1. Measurement
2. Material
3. Method
4. Manpower
5. Machinery
6. Environment
Each of these elements may have an effect on quality as well as productivity.
To make quality improvements, each element must be investigated for potential
mistakes of operation. To reduce defects, we must recognize that defects are a
TABLE 5.4
Examples of Mistakes and Defects

Mistake                                               Resulting Defect
Failure to put gasoline in the snow blower            Snow blower will not start
Failure to close window of unit being tested          Seats and carpet are wet
Failure to reset clock for daylight savings time      Late for work
Failure to show operator how to properly              Defective or warped product
  assemble components
Proper weld schedule not maintained on                Bad welds, rejectable and/or scrap material
  welding equipment
Low charged battery placed in griptow                 Griptow will not pull racks, resulting in lost
                                                        production, downtime, etc.
consequence of the interaction of all six elements and the actual work performed in
the process. Furthermore, we must recognize that the role of inspection is to audit
the process and to identify the defects. It is an appraisal system and it does nothing
for prevention. Product quality is changed only by improving the quality of the
process. Therefore, the first step toward elimination of defects is to understand the
difference between defects and mistakes (errors):
Defects are the results.
Mistakes are the causes of the results.
Therefore, the underlying philosophy behind the total elimination of defects
begins with distinguishing between mistakes and defects. Examples of mistakes and
defects are shown in Table 5.4.
MISTAKE TYPES AND ACCOMPANYING CAUSES
The following categories with the associated potential causes are given as examples, rather than exhaustive lists:
Assembly mistakes
Inadequate training
Symmetry (parts mounted backwards)
Too many operations to perform
Multiple parts to select from with poor or no identification
Misread or unfamiliar with parts/products
Tooling broken and/or misaligned
New operator
Processing mistakes
Part of process omitted (inadvertent/deliberate)
Fixture inadequate (resulting in parts being set in incorrectly)
Symmetrical parts (wrong part can be installed)
Irregular shaped/sized part (vendor/supplier defect)
Tooling damaging part as it is installed
Carelessness (wrong part or side installed)
Process/product requirements not understood (holes punched in wrong location)
Following instructions for wrong process (multiple parts)
Using incorrect tooling to complete operations (impact versus torque
wrench)
Inclusion of wrong part or item
Part codes wrong/missing
Parts for different products/applications mixing together
Similar parts confused
Misreading prints/schedules/bar codes etc.
Operations mistakes
Process elements assigned to too many operators
Operator error
Consequential results
Setup mistakes
Improper alignment of equipment
Process or instructions for setup not understood or out of date
Jigs and fixtures mislocated or loose
Fixtures or holding devices will accept mislocated components
Assembly omissions — missing parts
Special orders (high or low volume parts missing)
No inspection capability (hidden parts omitted)
Substitutions (unexpected deviations from normal production)
Misidentified build parameters (heavy duty versus standard)
Measurement or dimensional mistakes
Flawed measuring device
Operator skill in measuring
Inadequate system for measuring
Using “best guess” system
Processing omissions
Operator fatigue (part assembled incorrectly/omitted)
Cycle time (incomplete/poor weld)
Equipment breakdown (weld omitted)
New operator
Tooling omitted
Automation malfunction
Instructions for operation incomplete/missing
Job not set up for changeover
Operator not trained/improper training
Sequence violation
Mounting mistakes
Symmetry (parts can be installed backwards)
Tooling wrong/inadequate
Operator dependency (parts installed upside down)
Fixtures or holding devices accept mispositioned parts
Miscellaneous mistakes
Inadequate standards
Material misidentified
No controls on operation
Counting system flawed/operating incorrectly
Print/specifications incorrect
SIGNALS THAT ALERT
Signals that “alert” are conditions present in a process that commonly result in
mistakes. Some signals that alert are:
• Many parts/mixed parts
• Multiple steps needed to perform operation
• Adjustments
• Tooling changes
• Critical conditions
• Lack of or ineffective standards
• Infrequent production
• Extremely high volume
• Part symmetry
• Asymmetry
• Rapid repetition
• Environmental
  • Housekeeping
  • Material handling
  • Poor lighting
  • Foreign matter and debris
  • Other
Ten of the most common types of mistakes are:
• Assembly mistakes
• Processing mistakes
• Inclusion of wrong part or item
• Operations mistakes
• Setup mistakes
• Assembly omissions (missing parts)
• Measurement mistakes
• Process omissions
• Mounting mistakes
• Miscellaneous
APPROACHES TO MISTAKE PROOFING
As we already mentioned, any mistake proofing system is a process that focuses on
producing zero defects by eliminating the human element from assembly. There are
two approaches to this — see Figure 5.7.
(Product flows from Operation #1 to Operation #2 and then ships to the customer; both systems act within this flow.)

Reactive systems:
• Focus on defect identification
• Alert (signal) the operator that a failure has occurred
• Provide immediate feedback to the operator
• Point to the area of the cause of the defect
• Point to the apparent cause (symptom) of the defect; production stops until
the defective item is removed or repaired
• Protect the customer from receiving defective product
• Do not prevent mistakes or defects from recurring

Proactive systems:
• Focus on defect prevention
• Utilize source inspection to detect when a mistake is about to occur, before
a defect is produced
• Halt production before the mistake occurs
• Utilize ideal mistake proofing methods that eliminate the possibility of
mistakes so that defective product cannot be produced
• Perform 100% inspection without inspection costs
• Prevent defects and mistakes from occurring

FIGURE 5.7 Approaches to mistake proofing.
1. Reactive systems (defect detection)
This approach relies on halting production in order to sort out the good
from the bad for repair or scrap.
2. Proactive systems (defect prevention)
This approach seeks to eliminate mistakes so that defective products are
not produced, production downtime is reduced, costs are lowered, and
customer satisfaction is increased.
Major Inspection Techniques
Figure 5.8 shows major inspection techniques. Source inspection utilizing mistake
proofing system devices is the most logical method of defect prevention.
Mistake Proof System Devices
Mistake proof system “devices” are simple and inexpensive. There are essentially
two types of devices used:
1. Detectors (sensors) — to detect mistakes that have occurred or are about
to occur
2. Preventers — to prevent mistakes from occurring
(Inspection may occur between Operation #1 and Operation #2, or on the finished product before shipment to the customer.)

Source inspection: A defect is a result of a mistake. Source inspection looks at the cause(s) of the mistake, rather than the actual defect. By conducting inspection at the source, mistakes can be corrected before they become defects. Inspection utilizing mistake proofing system devices to automatically inspect for mistakes or defective operating conditions is an effective, low-cost solution for eliminating defects and the resulting defective product.

Informative inspection: Looks at the cause(s) of defects and feeds this information back to the appropriate personnel/process so that defects can be reduced/eliminated.

Judgment inspection: Inspects the finished product, sorting “good” from “bad” (bad product goes to scrap or repair) before shipment. This method prevents defective product from being delivered to the customer but does nothing to prevent production of defective products.

FIGURE 5.8 Major inspection techniques.
Devices Used as “Detectors of Mistakes”
When used as detectors (sensors), these devices:
1. Provide prompt feedback (signals) to the operator that a mistake has
occurred or is about to occur
2. Initiate an action or actions to prevent further mistakes from occurring
Devices Used as “Preventers of Mistakes”
When used to prevent mistakes, these devices prevent mistakes from occurring or
initiate an action or actions to prevent mistakes from occurring.
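The detector/preventer distinction above can be made concrete in code: a preventer refuses to let the mistaken operation happen at all, while a detector senses a mistake during or after the operation and signals the operator at once. A minimal sketch, in which the keyed fixture, the torque threshold, and the message strings are all invented for illustration:

```python
def fixture_accepts(part_orientation: str) -> bool:
    """Preventer: a keyed fixture physically rejects a backwards part,
    so the mistaken operation cannot occur in the first place."""
    return part_orientation == "keyed-face-up"

def detector_check(torque_nm: float, minimum_nm: float = 8.0) -> str:
    """Detector (sensor): flags a mistake that has occurred so the
    operator gets prompt feedback before the part moves on."""
    if torque_nm < minimum_nm:
        return "ALERT: under-torqued fastener, stop and rework"
    return "OK"

# The preventer blocks the mistake outright; the detector flags it promptly.
assert not fixture_accepts("upside-down")
print(detector_check(6.5))
```

The preventer is the stronger device because it makes the defect impossible rather than merely visible, which is why the text favors it wherever the process allows.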
First function: Eliminates the mistake at the source before it occurs.
Second function: Detects mistakes as they are occurring, but before they result
in defects.
Third function: Detects a defect that has occurred before it is sent to the
next operation or shipped to the customer.

FIGURE 5.9 Functions of mistake-proofing devices.
EQUATION FOR SUCCESS
To be successful with a mistake proofing initiative one must keep in mind the
following equation:
Source investigation + Mistake proofing = Defect free system
However, to reach the state of a defect free system, in addition to signals and
inspection we must also incorporate appropriate sensors to identify, stop,
and/or correct a problem before it goes to the next operation. Sensors are very
important in mistake proofing, so let us look at them a little closer.
A sensor is an electrical device that detects and responds to changes in a given
characteristic of a part, assembly, or fixture — see Figure 5.9. A sensor can, for
example, verify with a high degree of accuracy the presence and position of a part
on an assembly or fixture and can identify damage or wear. Some examples of types
of sensors and typical uses are:
Welding position indicators: Determine changes in metallic composition, even
on joints that are not visible at the surface
Fiber sensors: Observe linear interruptions utilizing fiber optic beams
Metal passage detectors: Determine if parts have a metal content or mixed
metal content, for example in resin materials
Beam sensors: Observe linear interruptions using electronic beams
Trimetrons: Exclude or detect preset measurement values using a dial gauge
(Value limits can be set on plus or minus sides, as well as on nominal
values.)
Tap sensors: Identify incomplete or missing tap screw machining
Color marking sensors: Identify differences in color or colored marking
Area sensors: Determine random interruptions over a fixed area
Double feed sensors: Identify when two products are fed at the same time
Positioning sensors: Determine correct/incorrect positioning
Vibration sensors: Identify product passage, weld position, broken wires, loose
parts, etc.
Displacement sensors: Identify thickness, height, warpage, surface irregularities, etc.
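The dial-gauge ("Trimetron") style of sensor above is essentially a limit comparator: it accepts or rejects a measured value against a nominal with independently set plus and minus limits, and signals the line before the part moves on. The sketch below is illustrative only; the part name, tolerances, and stop/release messages are invented, not taken from any particular device:

```python
from dataclasses import dataclass

@dataclass
class LimitCheck:
    """Trimetron-style comparator: nominal value with independent
    plus and minus limits, as described for the dial-gauge sensor."""
    nominal: float
    plus_tol: float
    minus_tol: float

    def inspect(self, measured: float) -> bool:
        # Accept only within [nominal - minus_tol, nominal + plus_tol].
        return (self.nominal - self.minus_tol) <= measured <= (self.nominal + self.plus_tol)

def source_inspection_gate(check: LimitCheck, measured: float) -> str:
    """Proactive poka-yoke behavior: halt before the mistake becomes a defect."""
    if check.inspect(measured):
        return "PASS: release part to next operation"
    return "STOP: alert operator, hold part at this station"

# Hypothetical shaft diameter: nominal 10.00 mm, +0.05 / -0.02 mm limits.
shaft_diameter = LimitCheck(nominal=10.00, plus_tol=0.05, minus_tol=0.02)
print(source_inspection_gate(shaft_diameter, 10.03))  # PASS
print(source_inspection_gate(shaft_diameter, 9.97))   # STOP: below the 9.98 lower limit
```

Asymmetric limits matter here: as the text notes, value limits can be set on the plus and minus sides independently, as well as on nominal values.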
Typical Error Proofing Devices
Some of the most common mistake proofing devices used are:
1. Sensors
2. Sequence restrictors
3. Odd part out method
4. Limit or microswitches, proximity detectors
5. Templates
6. Guide rods or pins
7. Stoppers or gates
8. Counters
9. Standardized methods of operation and/or material usage
10. Detect delivery chute
11. Critical condition indicators
12. Probes
13. Mistake proof your mistake proof system
and so on
REFERENCES
Boothroyd, G. and Dewhurst, P., Product Design for Assembly, Boothroyd Dewhurst, Inc.,
Wakefield, RI, 1991.
MIL-HDBK-727, Design Guidance for Producibility, U.S. Army Material Command, Washington, DC, 1986.
Mitsubishi, Mitsubishi Design Engineering Handbook, Mitsubishi, Kobe, Japan, 1976.
Munro, A., S. Munro and Associates, Inc., Design for Manufacture, training manual, 1992.
SELECTED BIBLIOGRAPHY
Anon., How To Achieve Error Proof Manufacturing: Poka-Yoke and Beyond: A Technical
Video Tutorial, SAE International, undated (may be ordered online for $895 [$25
preview copy]).
Anon., 21st Century Manufacturing Enterprise Strategy, An Industry Led View, Volumes 1
and 2, Iacocca Institute, Lehigh University, PA, 1991.
Anon., Mistake-Proofing for Operators: The ZQC System, The Productivity Press Development Team, Productivity Press, Portland, OR, 1997.
Anon., Manufacturing Management Handbook for Program Manager, ABN Fort Belvoir, VA,
1982.
Anon., Product Engineering Design Manual, Litton Industries, Beverly Hills, CA, 1978.
Azuma, L. and Tada, M., A case history development of a foolproofing interface documentation system, IEEE Transactions on Software Engineering, 19, 765–773, 1993.
Bandyopadhyay, J.K., Poka Yoke systems to ensure zero defect quality manufacturing, International Journal of Management, 10(1), 29–33, 1993.
Barkers, R., Motion and Time Study: Design and Measurement of Work, Cot Loge Book
Company, Los Angeles, 1970.
Barkman, W.E., In-Process Quality Control for Manufacturing, Marcel Dekker, New York,
1989. (Preface and Chapter 3 are of particular interest.)
Bayer, P.C., Using Poka Yoke (mistake proofing devices) to ensure quality, IEEE 9th Applied
Power Electronics Conference Proceedings, 1, 201–204, 1994.
Bodine, W.E., The Trend: 100 Percent Quality Verification, Production, June 1993, pp. 54–55.
Bosa, R., Despite fuzzy logic and neural networks, operator control is still a must, CMA, 69(7), 1995.
Boothroyd, G. and Murch, P., Automatic Assembly, Marcel Dekker, New York, 1982.
Brehmer, B., Variable errors set a limit to adaptation, Ergonomics 33, 1231–1239, 1990.
Brall, J., Product Design for Manufacturing, McGraw-Hill, New York, 1986.
Casey, S., Set Phasers on Stun and Other True Tales of Design, Technology, and Human
Error, Aegean, Santa Barbara, CA, 1993.
Chase, R.B., and Stewart, D.M., Make Your Service Fail-safe, Sloan Management Review,
Spring 1994, pp. 35–44.
Chase, R.B. and Stewart, D.M., Designing Errors Out, Productivity Press, Portland, OR,
1995. Note of interest: Productivity Press has discontinued sales of this book (a very
sad outcome). Some copies may be available in local bookstores. It is both more
readable and broader in application than Shingo but does not have a catalog of
examples as Shingo does.
Damian, J., “Agile Manufacturing” Can Revive U.S. Competitiveness, Industry Study Says —
A Modest Proposal, Electronics, Feb. 1992, pp. 34, 42–44.
Dove, R., Agile and Otherwise — Measuring Agility: The Toll of Turmoil, Production, Jan.
1995, pp. 12–15.
Dove, R., Agile and Otherwise — The Challenges of Change, Production, Feb. 1995, pp.
14–16.
Gross, N., This Is What the U.S. Must Do To Stay Competitive, Business Week, Dec. 1991,
pp. 21–24.
Grout, J.R., Mistake-Proofing Production, working paper 75275–0333, Cox School of Business, Southern Methodist University, Dallas, 1995.
Grout, J.R. and Downs, B.T., An Economic Analysis of Inspection Costs for Failsafing
Attributes, working paper 95–0901, Cox School of Business, Southern Methodist
University, Dallas, 1995.
Grout, J.R. and Downs. B.T., Fail-safing and Measurement Control Charts, 1995 Proceedings,
Decision Sciences Institute Annual Meetings, Boston, MA, 1995.
Henricks, M., Make No Mistake, Entrepreneur, Oct. 1996, pp. 86–89. (Last quote should
read “average net savings of around $2500 a piece...” not average cost.)
Hinckley, C.M. and Barkan, P., The role of variation, mistakes, and complexity in producing
nonconformities, Journal of Quality Technology 27(3), 242–249, 1995.
Jaikumar, R., Manufacturing a’la Carte Agile Assembly Lines, Faster Development Cycles,
200 Years to CIM, IEEE Spectrum, 76–82, Sept. 1993.
Kaplan, G., Manufacturing a’la Carte Agile Assembly Lines, Faster Development Cycles,
Introduction, IEEE Spectrum, 46–51, Sept. 1993.
Kelly, K., Your Job Managing Error is Out of Control, Addison-Wesley, New York, 1994.
Kletz, T., Plant Design for Safety: A User-Friendly Approach, Hemisphere Publishing Corp.,
New York, 1991.
Lafferty, J.P., Cpk of 2 Not Good Enough for You? Manufacturing Engineering, Oct. 1992,
p. 10.
Ligus, R.G., Enterprise Agility: Jazz in the Factory, Industrial Engineering, Nov. 1994, pp.
19–23.
Lucas Engineering and Systems Ltd., Design for Manufacture Reference Tables, University
of Hull, Hull, England, Lucas Industries, Jan. 1994.
Manji, J.F., Sharpen Your Competitive Edge Today and Into the 21st Century, CALS El
Journal, date unknown, pp. 56–61.
Marchwinski, C., Ed., Company Cuts the Risk of Defects During Assembly and Maintenance,
MfgNet: The Manufacturer’s Internet Newsletter, Productivity, Inc. Norwalk, CT, 1996.
Marchwinski, C., Ed., Mistake-proofing, Productivity, 17(3), 1–6, 1995.
Marchwinski, C., Ed., SPC vs. ZQC, Productivity, 18(1), 1–4, 1997. (Note: ZQC is
another name for mistake proofing. It stands for Zero Quality Control.)
McClelland, S., Poka-Yoke and the Art of Motorcycle Maintenance, Sensor Review, 9(2), 63,
1989.
Monden, Y., Toyota Production System, Industrial Engineering and Management Press, Norcross, GA, 1983, pp. 10, 137–154.
Munro, A., S. Munro and Associates, Inc., Design for Manufacture, training manual, 1994.
Munro, A., S. Munro and Associates, Inc., Trainers for Design for Manufacture, analysis,
undated.
Myers, M., Poka/Yoke-ing Your Way to Success, Network World, Sept. 11, 1995, p. 39.
Nakajo, T. and Qume, H., The principles of foolproofing and their application in manufacturing, Reports of Statistical Application Research, Union of Japanese Scientists and
Engineers, 32(2), 10–29, 1985.
Niebel, C. and Baldwin, J., Designing for Production, Irwin, Homewood, IL, 1963.
Nieber, C. and Draper, G., Product Design and Process Engineering, McGraw-Hill, New
York, 1974.
Noaker, P.M., The Search for Agile Manufacturing, Manufacturing Engineering, Nov. 1994,
pp. 57–63.
Norman, D.A., The Design of Everyday Things, Doubleday, New York, 1989.
O’Connor, L., Agile Manufacturing in a Responsive Factory, Mechanical Engineering, July
1994, pp. 43–46.
Otto, K. and Wood, K., Product Design: Techniques in Reverse Engineering and New Product
Development, Prentice Hall, Upper Saddle River, NJ, 2001.
Port, O., Moving Past the Assembly Line — “Agile” Manufacturing Systems May Bring a
U.S. Revival, Business Week/Re-Inventing America, 1992, pp. 17–20.
Reason, J., Human Error, Cambridge University Press, New York, 1990.
Robinson, A.G. and Schroeder, D.M., The limited role of statistical quality control in a zero
defects environment, Production and Inventory Management Journal, 31(3), 60–65,
1990.
Robinson, A.G., Ed., Modern Approaches to Manufacturing Improvement: The Shingo System,
Productivity Press, Portland, OR, 1991.
Shandle, J., Sandia Labs Launches Agile Factory Program, Electronics, Mar. 8, 1993, pp.
48–49.
SL3151Ch05Frame Page 222 Thursday, September 12, 2002 6:10 PM
222
Six Sigma and Beyond
Sheridan, J.H., A Vision of Agility, Industry Week, Mar. 21, 1994, pp. 22–24.
Shingo, S., Zero Quality Control: Source Inspection and the Poka-Yoke System, Trans. A.P.
Dillion, Productivity Press, Portland, OR, 1986.
Shingo S., A Study of the Toyota Production System from an Industrial Engineering Viewpoint,
Productivity Press, Portland, OR, 1989, online excerpts.
Steven, S. and Bowen., H.K., Decoding the DNA of the Toyota Production System, Harvard
Business Review, Sept./Oct, 1999, pp. 97–106.
Texas Instruments, Design to Cost: An Introduction, Corporate Engineering Council, Texas
Instruments, Inc., Dallas, 1977.
Trucks, H.E., Designing for Economical Production, SME, Dearborn, MI, 1974.
Tsuda, Y., Implications of fool proofing in the manufacturing process, in Quality Through
Engineering Design, Kuo, W., Ed., Elsevier, New York, 1993.
Vasilash, G.S., Re-engineering, Re-energizing, Objects and Other Issues of Interest, Production, Jan. 1995, pp. 38–41.
Vasilash, G.S., On training for mistake-proofing, Production, Mar. 1995, pp. 42–44.
Ward, C., What Is Agility? Industrial Engineering, Nov. 1994, pp. 38–44.
Warm, J.S., An introduction to vigilance, in Sustained Attention in Human Performance,
Warm, J.S., Ed., Wiley, New York, 1984.
Weimer, G., Is an American Renaissance at Hand? Industry Week, May 1992, pp. 14–17.
Weimer, G., U.S.A. 2006: Industry Leader or Loser, Industry Week, Jan. 20, 1992, pp. 31–34.
SL3151Ch06Frame Page 223 Thursday, September 12, 2002 6:09 PM
6 Failure Mode and Effect Analysis (FMEA)
This chapter has been developed to assist and instruct design, manufacturing, and
assembly engineers in the development and execution of a potential Failure Mode
and Effect Analysis (FMEA) for design considerations, manufacturing, assembly
processes, and machinery.
An FMEA is a methodology that helps identify potential failures and recommends corrective action(s) for fixing these failures before they reach the customer.
A concept (system) FMEA is conducted as early as possible to identify serious
problems with the potential concept or design. A design FMEA is conducted prior
to production and involves the listing of potential failure modes and causes. An
FMEA identifies actions required to prevent defects and thus keeps products that
may fail or not be fit from reaching the customer. Its purpose is to analyze the
product’s design characteristics relative to the planned manufacturing or assembly
process to ensure that the resultant product meets customer needs and expectations.
When potential failure modes are identified, corrective action can be initiated to
eliminate them or continuously reduce their potential occurrence. The FMEA also
documents the rationale for the manufacturing or assembly process involved.
Changes in customer expectations, regulatory requirements, attitudes of the
courts, and the industry’s needs require disciplined use of a technique to identify
and prevent potential problems. That disciplined technique is the FMEA.
A process FMEA is an analytical technique that identifies potential product-related process failure modes, assesses the potential customer effects of the failures,
identifies the potential manufacturing or assembly process causes, and identifies
significant process variables to focus controls for prevention or detection of the
failure conditions. (Also, process FMEAs can assist in developing new machine or
equipment processes. The methodology is the same; however, the machine or equipment being designed would be considered the product.)
A machinery FMEA is a methodology that helps in the identification of possible
failure modes and determines the cause for and effect of these failures. The focus
of the machinery FMEA is to eliminate any safety issues and to resolve them
according to specified procedures between customer and supplier. In addition, the
purpose of this particular FMEA is to review both design and process with the intent
to reduce risk.
All FMEAs utilize occurrence and detection probability in conjunction with
severity criteria to develop a Risk Priority Number (RPN) for prioritization of
corrective action considerations. This is a major departure in methodology from Failure Mode, Effects, and Criticality Analysis (FMECA), which focuses primarily on the severity of the failure as a priority characteristic.
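The RPN arithmetic can be sketched in a few lines; a minimal illustration in Python (the 1–10 rating scales are the customary convention, and the failure modes and ratings below are invented for the example):

```python
# Risk Priority Number: RPN = severity x occurrence x detection,
# each conventionally rated on a 1-10 scale.
def rpn(severity: int, occurrence: int, detection: int) -> int:
    for rating in (severity, occurrence, detection):
        if not 1 <= rating <= 10:
            raise ValueError("ratings must be between 1 and 10")
    return severity * occurrence * detection

# Hypothetical failure modes with (S, O, D) ratings, ranked worst-first
# so corrective-action effort goes to the highest-risk items.
modes = {
    "seal leaks": (7, 4, 3),
    "bolt loosens": (8, 2, 2),
    "surface corrodes": (4, 6, 5),
}
ranked = sorted(modes, key=lambda m: rpn(*modes[m]), reverse=True)
print(ranked)  # highest-RPN failure mode first
```

Note that a low-severity, hard-to-detect failure can outrank a severe but well-controlled one, which is exactly the departure from severity-only prioritization described above.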
In its most rigorous form, an FMEA summarizes the engineer’s thoughts while
developing a process. This systematic approach parallels and formalizes the mental
discipline that an engineer normally uses to develop processing requirements.
DEFINITION OF FMEA
FMEA is an engineering “reliability tool” that:
1. Helps to define, identify, prioritize, and eliminate known and/or potential
failures of the system, design, or manufacturing process before they reach
the customer, with the goal of eliminating the failure modes or reducing
their risks
2. Provides structure for a cross-functional critique of a design or a process
3. Facilitates inter-departmental dialog (It is much more than a design
review.)
4. Is a mental discipline “great” engineering teams go through when critiquing what might go wrong with the product or process
5. Is a living document that reflects the latest product and process actions
6. Ultimately helps prevent and not react to problems
7. Identifies potential product- or process-related failure modes before they
happen
8. Determines the effect and severity of these failure modes
9. Identifies the causes and probability of occurrence of the failure modes
10. Identifies the controls and their effectiveness
11. Quantifies and prioritizes the risks associated with the failure modes
12. Develops and documents action plans that will occur to reduce risk
TYPES OF FMEAS
There are many types of FMEAs (see Figure 6.1). However, the main ones are:
• System/Concept — S/CFMEA. These are driven by system functions. A
system is an organized set of parts or subsystems to accomplish one or
more functions. System FMEAs are typically done very early, before
specific hardware has been determined.
• Design — DFMEA. A design FMEA is driven by part or component
functions. A design/part is a unit of physical hardware that is considered
a single replaceable part with respect to repair. Design FMEAs are typically done later in the development process when specific hardware has
been determined.
• Manufacturing or Process — PFMEA. A process FMEA is driven by
process functions and part characteristics. A manufacturing process is a
sequence of tasks that is organized to produce a product. A process FMEA
can involve fabrication as well as assembly.
FIGURE 6.1 Types of FMEA. The figure distinguishes the design FMEA, the process FMEA (with its 6M inputs: machines, methods, material, manpower, measurement, and environment), the system FMEA (spanning component, subsystem, and system), and the machinery FMEA. Machinery FMEA (focus: design changes to lower life cycle costs; objective: improve the reliability and maintainability of the machinery and equipment). System FMEA (focus: minimize failure effects on the system; objective: maximize system quality, reliability, cost, and maintainability). Process FMEA (focus: minimize production process failure effects on the system; objective: maximize the system quality, reliability, cost, maintainability, and productivity).
• Machinery — MFMEA is driven by low volume machinery and equipment
where large-scale testing is impractical prior to production and manufacture of the machinery and equipment. The MFMEA focuses on design
changes to lower life cycle costs by improving the reliability and maintainability of the machinery and equipment.
Note: Service, software, and environmental FMEAs are additional variations.
However, in this chapter we will focus only on design, process, and machinery
FMEAs. The other FMEAs follow the same rationale as the design and process
FMEAs.
IS FMEA NEEDED?
If any answer to the following questions is positive, then you need an FMEA:
• Are customers becoming more quality conscious?
• Are reliability problems becoming a big concern?
• Are regulatory requirements harder to meet?
• Are you doing too much problem solving?
• Are you addicted to problem solving?
“Addiction” to problem solving is a very important consideration in the application of an active FMEA program. When the thrill and excitement of solving problems become dominant, your organization is addicted to problem solving rather than preventing problems in the first place. A proper FMEA will help break your addiction by:
• Reducing the percentage of time devoted to problem solving
• Increasing the percentage of time in problem prevention
• Increasing the efficiency of resource allocation
Note: The emphasis is always on reducing complexity and engineering changes.
BENEFITS OF FMEA
When properly conducted, product and process FMEAs should lead to:
1. Confidence that all risks have been identified early and appropriate actions
have been taken
2. Priorities and rationale for product and process improvement actions
3. Reduction of scrap, rework, and manufacturing costs
4. Preservation of product and process knowledge
5. Reduction of field failures and warranty cost
6. Documentation of risks and actions for future designs or processes
For a comparison of FMEA benefits with the quality lever, Figure 6.2 may help.
In essence, one may argue that the most important benefit of an FMEA is that
it helps identify hidden costs, which are quite often greater than visible costs. Some
of these costs may be identified through:
1. Customer dissatisfaction
2. Development inefficiencies
3. Lost repeat business (no brand loyalty)
4. High employee turnover
5. And so on
FMEA HISTORY
This type of thinking has been around for hundreds of years. It was first formalized
in the aerospace industry during the Apollo program in the 1960s. The initial
automotive adoption was in the 1970s in the area of safety issues. FMEA was
required by QS-9000 and the advanced product quality planning process in 1994
for all automotive suppliers. It has now been adopted by many other industries.
FIGURE 6.2 Payback effort. The quality lever: the payback-to-effort ratio is 100:1 for a product design fix, 10:1 for a process design fix, 1:1 for a production fix, and 1:10 for a customer fix, across the program phases of planning and definition, product design and development, manufacturing process design and development, product and process validation, and production.
INITIATION OF THE FMEA
Regardless of the type, all FMEAs should be conducted as early as possible. FMEA
studies can be carried out at any stage during the development of a product or
process. However, the ideal time to start the FMEA is:
• When new systems, designs, processes, or machines are being designed,
but before they are finalized
• When system, design, process, or machine modifications are being contemplated
• When new applications are used for the systems, designs, processes, or
machines
• When quality concerns become visible
• When safety issues are of concern
Note: Once the FMEA is initiated, it becomes a living document, is updated as
necessary, and is never really complete.
Therefore:
• “FMEA-type thinking” is central to reliability and continual improvement
in products and manufacturing processes to remain competitive in our
global marketplace. It must be understood that an FMEA conducted after
production serves as a reactive tool, and the user has not taken full
advantage of the FMEA process.
• A typical system FMEA should begin even before the program approval
stage. The design FMEA should start right after program approval and
continue to be updated through prototypes. A process FMEA should begin
just before prototypes and continue through pilot build and sometimes
into product launching. As for the MFMEA, it should also start at the
same time as the design FMEA. It is imperative for a user of an FMEA
to understand that sometimes information is not always available. During
these situations, users must do the best they can with what they have,
recognizing that the document itself is indeed a living document and will
change as more information becomes available.
• History has shown that a majority of product warranty campaigns and
automotive recalls could have been prevented by thorough FMEA studies.
GETTING STARTED
Just as with anything else, before the FMEA begins there are some assumptions and
preparations that must be taken care of. These are:
1. Know your customers and their needs.
2. Know the function.
3. Understand the concept of priority.
4. Develop and evaluate conceptual designs/processes based on your customer’s needs and business strategy.
5. Be committed to continual improvement.
6. Create an effective team.
7. Define the FMEA project and scope.
1. UNDERSTAND YOUR CUSTOMERS AND THEIR NEEDS
A product or a process may perform functions flawlessly, but if the functions are
not aligned with the customer’s needs, you may be wasting your time. Therefore,
you must:
• Determine all (internal or external) relevant customers.
• Understand the customer’s needs better than the customers understand
their own needs.
• Document the customer’s needs and develop concepts. For example, customers need:
• Chewable toothpaste
• Smokeless cigarettes
• Celery-flavored gum
• ????
In FMEA, a customer is anyone/anything that has functions/needs from your
product or manufacturing process. An easy way to determine customer needs is to
understand the Kano model — see Figure 6.3.
FIGURE 6.3 Kano model. The vertical axis runs from dissatisfied to satisfied; the horizontal axis runs from “did not do it at all” to “did it very well.” Three curves represent basic needs, performance needs, and excitement needs, and the curves migrate downward over time.
The model facilitates understanding of all the customer needs, including:
Excitement needs: Generally, these are the unspoken “wants” of the customer.
Performance needs: Generally, these are the spoken “needs” of the customer.
They serve as the neutral requirements of the customer.
Basic needs: Generally, these are the unspoken “needs” of the customer. They
serve as the very minimum of requirements.
It is important to understand that these needs are always in a state of change.
They move from basic needs to performance to excitement depending on the product
or expectation, as well as value to the customer. For example:
SYSTEM customers may be viewed as: other systems, whole product, government regulations, design engineers, and end user.
DESIGN customers may be viewed as: higher assembly, whole product, design
engineers, manufacturing engineers, government regulations, and end user.
PROCESS customers may be viewed as: the next operation, operators, design
and manufacturing engineering, government regulations, and end user.
MACHINE customers may be viewed as: higher assembly, whole product,
design engineers, manufacturing engineers, government regulations, and
end user.
Another way to understand the FMEA customers is through the FMEA team,
which must in no uncertain terms determine:
1. Who the customers are
2. What their needs are
3. Which needs will be addressed in the design/process
The appropriate and applicable response will help in developing both the function
and effects.
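The Kano categories above can be determined systematically with the commonly published Kano questionnaire, in which each need is probed with a paired functional question (“How do you feel if the product has this?”) and dysfunctional question (“How do you feel if it does not?”). A minimal sketch of that evaluation table follows; the five answer options and the mapping come from the general Kano method, not from this text, and the category names map onto the model as attractive = excitement need, one-dimensional = performance need, must-be = basic need:

```python
# Kano questionnaire evaluation: classify a need from the customer's
# answers to the paired functional/dysfunctional questions.
ANSWERS = ("like", "must-be", "neutral", "live-with", "dislike")

def kano_category(functional: str, dysfunctional: str) -> str:
    if functional == dysfunctional:
        return "questionable" if functional == "like" else "indifferent"
    if functional == "like" and dysfunctional == "dislike":
        return "performance"   # one-dimensional: spoken needs
    if functional == "like":
        return "excitement"    # attractive: unspoken wants
    if dysfunctional == "dislike":
        return "basic"         # must-be: unspoken minimum requirements
    if functional == "dislike" or dysfunctional == "like":
        return "reverse"       # customer actively does not want it
    return "indifferent"

print(kano_category("like", "dislike"))     # performance
print(kano_category("like", "neutral"))     # excitement
print(kano_category("neutral", "dislike"))  # basic
```

Run against a sample of customers, the modal category per need tells the FMEA team which needs are basic, performance, or excitement at the time of the study.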
2. KNOW THE FUNCTION
The dictionary definition of a function is: the natural, proper, or characteristic action of anything. This is very useful because it implies performance. After all, it is performance that we are focusing on in the FMEA.
Specifically, a function from an FMEA perspective is the task that a system,
part, or manufacturing process performs to satisfy a customer. To understand the
function and its significance, the team conducting the FMEA must have a thorough
list of functions to evaluate. Once this is done, the rest of the FMEA process is a
mechanical task.
For machinery, the function may be analyzed through a variety of methodologies
including but not limited to:
• Describing the design intent either through a block diagram or a P-diagram
• Identifying an iterative process in terms of what can be measured
• Describing the ideal function — what the machine is supposed to do
• Identifying relationships in verb–noun statements — function tree analysis
• Considering environmental and safety conditions
• Accounting for all R & M parameters
• Accounting for the machine’s performance conditions
• Analyzing all other measurable engineering attributes
3. UNDERSTAND THE CONCEPT OF PRIORITY
One of the outcomes of an FMEA is the prioritization of problems. It is very
important for the team to recognize the temptation to address all problems, just
because they have been identified. That action, if taken, will diminish the effectiveness of the FMEA. Rather, the team should concentrate on the most important
problems, based on performance, cost, quality, or any characteristic identified on an
a priori basis through the risk priority number.
4. DEVELOP AND EVALUATE CONCEPTUAL DESIGNS/PROCESSES BASED
ON CUSTOMER NEEDS AND BUSINESS STRATEGY
There are many methods to assist in developing concepts. Some of the most common
are:
1. Brainstorming
2. Benchmarking
3. TRIZ (the theory of inventive problem solving)
4. Pugh concept selection (an objective way to analyze and select/synthesize alternative concepts)
FIGURE 6.4 A Pugh matrix — shaving with a razor. The matrix compares alternative shaving concepts (A: chemical, B: electric, C: electrolysis, D: duct tape, E: Epilady, F: laser beam, G: straight edge, H: ?) against the datum (the basic razor) on evaluation criteria such as stubble length, pain level, manufacturing costs, and price per use. Each cell is scored + (better than the basic razor requirement), - (worse than the basic razor requirement), or S (same as the basic razor requirement), and the scores are totaled per concept.
Figure 6.4 shows what a Pugh matrix may look like for the concept of “shaving,” with the basic “razor” as the datum.
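The tally step of a Pugh matrix is simple enough to script; a minimal sketch (the concepts and criteria follow Figure 6.4, but the individual +/S/- ratings below are invented for illustration):

```python
# Pugh concept selection: score each concept against the datum (the
# basic razor) as better (+), same (S), or worse (-) per criterion,
# then total the marks for each concept.
criteria = ["stubble length", "pain level", "mfg. costs", "price/use"]
scores = {  # hypothetical ratings versus the razor datum
    "chemical":   ["+", "S", "-", "-"],
    "electric":   ["S", "+", "-", "S"],
    "laser beam": ["+", "-", "-", "-"],
}

for concept, marks in scores.items():
    totals = {mark: marks.count(mark) for mark in "+S-"}
    print(concept, totals)
```

Concepts with more pluses and fewer minuses than the others are carried forward, hybridized, or promoted to the new datum for the next iteration of the matrix.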
5. BE COMMITTED TO CONTINUAL IMPROVEMENT
Everyone in the organization and especially management must be committed to
continual improvement. In FMEA, that means that once recommendations have been
made to increase effectiveness or to reduce cost, defects, or any other characteristic,
a proper corrective action must be developed and implemented, provided it is sound
and it complements the business strategy.
6. CREATE AN EFFECTIVE FMEA TEAM
Perhaps one of the most important issues in dealing with the FMEA is that an FMEA
must be done with a team. An FMEA completed by an individual is only that
individual’s opinion and does not meet the requirements or the intent of an FMEA.
The elements of an effective FMEA team are:
• Expertise in subject (five to seven individuals)
• Multi-level/consensus based
• Representing all relevant stakeholders (those who have ownership)
• Possible change in membership as work progresses
• Cross-functional and multidisciplinary (One person’s best effort cannot
approach the knowledge of an effective cross-functional and multidisciplinary team.)
• Appropriate and applicable empowerment
The structure of the FMEA team is based on:
Core team
The experts of the project and the closest to the project. They facilitate
honest communication and encourage active participation. Support
membership may vary depending on the stage of the project.
Champion/sponsor
• Provides resources and support
• Attends some meetings
• Supports team
• Promotes team efforts and implements recommendations
• Shares authority/power with team
• Kicks off team
• The higher up in management, the better
Team leader
A team leader is the “watchdog” of the project. Typically, this function
falls upon the lead engineer. Some of the ingredients of a good team
leader are:
• Possesses good leadership skills
• Is respected by team members
• Leads but does not dominate
• Maintains full team participation
Recorder
Keeps documentation of team’s efforts. The recorder is responsible for coordinating meeting rooms and times as well as distributing meeting
minutes and agendas.
Facilitator
The “watchdog” of the process. The facilitator keeps the team on track and makes sure that everyone participates. In addition, it is the facilitator’s responsibility to make sure that team dynamics develop in a positive environment. To be effective, the facilitator must have no stake in the project, possess FMEA process expertise, and communicate assertively.
Important considerations for a team include:
• Continuity of members
• Receptive and open-minded
• Committed to success
• Empowered by sponsor
• Cross-functionality
• Multidiscipline
• Consensus
• Positive synergy
Ingredients of a motivated FMEA team include:
• Realistic agendas
• Good facilitator
• Short meetings
• Right people present
• Reach decisions based on consensus
• Open-minded, self-initiating volunteers
• Incentives offered
• Ground rules established
• One individual responsible for coordination and accountability of the FMEA project (typically the design engineer for a design FMEA and the manufacturing engineer for a process FMEA)
To make sure the effectiveness of the team is sustained throughout the project,
it is imperative that everyone concerned with the project bring useful information
to the process. Useful information may be derived from education, experience, training, or a combination of these.
At least two areas that are usually underutilized for useful information are
background information and surrogate data. Background information and supporting
documents that may be helpful to complete system, design, or process FMEAs are:
• Customer specifications (OEMs)
• Previous or similar FMEAs
• Historical information (warranty/recalls, etc.)
• Design reviews and verification reports
• Product drawings/bill of material
• Process flow charts/manufacturing routing
• Test methods
• Preliminary control and gage plans
• Maintenance history
• Process capabilities
Surrogate data are data that are generated from similar projects. They may help
in the initial stages of the FMEA. When surrogate data are used, extra caution should
be taken.
Potential FMEA team members include:
• Design engineers
• Manufacturing engineers
• Quality engineers
• Test engineers
• Reliability engineers
• Maintenance personnel
• Operators (from all shifts)
• Equipment suppliers
• Customers
• Suppliers
• Anyone who has a direct or indirect interest
• In any FMEA team effort the individuals must have interaction with
manufacturing and/or process engineering while conducting a design
FMEA. This is important to ensure that the process will manufacture
per design specification.
• On the other hand, interaction with design engineering while conducting a process or assembly FMEA is important to ensure that the design
is right.
• In either case, group consensus will identify the high-risk areas that must be addressed to ensure that the design and/or process changes are implemented for improved quality and reliability of the product.
Obviously, these lists are typical menus from which to choose an appropriate team for your project. The actual team composition for your organization will depend upon your individual project and resources.
Once the team is chosen for the given project, spend 15–20 minutes creating a
list of the biggest (however you define “biggest”) concerns for this product or
process. This list will be used later to make sure you have a complete list of functions.
7. DEFINE THE FMEA PROJECT AND SCOPE
Teams must know their assignment. That means that they must know:
• What they are working on (scope)
• What they are not working on (scope)
• When they must complete the work
• Where and how often they will meet
Two excellent tools for such an evaluation are (1) block diagram for system, design,
and machinery and (2) process flow diagram for process. In essence, part of the responsibility to define the project and scope has to do with the question “How broad is our
focus?” Another way to say this is to answer the question “How detailed do we have
to be?” This is much more difficult than it sounds and it needs some heavy discussion
from all the members. Obviously, consensus is imperative. As a general rule, the focus
is dependent upon the project and the experience or education of the team members.
Let us look at an example. It must be recognized that sometimes due to the
complexity of the system, it is necessary to narrow the scope of the FMEA. In other
words, we must break down the system into smaller pieces — see Figure 6.5.
FIGURE 6.5 Scope for DFMEA — braking system. The brake system block diagram breaks down into: master cylinder (cylinder, fluid bladder, etc.); pedals and linkages (pedal, rubber cover, cotter pins, etc.); hydraulics (rubber hose, metal tubing, proportioning valve, fittings, etc.); back plate and hardware (back plate, springs, washers, clips, etc.); caliper system (pistons, cylinder, casting, plate, etc.); rotor and studs (rotor hat, rotor, studs, etc.); and pads and hardware (friction material, substrate, rivets, clips, etc.). The FMEA scope is narrowed to a subset of these subsystems.
THE FMEA FORM
There are many forms to develop a typical FMEA. However, all of them are basically
the same in that they are made up of two parts, whether they are for system, design,
process, or machinery. A typical FMEA form consists of the header information and
the main body.
There is no standard information that belongs in the header of the form, but
there are specific requirements for the body of the form.
In the header, one may find the following information — see Figure 6.7. However, one must remember that this information may be customized to reflect one’s
industry or even the organization:
• Type of FMEA study
• Subject description
• Responsible engineer
• FMEA team leader
• FMEA core team members
• Suppliers
• Appropriate dates (original issue, revision, production start, etc.)
• FMEA number
• Assembly/part/detail number
• Current dates (drawings, specifications, control plan, etc.)
The form may be expanded to include or to be used for such matters as:
Safety: Injury is the most serious of all failure effects. As a consequence, safety is handled with an FMEA, a fault tree analysis (FTA), or a criticality
FIGURE 6.6 Scope for PFMEA — printed circuit board screen printing process. The process flow chart shows the steps with their risk ratings: develop program; load tool plate (L); load squeegee (L); load screen (L); set up machine (H); dispense paste (H); load board (L); apply paste (H); inspect print (M); a good/not-good decision with a rework loop back through wash board (M); and run, package, and ship (L). A subset of these steps is bracketed as “our scope.” Legend: L = low risk, M = medium risk, H = high risk. Note: Just as in design FMEA, sometimes it is necessary to “narrow the scope” of the process FMEA.
FMEA WORKSHEET
System FMEA ____ Design FMEA ____ Process FMEA ____ FMEA Number ____
Subject: ____________ Team Leader: ____________ Page ____ of ____
Part/Proc. ID No. __________ Date Orig. __________ Date Rev. __________
Key Date: ____________ Team Members: ___________________
FIGURE 6.7 Typical FMEA header.
FIGURE 6.8 Typical FMEA body. The body is divided into three sections (failure mode analysis, action plan, and action results) with columns for: part name or process step and function (description); potential failure mode; potential effect of failure mode; severity (S); class; potential cause of failure mode; occurrence (O); current controls; detection (D); RPN; recommended action and responsibility; target finish date; actual finish date; actions taken; the revised S, O, D, and RPN; and remarks.
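The failure-mode-analysis columns of the form map naturally onto a record type; a minimal sketch in Python (the field names paraphrase the column headings, and the sample row is invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class FMEARow:
    """One line of the FMEA body: the failure mode analysis section."""
    item_function: str          # part name or process step and function
    failure_mode: str           # potential failure mode
    effect: str                 # potential effect of failure mode
    severity: int               # S, rated 1-10
    cause: str                  # potential cause of failure mode
    occurrence: int             # O, rated 1-10
    current_controls: str
    detection: int              # D, rated 1-10
    recommended_action: str = ""

    @property
    def rpn(self) -> int:       # Risk Priority Number = S x O x D
        return self.severity * self.occurrence * self.detection

row = FMEARow("hub/retain wheel", "stud fractures", "wheel separates",
              severity=10, cause="hydrogen embrittlement", occurrence=2,
              current_controls="lot testing", detection=4)
print(row.rpn)  # 80
```

After actions are taken, the action-results section repeats S, O, and D so the recomputed RPN documents the risk reduction achieved.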
analysis (FMECA). In the traditional FTA, the starting point is the list of hazards or undesired events for which the designer must provide some solution. Each hazard becomes a failure mode, and thus it requires an analysis.
Effect of downtime: The FMEA may incorporate maintenance data to study
the effects of downtime. It is an excellent tool to be used in conjunction
with total preventive maintenance.
Repair planning: An FMEA may provide preventive data to support repair
planning as well as predictive maintenance cycles.
Access: In the world of recycling and environmental consciousness, the FMEA can provide data for teardowns as well as information about how to get at the failed component. It can be used with mistake proofing for some very unexpected positive results.
A typical body of an FMEA form may look like Figure 6.8. The details of this
form will be discussed in the following pages. We begin with the first part of the
form; that is the description in the form of:
Part name/process step and function (verb/noun)
In this area, the actual description is written in concise, exact, and simple language.
DEVELOPING THE FUNCTION
A fundamental principle in writing functions is the notion that they must be written in an action verb plus measurable noun format. Remember, a function is a task that a component, subsystem, or product must perform, described in language that everyone understands. Stay away from jargon. To identify appropriate functions, leading questions such as the following may help:
leading questions such as the following may help:
• What does the product/process do?
• How does the product/process do that?
• If a product feature or process step is deleted, what functions disappear?
• If you were this task, what are you supposed to accomplish? Why do you exist?
The priority of asking function questions for a system/part FMEA is:
1. A system view
2. A subsystem view
3. A component view
Typical functions are:
FIGURE 6.9 Function tree process. [Tree diagram: the task function sits at the far left; asking "how?" moves right through the supporting functions (ensure dependability, ensure convenience) and the enhancing functions (please senses, delight customer) to primary, secondary, and tertiary supporting/enhancing functions; asking "why?" moves back in the reverse direction toward the task function.]
• Position
• Support
• Seal in, out
• Retain
• Lubricate
ORGANIZING PRODUCT FUNCTIONS
After the brainstorming is complete, a function tree — see Figure 6.9 — can be used
to organize the functions. This is a simple tree structure to document and help
organize the functions, as follows:
Purposes of the function tree
a. To document all the functions
b. To improve team communication
c. To document complexity and improve team understanding of all the
functions
Steps
a. Brainstorm all the functions.
b. Arrange functions into function tree.
c. Test for completeness of function (how/why).
Building the function tree
Ask:
What does the product/process do?
Which component/process step does that?
How does it do that?
• Primary functions provide a direct answer to this question without conditions or ambiguity.
• Secondary functions explain how primary functions are performed.
• Continue until the answer to “how” requires using a part name,
labor operation, or activity.
• Ask “why” in the reverse direction.
• Add additional functions as needed.
The function tree process can be summarized as follows:
1. Identify the task function.
Place on the far left side of a chart pad.
2. Identify the supporting functions.
Place on the top half of the pad.
3. Identify enhancing functions.
Place on the bottom half of the pad.
4. Build the function tree.
Include the secondary/tertiary functions.
Place these to the right of the primary functions.
5. Verify the diagram: Ask how and why.
For an example of a function tree for a ball point pen (tip), see Figure 6.10.
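The how/why structure of the function tree can be sketched in code. This is a minimal sketch; the class name and the pen functions below are illustrative, not taken from Figure 6.10:

```python
# A function tree as plain objects: each node holds one verb/noun function
# statement, and its children answer "how?"; walking from a child back toward
# the root answers "why?".

class FunctionNode:
    def __init__(self, statement, subfunctions=()):
        self.statement = statement              # verb/noun function description
        self.subfunctions = list(subfunctions)  # the "how?" answers, left to right

    def how(self):
        """Return the next-level functions that explain how this one is done."""
        return [s.statement for s in self.subfunctions]

# Task function on the far left; supporting functions branch to the right.
tip = FunctionNode("make marks", [
    FunctionNode("transfer ink", [FunctionNode("rotate ball")]),
    FunctionNode("supply ink", [FunctionNode("wet ball surface")]),
])

print(tip.how())  # -> ['transfer ink', 'supply ink']
```

Verifying the tree is the same how/why test as on the chart pad: each `how()` answer one level down should, read in reverse, answer "why?" for its parent.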
FAILURE MODE ANALYSIS
The second portion of the FMEA body form deals with the failure mode analysis.
A typical format is shown in Figure 6.11.
Understanding Failure Mode
Failure mode (a specific loss of a function) is the inability of a component/subsystem/system/process/part to perform to design intent. In other words, it may
potentially fail to perform its function(s). Design failure mode is a technical description of how the system, subsystem, or part may not adequately perform its function.
Process failure mode is a technical description of how the manufacturing process
may not perform its function, or the reason the part may be rejected.
Failure Mode Questions
The process of brainstorming failure modes may include the following questions:
DFMEA
• Considering the conditions in which the product will be used, how can
it fail to perform its function?
• How have similar products failed in the past?
FIGURE 6.10 Example of ballpoint pen. [Function tree for the tip. Task function: Super pen makes marks on varied surfaces; the user grasps the barrel and moves the pen axially while simultaneously pressing down on the tip at a vector to the 180-degree plane. Axial force function: the inside diameter of the barrel tip end transmits axial force to the tip system housing sheath O.D.; the tip system housing tip retainer I.D. transmits axial force to the ball housing; the ball housing I.D. transmits axial force to the ball; the ball transmits axial force to the marking surface, which is stationary, causing the ball's rotational motion; the ball rotates through the ink supply, picking up a film of ink on the ball surface; the ink is transferred from the ball surface to the marking surface; the ink remains on the marking surface (3 mm width), drying in 3 seconds. Vector force function: the end of the barrel and the barrel I.D. (tip end) simultaneously apply force to the tip system housing end and sheath; the tip assembly housing transmits the vector force to the O.D. of the ball housing; the ball housing transmits the vector force to the ball; the ball moves up into the ball housing, creating a gap between the ball and ball housing; the ink flows through the ink tube, contacting the ball surface.]
PFMEA
• Considering the conditions in which the process will be used, what
could possibly go wrong with the process?
• How have similar processes failed in the past?
• What might happen that would cause a part to be rejected?
FIGURE 6.11 FMEA body. [Form columns, left to right: Potential Failure Mode; Potential Effects of Failure Mode; Severity; Class; Potential Causes of Failure Mode; Occurrence; Current Controls; Detection; Risk Priority Number (RPN). The first step is to identify the potential failure.]
Determining Potential Failure Modes
Failure modes are the ways in which the function is not fulfilled; they fall into six major categories. Some of these categories may not apply. As a consequence, use them as "thought provokers" to begin the process and then adjust them as needed:
1. Absence of function
2. Incomplete, partial, or decayed function
3. Related unwanted "surprise" failure mode
4. Function occurs too soon or too late
5. Excess or too much function
6. Interfacing with other components, subsystems, or systems. There are four possibilities of interfacing: (a) energy transfer, (b) information transfer, (c) proximity, and (d) material compatibility.
Failure mode examples using the above categories and applied to the pen case
include:
1. Absence of function:
• DFMEA: Make marks
• PFMEA: Inject plastic
2. Incomplete, partial or decayed function:
• DFMEA: Make marks
• PFMEA: Inject plastic
3. Related unwanted “surprise” failure mode
• DFMEA: Make marks
• PFMEA: Inject plastic
4. Function occurs too soon or too late
• DFMEA: Make marks
• PFMEA: Inject plastic
5. Excess or too much function
• DFMEA: Make marks
• PFMEA: Inject plastic
General examples of failure modes include:
Design FMEA:
No power
Water leaking
Open circuit
Releases too early
Noise
Vibration
Does not cut
Failed to open
Partial insulation
Loss of air
No spark
Insufficient torque
Paper jams
And so on
Process FMEA:
Four categories of process failures:
1. Fabrication failures
2. Assembly failures
3. Testing failures
4. Inspection failures
Typical examples of these categories are:
• Warped
• Too hot
• RPM too slow
• Rough surface
• Loose part
• Misaligned
• Poor inspection
• Hole too large
• Leakage
• Fracture
• Fatigue
• And so on
Note: At this stage, you are ready to transfer the failure modes to the FMEA form — see Figure 6.12.
FAILURE MODE EFFECTS
A failure mode effect is a description of the consequence/ramification of a system,
part, or manufacturing process failure. A typical failure mode may have several
effects depending on which customer(s) are considered. Consider the effects/consequences on all the “customers,” as they are applicable, as in the following FMEAs:
SFMEA
• System
• Other systems
FIGURE 6.12 Transferring the failure modes to the FMEA form. [The form of Figure 6.11 with the pen failure modes entered in the first column: "Does not transfer ink," "Partial ink," and so on.]
• Whole product
• Government regulations
• End user
DFMEA
• Part
• Higher assembly
• Whole product
• Government regulations
• End user
PFMEA
• Part
• Next operation
• Equipment
• Government regulations
• Operators
• End user
Effects and Severity Rating
Effects and severity are closely related: as the effect increases, so does the severity. In essence, two fundamental questions have to be raised and answered:
1. What will happen if this failure mode occurs?
2. How will customers react if these failures happen?
• Describe as specifically as possible what the customer(s) might notice
once the failure occurs.
• What are the effects of the failure mode?
• How severe is the effect on the customers?
The progression of function, cause, failure mode, effect, and severity can be
illustrated by the following series of questions:
In function: What is the individual task intended by design?
In failure mode: What can go wrong with this function?
SL3151Ch06Frame Page 245 Thursday, September 12, 2002 6:09 PM
Failure Mode and Effect Analysis (FMEA)
245
In cause: What is the “root cause” of the failure mode?
In effect: What are the consequences of this failure mode?
In severity: What is the seriousness of the effect?
The following are some examples of DFMEA and PFMEA effects:
Customer gets wet
System failure
Loss of efficiency
Reduced life
Degraded performance
Cannot assemble
Violate Gov. Reg. XYZ
Damaged equipment
Loss of performance
Scrap
Rework
Becomes loose
Hard to load in next operation
Operator injury
Noise, rattles
And so on
Special Note: Please note that the effect remains the same for both DFMEA and
PFMEA.
Severity Rating (Seriousness of the Effect)
Severity is a numerical rating — see Table 6.1 for design and Table 6.2 for process — of the impact on customers. When multiple effects exist for a given failure mode, enter the worst-case severity on the worksheet to calculate risk. (This is the accepted method for the automotive industry and for the SAE J1739 standard.) In cases where severity varies depending on timing, use the worst-case scenario.
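The worst-case rule can be sketched in one line of code. The effect names and ratings below are illustrative, not taken from the tables:

```python
# When one failure mode has several effects, the highest (worst-case) severity
# is the one entered on the FMEA worksheet for the risk calculation.
effects = {
    "pen does not work": 8,
    "customer has to retrace": 7,
    "writing looks bad": 6,
}
worksheet_severity = max(effects.values())  # worst case across all effects
print(worksheet_severity)  # -> 8
```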
Note: There is nothing special about these guidelines. They may be changed to
reflect the industry, the organization, the product/design, or the process. For example,
the automotive industry has its own version and one may want to review its guidelines
in the AIAG (2001). To modify these guidelines, keep in mind:
1. List the entire range of possible consequences (effects).
2. Force rank the consequences from high to low.
3. Resolve the extreme values (rating 10 and rating 1).
4. Fill in the other ratings.
5. Use consensus.
At this point the information should be transferred to the FMEA form — see Figure 6.13. The column identifying the "class" is the location for the placement of the special characteristic. The appropriate response is only "Yes" or "No": a Yes in this column indicates that the characteristic is special; a No indicates that it is not. In some industries, special characteristics are of two types: (a) critical and (b) significant. "Critical" refers to characteristics associated with safety and/or government regulations, and "significant" refers to those that affect the integrity of the product. In design, all special characteristics are potential. In process they become critical or significant depending on the numerical values of severity and occurrence combinations.
TABLE 6.1
DFMEA — Severity Rating

Rating 1 (None): No effect noticed by customer; the failure will not have any perceptible effect on the customer
Rating 2 (Very minor): Very minor effect, noticed by discriminating customers; the failure will have little perceptible effect on discriminating customers
Rating 3 (Minor): Minor effect, noticed by average customers; the failure will have a minor perceptible effect on average customers
Rating 4 (Very low): Very low effect, noticed by most customers; the failure will have some small perceptible effect on most customers
Rating 5 (Low): Primary product function operational, however at a reduced level of performance; customer is somewhat dissatisfied
Rating 6 (Moderate): Primary product function operational, however secondary functions inoperable; customer is moderately dissatisfied
Rating 7 (High): Failure mode greatly affects product operation; product or portion of product is inoperable; customer is very dissatisfied
Rating 8 (Very high): Primary product function is non-operational but safe; customer is very dissatisfied
Rating 9 (Hazard with warning): Failure mode affects safe product operation and/or involves nonconformance with government regulation with warning
Rating 10 (Hazard with no warning): Failure mode affects safe product operation and/or involves nonconformance with government regulation without warning

FAILURE CAUSE AND OCCURRENCE
The analysis of the cause and occurrence is based on two questions:
1. What design or process choices did we already make that may be responsible for the occurrence of a failure?
2. How likely is the failure mode to occur because of this?
For each failure mode, the possible mechanism(s) and cause(s) of failure are
listed. This is an important element of the FMEA since it points the way toward
preventive/corrective action. It is, after all, a description of the design or process
deficiency that results in the failure mode. That is why it is important to focus on
the “global” or “root” cause. Root causes should be specific and in the form of a
characteristic that may be controlled or corrected. Caution should be exercised not to overuse "operator error" or "equipment failure" as a root cause, even though they are both tempting and make it easy to assign "blame."
You must look for causes, not symptoms of the failure. Most failure modes have
more than one potential cause. An easy way to probe into the causes is to ask:
What design choices, process variables, or circumstances could result in the failure
mode(s)?
TABLE 6.2
PFMEA — Severity Rating

Rating 1 (None): No effect noticed by customer; the failure will not have any effect on the customer
Rating 2 (Very minor): Very minor disruption to production line; a very small portion of the product may have to be reworked; defect noticed by discriminating customers
Rating 3 (Minor): Minor disruption to production line; a small portion (much <5%) of product may have to be reworked on-line; process up but minor annoyances
Rating 4 (Very low): Very low disruption to production line; a moderate portion (<10%) of product may have to be reworked on-line; process up but minor annoyances
Rating 5 (Low): Low disruption to production line; a moderate portion (<15%) of product may have to be reworked on-line; process up but minor annoyances
Rating 6 (Moderate): Moderate disruption to production line; a moderate portion (>20%) of product may have to be scrapped; process up but some inconveniences
Rating 7 (High): Major disruption to production line; a portion (>30%) of product may have to be scrapped; process may be stopped; customer dissatisfied
Rating 8 (Very high): Major disruption to production line; close to 100% of product may have to be scrapped; process unreliable; customer very dissatisfied
Rating 9 (Hazard with warning): May endanger operator or equipment; severely affects safe process operation and/or involves noncompliance with government regulation; failure will occur with warning
Rating 10 (Hazard with no warning): May endanger operator or equipment; severely affects safe process operation and/or involves noncompliance with government regulation; failure occurs without warning
DFMEA failure causes are typically specific system, design, or material characteristics.
PFMEA failure causes are typically process parameters, equipment characteristics, or environmental or incoming material characteristics.
Popular Ways (Techniques) to Determine Causes
Ways to determine failure causes include the following:
• Brainstorm
• Whys
• Fishbone diagram
FIGURE 6.13 Transferring severity and classification to the FMEA form. [For the failure mode "Does not transfer ink": effect "Pen does not work; customer tries and eventually tears paper and scraps the pen," severity 8, class N. For "Partial ink": effects "Old pen stops writing, customer scraps pen" (severity 7) and "Customer has to retrace; writing or drawing looks bad" (severity 7), class N; and so on.]
• Fault tree analysis (FTA): a model that uses a tree to show the cause-and-effect relationship between a failure mode and the various contributing causes. The tree illustrates the logical hierarchy, branching from the failure at the top to the root causes at the bottom.
• Classic five-step problem-solving process
a. What is the problem?
b. What can I do about it?
c. Put a star on the “best” plan.
d. Do the plan.
e. Did your plan work?
• Kepner-Tregoe (what is, what is not analysis)
• Discipline GPS – see Volume II
• Experience
• Knowledge of physics and other sciences
• Knowledge of similar products
TABLE 6.3
DFMEA — Occurrence Rating

Rating 1 (Remote, <1 in 1,500,000): Failure is very unlikely
Rating 2 (Low, 1 in 150,000): Relatively few failures
Rating 3 (Low, 1 in 15,000): Relatively few failures
Rating 4 (Moderate, 1 in 2,000): Occasional failures
Rating 5 (Moderate, 1 in 400): Occasional failures
Rating 6 (Moderate, 1 in 80): Occasional failures
Rating 7 (High, 1 in 20): Repeated failures
Rating 8 (High, 1 in 8): Repeated failures
Rating 9 (Very high, 1 in 3): Failure is almost inevitable
Rating 10 (Very high, >1 in 2): Failure is almost inevitable
• Experiments — When many causes are suspect or specific cause is
unknown
• Classical
• Taguchi methods
Occurrence Rating
The occurrence rating is an estimate of the frequency or cumulative number of failures (based on experience) that will occur in our design concept for a given cause over the intended life of the design. For example: cause of staples falling out = soft wood. The likelihood of occurrence is a 9 if we pick balsa wood but a 2 if we choose oak.
Just as with severity, there are standard tables for occurrence — see Table 6.3
for design and Table 6.4 for process — for each type of FMEA. The ratings on these
tables are estimates based on experience or similar products or processes. Nonstandard occurrence tables may also be used, based on specific characteristics.
However, reliability expertise is needed to construct occurrence tables. (Typical
characteristics may be historical failure frequencies, Cpks, theoretical distributions,
and reliability statistics.)
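When a historical failure frequency is available, the lookup against the occurrence table can be sketched as follows. The "1 in N" thresholds come from Table 6.3; treating each threshold as an inclusive upper bound is an assumption, since the table does not say how to handle boundary values:

```python
# Convert an estimated failure probability (per item, over the design life)
# into the DFMEA occurrence rating of Table 6.3.

OCCURRENCE_THRESHOLDS = [  # (max probability, rating)
    (1 / 1_500_000, 1), (1 / 150_000, 2), (1 / 15_000, 3),
    (1 / 2_000, 4), (1 / 400, 5), (1 / 80, 6),
    (1 / 20, 7), (1 / 8, 8), (1 / 3, 9),
]

def occurrence_rating(p):
    for limit, rating in OCCURRENCE_THRESHOLDS:
        if p <= limit:
            return rating
    return 10  # more frequent than 1 in 3 (the ">1 in 2" row)

print(occurrence_rating(1 / 400))  # -> 5
print(occurrence_rating(0.5))      # -> 10
```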
At this point the data for causes and their ratings should be transferred to the
FMEA form — see Figure 6.14.
Current Controls and Detection Ratings
Design and process controls are the mechanisms, methods, tests, procedures, or
controls that we have in place to prevent the cause of the failure mode or detect the
failure mode or cause should it occur. (The controls currently exist.)
Design controls prevent or detect the failure mode prior to engineering release.
Process controls prevent or detect the failure mode prior to the part or assembly
leaving the area.
TABLE 6.4
PFMEA — Occurrence Rating

Rating 1 (Remote, <1 in 1,500,000, Cpk > 1.67): Failure is very unlikely; no failures associated with similar processes
Rating 2 (Low, 1 in 150,000, Cpk 1.50): Few failures; isolated failures associated with like processes
Rating 3 (Low, 1 in 15,000, Cpk 1.33): Few failures; isolated failures associated with like processes
Rating 4 (Moderate, 1 in 2,000, Cpk 1.17): Occasional failures associated with similar processes, but not in major proportions
Rating 5 (Moderate, 1 in 400, Cpk 1.00): Occasional failures associated with similar processes, but not in major proportions
Rating 6 (Moderate, 1 in 80, Cpk 0.83): Occasional failures associated with similar processes, but not in major proportions
Rating 7 (High, 1 in 20, Cpk 0.67): Repeated failures; similar processes have often failed
Rating 8 (High, 1 in 8, Cpk 0.51): Repeated failures; similar processes have often failed
Rating 9 (Very high, 1 in 3, Cpk 0.33): Process failure is almost inevitable
Rating 10 (Very high, >1 in 2): Process failure is almost inevitable
A good control prevents or detects causes or failure modes:
• As early as possible (ideally before production or prototypes)
• Using proven methods
So, the next step in the FMEA process is to:
• Analyze planned controls for your system, part, or manufacturing process
• Understand the effectiveness of these controls to detect causes or failure
modes
Detection Rating
Detection rating — see Table 6.5 for design and Table 6.6 for process — is a numerical rating of the probability that a given set of controls will discover a specific cause
or failure mode to prevent bad parts from leaving the operation/facility or getting
to the ultimate customer. Assuming that the cause of the failure did occur, assess
the capabilities of the controls to find the design flaw or prevent the bad part from
leaving the operation/facility. In the first case, the DFMEA is at issue. In the second
case, the PFMEA is of concern.
When multiple controls exist for a given failure mode, record the best (lowest)
to calculate risk. In order to evaluate detection, there are appropriate tables for
both design and process. Just as before, however, if there is a need to alter them,
remember that the change and approval must be made by the FMEA team with
consensus.
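The "best (lowest)" rule can be sketched in one line of code. The control names and ratings below are illustrative, not the project's actual controls:

```python
# When several current controls cover the same cause or failure mode, the
# lowest (best) detection rating is the one recorded to calculate risk.
current_controls = {"life test": 2, "design review": 7, "prototype test XY": 7}
detection = min(current_controls.values())
print(detection)  # -> 2
```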
At this point, the data for current controls and their ratings should be transferred
to the FMEA form — see Figure 6.15. There should be a current control for every
cause. If there is not, that is a good indication that a problem might exist.
FIGURE 6.14 Transferring causes and occurrences to the FMEA form. [For "Does not transfer ink" (severity 8): causes "Ball housing I.D. deform" (occurrence 2), "Ink viscosity too high" (9), and "Debris build-up" (5). For "Partial ink" (severity 7): causes "Inconsistent ball rolling due to deformed housing" (2), "Ball does not always pick up ink due to ink viscosity" (7), and "Housing I.D. variation due to mfg" (1); and so on.]
UNDERSTANDING AND CALCULATING RISK
Without risk, there is very little progress. Risk is inevitable in any system, design, or manufacturing process. The FMEA process aids in identifying significant risks and then helps to minimize their potential impact. It does that through the risk priority number, commonly known as the RPN index. In the analysis of the RPN, make sure to look at risk patterns rather than just a high RPN. The RPN is the product of severity, occurrence, and detection, or:
TABLE 6.5
DFMEA Detection Table

Rating 1 (Almost certain): Design control will almost certainly detect the potential cause of subsequent failure mode
Rating 2 (Very high): Very high chance the design control will detect the potential cause of subsequent failure mode
Rating 3 (High): High chance the design control will detect the potential cause of subsequent failure mode
Rating 4 (Moderately high): Moderately high chance the design control will detect the potential cause of subsequent failure mode
Rating 5 (Moderate): Moderate chance the design control will detect the potential cause of subsequent failure mode
Rating 6 (Low): Low chance the design control will detect the potential cause of subsequent failure mode
Rating 7 (Very low): Very low chance the design control will detect the potential cause of subsequent failure mode
Rating 8 (Remote): Remote chance the design control will detect the potential cause of subsequent failure mode
Rating 9 (Very remote): Very remote chance the design control will detect the potential cause of subsequent failure mode
Rating 10 (Very uncertain): There is no design control, or the control will not or cannot detect the potential cause of subsequent failure mode
Risk = RPN = S × O × D
Obviously, the higher the RPN, the greater the concern. A good rule of thumb to follow is the 95% rule: address all failure modes with 95% confidence. It turns out the magic number is 50, since (10 × 10 × 10) − (1000 × 0.95) = 50. This number, of course, is only relative to what the total FMEA is all about, and it may change as the risk increases in all categories and in all causes.
Special risk priority patterns require special attention, through specific action
plans that will reduce or eliminate the high risk factor. They are identified through:
1. High RPN
2. Any RPN with a severity of 9 or 10 and occurrence > 2
3. Area chart
The area chart — Figure 6.16 — uses only severity and occurrence and therefore
is a more conservative approach than the priority risk pattern mentioned previously.
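The three screens above can be sketched together. The RPN cutoff of 50 and the severity 9-10 with occurrence > 2 pattern come from the text; the area-chart band edges are an assumption, since Figure 6.16 gives no numeric boundaries:

```python
# Flag FMEA rows that need a specific action plan.

def rpn(severity, occurrence, detection):
    return severity * occurrence * detection

def needs_action(severity, occurrence, detection, rpn_cutoff=50):
    if rpn(severity, occurrence, detection) > rpn_cutoff:
        return True                       # high RPN (the 95% rule)
    if severity >= 9 and occurrence > 2:
        return True                       # severity/occurrence pattern
    return False

def area_priority(severity, occurrence):
    # Assumed band edges for the area chart of severity vs. occurrence.
    if severity >= 9 or (severity >= 5 and occurrence >= 4):
        return "high"
    if severity >= 5 or occurrence >= 4:
        return "medium"
    return "low"

# Pen example, using causes already on the form:
print(needs_action(8, 9, 10))  # ink viscosity too high, RPN 720 -> True
print(needs_action(8, 2, 2))   # ball housing I.D. deform, RPN 32 -> False
```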
At this stage, let us look at our FMEA project and calculate and enter the RPN —
see Figure 6.17. It must be noted here that this is only one approach to evaluating
risk. Another possibility is to evaluate the risk based on the degree of severity first,
TABLE 6.6
PFMEA Detection Table

Rating 1 (Almost certain): Process control will almost certainly detect or prevent the potential cause of subsequent failure mode
Rating 2 (Very high): Very high chance the process control will detect or prevent the cause of subsequent failure mode
Rating 3 (High): High chance the process control will detect or prevent the potential cause of subsequent failure mode
Rating 4 (Moderately high): Moderately high chance the process control will detect or prevent the potential cause of subsequent failure mode
Rating 5 (Moderate): Moderate chance the process control will detect or prevent the potential cause of subsequent failure mode
Rating 6 (Low): Low chance the process control will detect or prevent the potential cause of subsequent failure mode
Rating 7 (Very low): Very low chance the process control will detect or prevent the potential cause of subsequent failure mode
Rating 8 (Remote): Remote chance the process control will detect or prevent the potential cause of subsequent failure mode
Rating 9 (Very remote): Very remote chance the process control will detect or prevent the potential cause of subsequent failure mode
Rating 10 (Very uncertain): There is no process control, or the control will not or cannot detect the potential cause of subsequent failure mode
in which case the engineer tries to eliminate the failure; evaluate the risk based on
a combination of severity (values of 5–8) and occurrence (>3) second, in which case
the engineer tries to minimize the occurrence of the failure through a redundant
system; and to evaluate the risk through the detection of the RPN third, in which
case the engineer tries to control the failure before the customer receives it.
ACTION PLANS AND RESULTS
The third portion of the FMEA form deals with the action plans and results analysis.
A typical format is shown in Figure 6.18.
The idea of this third portion of the FMEA form is to generate a strategy that
reduces severity and occurrence and makes the detection effective to reduce the total
RPN:
Reducing the severity rating (or reducing the severity of the failure mode effect)
• Design or manufacturing process changes are necessary.
• This approach is much more proactive than reducing the detection
rating.
Reducing the occurrence rating (or reducing the frequency of the cause)
• Design or manufacturing process changes are necessary.
• This approach is more proactive than reducing the detection rating.
FIGURE 6.15 Transferring current controls and detection to the FMEA form. [Current controls and detection ratings added to the causes of Figure 6.14: "Ball housing I.D. deform": life test, Test # X (detection 2); "Ink viscosity too high": Test # X (10); "Debris build-up": design review, prototype test # XY (7); "Inconsistent ball rolling due to deformed housing": Test # X (10); "Ball does not always pick up ink due to ink viscosity": none (10); "Housing I.D. variation due to mfg": none (10); and so on.]
Reducing the detection rating (or increasing the probability of detection)
• Improving the detection controls is generally costly, reactive, and does
not do much for quality improvement, but it does reduce risk.
• Increased frequency of inspection, for example, should only be used
as a last resort. It is not a proactive corrective action.
CLASSIFICATION AND CHARACTERISTICS
Different industries have different criteria for classification. However, in all cases
the following characteristics must be classified according to risk impact:
FIGURE 6.16 Area chart. [A 10 × 10 grid of severity (vertical axis, 1 to 10) versus occurrence (horizontal axis, 1 to 10), divided into High Priority, Medium Priority, and Low Priority regions; priority rises with severity and occurrence.]
• Severity 9, 10: Highest classification (critical)
These product- or process-related characteristics:
• May affect compliance with government or federal regulations
(EPA, OSHA, FDA, FCC, FAA, etc.)
• May affect safety of the customer
• Require specific actions or controls during manufacturing to ensure
100% compliance
• Severity between 5 and 8 and occurrence greater than 3: Secondary classification (significant)
These product- or process-related characteristics:
• Are non-critical items that are important for customer satisfaction
(e.g., fit, finish, durability, appearance)
• Should be identified on drawings, specifications, or process instructions to ensure acceptable levels of capability
• High RPN: Secondary classification (see Table 6.7)
Product Characteristics/“Root Causes”
Examples include size, form, location, orientation, or other physical properties such
as color, hardness, strength, etc.
Process Parameters/“Root Causes”
Examples include pressure, temperature, current, torque, speeds, feeds, voltage,
nozzle diameter, time, chemical concentrations, cleanliness of incoming part, ambient temperature, etc.
DRIVING THE ACTION PLAN
For each recommended action, the FMEA team must:
FIGURE 6.17 Transferring the RPN to the FMEA form. [RPNs computed for the pen example: "Ball housing I.D. deform" (8 × 2 × 2 = 32); "Ink viscosity too high" (8 × 9 × 10 = 720); "Debris build-up" (8 × 5 × 7 = 280); "Inconsistent ball rolling due to deformed housing" (7 × 2 × 10 = 140); "Ball does not always pick up ink due to ink viscosity" (7 × 7 × 10 = 490); "Housing I.D. variation due to mfg" (7 × 1 × 10 = 70); and so on.]
• Plan for implementation of recommendations
• Make sure that recommendations are followed, demonstrate improvement,
and are completed
Implementation of action plans requires answering the classic questions…
• WHO … (will take the lead)
• WHAT… (specifically is to be done)
FIGURE 6.18 Action plans and results analysis. [Form columns: Action Plan (Recommended Actions and Responsibility, Target Finish Date, Actual Finish Date) and Action Results (Actions Taken, S, O, D, RPN, Remarks).]
TABLE 6.7
Special Characteristics for Both Design and Process

FMEA Type / Classification / Purpose / Criteria / Control:
Design / YC / A potential critical characteristic (initiate PFMEA) / Severity = 9–10 / Does not apply
Design / YS / A potential significant characteristic (initiate PFMEA) / Severity = 5–8, Occurrence = 4–10 / Does not apply
Design / Blank / Not a potential critical or significant characteristic / Severity < 5 / Does not apply
Process / Inverted delta / A critical characteristic / Severity = 9–10 / Required
Process / SC / A significant characteristic / Severity = 5–8, Occurrence = 4–10 / Required
Process / HI / High impact / Severity = 5–8, Occurrence = 4–10 / Emphasis
Process / OS / Operator safety / Severity = 9–10 / Safety sign-off
Process / Blank / Not a special characteristic / Other / Does not apply
• WHERE … (will the work get done)
• WHY… (this should be obvious)
• WHEN… (should the actions be done)
• HOW… (will we start)
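The design-FMEA rows of Table 6.7 boil down to a severity/occurrence rule. As a rough illustrative sketch in Python (the function name and the exact boundary handling are our assumptions, not from the text):

```python
def design_classification(severity: int, occurrence: int) -> str:
    """Classify a design FMEA characteristic per the Table 6.7 criteria.

    YC    potential critical characteristic (severity 9-10); initiate a PFMEA
    YS    potential significant characteristic (severity 5-8 and
          occurrence 4-10); initiate a PFMEA
    blank not a potential critical or significant characteristic
    """
    if severity >= 9:
        return "YC"
    if 5 <= severity <= 8 and 4 <= occurrence <= 10:
        return "YS"
    return ""  # blank


print(design_classification(10, 1))  # YC
```

A process-side analogue would map severity and occurrence to the inverted delta, SC, HI, and OS symbols in the same way.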
Additional points concerning the action plan include the following:
• Accelerate implementation by getting buy-in (ownership).
• It is important to draw out and address objections.
• When plans address objections in a constructive way, stakeholders feel ownership in plans and actions. Ownership aids in successful implementation.
• Typical questions that begin a fruitful discussion are:
• Why are we…?
• Why not this…?
• What about this…?
• What if…?
• Timing and actions must be reviewed on a regular basis to:
• Maintain a sense of urgency
• Allow for ongoing facilitation
• Ensure work is progressing
• Drive team members to meet commitments
• Surface new facts that may affect plans
• Fill in the actions taken.
• The “Action Taken” column should not be filled out until the actions
are totally complete.
• Record final outcomes in the Action Plan and Action Results sections
of the FMEA form. Remember, because of the actions you have taken
you should expect changes in severity, occurrence, detection, RPN, and
new characteristic designations. Of course, these changes may be individual or in combination. The form will look like Figure 6.19.
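The expected before-and-after rating changes can be bookkept mechanically. A minimal Python sketch (the class and field names are ours), using the ink-viscosity cause from the pen example:

```python
from dataclasses import dataclass


@dataclass
class CauseRating:
    cause: str
    severity: int    # 1-10 rating assigned by the FMEA team
    occurrence: int  # 1-10
    detection: int   # 1-10

    @property
    def rpn(self) -> int:
        # RPN is the product of the three ratings
        return self.severity * self.occurrence * self.detection


# Before the action: ink viscosity too high
before = CauseRating("Ink viscosity too high", severity=8, occurrence=9, detection=10)
# After a DOE optimizes the ink formula, occurrence drops to 1
after = CauseRating("Ink viscosity too high", severity=8, occurrence=1, detection=10)

print(before.rpn, "->", after.rpn)  # 720 -> 80
```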
LINKAGES AMONG DESIGN AND PROCESS FMEAS AND CONTROL PLAN
FMEAs are not islands unto themselves. They have continuity, and their information must flow through the design and process FMEAs as well as into the control plan. A typical linkage is shown in Figure 6.20.
In addition to the control plan, the FMEA is also linked with robustness. To appreciate these linkages, recall that design for six sigma (DFSS) must be a robust process. To see them, we may begin with a P diagram (see Volume V) and identify its components. In FMEA usage, robustness means making sure that the part, subsystem, or system will perform its intended function in spite of problems in both manufacturing and the environment. Of particular interest are the error states, control factors, and noise factors. Error states may help in identifying the failures, noise factors may help in identifying the causes, and control factors may help in identifying the recommendations. The signal and response become the functions, or the starting point, of the FMEA.
The linkages then help generate the inputs and outputs of the FMEA. Typical
inputs are:
System (concept) inputs
P diagram
Boundary diagram
Interface matrix
Potential design verification tests
RPN | Recommended Action and Responsibility | Target Finish Date | Actual Finish Date | Action Taken | S O D RPN | Remarks
32 | No action required | | | | 8 2 2 32 | None
720 | DOE - Taguchi: optimize ink formula | 3/22/99 | 2/15/99 | Optimize ink formula | 8 1 10 80 |
280 | Develop accel. test (thermal vibration), D. Robins | 2/18/99 | 2/3/99 | Test procedure revised 4/3/99 | 8 5 1 40 |
140 | Develop new test # ABC | 2/2/99 | 2/2/99 | Test implemented 2/2/99 | 7 1 4 28 |
490 | DOE - Taguchi: optimize viscosity on ink formula, C. Abrams | 5/3/99 | 4/30/99 | Optimized ink formula 4/30/99 | 7 1 10 70 |
70 | Evaluate machining process | TBD | | | | and so on

FIGURE 6.19 Transferring action plans and action results on the FMEA form.
Surrogate data for reliability and robustness considerations
Corporate requirements
Benchmarking results
Customer functionality in terms of engineering specifications
Regulatory requirements review
Design inputs
P diagram
Boundary diagram
Interface matrix
Customer functionality in terms of engineering specifications
Regulatory requirements review
Process inputs
P diagram
Process flow diagram
Special characteristics from the DFMEA
Process characteristics
Regulatory requirements review
In the figure, Quality Function Deployment feeds the design FMEA (Function, Failure, Effect, Severity, Class, Cause, Controls, Rec. Action), which links to the System Design Specifications, the Sign-Off Report, and the Design Verification Plan and Report. Part characteristics (1, 2, 3, 4, etc.) classified as special carry into the process FMEA (Function, Failure, Effect, Class, Cause, Controls, Reaction); characteristics found to be normal have the classification symbol removed. The process FMEA in turn feeds the Dynamic Control Plan and the Part Drawing (Inverted Delta and Special Characteristics).

FIGURE 6.20 FMEA linkages.
Machinery inputs
P diagram
Boundary diagram
Interface matrix
Customer functionality in terms of engineering specifications
Regulatory requirements review
GETTING THE MOST FROM FMEA
Common team problems that may make it difficult to get the most from FMEA
include:
• Poor team composition (not cross-functional or multidisciplinary)
• Low expertise in FMEA
• Not multi-level
• Low experience/expertise in product
• One-person FMEA
• Lack of management support
• Not enough time
• Too detailed, could go on forever
• Arguments between team members (Opinions should be based on facts and data.)
• Lack of team enthusiasm/motivation
• Difficulty in getting team to start and stay with the process
• Proactive vs. reactive (a "before the event," not "after the fact," exercise)
• Doing it for the wrong reason
Common procedural problems include:
• Confusion about poorly defined or incomplete functions, failure modes,
effects, or causes
• Subgroup discussion
• Using symptoms or superficial causes instead of root causes
• Confusion about ratings as estimates and not absolutes (It will take time
to be consistent.)
• Confusion about the relationship between causes, failure modes, and
effects
• Using “customer dissatisfied” as failure effect
• Shifting design concerns to manufacturing and vice-versa
• Doing FMEAs by hand
• Dependent on the engineers’ “printing skills”
• RPNs or criticality cannot be ranked easily
• Hard to update
• Much space taken up by complicated FMEAs
• Time consuming
• Resistance to being the “recorder” when done manually
• Inefficient means of storing and retrieving info
Note: With FMEA software these are all eliminated.
• Working non-systematically on the form (It is suggested that the failure
analysis should progress from left to right, with each column being completed before the next is begun.)
• Resistance of individuals to taking responsibility for recommended
actions
• Doing a reactive FMEA as opposed to a proactive FMEA (FMEAs are
best applied as a problem prevention tool, not problem solving tool,
although one may use them for both. However, the value of a reactive
FMEA is much less.)
• Not having robust FMEA terminology (A robust communication process is one that delivers its “function” [imparting knowledge and understanding] without being affected by “noise factors” [varying degrees
of training]. Simply stated, the process should be as clear as possible
with minimum possibility for misunderstanding.)
Stages of Learning | Stages of FMEA Maturity
Unconscious incompetence | Never heard of FMEA
Conscious incompetence | We talked about it
Conscious competence | Customer made us do it; Some small successes
Unconscious competence | Proper and regular use
FIGURE 6.21 The learning stages.
Institutionalizing FMEA in your company is challenging, and its success is
largely dependent upon the culture in the organization as well as the reason it is
being utilized. Below are some main considerations:
• Selecting pilot projects (Start small and build successes.)
• Identifying team participants
• Developing and promoting FMEA successes
• Developing templates (databases of failure modes, functions, controls, etc.)
• Addressing training needs
Figure 6.21 shows the learning stages (the direction of the arrows indicates the
increasing level) in a company that is developing maturity in the use of FMEA.
SYSTEM OR CONCEPT FMEA
A concept FMEA is used to analyze new ideas and concepts at a very early stage.
Concept FMEAs can be design, process, or even machinery oriented. However, in
practical terms, most of them are done on a system or subsystem level.
The process of the system or concept FMEA is practically the same as that of
a design FMEA. In fact, the evaluation guidelines are exactly the same as those of
DFMEA. The difference is that in the system FMEA a great effort is made to identify
gross failures with high severities. If these problems cannot be overcome, then the
project most likely will be killed. If the failures can be fixed through reasonable
design changes, then the project moves to a second stage and the design FMEA
takes over.
DESIGN FAILURE MODE AND EFFECTS ANALYSIS (DFMEA)
The Design Failure Mode and Effects Analysis (Design FMEA) is a method for identifying potential or known failure modes and providing follow-up and corrective actions.
OBJECTIVE
The design FMEA is a disciplined analysis of the part design with the intent to
identify and correct any known or potential failure modes before the manufacturing
stage begins. Once these failure modes are identified and the cause and effects are
determined, each failure mode is then systematically ranked so that the most severe
failure modes receive priority attention. The completion of the design FMEA is the
responsibility of the individual product design engineer. This individual engineer is
the most knowledgeable about the product design and can best anticipate the failure
modes and their corrective actions.
TIMING
The design FMEA is initiated during the early planning stages of the design and is
continually updated as the program develops. The design FMEA must be totally
completed prior to the first production run.
REQUIREMENTS
The requirements for a design FMEA include:
1. Forming a team
2. Completing the design FMEA form
3. FMEA risk ranking guidelines
DISCUSSION
The effectiveness of an FMEA is dependent on certain key steps in the analysis
process, as follows:
Forming the Appropriate Team
A typical team for conducting a design FMEA is the following:
• Design engineer(s)
• Test/development engineer
• Reliability engineer
• Materials engineer
• Field service engineer
• Manufacturing/process engineer
• Customer
A design and a manufacturing engineer are required to be team members. Others
may participate as needed or as the project calls for their knowledge or experience.
The leader for the design FMEA is typically the design engineer.
Describing the Function of the Design/Product
There are three types of functions:
1. Task functions: These functions describe the single most important reason
for the existence of the system/product. (Vacuum cleaner? Windshield
wiper? Ballpoint pen?)
2. Supporting functions: These are the “sub” functions that are needed in
order for the task function to be performed.
3. Enhancing functions: These are functions that enhance the product and
improve customer satisfaction but are not needed to perform the task
function.
After completing the function tree or a block diagram, transfer the functions to the
FMEA worksheet (or some other worksheet) to retain them. Add the extent of
each function (range, target, specification, etc.) to test the measurability of the
function.
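Carrying the extent along with each function keeps measurability testable. A hypothetical Python sketch (the record layout and sample values are ours, not from the text):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Function:
    description: str
    extent: Optional[str] = None  # range, target, or specification

    def is_measurable(self) -> bool:
        # A function with no stated extent cannot be verified against a target.
        return self.extent is not None


functions = [
    Function("Transfer ink to paper", extent="line width 0.5-0.7 mm"),
    Function("Look attractive"),  # enhancing function, extent not yet defined
]
print([f.is_measurable() for f in functions])  # [True, False]
```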
Describing the Failure Mode Anticipated
The team must pose the question to itself, “How could this part, system or design
fail? Could it break, deform, wear, corrode, bind, leak, short, open, etc.?” The team
is trying to anticipate how the design being considered could possibly fail; at this
point, it should not make the judgment as to whether it will fail but should concentrate
on how it could fail.
The purpose of a design FMEA (DFMEA) is to analyze and evaluate a design on
its ability to perform its functions. Therefore, the initial assumption is that parts are
manufactured and assembled according to plan and in compliance with specifications.
Once failure modes are determined under this assumption, then determine
other failure modes due to purchased materials, components, manufacturing processes, and services.
Describing the Effect of the Failure
The team must describe the effect of the failure in terms of customer reaction; in
other words, "What does the customer experience as a result of the failure mode
of a shorted wire?" Notice the specificity. This is very important, because it
establishes the basis for exploratory analysis of the root cause of the failure. Would
the shorted wire cause the fuel gage to be inoperative or would it cause the dome
light to remain on?
Describing the Cause of the Failure
The team anticipates the cause of the failure. Would poor wire insulation cause the
short? Would a sharp sheet metal edge cut through the insulation and cause the
short? The team is analyzing what conditions can bring about the failure mode. The
more specific the responses are, the better the outcome of the FMEA.
The purpose of a design FMEA (DFMEA) is to analyze and/or evaluate a design
on its ability to perform its functions (part characteristics). Therefore, the initial
assumption in determining causes is that parts are made and assembled according
to plan and in compliance with specifications, including purchased materials, components, and services. Then and only then, determine causes due to purchased
materials, components, and services.
Some cause examples include:
Brittle material
Weak fastener
Corrosion
Low hardness
Too small of a gap
Wrong bend angle
Stress concentration
Ribs too thin
Wrong material selection
Poor stitching design
High G forces
Part interference
Tolerance stack-up
Vibration
Oxidation
And so on
Estimating the Frequency of Occurrence of Failure
The team must estimate the probability that the given failure is going to occur. The
team is assessing the likelihood of occurrence, based on its knowledge of the system,
using an evaluation scale of 1 to 10. A 1 would indicate a low probability of
occurrence whereas a 10 would indicate a near certainty of occurrence.
Estimating the Severity of the Failure
In estimating the severity of the failure, the team is weighing the consequence
of the failure. The team uses the same 1 to 10 evaluation scale. A 1 would indicate
a minor nuisance, while a 10 would indicate a severe consequence such as “loss of
brakes” or “stuck at wide open throttle” or “loss of life.”
Identifying System and Design Controls
Generally, these controls consist of tests and analyses that detect failure modes or
causes during early planning and system design activities. Good system controls
detect faults or weaknesses in system designs. Design controls consist of tests and
analyses that detect failure causes or failure modes during design, verification, and
validation activities. Good design controls detect faults or weaknesses in component
designs.
Special notes:
• Just because there is a current control in place that does not mean that it
is effective. Make sure the team reviews all the current controls, especially
those that deal with inspection or alarms.
• To be effective (proactive), system controls must be applied throughout
the pre-prototype phase of the Advanced Product Quality Planning
(APQP) process.
• To be effective (proactive), design controls must be applied throughout
the pre-launch phase of the APQP process.
• To be effective (proactive), process controls should be applied during the
post-pilot build phase of APQP and continue during the production phase.
If they are applied only after production begins, they serve as reactive
plans and become very inefficient.
Examples of system and design controls include:
Engineering analysis
• Computer simulation
• Mathematical modeling/CAE/FEA
• Design reviews, verification, validation
• Historical data
• Tolerance stack studies
• Engineering reviews, etc.
System/component level physical testing
• Breadboard, analog tests
• Alpha and beta tests
• Prototype, fleet, accelerated tests
• Component testing (thermal, shock, life, etc.)
• Life/durability/lab testing
• Full scale system testing (thermal, shock, etc)
• Taguchi methods
• Design reviews
Estimating the Detection of the Failure
The team is estimating the probability that a potential failure will be detected before
it reaches the customer. Again, the 1 to 10 evaluation scale is used. A 1 would
indicate a very high probability that a failure would be detected before reaching the
customer. A 10 would indicate a very low probability that the failure would be
detected, and therefore, be experienced by the customer. For instance, an electrical
connection left open preventing engine start might be assigned a detection number
of 1. A loose connection causing intermittent no-start might be assigned a detection
number of 6, and a connection that corrodes after time causing no start after a period
of time might be assigned a detection number of 10.
Detection is a function of the current controls. The better the controls, the more
effective the detection. It is very important to recognize that inspection is not a very
effective control because it is a reactive task.
Calculating the Risk Priority Number
The product of the estimates of occurrence, severity, and detection forms a risk
priority number (RPN). This RPN then provides a relative priority of the failure
mode. The higher the number, the more serious the failure mode is considered.
From the risk priority numbers, a critical items summary can be developed to
highlight the top priority areas where actions must be directed.
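The ranking step is purely mechanical; a Python sketch using the severity, occurrence, and detection values from the pen example in Figure 6.17:

```python
# (cause, severity, occurrence, detection) from the pen FMEA example
causes = [
    ("Ball housing I.D. deformed",       8, 2, 2),
    ("Ink viscosity too high",           8, 9, 10),
    ("Debris build-up",                  8, 5, 7),
    ("Inconsistent ball rolling",        7, 2, 10),
    ("Ball does not always pick up ink", 7, 7, 10),
    ("Housing I.D. variation",           7, 1, 10),
]

# RPN = severity x occurrence x detection; sort high to low for the summary
summary = sorted(((s * o * d, cause) for cause, s, o, d in causes), reverse=True)
for rpn, cause in summary:
    print(rpn, cause)
# 720 Ink viscosity too high comes first, then 490, 280, 140, 70, 32
```

The top entries of such a list are the critical items that warrant action first.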
Recommending Corrective Action
The basic purpose of an FMEA is to highlight the potential failure modes so that
the responsible engineer can address them after this identification phase. It is imperative that the team provide sound corrective actions or provide impetus for others
to take sound corrective actions. The follow-up aspect is critical to the success of
this analytical tool. Responsible parties and timing for completion should be designated in all corrective actions.
Strategies for Lowering Risk: (System/Design) — High Severity or Occurrence
To reduce risk, you may change the product design to:
• Eliminate the failure mode cause or decouple the cause and effect
• Eliminate or reduce the severity of the effect
• Make the cause less likely or impossible to occur
• Eliminate function or eliminate part (functional analysis)
Some “tools” to consider:
• Quality Function Deployment (QFD)
• Fault Tree Analysis (FTA)
• Benchmarking
• Brainstorming
• TRIZ, etc.
Evaluate ideas using Pugh concept selection. Some specific examples:
• Change material, increase strength, decrease stress
• Add redundancy
• Constrain usage (exclude features)
• Develop fail-safe designs, early warning system
Strategies for Lowering Risk: (System/Design) — High Detection Rating
Change the evaluation/verification/tests to:
• Make failure mode easier to perceive
• Detect causes prior to failure
Some “tools” to consider:
• Benchmarking
• Brainstorming
• Process control (automatic corrective devices)
• TRIZ, etc.
Evaluate ideas using Pugh concept selection. Some specific examples:
• Change testing and evaluation procedures
• Increase failure feedback or warning systems
• Increase sampling in testing or instrumentation
• Increase redundancy in testing
PROCESS FAILURE MODE AND EFFECTS ANALYSIS (FMEA)
The Process Failure Mode and Effects Analysis (process FMEA) is a method for
identifying potential or known processing failure modes and providing problem
follow-up and corrective actions.
OBJECTIVE
The process FMEA is a disciplined analysis of the manufacturing process with the
intent to identify and correct any known or potential failure modes before the first
production run occurs. Once these failure modes are identified and the cause and
effects are determined, each failure mode is then systematically ranked so that the
most severe failure modes receive priority attention. The completion of the process
FMEA is the responsibility of the individual product process engineer. This individual process engineer is the most knowledgeable about the process structure and can
best anticipate the failure modes and their effects and address the corrective actions.
TIMING
The process FMEA is initiated during the early planning stages of the process before
machines, tooling, facilities, etc., are purchased. The process FMEA is continually
updated as the process becomes more clearly defined. The process FMEA must be
totally completed prior to the first production run.
REQUIREMENTS
The requirements for a process FMEA are as follows:
1. Form team
2. Complete the process FMEA form
3. FMEA risk ranking guidelines
DISCUSSION
The effectiveness of an FMEA on a process is dependent on certain key steps in the
analysis, including the following:
Forming the Team
A typical team for the process/assembly FMEA is the following:
• Design engineer
• Manufacturing or process engineer
• Quality engineer
• Reliability engineer
• Tooling engineer
• Responsible operators from all shifts
• Supplier
• Customer
A design engineer, a manufacturing engineer, and representative operators are
required to be team members. Others may participate as needed or as the project
calls for their knowledge or experience. The leader for the process FMEA is typically
the process or manufacturing engineer.
Describing the Process Function
The team must identify the process or machine and describe its function. The team
members should ask of themselves, “What is the purpose of this operation?” State
concisely what should be accomplished as a result of the process being performed.
Typically, there are three areas of concern. They are:
1. Creating/constructing functions: These are the functions that add value to
the product. Examples include cutting, forming, painting, drying, etc.
2. Improving functions: These are the functions that are needed in order to
improve the results of the creating function. Examples include deburring,
sanding, cleaning, etc.
3. Measurement functions: These are functions that measure the success of
the other functions. Examples include SPC, gauging, inspections, etc.
Manufacturing Process Functions
Just as products have functions, manufacturing processes also have functions. The
goal is to concisely list the function(s) for each process operation. The first step in
improving any process is to make the current process visible by developing a process
flow diagram (a sequential flow of operations by people and/or equipment). This
helps the team understand, agree, and define the scope. Three important questions
exist for any existing process:
1. What do you think is happening?
2. What is actually happening?
3. What should be happening?
Special reminder for manufacturing process functions: Remember, if the process
flow diagram is too extensive for a “timely” FMEA, a risk assessment may be done
on each process operation to narrow the scope.
The PFMEA Function Questions
Each manufacturing step typically has one or more functions. Determine what
functions are associated with each manufacturing process step and then ask:
1. What does the process step do to the part?
2. What are you doing to the part/assembly?
3. What is the goal, purpose, or objective of this process step?
For example, consider the pen assembly process (see Figure 6.22), which involves
the following steps:
1. Inject ink into ink tube (0.835 cc)
2. Insert ink tube into tip assembly housing (12 mm)
3. Insert tip assembly into tip assembly housing (full depth until stop)
4. Insert tip assembly housing into barrel (full depth until stop)
5. Insert end cap into barrel (full depth until stop)
6. Insert barrel into cap (full depth until stop)
7. Move to dock (to dock within 8 seconds)
8. Package and ship (12 pens per box)
Note: At the end of this function analysis you are ready to transfer the information to the FMEA form.
Remember that another way to reduce the complexity or scope of the FMEA is
to prioritize the list of functions and then take only the ones that the team collectively
agrees are the biggest concerns.
Describing the Failure Mode Anticipated
The team must pose the question to itself, “How could this process fail to complete
its intended function? Could the resulting workpiece be oversize, undersize, rough,
eccentric, misassembled, deformed, cracked, open, shorted, leaking, porous, damaged, omitted, misaligned, out of balance, etc.?” The team members are trying to
anticipate how the workpiece might fail to meet engineering requirements; at this
point in their analysis they should stress how it could fail and not whether it will fail.
The figure depicts the assembly flow: ink and ink tube enter at "inject ink into ink tube"; the filled tube is inserted into the tip assembly; the tip assembly is inserted into the tip assembly housing; the housing is inserted into the barrel; the end cap is inserted into the barrel; the barrel is inserted into the cap; and the pen moves to the dock and is packaged and shipped.

FIGURE 6.22 Pen assembly process.
The purpose of a process FMEA (PFMEA) is to analyze and evaluate a process
on its ability to perform its functions. Therefore, the initial assumptions are:
1. The design intent meets all customer requirements.
2. Purchased materials and components comply with specifications.
Once failure modes are determined under these assumptions, then determine
other failure modes due to:
1. Design flaws that cause or lead to process problems
2. Problems with purchased materials, components, or services
Describing the Effect(s) of the Failure
The team must describe the effect of the failure on the component or assembly. What
will happen as a result of the failure mode described? Will the component or
assembly be inoperative, intermittently operative, always on, noisy, inefficient, surging, not durable, inaccurate, etc.? After considering the failure mode, the engineer
determines how this will manifest itself in terms of the component or assembly
function. The open circuit causes an inoperative gage. The rough surface will cause
excessive bushing wear. The scratched surface will cause noise. The porous casting
will cause external leaks. The cold weld will cause reduced strength, etc. In some
cases the process engineer (the leader) must interface with the product design
engineer to correctly describe the effect(s) of a potential process failure on the
component or total assembly.
Describing the Cause(s) of the Failure
The engineer anticipates the cause of the failure. The engineer is describing what
conditions can bring about the failure mode. Locators are not flat and parallel. The
handling system causes scratches on a shaft. Inadequate venting and gaging can
cause misruns, porosity, and leaks. Inefficient die cooling causes die hot spots.
Undersize condition can be caused by heat treat shrinkage, etc.
The purpose of a process FMEA (PFMEA) is to analyze or evaluate a process
on its ability to perform its functions (part characteristics). Therefore, the initial
assumptions in determining causes are:
• The design intent meets all customer requirements.
• Purchased materials, components, and services comply with specifications.
Then and only then, determine causes due to:
• Design flaws that cause or lead to process problems
• Problems with purchased materials, components, or services
Typical causes associated with process FMEA include:
Fatigue
Poor surface preparation
Improper installation
Low torque
Improper maintenance
Inadequate clamping
Misuse
High RPM
Abuse
Inadequate venting
Unclear instructions
Tool wear
Component interactions
Overheating
And so on
Estimating the Frequency of Occurrence of Failure
The team must estimate the probability that the given failure mode will occur. This
team is assessing the likelihood of occurrence, based on their knowledge of the
process, using an evaluation scale of 1 to 10. A 1 would indicate a low probability
of occurrence, whereas a 10 would indicate a near certainty of occurrence.
Estimating the Severity of the Failure
In estimating the severity of the failure, the team is weighing the consequence (effect)
of the failure. The team uses the same 1 to 10 evaluation scale. A 1 would indicate
a minor nuisance, while a 10 would indicate a severe consequence such as “motor
inoperative, horn does not blow, engine seizes, no drive, etc.”
Identifying Manufacturing Process Controls
Manufacturing process controls consist of tests and analyses that detect causes or
failure modes during process planning or production. Manufacturing process controls
can occur at the specific operation in question or at a subsequent operation. There
are three types of process controls, those that:
1. Prevent the cause from happening
2. Detect causes then lead to corrective actions
3. Detect failure modes then lead to corrective actions
Manufacturing process controls should be based on process dominance factors.
Dominance factors are the process elements that generate significant variation and
contribute most to problems in a process. Most processes have one or two dominant
sources of variation. Depending on the source, different tools may be used to track
and monitor them; Table 6.8 cross-references the dominance factors with the tools
that may be used for tracking them. The following list provides some very common
dominance factors:
• Setup
• Machine
• Operator
• Component or material
• Tooling
• Preventive maintenance
• Fixture/pallet/work holding
• Environment
Special note: Controls should target the dominant sources of variation.
Manufacturing process control examples include:
TABLE 6.8
Manufacturing Process Control Matrix

Dominance Factor | Attribute Data | Variable Data
Setup | Check sheet; Checklist; p or c chart | Check sheet; X-bar/R chart; X-MR chart; Run chart
Machine | Check sheet | X-bar/R chart; X-MR chart
Operator | Check sheet; Run chart | X-bar/R chart; X-MR chart
Component/material | Check sheet; Supplier information | Supplier information; Capability study; X-MR chart
Tool | Check sheet; p or c chart; Tool logs | Tool logs; X-MR chart
Preventive maintenance | Time to failure chart; Supplier information | Time to failure chart
Fixture/pallet/work holding | Check sheet; p or c chart; Supplier information | X-bar/R chart; X-MR chart; Time to failure chart
Environment | Check sheet; Time to failure chart | Run chart; X-MR chart
Statistical Process Control (SPC)
• X-bar/R control charts (variable data)
• Individual X-moving range charts (variable data)
• p; n; u; c charts (attribute data)
Non-statistical control
• Check sheets, checklists, setup procedures, operational definitions/
instruction sheets
• Preventive maintenance
• Tool usage logs/change programs (PM)
• Mistake proofing/error proofing/Poka Yoke
• Training and experience
• Automated inspection
• Visual inspection
It is very important to recognize that inspection is not a very effective control
because it is a reactive task.
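For the variable-data SPC charts listed above, the control limit arithmetic can be sketched as follows. This is an illustrative sketch, not from the text; the subgroup data are invented, and A2, D3, D4 are the standard SPC constants for subgroups of size 5.

```python
# Sketch: X-bar/R chart control limits from subgroup data.
# A2, D3, D4 are the standard SPC constants for subgroup size n = 5.
A2, D3, D4 = 0.577, 0.0, 2.114

def xbar_r_limits(subgroups):
    """Return (LCL, center, UCL) for the X-bar chart and the R chart."""
    xbars = [sum(g) / len(g) for g in subgroups]
    ranges = [max(g) - min(g) for g in subgroups]
    xbarbar = sum(xbars) / len(xbars)   # grand average
    rbar = sum(ranges) / len(ranges)    # average range
    return {
        "xbar": (xbarbar - A2 * rbar, xbarbar, xbarbar + A2 * rbar),
        "r": (D3 * rbar, rbar, D4 * rbar),
    }

# Three hypothetical subgroups of five measurements each:
subgroups = [[9.9, 10.1, 10.0, 10.2, 9.8],
             [10.0, 10.3, 9.9, 10.1, 10.0],
             [9.8, 10.0, 10.2, 10.1, 9.9]]
limits = xbar_r_limits(subgroups)
```

Points falling outside these limits signal that a dominant source of variation (setup, machine, operator, and so on) deserves investigation.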
Estimating the Detection of the Failure
The detection is directly related to the controls available in the process. So the better
the controls, the better the detection. The team in essence is estimating the probability
that a potential failure will be detected before it reaches the customer. The team
members use the 1 to 10 evaluation scale. A 1 would indicate a very high probability
that a failure would be detected before reaching the customer. A 10 would indicate
a very low probability that the failure would be detected, and therefore, be experienced by the customer. For instance, a casting with a large hole would be readily
detected and would be assessed as a 1. A casting with a small hole causing leakage
between two channels only after prolonged usage would be assigned a 10. The team
is assessing the chances of finding a defect, given that the defect exists.
Calculating the Risk Priority Number
The product of the estimates of occurrence, severity, and detection forms a risk
priority number (RPN). The RPN provides a relative priority for the failure mode:
the higher the number, the more serious the failure mode is considered.
From the risk priority numbers, a critical items summary can be developed to
highlight the top priority areas where actions must be directed.
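The RPN arithmetic and the critical items summary can be sketched as follows; the failure modes and their ratings here are hypothetical, for illustration only.

```python
# Sketch: compute RPN = severity x occurrence x detection and rank
# failure modes highest-risk first (a critical items summary).
failure_modes = [
    # (description, severity, occurrence, detection) -- hypothetical ratings
    ("casting porosity causes leak", 7, 4, 8),
    ("mislocated hole", 5, 6, 2),
    ("weld spatter on surface", 3, 5, 4),
]

def critical_items(modes):
    """Return (description, RPN) pairs sorted by descending RPN."""
    ranked = [(desc, s * o * d) for desc, s, o, d in modes]
    return sorted(ranked, key=lambda item: item[1], reverse=True)

for desc, rpn in critical_items(failure_modes):
    print(f"RPN {rpn:4d}  {desc}")
```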
Recommending Corrective Action
The basic purpose of an FMEA is to highlight the potential failure modes so that
the engineer can address them after this identification phase. It is imperative that
the engineer provide sound corrective actions or provide impetus for others to take
sound corrective actions. The follow-up aspect is critical to the success of this
analytical tool. Responsible parties and timing for completion should be designated
in all corrective actions.
Strategies for Lowering Risk: (Manufacturing) — High Severity or Occurrence
Change the product or process design to:
• Eliminate the failure cause or decouple the cause and effect
• Eliminate or reduce the severity of the effect (recommend changes in
design)
Some “tools” to consider:
• Benchmarking
• Brainstorming
• Mistake proofing
• TRIZ, etc.
Evaluate ideas using Pugh concept selection. Some specific examples:
• Developing a robust design (insensitive to manufacturing variations)
• Changing process parameters (time, temperature, etc.)
• Increasing redundancy, adding process steps
• Altering process inputs (materials, components, consumables)
• Using mistake proofing (Poka Yoke), reducing handling
Strategies for Lowering Risk: (Manufacturing) — High Detection Rating
Change the process controls to:
• Make failure mode easier to perceive
• Detect causes prior to failure mode
Some “tools” to consider:
• Benchmarking
• Brainstorming, etc.
Evaluate ideas using Pugh concept selection. Some specific examples:
• Change testing and inspection procedures/equipment.
• Improve failure feedback or warning systems.
• Add sensors/feedback or feed forward systems.
• Increase sampling and/or redundancy in testing.
• Alter decision rules for better capture of causes and failures (i.e., more
sophisticated tests).
At this stage, you are ready to enter on the FMEA form a brief description of the
recommended actions, including the department and individual responsible for
implementation as well as both the target and finish dates. If the risk is low and
no action is required, write “no action needed.”
For each entry that has a designated characteristic in the class[ification] column,
review the issues that impact cause/occurrence, detection/control, or failure mode.
Generate recommended actions to reduce risk. Special RPN patterns suggest that certain
characteristics/root causes are important risk factors that need special attention.
Guidelines for process control system:
1. Select the process.
2. Conduct the FMEA on the process.
3. Conduct gage system analysis.
4. Conduct process potential study.
5. Develop control plan.
6. Train operators in control methods.
7. Implement control plan.
8. Determine long-term process capability.
9. Review the system for continual improvement.
10. Develop audit system.
11. Institute improvement actions.
After FMEA:
1. Review the FMEA.
2. Highlight the high-risk areas based on the RPN.
3. Identify the critical and major characteristics based on your classification
criteria.
4. Ensure that a control plan exists and is being followed.
5. Conduct capability studies.
6. Work on processes that have a Cpk less than or equal to 1.33.
7. Work on processes that have a Cpk greater than 1.33 to reduce variation and
reach a Cpk greater than or equal to 2.0.
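The Cpk screening in steps 6 and 7 can be sketched as follows; the data and specification limits are illustrative, and the 1.33/2.0 cutoffs are the working rules from the list above.

```python
import statistics

def cpk(data, lsl, usl):
    """Cpk = min(USL - mean, mean - LSL) / (3 * sigma)."""
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)   # sample standard deviation
    return min(usl - mu, mu - lsl) / (3 * sigma)

def disposition(cpk_value):
    """Apply the 1.33 / 2.0 working rules from the text."""
    if cpk_value <= 1.33:
        return "improve capability first"
    if cpk_value < 2.0:
        return "reduce variation toward Cpk >= 2.0"
    return "capable; maintain controls"

value = cpk([9.9, 10.0, 10.1], lsl=9.0, usl=11.0)
```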
MACHINERY FMEA (MFMEA)
A machinery FMEA is a systematic approach that applies the traditional tabular
method to aid the thought process used by simultaneous engineering teams to identify
the machine’s potential failure modes, potential effects, and potential causes of the
potential failure modes and to develop corrective action plans that will remove or
reduce the impact of the potential failure modes. Generally, the delivery of an
MFMEA is the responsibility of the supplier, who generates a functional MFMEA for
system and subsystem levels. This is in contrast to a DFMEA, where the responsibility
still rests with the supplier but the focus shifts to component-level items (transfer
mechanisms, spindles, switches, cylinders), exclusive of assembly-level equipment.
A typical MFMEA follows a hierarchical model in that it divides the machine
into subsystems, assemblies, and lowest replaceable units. For example:
Level 1: System level — generic machine
Level 2: Subsystem level — electrical, mechanical, controls
Level 3: Assembly level — fixtures/tools, material handling, drives
Level 4: Component level
And so on
IDENTIFY THE SCOPE OF THE MFMEA
Use the boundary diagram. Once the diagram has been completed, you can focus
the MFMEA on the areas with low MTBF and reliability values.
IDENTIFY THE FUNCTION
Define the function in terms of an active verb and a noun. Use a functional diagram
or the P diagram to find the ideal function. Always focus on the intent of the system,
subsystem, or component under investigation.
FAILURE MODE
A failure is an event in which the equipment/machinery is not capable of producing
parts at specified conditions when scheduled, or is not capable of producing parts
or performing scheduled operations to specification. Machinery failure modes can
occur in three ways:
• Component defect (hard failure)
• Failure observation (potential failure)
• Abnormality of performance (degradation severe enough that the equipment is
considered failed)
POTENTIAL EFFECTS
The consequence of a failure mode on the subsystem is described in terms of safety
and the big seven losses. (The big seven losses may be identified through warranty
or historical data.)
Describe the potential effects in terms of downtime, scrap, and safety issues. If
a functional approach is used, then list the causes first before developing the effects
listing. Associated with the potential effects is the severity, which is a rating corresponding to the seriousness of the effect of a potential machinery failure mode.
Typical descriptions are:
Downtime
• Breakdowns: Losses that result from a functional loss or function reduction
on a piece of machinery, requiring maintenance intervention.
• Setup and adjustment: Losses that result from setup procedures. Adjustments
include the amount of time production is stopped to adjust the process or
machine to avoid defect and yield losses, requiring operator or job setter
intervention.
• Startup losses: Losses that occur during the early stages of production
after extended shutdowns (weekends, holidays, or between shifts), resulting
in decreased yield or increased scrap and defects.
• Idling and minor stoppage: Losses that result from minor interruptions in
the process flow, such as a part jammed in a chute or a limit switch sticking,
requiring only operator or job setter intervention. Idling results from
process flow blockage (downstream of the focus operation) or starvation
(upstream of the focus operation). Idling can only be resolved by looking
at the entire line/system.
• Reduced cycle: Losses that result from differences between the ideal cycle
time of a piece of machinery and its actual cycle time.
Scrap
• Defective parts: Losses that are a result of process part quality defects
resulting in rework, repair, or scrap.
• Tooling: Losses that are a result of tooling failures/breakage or deterioration/wear (e.g., cutting tools, fixtures, welding tips, punches, etc.).
Safety
• Safety considerations: Immediate life or limb threatening hazard or
minor hazard.
SEVERITY RATING
Severity comprises three components:
• Safety of the machinery operator (primary concern)
• Product scrap
• Machinery downtime
A rating should be established for each effect listed. Rate the most serious effect.
Begin the analysis with the function of the subsystem that will affect safety, government regulations, and downtime of the equipment. A very important point here
is the fact that a reduction in severity rating may be accomplished only through a
design change. A typical rating is shown in Table 6.9.
It should be noted that these guidelines may be modified to reflect specific
situations. Also, the basis for the criteria may be changed to reflect the specificity
of the machine and its real world usage.
CLASSIFICATION
The classification column is not typically used in the MFMEA process but should
be addressed if related to safety or noncompliance with government regulations.
Address the failure modes with a severity rating of 9 or 10. Failure modes that affect
worker safety will require a design change. Enter “OS” in the class column. OS
(operator safety) means that this potential effect of failure is critical and needs to
be addressed by the equipment supplier. Other notations can be used but should be
approved by the equipment user.
POTENTIAL CAUSES
The potential causes should be identified as design deficiencies. These could translate as:
• Design variations, design margins, environmental, or defective components
• Variation during the build/install phases of the equipment that can be
corrected or controlled
Identify the first-level causes that produce the failure mode. Data for the development of the potential causes of failure can be obtained from:
• Surrogate MFMEA
• Failure logs
• Interface matrix (focusing on physical proximity, energy transfer, material,
information transfer)
• Warranty data
• Concern reports (things gone wrong, things gone right)
• Test reports
• Field service reports
TABLE 6.9
Machinery Guidelines for Severity, Occurrence, and Detection

Severity
Rank 10 (Hazardous without warning): Very high severity; affects operator, plant,
or maintenance personnel safety and/or involves noncompliance with government
regulations, without warning.
Rank 9 (Hazardous with warning): High severity; affects operator, plant, or
maintenance personnel safety and/or involves noncompliance with government
regulations, with warning.
Rank 8 (Very high): Downtime of 8+ hours, or the production of defective parts
for over 2 hours.
Rank 7 (High): Downtime of 2–4 hours, or the production of defective parts for
up to 2 hours.
Rank 6 (Moderate): Downtime of 60–120 min, or the production of defective parts
for up to 60 min.
Rank 5 (Low): Downtime of 30–60 min with no production of defective parts, or the
production of defective parts for up to 30 min.
Rank 4 (Very low): Downtime of 15–30 min with no production of defective parts.
Rank 3 (Minor): Downtime up to 15 min with no production of defective parts.
Rank 2 (Very minor): Process parameter variability not within specification
limits; adjustments may be done during production; no downtime and no defects
produced.
Rank 1 (None): Process parameter variability within specification limits;
adjustments may be performed during normal maintenance.

Occurrence (probability of failure, with reliability R(t) and alternate criteria)
Rank 10: Failure occurs every hour; R(t) < 1% (or per MTBF); 1 in 1.
Rank 9: Failure occurs every shift; R(t) = 5%; 1 in 8.
Rank 8: Failure occurs every day; R(t) = 20%; 1 in 24.
Rank 7: Failure occurs every week; R(t) = 37%; 1 in 80.
Rank 6: Failure occurs every month; R(t) = 60%; 1 in 350.
Rank 5: Failure occurs every 3 months; R(t) = 78%; 1 in 1000.
Rank 4: Failure occurs every 6 months; R(t) = 85%; 1 in 2500.
Rank 3: Failure occurs every year; R(t) = 90%; 1 in 5000.
Rank 2: Failure occurs every 2 years; R(t) = 95%; 1 in 10,000.
Rank 1: Failure occurs every 5 years; R(t) = 98%; 1 in 25,000.

Detection
Rank 10: Present design controls cannot detect a potential cause, or no design
control is available.
Rank 8–9 (Very low): Team's discretion, depending on machine and situation.
Rank 7 (Low): Machinery control will isolate the cause and failure mode after
the failure has occurred, but will not prevent the failure from occurring.
Rank 6: Team's discretion, depending on machine and situation.
Rank 5 (Medium): Machinery controls will provide an indicator of imminent failure.
Rank 4: Team's discretion, depending on machine and situation.
Rank 3 (High): Machinery controls will prevent an imminent failure and isolate
the cause.
Rank 2: Team's discretion, depending on machine and situation.
Rank 1 (Very high): Machinery controls not required; design controls will detect
a potential cause and subsequent failure almost every time.
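The occurrence column of Table 6.9 can be encoded as a small lookup. The thresholds follow the R(t) percentages in the table; the function itself is only an illustrative sketch.

```python
# Sketch: map a reliability value R(t), in percent, to the Table 6.9
# occurrence rank (higher rank = more frequent failure).
THRESHOLDS = [  # (minimum R(t) %, occurrence rank)
    (98, 1), (95, 2), (90, 3), (85, 4), (78, 5),
    (60, 6), (37, 7), (20, 8), (5, 9),
]

def occurrence_rank(rt_percent):
    """Return the occurrence rank for a given R(t) percentage."""
    for minimum, rank in THRESHOLDS:
        if rt_percent >= minimum:
            return rank
    return 10  # R(t) below 5 percent: failures essentially every cycle
```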
OCCURRENCE RATINGS
Occurrence is the rating corresponding to the likelihood of the failure mode occurring
within a certain period of time — see Table 6.9. The following should be considered
when developing the occurrence ratings:
• Each cause listed requires an occurrence rating.
• Controls can be used that will prevent or minimize the likelihood that the
failure cause will occur but should not be used to estimate the occurrence
rating.
Data to establish the occurrence ratings should be obtained from:
• Service data
• MTBF data
• Failure logs
• Maintenance records
• Surrogate MFMEAs
Current Controls
Current controls are described as being those items that will be able to detect the
failure mode or the causes of failure. Controls can be either design controls or
process controls.
A design control is based on tests or other mechanisms used during the design
stage to detect failures. Process controls are those used to alert the plant personnel
that a failure has occurred. Current controls are generally described as devices to:
• Prevent the cause/mechanism failure mode from occurring
• Reduce the rate of occurrence of the failure mode
• Detect the failure mode
• Detect the failure mode and implement corrective design action
Detection Rating
Detection rating is the method used to rate the effectiveness of the control to
detect the potential failure mode or cause. The scale for ranking these methods
is based on a 1 to 10 scale — see Table 6.9.
RISK PRIORITY NUMBER (RPN)
The RPN is a method used by the MFMEA team to rank the various failure modes
of the equipment. This ranking allows the team to attack the highest probability of
failure and remove it before the equipment leaves the supplier floor.
The RPN typically:
• Has no value or meaning (Ratings and RPNs in themselves have no value
or meaning. They should be used only to prioritize the machine’s potential
design weakness [failure mode] for consideration of possible design
actions to eliminate the failures or make them maintainable.)
• Is used to prioritize potential design weaknesses (root causes) for consideration of possible design actions
• Is the product of severity, occurrence, and detection (RPN = S × O × D)
Special note on risk identification: While it is true that most organizations
using FMEA guidelines use the RPN for identifying the risk priority, some do not
follow that path. Instead, they use a three-path approach based on:
Step 1: severity
Step 2: criticality
Step 3: detection
This means that regardless of the RPN, the priority is based on the highest
severity first, especially if it is a 9 or a 10, followed by the criticality, which is the
product of severity and occurrence, and then the RPN.
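Under assumed ratings, the three-path ordering (severity first, then criticality S × O, then RPN) can be sketched as follows; the modes are hypothetical.

```python
# Sketch: prioritize failure modes by severity, then criticality (S*O),
# then RPN (S*O*D), per the three-path approach described above.
modes = [
    # (name, severity, occurrence, detection) -- hypothetical ratings
    ("mode A", 9, 2, 3),
    ("mode B", 6, 8, 7),
    ("mode C", 9, 4, 2),
]

def three_path_priority(items):
    return sorted(
        items,
        key=lambda m: (m[1], m[1] * m[2], m[1] * m[2] * m[3]),
        reverse=True,
    )

ranked = three_path_priority(modes)
```

Note that mode B has by far the highest RPN (6 × 8 × 7 = 336) yet ranks last, because the other two modes carry a severity of 9.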
RECOMMENDED ACTIONS
• Each RPN value should have a recommended action listed.
• The actions are designed to reduce severity, occurrence, and detection
ratings.
• Actions should address in order the following concerns:
• Failure modes with a severity of 9 or 10
• Failure mode/cause that has a high severity × occurrence (criticality) rating
• Failure mode/cause/design control that has a high RPN rating
• When a failure mode/cause has a severity rating of 9 or 10, the design
action must be considered before the engineering release to eliminate
safety concerns.
DATE, RESPONSIBLE PARTY
• Document the person, department, and date for completion of the recommended action.
• Always place the responsible party’s name in this area.
ACTIONS TAKEN/REVISED RPN
• After each action has been taken, document the action.
• Results of an effective MFMEA will reduce or eliminate equipment downtime.
• The supplier is responsible for updating the MFMEA. The MFMEA is a
living document. It should reflect the latest design level and latest design
actions.
• Any equipment design changes need to be communicated to the MFMEA
team.
REVISED RPN
• Recalculate S, O, and D after the action taken has been completed. Always
remember that only a change in design can change the severity. Occurrence
may be changed by a design change or a redundant system. Detection
may be changed by a design change or better testing or better design
control.
• The MFMEA team needs to review the new RPN and determine whether additional
design actions are necessary.
SUMMARY
In summary, the steps in conducting the FMEA are as follows:
1. Select a project and scope.
2. If DFMEA, construct a block diagram.
3. If PFMEA, construct a process flow diagram.
4. Select an entry point based on the block or process flow diagram.
5. Collect the data.
6. Analyze the data.
7. Calculate results (results must be data driven).
8. Evaluate/confirm/measure the results:
• Better off
• Worse off
• Same as before
9. Do it all over again.
SELECTED BIBLIOGRAPHY
Chrysler Corporation, Ford Motor Company, and General Motors Corporation, Potential
Failure Mode and Effect Analysis (FMEA) Reference Manual, 2nd ed., distributed by
the Automotive Industry Action Group (AIAG), Southfield, MI, 1995.
Chrysler Corporation, Ford Motor Company, and General Motors Corporation, Advanced
Product Quality Planning and Control Plan, distributed by the Automotive Industry
Action Group (A.I.A.G.), Southfield, MI, 1995.
Chrysler Corporation, Ford Motor Company, and General Motors Corporation, Potential
Failure Mode and Effect Analysis (FMEA) Reference Manual, 3rd ed., distributed by
the Automotive Industry Action Group (AIAG), Southfield, MI, 2001.
The Engineering Society for Advancing Mobility Land Sea Air and Space, Potential Failure
Mode and Effects Analysis in Design FMEA and Potential Failure Mode and Effects
Analysis in Manufacturing and Assembly Processes (Process FMEA) Reference Manual, SAE: J1739, The Engineering Society for Advancing Mobility Land Sea Air and
Space, Warrendale, PA, 1994.
The Engineering Society for Advancing Mobility Land Sea Air and Space, Reliability and
Maintainability Guideline for Manufacturing Machinery and Equipment, SAE Practice Number M-110, The Engineering Society for Advancing Mobility Land Sea Air
and Space, Warrendale, PA, 1999.
Ford Motor Company, Failure Mode Effects Analysis: Training Reference Guide, Ford Motor
Company — Ford Design Institute. Dearborn, MI, 1998.
Kececioglu, D., Reliability Engineering Handbook, Vol. 1–2, Prentice Hall, Englewood Cliffs,
NJ, 1991.
Stamatis, D.H., Advanced Quality Planning, Quality Resources, New York, 1998.
Stamatis, D.H., Failure Mode and Effect Analysis: FMEA from Theory to Execution, Quality
Press, Milwaukee, 1995.
7
Reliability
Reliability n — may be relied on; trustworthiness, authenticity, consistency; infallibility, suggesting the complete absence of error, breakdown, or poor performance.
In other words, when we speak of a reliable product, we usually expect such
adjectives as dependable and trustworthy to apply. But to measure product reliability,
we must have a more exact definition. The definition of reliability then, is: the
probability that a product will perform its intended function in a satisfactory manner
for a specified period of time when operating under specified conditions.
Thus, the reliability of a system expresses the length of failure-free time that
can be expected from the equipment. Higher levels of reliability mean less failure
of the system and consequently less downtime. To measure reliability it is necessary
to:
• Relate probability to a precise definition of success or satisfactory performance
• Specify the time base or operating cycles over which such performance
is to be sustained
• Specify the environmental or use conditions that will prevail
Note: Theoretically, every product has a designed-in reliability function. This
reliability function (or curve) expresses the system reliability at any point in time.
As time increases the curve must drop, eventually reaching zero.
PROBABILISTIC NATURE OF RELIABILITY
We cannot say exactly when a particular product will fail, but we can say what
percentage of the products in use will fail by certain times. This is analogous to the
reasoning used by insurance companies in defining mortality. We can state reliability
in various ways:
• The probability that a product will be performing its intended function at
5000 hours of use is 0.95.
• The reliability at 5000 hours is 0.95 or 95%.
• If we place 1000 units in use, 950 will still be operating with no failures
at 5000 hours.
Or to cite another example:
• The reliability at 8000 hours is 0.80.
• The unreliability at 8000 hours is 0.20.
From a service point of view, we may be interested in repair frequency and then
we say that 20% of the units will have to be repaired by 8000 hours. Or the repair
per hundred units (R/100) is 20 at 8000 hours. The important point is that the
reliability is a metric expressing the probability of maintaining intended function
over time and is measurable as a percentage.
PERFORMING THE INTENDED FUNCTION SATISFACTORILY
A product fails when it ceases to function in a way that is satisfactory to the customer.
Products rarely fail suddenly in the way that a light bulb does. Rather, they deteriorate
over time. This eventually leads to unsatisfactory performance from the customer’s
standpoint. Unsatisfactory performance can result from:
• Excess vibrations
• Excess noise
• Intermittent operation
• Drift
• Catastrophic failure
• And many other possibilities
Unsatisfactory performance must be clearly spelled out. The customer’s perspective must be recognized in this process. There will usually be various levels of
failure based on the customer’s perceived level of severity. The levels of severity
are frequently grouped into two categories such as:
• Major
• Minor
The severity of the failure to the customer must be documented and recognized
in a Failure Definition and Scoring Criterion that precisely delineates how each
incident on a system or equipment will be handled in regards to reliability and
maintainability calculations. Such documents should be developed early in a design
and development program so that all concerned are aware of the consequences of
incidents that occur during product testing and in field use.
The design team must be able to use the failure definition and scoring criterion
to address product trade-offs. If the severity of a failure to the customer can be
lowered by design changes, the failure definition and scoring criterion should promote such trade-offs.
SPECIFIED TIME PERIOD
Products deteriorate with use and even with age when dormant. Longer lengths of
usage imply lower reliability. For design purposes, target usage periods must be
identified. Typical usage periods are:
1. Warranty period(s): A warranty is a contract supplied with the product
providing the user with a certain amount of protection against product
failure.
2. Expected customer life: Customers have a reasonably consistent belief as
to how long a product should last. This belief can be determined through
a market survey.
3. Durability life: This is a measure of useful life, defining the number of
operating hours (or cycles) until overhaul is required.
SPECIFIED CONDITIONS
Different environments promote different failure modes and different failure rates
for a product. The environmental factors that the product will encounter must be
clearly defined. The levels (and rate of change) at which we want to address these
environmental factors must also be defined.
ENVIRONMENTAL CONDITIONS PROFILE
The environmental profile must include the level and rate of change for each environmental factor considered. Environmental factors include but are not limited to:
• Temperature
• Humidity
• Vibration
• Shock
• Corrosive materials
• Immersion
• Pressures, vacuum
• Salt spray
• Dust
• Cement floors/basements
• Ice/snow
• Lubricants
• Perfumes
• Magnetic fields
• Nuclear radiation
• Weather
• Contamination
• Antifreeze
• Gasoline fumes
• Rust inhibitors/under coatings
• Rain
• Soda pop/hot coffee
• Sunlight
• Electrical discharges
• And so on
Not all of these environmental conditions would be appropriate for a particular
product. Each product must be considered in its individual operating environment
and scenario. Environment must consider the environment induced from operating
the product, the environment induced from external factors, and the environment
induced by delivering the product to the customer.
RELIABILITY NUMBERS
The reliability number attached to a product changes with:
• Usage and environmental conditions
• Customer’s perception of satisfactory performance
At any product age t, for a population of N products, the reliability at time t,
denoted R(t), is

R(t) = Number of survivors/N = 1 – (Number of failures/N) = 1 – Unreliability
This is the reliability of this population of products at time t. Real-world
estimation of reliability is usually much more difficult because products are
sold over time, each with a different usage profile. Calendar time is known, but
the accumulated life of each product is not, and warranty systems monitor and
record only failures.
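The survivor arithmetic above can be sketched as:

```python
def reliability(survivors, population):
    """R(t) = survivors / N = 1 - failures / N."""
    return survivors / population

def unreliability(failures, population):
    return failures / population

# 1000 units placed in service; 950 still operating at 5000 hours.
r = reliability(950, 1000)    # 0.95
f = unreliability(50, 1000)   # 0.05
```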
INDICATORS USED TO QUANTIFY PRODUCT RELIABILITY
Several metrics are in common use to indicate product reliability. Some of these
actually quantify unreliability. Some of the metrics follow:
• MTBF — The mean time between failures (also MTTF, MMBF, MCTF). MTBF = 120
hours means that, on average, a failure will occur with every 120 hours of
operation.
• Failure rate — The rate of failures per unit of operating time. λ = 0.05/hour
means that, on average, one failure will occur with every 20 hours of operation.
• R/100 (or R/1000) — The number of warranty claims per 100 (or 1000) products
sold. R/100 = 7 means that there are seven warranty claims for every 100
products sold.
• Reliability number — The reliability of the product at some specific time.
R = 90% means that 9 out of 10 products work successfully for the specified
time.
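The relationships among these indicators can be sketched as follows. The formula R(t) = exp(−λt) is an added assumption (a constant failure rate, i.e., exponential life), not something stated in the text.

```python
import math

mtbf = 120.0                  # hours; on average one failure per 120 hours
failure_rate = 1.0 / mtbf     # lambda, failures per operating hour

def reliability_at(t, lam):
    """R(t) under a constant failure rate (exponential) assumption only."""
    return math.exp(-lam * t)

# lambda = 0.05/hour corresponds to one failure per 20 hours on average.
hours_per_failure = 1.0 / 0.05   # 20.0

# R/100: warranty claims per 100 products sold.
claims, sold = 70, 1000
r_per_100 = 100.0 * claims / sold   # 7 claims per 100 sold
```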
RELIABILITY AND QUALITY
Customers and product engineers frequently use the terms reliability and quality
interchangeably. Ultimately, the customer defines quality. Customers want products
that meet or exceed their needs and expectations, at a cost that represents value.
This expectation of performance must be met throughout the customer’s expected
life for the particular product. Quality is usually recognized as a more encompassing
term including reliability. Some quality characteristics are:
Psychological
• Taste
• Beauty, style
• Status
Technological
• Hardness
• Vibration
• Noise
• Materials (bearings, belts, hoses, etc.)
Time-oriented
• Reliability
• Maintainability
Contractual
• Warranty
Ethical
• Honesty of repair shop
• Experience and integrity of sales force
PRODUCT DEFECTS
Quality defects are defined as those that can be located by conventional inspection
techniques. (Note: for legal reasons, it is better to identify these defects as nonconformances.) Reliability defects are defined as those that require some stress applied
over time to develop into detectable defects.
What causes product failure over time? Some possibilities are:
• Design
• Manufacturing
• Packaging
• Shipping
• Storage
• Sales
• Installation
• Maintenance
• Customer duty cycle
CUSTOMER SATISFACTION
The ultimate goal of a product is to satisfy the customer in all aspects of cost,
performance, reliability, and maintainability. The customer trades off these
parameters when making a decision to buy a product. Assuming that we are designing
a product for a certain market segment, cost is determined within limits. The
trade-offs are as follows:
1. Performance parameters are the designed-in system capabilities such as
acceleration, top speed, rate of metal removal, gain, ability to carry a
5-ton payload up a 40-degree grade without overheating, and so on.
2. The reliability of equipment expresses the length of failure-free time that
can be expected from the equipment. Higher levels of reliability mean
less failure of the equipment and consequently less downtime and loss of
use. Although we will attach reliability numbers to products, it should be
recognized that the customer’s perspective interprets reliability as the
ability of a product to perform its intended function for a given period of
time without failure. This concept of failure-free operation is becoming
more and more fixed in the mind of the customer. This is true whether
the customer is purchasing an automobile, a machine tool, a computer
system, a refrigerator, or an automatic coffee maker.
3. Maintainability is defined as the probability that a failed system is restored
to operable condition in a specified amount of downtime.
4. Availability is the probability that at any time, the system is either operating satisfactorily or is ready to be operated on demand, when used under
stated conditions. The availability might also be looked at as the ability
of equipment, under combined aspects of its reliability, maintainability,
and maintenance support, to perform its required function at a stated
instant of time. This availability includes the built-in equipment features
as well as the maintenance support function. Availability combines reliability and maintainability into one measure. There are different kinds of
availability that are calculated in different ways — see Von Alven (1964)
and ANSI/IEEE (1988). The most popular availabilities are achieved
availability and inherent availability.
a. Achieved availability includes all diagnostic, repair, administrative, and
logistic times. This availability is dependent on the maintenance support system. Achieved availability can be calculated as
A = Operating Time/(Operating Time + Unscheduled Time)
b. Inherent availability only includes operating time and active repair
time addressing the built-in capabilities of the equipment. Inherent
availability is calculated as
A = MTBF / (MTBF + MTTR)
FIGURE 7.1 Bathtub curve (failure rate vs. time: infant mortality, normal life, wear out).
where MTBF = mean time between failures and MTTR = mean time-to-repair (the active repair time only).
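As a rough illustration, the two availability measures can be computed side by side. The figures below are hypothetical, not taken from the text:

```python
# Sketch: inherent vs. achieved availability (hypothetical figures).
# Inherent availability uses only MTBF and MTTR (active repair time);
# achieved availability also absorbs unscheduled support time.

def inherent_availability(mtbf_hours, mttr_hours):
    """A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def achieved_availability(operating_hours, unscheduled_hours):
    """A = operating time / (operating time + unscheduled time)."""
    return operating_hours / (operating_hours + unscheduled_hours)

if __name__ == "__main__":
    # Hypothetical equipment: 400 h between failures, 2 h active repair.
    print(f"Inherent: {inherent_availability(400, 2):.4f}")   # 0.9950
    # Same equipment once diagnostic/logistic delays are added:
    # 10 h of unscheduled time per 400 h of operation.
    print(f"Achieved: {achieved_availability(400, 10):.4f}")  # 0.9756
```

As expected, achieved availability is lower than inherent availability, since it charges the maintenance support system's delays against the equipment.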
5. Active repair time is that portion of downtime when the technicians are
working on the system to repair the failure situation. It must be understood
that the different availabilities are defined for various time-states of the
system.
6. Serviceability is the ease with which machinery and equipment can be
repaired. Here repair includes diagnosis of the fault, replacement of the
necessary parts, tryout, and bringing the equipment back on line. Serviceability is somewhat qualitative and addresses the ease by which the equipment, as designed, can be diagnosed and repaired. It involves factors such
as accessibility to test points, ease of removal of the failed components,
and ease of bringing the system back on line.
PRODUCT LIFE AND FAILURE RATE
Let us assume that we have released a population of products to the marketplace.
The failure rate is observed as the products age. The shape of the failure rate is
referred to as a bathtub curve (see Figure 7.1). Here we have overemphasized the
different parts of the curve for illustration.
This bathtub curve has three distinct regions:
1. Infant mortality period: During the infant mortality period the population
exhibits a high failure rate, decreasing rapidly as the weaker products fail.
Some manufacturers provide a “burn-in” period for their products to help
eliminate infant mortality failures. Generally, infant mortality is associated
with manufacturing issues. Examples are:
• Poor welds
• Contamination
• Improper installation
• And so on
2. Useful life period: During this period the population of products exhibits
a relatively low and constant failure rate. It is explained using the stress–strength interference model for reliability. This model identifies the stress
distribution that represents the combined stressors acting on a system at
some point in time. The strength distribution represents the piece-to-piece
variability of components in the field. The interference area is indicative of
a potential failure when stresses exceed the strength of a component. In
other words, any failure in this period is a function of the designed-in
reliability. Examples are:
• Low safety factors
• Abuse
• Misapplication
• Product variability
• And so on
3. Wear out period: At the onset of wear out, the failure rate starts to increase
rapidly. When the failure rate becomes high, replacement or major repair
must be performed if the product is to be left in service. Wear out is due
to a number of forces such as:
• Frictional wear
• Chemical change
• Maintenance practices
• Fatigue
• Corrosion or oxidation
• And so on
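The stress–strength model behind the useful-life period can be sketched numerically. Assuming, purely for illustration, that stress and strength are independent and normally distributed, reliability is the probability that strength exceeds stress:

```python
# Stress-strength interference sketch: P(strength > stress) for independent
# normal stress and strength. The distributions below are invented, not
# from the text.
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """Reliability = Phi((mu_S - mu_s) / sqrt(sd_S^2 + sd_s^2))."""
    z = (mu_strength - mu_stress) / math.hypot(sd_strength, sd_stress)
    return normal_cdf(z)

if __name__ == "__main__":
    # Strength ~ N(50, 4), stress ~ N(38, 3), in some consistent unit.
    r = interference_reliability(50, 4, 38, 3)
    print(f"Reliability = {r:.4f}")  # z = 12/5 = 2.4 -> ~0.9918
```

A low safety factor corresponds to the two distributions overlapping more, which shrinks z and the computed reliability.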
In conjunction with the bathtub curve there are two more items of concern. The
first one is the hazard rate (or the instantaneous failure rate) and the second, the
ROCOF plot.
The hazard rate is the probability that the product will fail in the next interval
of time (or distance or cycles). It is assumed the product has survived up to that
time. For example, there is a one in twenty chance that it will crack, break, bend,
or fail to function in the next month. Typically, hazard rate is shown as
h(t) = f(t) / [1 − F(t)] = f(t) / R(t)

where h(t) = hazard rate; f(t) = probability density function [PDF: f(t) = λe^(−λt)]; F(t) =
cumulative distribution function [CDF: F(t) = 1 − e^(−λt)]; and R(t) = reliability at time
t [R(t) = 1 − F(t) = 1 − (1 − e^(−λt)) = e^(−λt)].
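For the exponential case given in brackets, the hazard rate works out to the constant λ, which is why the useful-life region of the bathtub curve is flat. A quick numerical check:

```python
# Check of the hazard-rate identities for the exponential distribution,
# where f(t) = lam * exp(-lam * t): h(t) should equal lam at every t.
import math

def pdf(t, lam):
    """f(t) = lam * e^(-lam*t)."""
    return lam * math.exp(-lam * t)

def cdf(t, lam):
    """F(t) = 1 - e^(-lam*t)."""
    return 1.0 - math.exp(-lam * t)

def reliability(t, lam):
    """R(t) = 1 - F(t) = e^(-lam*t)."""
    return math.exp(-lam * t)

def hazard(t, lam):
    """h(t) = f(t) / R(t)."""
    return pdf(t, lam) / reliability(t, lam)

if __name__ == "__main__":
    lam = 0.002  # failures per hour (illustrative)
    for t in (10, 100, 1000):
        print(t, hazard(t, lam))  # constant, equal to lam
```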
The rate of occurrence of failure (ROCOF) plot, on the other hand, is a visual tool
that helps the engineer analyze situations where a great deal of data has been
accumulated over time. Essentially, its purpose
is the same as that of the reliability bathtub curve, that is, to understand the life
stages of a product or process and take the appropriate action. A typical ROCOF
plot (for a warranty item) will display early-life (decreasing rate) and useful-life
(constant rate) performance. If wear out is detected, it should be investigated.
Knowing what is happening to a product from one region of the bathtub curve to
the next helps the engineer specify what failed hardware to collect and aids with
calibrating the severity of development tests.
If the number of failures is small, the ROCOF plot approach may be difficult to
interpret. When that happens, it is recommended that a “smoothing” approach be
taken. The typical smoothing methodology is to use log paper for the plotting.
Obviously, many more ways and more advanced techniques exist. It must be noted
here that most statistical software provides this smoothing as an option for the data
under consideration. See Volume III for more details on smoothing.
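A minimal ROCOF calculation can be sketched as interval counts of failures for a repairable system; the failure times below are invented for illustration:

```python
# Minimal ROCOF sketch: given cumulative failure times for one repairable
# system, count failures per fixed window. A falling rate suggests early
# life, a flat rate useful life, and a rising rate wear out. The times
# below are invented, not from the text.

def rocof(failure_times, interval):
    """Failures per unit time in consecutive windows of width `interval`."""
    end = max(failure_times)
    n_bins = int(end // interval) + 1
    counts = [0] * n_bins
    for t in failure_times:
        counts[int(t // interval)] += 1
    return [c / interval for c in counts]

if __name__ == "__main__":
    times = [5, 9, 14, 30, 70, 110, 150, 190, 230, 245, 255, 262]
    # High early rate, then roughly constant, then creeping up at the end.
    print(rocof(times, 50.0))
```

With few failures, the raw interval counts are noisy, which is exactly where the smoothing discussed above becomes necessary.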
PRODUCT DESIGN AND DEVELOPMENT CYCLE
Developing a product that can be manufactured economically and consistently to be
delivered to the marketplace in quantity and that will work satisfactorily for the
customer takes a well-established and precisely controlled design and development
cycle. Events must be scheduled to occur at precise times to phase the product into
the marketplace. To develop a new internal combustion engine for an automobile
takes about a three-year design cycle (down recently from five years), while a new
minicomputer takes about 18 months. Although the timing may be different for
different companies, the activities comprising a design and development cycle are
similar. The following is representative of the activities in a product development
cycle:
• Market research
• Forecast need.
• Forecast sales.
• Understand who the customer is and how the product will be used.
• Set broad performance objectives.
• Establish program cost objectives.
• Establish technical feasibility.
• Establish manufacturing capacity.
• Establish reliability and maintainability (R&M) requirements.
• Understand governmental regulations.
• Understand corporate objectives.
• Concept phase
• Formulate project team.
• Formulate design requirements.
• Establish real world customer usage profile.
• Develop and consider alternatives.
• Rank alternatives considering R&M requirements.
• Review quality and reliability history on past products.
• Assess feasibility of R&M requirements.
• Design phase
• Prepare preliminary design.
• Perform design calculations.
• Prepare rough drawings.
• Compare alternatives to pursue.
• Evaluate manufacturing feasibility of design approach (design for
manufacturability and assembly).
  • Complete detailed design.
  • Perform a design failure mode and effect analysis (FMEA).
  • Complete detailed design package.
  • Update FMEA to reflect current design and details.
  • Develop design verification plan.
  • Develop R&M model for product.
  • Estimate product R&M using current design approach.
• Prototype program
  • Build components and prototypes.
  • Write test plan.
  • Perform component/subsystem tests.
  • Perform system test.
  • Eliminate design weaknesses.
  • Estimate reliability using growth techniques.
• Manufacturing engineering
  • Process planning
  • Assembly planning
  • Capability analyses
  • Process FMEA
• Finalized design
  • Consider test results.
  • Consider manufacturing engineering inputs (design for manufacturability/assembly).
  • Make design changes.
• Freeze design
• Release to manufacturing
• Engineering changes
  • Manufacturing experience
  • Field experience
RELIABILITY IN DESIGN
The cost of unreliability includes:

• High warranty costs
• Field campaigns
• Loss of future sales
• Cost of added field service support
It has been demonstrated in the marketplace that highly reliable products (failure
free) gain market share. A very classic example of this is the American automotive
market. In the early 1960s, American manufacturers were practically the only game
in town, with GM capturing some 60% of the market. Since then, the market has
shifted steadily, to the point where Flint (2001) reports that GM now holds a shade
over 25% excluding trucks and Saab, Ford 14.7% excluding Volvo and Jaguar, and
Chrysler about 5%. The projections for the 2002 model year are
not any better with GM capturing only 25%, Ford 15%, and Chrysler 6%. The sad
part of the automotive scene is that GM, Ford, and DaimlerChrysler have lost market
share, and sales are continually nudging down with no end in sight. That is, as Flint
(2001, p. 21) points out, “they are not going to recover that market share, not in the
short term, not in the next five to ten years.”
The evidence suggests that the mission of a reliability program is to estimate,
track, and report the reliability of hardware before it is produced. The reliability of
the equipment must be reported at every phase of design and development in a
consistent and easy-to-understand format. Warranty cost is a heavy expense
resulting from poor manufacturing quality and inadequate reliability. For example,
Jacques Nasser, then chairman and chief executive of Ford Motor Company, stated
in the first-quarter 2001 leadership cascading meeting that in 1999 there were
2.1 times as many vehicles recalled as were sold. In 2000, there were
six times as many. By way of comparison:
• In 1994, according to an article in USA Today, the cost of warranty for a
Chrysler automobile was as high as $850 per vehicle. From the same article,
one could deduce that the cost per vehicle for General Motors was about
$350 and for Ford $650. This would be to cover the 36,000-mile warranty
in effect at that time.
• In 2000, the warranty cost for Chrysler was about $1,300, GM about $1,200,
and Ford about $850 (Mayne et al., 2001).
For each car sold, the manufacturer must collect and retain this expense in a
warranty account.
COST OF ENGINEERING CHANGES AND PRODUCT LIFE CYCLE
The cost of an engineering change at each stage of product development and
manufacturing has been estimated many times by different industries and various
trade magazines; the cost grows by a factor of five to ten as one moves from
early design to manufacturing. Typical figures for this high cost are
• Prototype stage: <$20,000
• After start of production: >$100,000
Therefore, reliability can play an important role in designing products that will
satisfy the customer and will prove durable in the real world usage application. The
focus of reliability is to design, identify, and detect early potential concerns at a
point where it is really cost effective to do so.
Reliability must be valued by the organization and should be a primary consideration in all decision making. Reliability techniques and disciplines are integrated
into system and component planning, design, development, manufacturing, supply,
delivery, and service processes. The reliability process is tailored to fit individual
business unit requirements and is based on common concepts that are focused on
producing reliable products and systems, not just components.
Any organization committed to satisfying the customer’s expectations for reliability
(and value) throughout the useful life of its product must be concerned with reliability. For without it, the organization is doomed to fail. The total reliability process
includes robustness concepts and methods that are integrated into the organization’s
timing schedule and overall business system. Cross-functional teams and empowered
individuals are key to the successful implementation of any reliability program.
Reliability concepts and methods are generally thought of as the proprietary
domain of the product development department or community alone. That is not
completely true. Reliability may be used anywhere there is a need for design and
development work, such as manufacturing and tooling. However, it does not address
actions specifically targeted at manufacturing and assembly. This is the reason why
under Design for Six Sigma (DFSS), reliability becomes very important from the
“get go.” To be sure, reliability currently does not include all the elements of the
Advanced Product Quality Plan (APQP), but it is compatible with APQP. It outlines
the three quality and reliability phases that all program teams and supporting organizations should go through in the product development process to achieve a more
reliable and robust product. The three phases stress useful life reliability, focusing
specifically on the deployment of customer-driven requirements, designing in robustness, and verifying that the designs meet the requirements.
RELIABILITY IN THE TECHNOLOGY DEPLOYMENT PROCESS
Technology is ever changing on all fronts. Customers expect increased reliability
and better quality for a reasonable cost. Reliability may indeed play a major role in
bringing technology, customer satisfaction, and lower cost into reality. Let us then
try to understand the process of support and the cascading of requirements throughout the Technology Deployment Process (TDP).
Understanding the TDP begins with the recognition that this process has three
phases, each with specific requirements. The three phases are the pre-deployment
process, core engineering process, and quality support. In the pre-deployment
process, there are three stages with very specific inputs and outputs. In core engineering, the development of generic requirements begins, and in quality support, the
“best” reliability practices are developed.
1. Pre-Deployment Process
Three stages are involved here. They are:
1. Identify/select new technologies: The main function of this stage is to
identify and select technology for reliable and robust products that meet
future customer needs or wants. In essence, here we are to develop and
understand:
• Customer wants process
• Competitive analysis
• Technology strategy/roadmap
2. Develop/optimize technology to achieve concept readiness: The main
function of this stage is to sufficiently develop and prove through analytical and/or surrogate testing that the technology meets the functional and
reliability requirements for customer wants or needs under real world
usage conditions. In essence, here we are to generate, understand, and
develop readiness through:
• Reviewing quality history of similar systems/concepts
• Understanding real world usage profile
• Defining functional requirements of system
• Planning for robustness
• Reviewing quality/reliability/durability reports or worksheets
3. Develop/optimize technology to achieve implementation readiness: The
main function of this stage is to optimize the technology to meet functional
and/or reliability requirements. Additionally, the aim is to demonstrate
that the technology is robust and reliable under real world usage conditions. In essence, here we are to further understand the requirements by:
• Refining design requirements
• Designing for robustness
• Verifying the design
• Reviewing quality/reliability/durability reports or worksheets
2. Core Engineering Process
Develop generic requirements for forward models by providing product lines with
generic information on system robust design, such as case studies, system P-diagrams, measurement of ideal functions, etc. In this stage, we also conduct competitive technical information analysis for our potential product lines through test-the-best and reliability benchmarking. Some of the specific tools we may use are:
• System design specification guidelines
• Real world usage demographics
• Failure mode and effect analysis
• Key life testing
• Fault tree analysis
• Design verification process
• And so on
The idea here is to be able to develop common-cause problem resolution, that
is, to be able to identify common-cause problems/root causes across the product
line(s) and champion corrective action by following reliability disciplines. In essence
then, core engineering should:
• Prioritize concerns
• Identify root causes
• Determine/incorporate corrective action
• Validate improvements
• Champion implementation across product line(s)
3. Quality Support
Identify best reliability practices and lead the process standardization and simplification. Develop a toolbox and provide reliability consultation.
RELIABILITY MEASURES — TESTING
The purpose of performing a reliability test is to answer the question, “Does the
item meet or exceed the specified minimum reliability requirement?” Reliability
testing is used to:
• Determine whether the system conforms to the specified, quantitative
reliability requirements
• Evaluate the system’s expected performance in the warranty period and
its compliance to the useful life targets as defined by corporate policy
• Compare performance of the system to the goal that was established earlier
• Monitor and validate reliability growth
• Determine design actions based on the outcomes of the test
In addition to their other uses, the outcomes of reliability testing are used as a
basis for design qualification and acceptance. Reliability testing should be a natural
extension of the analytical reliability models, so that test results will clarify and
verify the predicted results, in the customer’s environment.
WHAT IS A RELIABILITY TEST?
A reliability test is effectively a “sampling” test in that it involves a sample of objects
selected from a “population.” From the sample data, statements are made
about the population parameter(s). In reliability testing, as in any sampling test:
• The sample is assumed to be representative of the population.
• The characteristics of the sample (e.g., sample mean) are assumed to be
an estimate of the true value of the population characteristics (e.g., population mean).
A key factor in reliability test planning is choosing the proper sample size. Most
of the activity in determining sample size is involved with either:
1. Achieving the desired confidence that the test results give the correct
information
2. Reducing the risk that the test results will give the wrong information
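One common way to connect confidence and sample size for a zero-failure test is the success-run relationship C = 1 − R^n, which gives n = ln(1 − C)/ln(R). This is a standard result, but check it against your own organization's test standards before relying on it:

```python
# Success-run (zero-failure) sample-size sketch: from C = 1 - R**n,
# the required sample size is n = ln(1 - C) / ln(R), rounded up.
import math

def success_run_sample_size(reliability, confidence):
    """Units to test, all passing, to demonstrate `reliability` at `confidence`."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

if __name__ == "__main__":
    # Demonstrating R = 0.90 at 90% confidence with no failures allowed:
    print(success_run_sample_size(0.90, 0.90))  # 22 units
```

Note how quickly the sample size grows as the reliability target rises, which is part of why accelerated methods and sudden-death testing (below) are attractive.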
WHEN DOES RELIABILITY TESTING OCCUR?
Prior to the time that hardware is available, simulation and analysis should be used
to find design weaknesses. Reliability testing should begin as soon as hardware is
available for testing. Ideally, much of the reliability testing will occur “on the bench”
with the testing of individual components. There is good reason for this: The effect
of failure on schedule and cost increases progressively with the program timeline.
The later in the process that the failure and corrective action are found, the more it
costs to correct and the less time there is to make the correction. Some key points
to remember regarding test planning:
• Develop the reliability test plan early in the design phase.
• Update the plan as requirements are added.
• Run the formal reliability testing according to the predetermined procedure. This is to ensure that results are not contaminated by development
testing or procedural issues.
• Develop the test plan in order to get the maximum information with the
fewest resources possible.
• Increase test efficiency by understanding stress/strength and acceleration
factor relationships. This may require accelerated testing, such as AST
(Accelerated Stress Test), which will increase the information gained from
a test program.
• Make sure your test plan shows the relationship between development testing
and reliability testing. While all data contribute to the overall knowledge
about a system, other functional development testing is an opportunity to
gain insight into the reliability performance of your product.
Note: A “control sample” should be maintained as a reference throughout the
reliability testing process. Control samples should not be subjected to any stresses
other than the normal parametric and functional testing.
RELIABILITY TESTING OBJECTIVES
When preparing the test plan, keep these objectives in mind:
• Test with regard to production intent. Make sure the sample that is tested
is representative of the system that the customer will receive. This means
that the test unit is representative of the final product in all areas including
materials (metals, fasteners, weight), processes (machining, casting, heat
treat), and procedures (assembly, service, repair). Of course, consider that
these elements may change or that they may not be known. However, use
the same production intent to the extent known at the time of the test plan.
• Determine performance parameters before testing is started. It is often
more important in reliability evaluations to monitor the percentage change
in a parameter rather than the performance to specification.
• Duplicate/simulate the full range of customer stresses and environments.
This includes testing to the 95th percentile customer. (For most
organizations this percentile is the default; make sure you identify the
exact percentile for your organization.)
• Quantify failures as they relate to the system being tested. A failure results
when a system does not perform to customer expectations, even if there
is no actual broken part.
Remember,
• Customer requirements include the specifications and requirements of internal customers and regulatory agencies as well as the ultimate purchaser.
• You should structure testing to identify hardware interface issues as they
relate to the system being tested.
Sudden-Death Testing
Sudden-death testing allows you to obtain test data quickly and reduces the number
of test fixtures required. It can be used on a sample as large as 40 or more or as
small as 15. Sudden-death testing reduces testing time in cases where the lower
quartile (lower 25%) of a life distribution is considerably lower than the upper
quartile (upper 25%). The philosophy involved in sudden-death testing is to test
small groups of samples to a first failure only and use the data to determine the
Weibull distribution of the component. The method is as follows:
1. Choose a sample size that can be divided into three or more groups with
the same number of items in each group. Divide the sample into three or
more groups of equal size and treat each group as if it were an individual
assembly.
2. Test all items in each group concurrently until there is a first failure in
that group. Testing is then stopped on the remaining units in that group
as soon as the first unit fails, hence the name “sudden death.”
3. Record the time to first failure in each group.
4. Rank the times to failure in ascending order.
5. Assign median ranks to each failure based on the sample size equal to
the number of groups. Median rank charts are used for this purpose.
6. Plot the times to failure vs. median ranks on Weibull paper.
7. Draw the best-fit line. (Fit it by eye or use a regression model.) This
line represents the sudden-death line.
8. Determine the life at which 50% of the first failures are likely to occur
(B50 life) by drawing a horizontal line from the 50% level to the sudden-death line. Drop a vertical line from this point down.
9. Find the median rank for the first failure when the sample size is equal
to the number of items in each subgroup. Again, refer to the median rank
charts. Draw a horizontal line from this point until it intersects the vertical
line drawn in the previous step.
TABLE 7.1
Failure Rates with Median Ranks

Failure Order Number    Life, Hours    Median Rank, %
        1                   65             12.95
        2                  120             31.38
        3                  155             50.00
        4                  200             68.62
        5                  300             87.06
10. Draw a line parallel to the sudden-death line passing through the intersection point from step 9. This line is called the population line and
represents the Weibull distribution of the population.
Sudden-death testing is a good method to use to determine the failure distribution
of the component. (Note: Only common failure mechanisms can be used for each
Weibull distribution. Care must be taken to determine the true root cause of all
failures. Failure must be related to the stresses applied during the test.)
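Steps 3 through 7 above can be sketched in code, substituting a least-squares fit on Weibull axes for the plotting paper and using Bernard's median-rank approximation, (i − 0.3)/(n + 0.4), in place of the median rank charts. The data are the five first-failure times used in the example that follows:

```python
# Sudden-death sketch (steps 3-7): rank the group first-failure times,
# assign approximate median ranks, and fit the sudden-death line on
# Weibull axes (ln t vs. ln ln 1/(1-F)) by least squares.
import math

def median_rank(i, n):
    """Bernard's approximation to the median rank of the i-th of n failures."""
    return (i - 0.3) / (n + 0.4)

def weibull_fit(first_failure_times):
    """Least-squares Weibull shape (beta) and scale (eta) from failure times."""
    t = sorted(first_failure_times)
    n = len(t)
    x = [math.log(ti) for ti in t]
    y = [math.log(-math.log(1.0 - median_rank(i + 1, n))) for i in range(n)]
    xbar, ybar = sum(x) / n, sum(y) / n
    num = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    den = sum((xi - xbar) ** 2 for xi in x)
    beta = num / den
    # On these axes y = beta*x - beta*ln(eta), so ln(eta) = xbar - ybar/beta.
    eta = math.exp(xbar - ybar / beta)
    return beta, eta

if __name__ == "__main__":
    beta, eta = weibull_fit([120, 65, 155, 300, 200])
    print(f"sudden-death line: shape beta ~ {beta:.2f}, scale eta ~ {eta:.0f} h")
```

The population line then keeps this slope (shape parameter) but shifts to pass through the first-of-eight median rank, as the example below shows graphically.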
EXAMPLE
Assume you have a sample of 40 parts from the same production run available for
testing purposes. The parts are divided into five groups of eight parts as shown below:
Group 1: 1 2 3 4 5 6 7 8
Group 2: 1 2 3 4 5 6 7 8
Group 3: 1 2 3 4 5 6 7 8
Group 4: 1 2 3 4 5 6 7 8
Group 5: 1 2 3 4 5 6 7 8
All parts in each group are put on test simultaneously. The test proceeds until any
one part in each group fails. At that time, testing stops on all parts in that group.
In the test, we experience the following first failures in each group:
Group 1: Part #3 fails at 120 hours
Group 2: Part #4 fails at 65 hours
Group 3: Part #1 fails at 155 hours
Group 4: Part #5 fails at 300 hours
Group 5: Part #7 fails at 200 hours
Failure data are arranged in ascending hours to failure, and their median ranks are
determined based on a sample size of N = 5. (There are five failures, one in each of
five groups.) The chart in Table 7.1 illustrates the data. The median rank percentage
for each failure is derived from the median rank (Table 7.2) for five samples.
If the life hours and median ranks of the five failures are plotted on Weibull paper,
the resulting line is called the sudden-death line. The sudden-death line represents
TABLE 7.2
Median Ranks (%)

Rank                        Sample Size
Order    1     2     3     4     5     6     7     8     9    10
  1    50.0  29.3  20.6  15.9  12.9  10.9   9.4   8.3   7.4   6.7
  2          70.7  50.0  38.6  31.4  26.4  22.8  20.1  18.0  16.2
  3                79.4  61.4  50.0  42.1  36.4  32.1  28.6  25.9
  4                      84.1  68.6  57.9  50.0  44.0  39.3  35.5
  5                            87.1  73.6  63.6  56.0  50.0  45.2
  6                                  89.1  77.2  67.9  60.7  54.8
  7                                        90.6  79.9  71.4  64.5
  8                                              91.7  82.0  74.1
  9                                                    92.6  83.8
 10                                                          93.3

Rank                        Sample Size
Order   11    12    13    14    15    16    17    18    19    20
  1     6.1   5.6   5.2   4.8   4.5   4.2   4.0   3.8   3.6   3.4
  2    14.8  13.6  12.6  11.7  10.9  10.3   9.7   9.2   8.7   8.3
  3    23.6  21.7  20.0  18.6  17.4  16.4  15.4  14.6  13.8  13.1
  4    32.4  29.8  27.5  25.6  23.9  22.5  21.2  20.0  19.0  18.1
  5    41.2  37.9  35.0  32.6  30.4  28.6  26.9  25.5  24.2  23.0
  6    50.0  46.0  42.5  39.5  37.0  34.7  32.7  30.9  29.3  27.9
  7    58.8  54.0  50.0  46.5  43.5  40.8  38.5  36.4  34.5  32.8
  8    67.6  62.1  57.5  53.5  50.0  46.9  44.2  41.8  39.7  37.7
  9    76.4  70.2  65.0  60.5  56.5  53.1  50.0  47.3  44.8  42.6
 10    85.2  78.3  72.5  67.4  63.0  59.2  55.8  52.7  50.0  47.5
 11    93.9  86.4  80.0  74.4  69.5  65.3  61.5  58.2  55.2  52.5
 12          94.4  87.4  81.4  76.1  71.4  67.3  63.6  60.3  57.4
 13                94.8  88.3  82.6  77.5  73.1  69.1  65.5  62.3
 14                      95.2  89.1  83.6  78.8  74.5  70.7  67.2
 15                            95.5  89.7  84.6  80.0  75.8  72.1
 16                                  95.8  90.3  85.4  81.0  77.0
 17                                        96.0  90.8  86.2  81.9
 18                                              96.2  91.3  86.9
 19                                                    96.4  91.7
 20                                                          96.6
the cumulative distribution that would result if five assemblies failed, but it actually
represents five observations of the first failure out of a group of eight in the population. The median
life point on the sudden-death line (point at which 50% of the failures occur) will
correspond to the median rank for the first failure in a sample of eight, which is
8.30%. The population line is drawn parallel to the sudden-death line through a point
plotted at 8.30% and at the median life to first failure as determined above. This
estimate of the population’s minimum life is just as reliable as the one that would
have been obtained if all 40 parts were tested to failure.
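The median ranks quoted above can be cross-checked with Bernard's approximation, (i − 0.3)/(n + 0.4); it closely reproduces both the Table 7.1 values and the 8.30% first-of-eight anchor used to place the population line:

```python
# Cross-check of the example's median ranks using Bernard's approximation.
# Table 7.1 lists exact beta-distribution medians; the approximation
# agrees to roughly a tenth of a percent.

def median_rank(i, n):
    """Bernard's approximation to the median rank of the i-th of n failures."""
    return (i - 0.3) / (n + 0.4)

if __name__ == "__main__":
    table_7_1 = [12.95, 31.38, 50.00, 68.62, 87.06]  # percent, from the text
    for i, expected in enumerate(table_7_1, start=1):
        print(i, round(100 * median_rank(i, 5), 2), "vs.", expected)
    # Population-line anchor: first failure in a subgroup of eight.
    print("first of 8:", round(100 * median_rank(1, 8), 2))  # ~8.33 vs. 8.30
```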
Accelerated Testing
Accelerated testing is another approach that may be used to reduce the total test
time required. Accelerated testing requires stressing the product to levels that are
more severe than normal. The results that are obtained at the accelerated stress levels
are compared to those at the design stress or normal operating conditions. We will
look at examples of this comparison in this section.
We use accelerated testing to:
• Generate failures, especially in components that have long life under
normal conditions
• Obtain information that relates to life under normal conditions
• Determine design/technology limits of the hardware
Accelerated testing is accomplished by reducing the cycle time, such as by:
• Compressing cycle time by reducing or eliminating idle time in the normal
operating cycle
• Overstressing
There are some pitfalls in using accelerated testing:
• Accelerated testing can cause failure modes that are not representative.
• If there is little correlation to “real” use, such as aging, thermal cycling,
and corrosion, then it will be difficult to determine how accelerated testing
affects these types of failure modes.
ACCELERATED TEST METHODS
There are many test methods that can be used for accelerated testing. This section covers:
• Constant-stress testing
• Step-stress testing
• Progressive-stress testing
• AST/PASS testing
Before we discuss the methods, keep in mind that any product may be subjected
to multiple stresses and combinations of stresses. The stresses and combinations are
identified very early in the design phase. When accelerated tests are run, ensure that
all the stresses are represented in the test environment and that the product is exposed
to every stress.
CONSTANT-STRESS TESTING
In constant-stress testing, each test unit is run at constant high stress until it fails or
its performance degrades. Several different constant stress conditions are usually
employed, and a number of test units are tested at each condition. Some products
run at constant stress, and this type of test represents actual use for those products.
Constant stress will usually provide greater accuracy in estimating time to failure.
Also, constant-stress testing is most helpful for simple components. In systems and
assemblies, acceleration factors often differ for different types of components.
STEP-STRESS TESTING
In step-stress testing, the item is tested initially at a normal, constant stress for a
specified period of time. Then the stress is increased to a higher level for a specified
period of time. Increases continue in a stepped fashion.
The main advantage of step-stress testing is that it quickly yields failure, because
increasing stress ensures that failures occur. A disadvantage is that failure modes
that occur at high stress may differ from those at normal use conditions. Quick
failures do not guarantee more accurate estimates of life or reliability. A constant-stress test with a few failures usually yields greater accuracy in estimating the actual
time to failure than a shorter step-stress test; however, we may need to do both to
correlate the results so that the results of the shorter test can be used to predict the
life. (Always remember that failures must be related to the stress conditions to be
valid. Other test discrepancies should be noted and repaired and the testing continued.)
PROGRESSIVE-STRESS TESTING
Progressive-stress testing is step-stress testing carried to the extreme. In this test,
the stress on a test unit is continuously increased, rather than being increased in
steps. Usually, the accelerating variable is increased linearly with time.
Several different rates of increase are used, and a number of test units are tested
at each rate of increase. Under a low rate of increase of stress, specimens tend to
live longer and to fail at lower stress because of the natural aging effects or cumulative effects of the stress on the component. Progressive-stress testing has some of
the same advantages and disadvantages as step-stress testing.
ACCELERATED-TEST MODELS
The data from accelerated tests are interpreted and analyzed using different models.
The model that is used depends upon the:
• Product
• Testing method
• Accelerating variables
The models give the product life or performance as a function of the accelerating
stress. Keep these two points in mind as you analyze accelerated test data:
1. Units run at a constant high stress tend to have shorter life than units run
at a constant low stress.
2. Distribution plots show the cumulative percentage of the samples that fail as a function of time. Over time, the smoothed S-shaped curve is the estimate of the actual cumulative percentage failing as a function of time.
Two common models — appropriate primarily for component-level testing —
that deal specifically with accelerated tests are:
1. Inverse Power Law Model
2. Arrhenius Model
Inverse Power Law Model
The inverse power law model applies to many failure mechanisms as well as to
many systems and components. This model assumes that at any stress, the time to
failure is Weibull distributed. Thus:
• The Weibull shape parameter β has the same value for all the stress levels.
• The Weibull scale parameter θ is an inverse power function of the stress.
The model assumes that the life at rated stress divided by the life at accelerated
stress is equal to the quantity, accelerated stress divided by rated stress, raised to
the power n, where: n = acceleration factor determined from the slope of the S-N
diagram on the log-log scale.
Using the above information, we can say that:
θu = θa[Accelerated stress/Rated stress]^n
where θu = life at the rated (usage) stress level; θa = life at the accelerated stress
level; and n = acceleration factor determined from the slope of the S-N diagram on
the log-log scale.
EXAMPLE
Let us assume we tested 15 incandescent lamps at 36 volts until all items in the
sample failed. A second sample of 15 lamps was tested at 20 volts. Using these data,
we will determine the characteristic life at each test voltage and use this information
to determine the characteristic life of the device when operated at 5 volts.
From the accelerated test data:
θ20 volts = 11.7 hours
θ36 volts = 2.3 hours
Since we know these two factors, we can determine the acceleration factor, n. We
have the following relationship:
[Life at rated stress/life at accelerated stress] = [Accelerated stress/rated stress]^n
This relationship becomes
[θ20 volts/θ36 volts] = [36 volts/20 volts]^n

Substituting the values for θ20 v and θ36 v, we have

11.7 hrs/2.3 hrs = (36 V/20 V)^n

Therefore,

n = 2.767
Now we can use the following equation to determine the characteristic life at 5 volts:
θu = θa[Accelerated stress/Rated stress]^n

θ5 v = θ36 v[36/5]^2.767 = 2.3(7.2)^2.767 = 542 hours
The characteristic life at 5 volts is 542 hours.
The reader must be very careful here because not all electronic parts or assemblies will follow the inverse power law model. Therefore, its applicability must
usually be verified experimentally before use.
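For readers who want to verify this arithmetic, the inverse power law calculation can be sketched in a few lines of Python, using the lamp values from the example above:

```python
import math

# Inverse power law check using the lamp data above (lives in hours).
theta_20v = 11.7   # characteristic life at the 20-volt accelerated condition
theta_36v = 2.3    # characteristic life at the 36-volt accelerated condition

# Solve theta_20/theta_36 = (36/20)**n for the acceleration factor n.
n = math.log(theta_20v / theta_36v) / math.log(36 / 20)

# Extrapolate to the 5-volt use condition: theta_u = theta_a * (V_a/V_u)**n.
theta_5v = theta_36v * (36 / 5) ** n

print(round(n, 3), round(theta_5v))   # n is about 2.767; life is about 542 hours
```

This reproduces the hand calculation; as cautioned above, the model itself must be verified experimentally for the part in question before such an extrapolation is trusted.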
Arrhenius Model
The Arrhenius relationship for reaction rate is often used to account for the effect
of temperature on electrical/electronic components. The Arrhenius relationship is as
follows:
Reaction rate = A exp[–Ea/(KB T)]
where: A = normalizing constant; KB = Boltzmann’s constant (8.63 × 10–5 eV/degrees
K); T = ambient temperature in degrees Kelvin; and Ea = activation energy type
constant (unique for each failure mechanism).
In those situations where it can be shown that the failure mechanism rate follows
the Arrhenius rate with temperature, the following Acceleration Factor (AF) can be
developed:
Rateuse = A exp[–Ea/(KB Tuse)]

Rateaccelerated = A exp[–Ea/(KB Taccelerated)]

Acceleration Factor = AF = Ratea/Rateu = A exp[–Ea/(KB Ta)] / A exp[–Ea/(KB Tu)]

AF = exp[(–Ea/KB)(1/Ta – 1/Tu)] = exp[(Ea/KB)(1/Tu – 1/Ta)]
where Ta = acceleration test temperature in degrees Kelvin and Tu = actual use
temperature in degrees Kelvin.
EXAMPLE
Assume we have a device that has an activation energy of 0.5 and a characteristic
life of 2750 hours at an accelerated operating temperature of 150°C. We want to find
the characteristic life at an expected use temperature of 85°C. (Remember that the
conversion factor for Celsius to Kelvin is: °K = °C + 273 — You may want to review
Volume II.)
Therefore:
Ta = 150 + 273 = 423°K and Tu = 85 + 273 = 358°K
The Ea = 0.5. Our calculations would look like:

 1
.5
1 
−
AF = exp 


−5
 8.63x10  358 423  
AF = exp [2.49] = 12. Therefore, the acceleration factor is 12. To determine life at
85°C, multiply the acceleration factor times the characteristic life at the accelerated
test level of 150°C.
Characteristic life at 85°C = (12) (2750 hours) = 33,000 hours
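The Arrhenius example above can be checked the same way. The sketch below uses the text’s value of Boltzmann’s constant so that the numbers match the worked example:

```python
import math

# Arrhenius acceleration-factor check using the example values above.
KB = 8.63e-5       # Boltzmann's constant in eV/K (value used in the text)
Ea = 0.5           # activation energy, eV
Tu = 85 + 273      # use temperature, K
Ta = 150 + 273     # accelerated test temperature, K

# AF = exp[(Ea/KB)(1/Tu - 1/Ta)]
AF = math.exp((Ea / KB) * (1 / Tu - 1 / Ta))
life_at_use = AF * 2750   # characteristic life at 85 deg C, hours
```

The acceleration factor comes out to approximately 12 and the projected life to roughly 33,000 hours, matching the example.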
AST/PASS
HALT (Highly Accelerated Life Test) and HASS (Highly Accelerated Stress Screens)
are two types of accelerated test processes used to simulate aging in manufactured
products. The HALT/HASS process was invented by Dr. Gregg Hobbs in the early
1980s. It has since been used with much success in various military and commercial
applications. The HALT/HASS methods and tools are still in the development phase
and will continue to evolve as more companies embrace the concept of accelerated
testing. Many companies use this type of testing, which they call AST (Accelerated
Stress Test) and PASS (Production Accelerated Stress Screen).
The goal of accelerated testing is to simulate aging. If the stress-strength relationships are plotted, the design strength and field stress are distributed around
means. Let us assume the stress and strength distributions are overlapped (the right
tail of the stress curve is overlapped with the left tail of the strength curve). When
that happens, there is an opportunity for the product to fail in the field. This area of
overlap is called interference.
Many products, including some electronic products, have a tendency to grow
weaker with age. This is reflected in a greater overlap of the curves, thus increasing
the interference area. Accelerated testing attempts to simulate the aging process so
that the limits of design strength are identified quickly and the necessary design
modifications can be implemented.
PURPOSE OF AST
AST is a highly accelerated test designed to fail the target component or module.
The goal of this process is to cause failure, discover the root cause, fix it, and retest
it. This process continues until the “limit of technology” is reached and all the
components of one technology (e.g., capacitors, diodes, resistors) fail. Once a design
reaches its limit of technology, the tails of the stress-strength distribution should
have minimal overlap.
The AST method uses step-stress techniques to discover the operating and
destruct limits of the component or module design. This method should be used in
the pre-prototype and/or pre-bookshelf phase of the product development cycle or
as soon as the first parts are available. Let us look at an example:
We want to discover the operating and destruct limits of a component/module
design for minimum temperature. The unit is placed in a test chamber, stabilized at
–40°C, then powered up to verify the operation. The unit is then unpowered, the
temperature lowered to –45°C and the unit allowed to stabilize at that temperature.
It is then powered on and verified. This process is repeated as the temperature is
lowered by 5° increments.
At –70°C, the unit fails. The unit is warmed to –65°C to see if it recovers.
Normally, it will recover. The temperature of –65°C is said to be its operational
limit. The test continues to determine the destruct limit. The temperature is lowered to –75°C and stabilized, the unit is powered to see if it operates, and it is then returned to –65°C to see if it recovers. If the unit, when taken down to –95°C and returned to –65°C, does not recover, the minimum temperature destruct limit for this module is determined
to be –95°C. The failed module is then analyzed to determine the root cause of the
failure.
The team must then determine if the failure mode is the limit of technology or
if it is a design problem that can be fixed. Experience has shown that 80% of the
failures are design problems accelerated to failure using the AST or similar accelerated stress test methods.
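The cold-step search just described is a simple algorithm, and it can be sketched in Python. Note that `unit_operates` and `unit_recovers` below are hypothetical stand-ins for the real chamber and monitoring loop; here they simulate a unit whose behavior matches the example above (fails below –70°C, permanently damaged below –95°C):

```python
def unit_operates(temp_c):
    # Hypothetical monitor: this simulated unit stops operating below -70 C.
    return temp_c > -70

def unit_recovers(temp_c):
    # Hypothetical monitor: permanent damage occurs below -95 C.
    return temp_c > -95

def find_cold_limits(start=-40, step=5):
    temp = start
    # Step the chamber down in 5-degree increments until the unit fails.
    while unit_operates(temp):
        temp -= step
    # The unit normally recovers one step warmer than where it failed;
    # that temperature is the operational limit.
    operational_limit = temp + step
    # Keep stepping down until the unit no longer recovers: destruct limit.
    while unit_recovers(temp):
        temp -= step
    destruct_limit = temp
    return operational_limit, destruct_limit

print(find_cold_limits())   # (-65, -95) for this simulated unit
```

The returned limits (–65°C operational, –95°C destruct) reproduce the example in the text.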
AST PRE-TEST REQUIREMENTS
Before AST is run on a product, the product development team should verify that:
• The component/module meets specification requirements at minimum and
maximum temperature.
• The vibration evaluation test (sine-sweep) is complete.
• Data are available for review by the reliability engineer.
• A copy of all schematics is available for review.
The product development team will provide the component/module monitoring
equipment used during AST and will work with the reliability engineer to define
what constitutes a “failure” during the test.
OBJECTIVE AND BENEFITS OF AST
The objective of AST is to discover the operational and destruct limits of a design
and to verify how close these limits are to the technological limits of the components
and materials used in the design. It also verifies that the component/module is strong
enough to meet the requirements of the customer and product application. These
requirements must be balanced with reasonable cost considerations. The benefits of
AST include:
• Easier system and subsystem validation due to:
• Elimination of component-/module-related failures
• Verification of worst-case stress analysis and derating requirements
• A list of failure modes and corrections to be shared with the design team
and incorporated into future designs
• Products that allow the manufacturing team to use PASS and to eliminate
the in-process “build and check” types of tests
The failure modes from the AST and PASS are used by the manufacturing team
to ensure that they do not see any of these problems in their products.
PURPOSE OF PASS
PASS is incorporated into a process after the design has been first subjected to AST.
The purpose of PASS is to take the process flaws created in the component/module
from latent (invisible) to patent (visible). This is accomplished by severely stressing
a component enough to make the flaws “visible” to the monitoring equipment. These
flaws are called outliers, and they result from process variation, process changes,
and different supplier sources. The goal of PASS is to find the outliers, which will
assist in the determination of the root cause and the correction of the problem before
the component reaches the customer. This process offers the opportunity for the
organization to eliminate module conditioning and burn-in.
PASS development is an iterative process that starts when the pre-pilot units
become available in the pre-pilot phase of the product development cycle. The initial
PASS screening test limits are the AST operational limits and will be adjusted
accordingly as the components/modules fail and the root cause determinations indicate whether the failures are limits of technology or process problems. The PASS
also incorporates findings from process failure mode and effect analysis (PFMEA)
regarding possible “significant” process failure modes that must be detected if
present.
When PASS development is complete, a strength-of-PASS test is performed to
verify that the PASS has not removed too much useful life from the product. A
sample of 12 to 24 components is run through 10 to 20 PASS cycles. These samples
are then tested using the design verification life test. If the samples fail this test, the
screen is too strong. The PASS will be adjusted based on the root cause analysis,
and the strength-of-PASS will be rerun.
OBJECTIVE AND BENEFITS OF PASS
The objective of PASS is to precipitate all manufacturing defects in the component/module at the manufacturing facility, while still leaving the product with substantially more fatigue life after screening than is required for survival in the normal
customer environment. The benefits of PASS include:
• Accelerated manufacturing screens
• Reduced facility requirements
• Improved rate of return on tester costs
CHARACTERISTICS OF A RELIABILITY
DEMONSTRATION TEST
Eight characteristics are important in reliability demonstration testing. These are:
1. Specified reliability, Rs: This value is sometimes known as the “customer
reliability.” Traditionally, this value is represented as the probability of
success (i.e., 0.98); however, other measures may be used, such as a
specified MTBF.
2. Confidence level of the demonstration test: While customers desire a
certain reliability, they want the demonstration test to prove the reliability
at a given confidence level. A demonstration test with a 90% confidence
level is said to “demonstrate with 90% confidence that the specified
reliability requirement is achieved.”
3. Consumer’s risk, β: Any demonstration test runs the risk of accepting bad
product or rejecting good product. From the consumer’s point of view,
the risk is greatest if bad product is accepted. Therefore, the consumer
wants to minimize that risk. The consumer’s risk is the risk that a test can
accept a product that actually fails to meet the reliability requirement.
Consumer’s risk can be expressed as: β = 1 – confidence level
4. Probability distribution: This is the distribution that is used for the number
of failures or for time to failure. These are generally expressed as normal,
exponential, or Weibull.
5. Sampling scheme
6. Number of test failures to allow
7. Producer’s risk, α : From the producer’s standpoint, the risk is greatest if
the test rejects good product. Producer’s risk is the risk that the test will
reject a product that actually meets the reliability requirement.
8. Design reliability, Ra: This is the reliability that is required in order to
meet the producer’s risk, α, requirement at the particular sample size
chosen for the test. Small test sample sizes will require a high design
reliability in order to meet the producer’s risk objective. As the sample
size increases, the design reliability requirement will become smaller in
order to meet the producer’s risk objective.
THE OPERATING CHARACTERISTIC CURVE
The relationship between the probability of acceptance and the population reliability
can be shown with an operating characteristic (OC) curve. An operating characteristic
curve can also be used to show the relationship between the probability of acceptance
and MTBF or failure rate. Given an OC curve, one may calculate the:
1. Producer’s risk, α
2. Consumer’s risk, β
3. Probability of acceptance at any other population reliability or MTBF or
failure rate
Obviously, a specific OC curve will apply for each test situation and will depend
on the number of pieces tested and the number of failures allowed.
ATTRIBUTES TESTS
If the components being tested are merely being classified as acceptable or unacceptable, the demonstration test is an attributes test. Attributes tests:
• May be performed even if a probability distribution of the time to failure
is not known
• May be performed if a probability distribution such as normal, exponential, or Weibull is assumed by dichotomizing the life distribution into
acceptable and unacceptable time to failure
• Are usually simpler and cheaper to perform than variables tests
• Usually require larger sample sizes to achieve the same confidence or
risks as variables tests
VARIABLES TESTS
Variables tests are used when more information is required than whether the unit
passed or failed, for example, “What was the time to failure?” The test is a variables
test if the life of the items under test is:
• Recorded in time units
• Assumed to have a specific probability distribution such as normal, exponential, or Weibull
FIXED-SAMPLE TESTS
When the required reliability and the test confidence/risk are known, statistical theory
will dictate the precise number of items that must be tested if a fixed sample size
is desired.
SEQUENTIAL TESTS
A sequential test may be used when the units are tested one at a time and the
conclusion to accept or reject is reached after an indeterminate number of observations. In a sequential test:
1. The “average” number of samples required to reach a conclusion will
usually be lower than in a fixed-sample test. This is especially true if the
population reliability is very good or very poor.
2. The required sample size is unknown at the beginning of the test and can
be substantially larger than that in the fixed-sample test in certain cases.
3. The test time required is much longer because samples are tested one at
a time (in series) rather than all at the same time (in parallel), as in fixed-sample tests.
Now that you are familiar with the four test types, let us look at the test methods.
Note that the four test types are not mutually exclusive. We can have fixed-sample
or sequential-attributes tests as well as fixed-sample or sequential-variables tests.
RELIABILITY DEMONSTRATION TEST METHODS
Attributes tests can be used when:
• The accept/reject criterion is a go/no-go situation.
• The probability distribution of times to failure is unknown.
• Variables tests are found to be too expensive.
SMALL POPULATIONS — FIXED-SAMPLE TEST
USING THE HYPERGEOMETRIC DISTRIBUTION
When items from a small population are tested and the accept/reject decision is based
on attributes, the hypergeometric distribution is applicable for test planning. The definition of successfully passing the test will be that an item survives the test. The parameter
to be evaluated is the population reliability. The estimation of the parameter is based
on a fixed sample size and testing without repair. The method to use is described below:
1. Define the criteria for success/failure.
2. Define the acceptance reliability, Rs.
3. Specify the confidence level or the corresponding consumer’s risk, β.
4. Specify, if desired, producer’s risk, α. (Producer’s risk can be used to calculate the design reliability target, Rd, needed in order to meet the α requirements.)
The process consists of a trial-and-error solution of the hypergeometric equation
until the conditions for the probability of acceptance are met.
The equation that is used is:
Pr(x ≤ f) = Σ (from x = 0 to f) [C(N(1 – R), x) × C(NR, n – x)]/C(N, n)

where Pr(x ≤ f) = probability of acceptance; f = maximum number of failures to be allowed; x = observed failures in sample; R = reliability of population; N = population size; and n = sample size. The binomial coefficient is

C(N, n) = N!/[n!(N – n)!]
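The trial-and-error evaluation of the hypergeometric probability is easy to automate. The sketch below uses Python’s `math.comb`; the numbers in the usage line are assumed for illustration, not taken from the text:

```python
from math import comb

def prob_accept_hypergeom(N, n, R, f):
    # P(x <= f): probability of at most f failures when n units are drawn
    # without replacement from a population of N containing N*(1-R) defectives.
    d = round(N * (1 - R))   # defectives in the population (assumed integral)
    return sum(comb(d, x) * comb(N - d, n - x) for x in range(f + 1)) / comb(N, n)

# Illustrative numbers: population of 20 with R = 0.90 (2 defectives),
# sample of 5, zero failures allowed.
p = prob_accept_hypergeom(20, 5, 0.90, 0)
```

Repeating such evaluations over candidate sample sizes until the acceptance probability satisfies the β (and, if desired, α) conditions is exactly the trial-and-error process described above.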
LARGE POPULATION — FIXED-SAMPLE TEST
USING THE BINOMIAL DISTRIBUTION
When parts from a large population are tested and the accept/reject decision is based
on attributes, the binomial distribution can be used. Note that for a large N (one in
which the sample size will be less than 10% of the population), the binomial
distribution is a good approximation for the hypergeometric distribution. The binomial attribute demonstration test is probably the most versatile for use on product
components for several reasons:
1. The population is large.
2. The time-to-failure distribution for the parts is probably unknown.
3. Pass/fail criteria are usually appropriate.
As with the hypergeometric distribution, the procedure begins by identification of:
1. Specified reliability, Rs
2. Confidence level or consumer’s risk, β
3. Producer’s risk, α (if desired)
The process consists of a trial-and-error solution of the binomial equation until
the conditions for the probability of acceptance are met. The equation that is used is:
Pr(x ≤ f) = Σ (from x = 0 to f) C(n, x)(1 – R)^x (R)^(n – x)

where Pr(x ≤ f) = probability of acceptance; f = maximum number of failures to be allowed; x = observed failures in sample; R = reliability of population; and n = sample size.
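The trial-and-error solution of the binomial equation can be sketched as a simple search: increase the sample size until the probability of accepting a population that is only at the specified reliability drops to the consumer’s risk. The values in the usage note are assumed for illustration:

```python
from math import comb

def prob_accept_binomial(n, R, f):
    # P(x <= f) for x ~ Binomial(n, 1 - R): probability the test is passed.
    return sum(comb(n, x) * (1 - R) ** x * R ** (n - x) for x in range(f + 1))

def smallest_n(Rs, beta, f):
    # Smallest sample size for which a population at the specified
    # reliability Rs is accepted with probability no greater than beta.
    n = f + 1
    while prob_accept_binomial(n, Rs, f) > beta:
        n += 1
    return n
```

For example, with Rs = 0.90, β = 0.10, and no failures allowed, the search gives n = 22; allowing one failure raises the requirement to n = 38.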
LARGE POPULATION — FIXED-SAMPLE TEST
USING THE POISSON DISTRIBUTION
The Poisson distribution can be used as an approximation of both the hypergeometric
and the binomial distributions if:
• The population, N, is large compared to the sample size, n.
• The fraction defective in the population is small (Rpopulation > 0.9).

The process consists of a trial-and-error solution using the following equation or Poisson tables, Rs, Rd, and various sample sizes until the conditions of α and β are satisfied.
Pr(x ≤ f) = Σ (from x = 0 to f) λpoi^x e^(–λpoi)/x!

where Pr(x ≤ f) = probability of acceptance; f = maximum number of failures to be allowed; x = observed failures in sample; λpoi = (n)(1 – R) (the reader should note that λpoi is the Poisson mean and does not relate to failure rate); R = reliability of population; and n = sample size.
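A short sketch shows how close the approximation is. The numbers below are assumed for illustration; with n = 100 and R = 0.99 the Poisson mean is 1.0, and the result is within a few parts in ten thousand of the exact binomial value:

```python
from math import exp, factorial

def prob_accept_poisson(n, R, f):
    # Poisson approximation to the binomial acceptance probability,
    # with mean lambda = n * (1 - R).
    lam = n * (1 - R)
    return sum(lam ** x * exp(-lam) / factorial(x) for x in range(f + 1))

# Illustrative check: n = 100, R = 0.99, at most 1 failure allowed.
p = prob_accept_poisson(100, 0.99, 1)   # approximately 0.7358
```

The same trial-and-error search over sample sizes described for the binomial case applies here as well.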
SUCCESS TESTING
Success testing is a special case of binomial attributes testing for large populations
where no failures are allowed. Success testing is the simplest method for demonstrating a required reliability level at a specified confidence level. In this test case,
n items are subjected to a test for the specified time of interest, and the specified
reliability and confidence levels are demonstrated if no failures occur. The method
uses the following relationship:
R = (1 – C)^(1/n) = (β)^(1/n)
where R = reliability required; n = number of units tested; C = confidence level;
and β = consumer’s risk.
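Solving the relationship above for n gives the required sample size directly, and a two-line Python sketch makes the computation concrete (the reliability and confidence values below are assumed for illustration):

```python
import math

def success_test_sample_size(R, C):
    # Units that must all survive the test to demonstrate reliability R
    # at confidence level C with zero failures: n = ln(1 - C)/ln(R),
    # rounded up to the next whole unit.
    return math.ceil(math.log(1 - C) / math.log(R))

# Demonstrating R = 0.90 at 90% confidence requires 22 units;
# R = 0.95 at the same confidence requires 45.
n1 = success_test_sample_size(0.90, 0.90)
n2 = success_test_sample_size(0.95, 0.90)
```

Note how quickly the required sample size grows as the demonstrated reliability approaches 1, which is why success testing is usually practical only for moderate reliability targets.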
The necessary sample size to demonstrate the required reliability at a given
confidence level is:
n = ln(1 – C)/ln R

SEQUENTIAL TEST PLAN FOR THE BINOMIAL DISTRIBUTION
The sequential test is a hypothesis testing method in which a decision is made after
each sample is tested. When sufficient information is gathered, the testing is discontinued. In this type of testing, sample size is not fixed in advance but depends upon
the observations. Sequential tests should not be used when the exact time or cost of
the test must be known beforehand or is specified. This type of test plan may be
useful when the:
1. Accept/reject criterion for the parts on test is based on attributes
2. Exact test time available and sample size to be used are not known or
specified
The test procedure consists of testing parts one at a time and classifying the
tested parts as good or defective. After each part is tested, calculations are made
based on the test data generated to that point. The decision is made as to whether
the test has been passed or failed or if another observation should be made. A
sequential test will result in a smaller average number of parts tested when the
population tested has a reliability close to either the specified or design reliability.
The method to use is described below:
Determine Rs, Rd, α, and β. Calculate the accept/reject decision points using:

β/(1 – α) and (1 – β)/α
As each part is tested, classify it as either a failure or success. Evaluate the
following expression for the binomial distribution,
L = [(1 – Rs)/(1 – Rd)]^f [Rs/Rd]^s
where f = total number of failures and s = total number of successes.

If L > (1 – β)/α, the test is failed.

If L < β/(1 – α), the test is passed.

If β/(1 – α) ≤ L ≤ (1 – β)/α, the test should be continued.
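The sequential decision rule is straightforward to implement. The sketch below evaluates the likelihood ratio L after each classification; the Rs, Rd, α, and β values in the usage notes are assumed for illustration:

```python
def sprt_binomial(f, s, Rs, Rd, alpha, beta):
    # One sequential accept/reject/continue decision after observing
    # f failures and s successes.
    L = ((1 - Rs) / (1 - Rd)) ** f * (Rs / Rd) ** s
    if L > (1 - beta) / alpha:
        return "fail"
    if L < beta / (1 - alpha):
        return "pass"
    return "continue"

# Illustrative values: Rs = 0.90, Rd = 0.99, alpha = 0.05, beta = 0.10.
d0 = sprt_binomial(0, 0, 0.90, 0.99, 0.05, 0.10)    # no data yet: continue
d1 = sprt_binomial(2, 0, 0.90, 0.99, 0.05, 0.10)    # two early failures: fail
d2 = sprt_binomial(0, 24, 0.90, 0.99, 0.05, 0.10)   # 24 straight successes: pass
```

Because each failure multiplies L by (1 – Rs)/(1 – Rd), which is greater than 1, a run of early failures drives the test to a quick reject decision, which is exactly the economy the sequential plan offers.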
GRAPHICAL SOLUTION
A graphical solution for critical values of f and s is possible by solving the following
equations:
ln[(1 – β)/α] = (f)ln[(1 – Rs)/(1 – Rd)] + (s)ln[Rs/Rd]

and

ln[β/(1 – α)] = (f)ln[(1 – Rs)/(1 – Rd)] + (s)ln[Rs/Rd]
VARIABLES DEMONSTRATION TESTS
This section deals with demonstration tests where you can test by variables. Rather
than being a straight accept/reject, the variables test will determine whether the
product meets other reliability criteria.
FAILURE-TRUNCATED TEST PLANS — FIXED-SAMPLE TEST
USING THE EXPONENTIAL DISTRIBUTION
This test plan is used to demonstrate life characteristics of items whose failure times
are exponentially distributed and when the test will be terminated after a pre-assigned
number of failures. The method to use is as follows:
First, obtain the specified reliability (RS), failure rate (λs), or MTBF (θs), and
test confidence. Remember that for the exponential distribution:
Rs = e^(–λs t) = e^(–t/θs)
Then, solve the following equation for various sample sizes and allowable
failures:
θ ≥ 2Σti/χ²β,2f   (the sum Σti is taken over the n units tested)

where θ = MTBF demonstrated; ti = hours of testing for unit i; f = number of failures; χ²β,2f = the β percentage point of the chi-square distribution for 2f degrees of freedom; and β = 1 – confidence level.
TIME-TRUNCATED TEST PLANS — FIXED-SAMPLE TEST USING THE EXPONENTIAL DISTRIBUTION
This type of test plan is used when:
1. A demonstration test is constrained by time or schedule.
2. Testing is by variables.
3. Distribution of failure times is known to be exponential.
The method to use will be the same as with the failure-truncated test. In this case:
θ ≥ 2Σti/χ²β,2(f+1)   (the sum Σti is taken over the n units tested)

where θ = MTBF demonstrated; ti = hours of testing for unit i; f = number of failures; χ²β,2(f+1) = the β percentage point of the chi-square distribution for 2(f + 1) degrees of freedom; and β = 1 – confidence level.
For the time-truncated test, the test is stopped at a specific time and the number of observed failures (f) is determined. Because the time at which the next failure would have occurred after the test was stopped is unknown, that failure is assumed to occur in the instant after the test is stopped. This is why 1 is added to the number of failures in the degrees of freedom for chi-square.
EXAMPLE
How many units must be checked on a 2000-hour test if zero failures are allowed
and θs = 32,258 hours? A 75% confidence level is required.
From the information, we know that:
β = 1 – 0.75 = 0.25
2(f + 1) = 2(0 + 1) = 2
Therefore:
θ ≥ 2Σti/χ²0.25,2 = 2Σti/2.772 = 32,258

By rearranging this equation, we see that:

Σti = (2.772)(32,258)/2 = 44,709.59
Since no failures are allowed, all units must complete the 2000-hour test and:
Σti = 44,709.59 = (n)(2,000)
Solving for n:
n = 44,709.59/2000 = 22.35, rounded up to 23 units. We can say that if we place 23 units on test
for 2000 hours and have no failures, we can be 75% confident that the MTBF is
equal to or greater than 32,258 hours. (Note: This assumes that the test environment
duplicates the use environment such that one hour on test is equal to one hour of
actual use.)
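The zero-failure case above can be checked in a few lines of Python. For 2(f + 1) = 2 degrees of freedom, the chi-square upper-tail point reduces to –2 ln β, so no statistical tables are needed (this shortcut is an assumption that holds only for the zero-failure plan):

```python
import math

# Zero-failure, time-truncated plan from the example above.
beta = 0.25          # 1 - 0.75 confidence
theta_s = 32258      # MTBF to demonstrate, hours
t = 2000             # test length per unit, hours

# Chi-square beta percentage point for 2 degrees of freedom: -2*ln(beta).
chi2 = -2 * math.log(beta)            # approximately 2.772
total_unit_hours = chi2 * theta_s / 2
n = math.ceil(total_unit_hours / t)   # units required with no failures
```

The computation reproduces the example: about 44,710 unit-hours, or 23 units on a 2000-hour test.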
Failure-truncated and time-truncated demonstration test plans for the exponential
distribution can also be designed in terms of θS, θd, α, and β by using methods
covered in the sources listed in the references and selected bibliography.
WEIBULL AND NORMAL DISTRIBUTIONS
Fixed-sample tests using the Weibull distribution and the normal distribution
have also been developed. If you are interested in pursuing the tests for either of
these distributions, see the sources listed in the selected bibliography.
SEQUENTIAL TEST PLANS
Sequential test plans can also be used for variables demonstration tests. The sequential test leads to a shorter average number of part hours of test exposure if the
population MTBF is near θS, θd (i.e., close to the specified or design MTBF).
EXPONENTIAL DISTRIBUTION SEQUENTIAL TEST PLAN
This test plan can be used when:
1. The demonstration test is based upon time-to-failure data.
2. The underlying probability distribution is exponential.
The method to be used for the exponential distribution is to:
1. Identify θS, θd, α, and β
2. Calculate accept/reject decision points
β/(1 – α) and (1 – β)/α
Evaluate the following expression for the exponential distribution:
L = (θd/θs)^n exp[–(1/θs – 1/θd)Σti]   (the sum Σti is taken over the n units tested)
where ti = time to failure of the ith unit tested and n = number tested.
If L > (1 – β)/α, the test is failed.

If L < β/(1 – α), the test is passed.

If β/(1 – α) ≤ L ≤ (1 – β)/α, the test should be continued.
A graphical solution can also be used by plotting decision lines:
nb – h1 and nb + h2
where n = number tested; b = (1/D) ln(θd/θs); h1 = (1/D) ln[(1 – β)/α]; h2 = (1/D) ln[(1 – α)/β]; and D = 1/θs – 1/θd.
Let ti equal time to failure for the ith item. Make conclusions based on the
following:
If Σti < nb – h1, the test has failed.

If Σti ≥ nb + h2, the test is passed.

If nb – h1 ≤ Σti < nb + h2, continue the test.
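These decision lines are easy to compute and apply in code. The sketch below uses θs = 500 hours, θd = 1000 hours, α = 0.05, and β = 0.10, the same values as the worked example that follows:

```python
import math

# Decision-line constants for the sequential exponential test.
theta_s, theta_d = 500.0, 1000.0   # specified and design MTBF, hours
alpha, beta = 0.05, 0.10           # producer's and consumer's risks

D = 1 / theta_s - 1 / theta_d            # 0.001
h1 = math.log((1 - beta) / alpha) / D    # approximately 2890
h2 = math.log((1 - alpha) / beta) / D    # approximately 2251
b = math.log(theta_d / theta_s) / D      # approximately 693

def decide(n, total_hours):
    # Accept/reject/continue after n units have been run to failure
    # with total_hours of accumulated test time.
    if total_hours < n * b - h1:
        return "fail"
    if total_hours >= n * b + h2:
        return "pass"
    return "continue"
```

For example, five failures in only 500 total hours falls below the lower line and fails the test, while two failures at 1500 hours lands between the lines and the testing continues.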
EXAMPLE
Assume you are interested in testing a new product to see whether it meets a specified
MTBF of 500 hours with a consumer’s risk of 0.10. Further, specify a design MTBF
of 1000 hours for a producer’s risk of 0.05. Run tests to determine whether the
product meets the criteria.
Determine D based on the known criteria:
D = 1/θs – 1/θd = (1/500) – (1/1000) = .001
Then calculate
h1 = (1/D) ln[(1 – β)/α] = (1/.001) ln[(1 – .10)/.05] ≈ 2890

h2 = (1/D) ln[(1 – α)/β] = (1/.001) ln[(1 – .05)/.10] ≈ 2251
Now solve for b
b = (1/D) ln(θd/θs) = (1/.001) ln(1000/500) ≈ 693
Using these results, we can determine at which points we can make a decision, by
using the following:
FIGURE 7.2 A series block diagram (three blocks, R1, R2, and R3, in series).
If Σti < nb – h1 = 693n – 2890, the test has failed.

If Σti ≥ nb + h2 = 693n + 2251, the test is passed.

If 693n – 2890 ≤ Σti < 693n + 2251, continue the test.

WEIBULL AND NORMAL DISTRIBUTIONS
Sequential test methods have also been developed for the Weibull distribution and
for the normal distribution. If you are interested in pursuing the sequential tests for
either of these distributions, see the selected bibliography
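The accept/continue/reject logic above can be sketched in a few lines of Python (a minimal illustration, not the book's program; the function and variable names are mine, and math.log is the natural logarithm):

```python
import math

def sprt_exponential(times, theta_s, theta_d, alpha, beta):
    """Sequential test for an exponential MTBF.
    times: the failure times observed so far, in hours."""
    D = 1 / theta_s - 1 / theta_d
    b = math.log(theta_d / theta_s) / D
    h1 = math.log((1 - beta) / alpha) / D
    h2 = math.log((1 - alpha) / beta) / D
    n = len(times)
    total = sum(times)
    if total < n * b - h1:      # too little total life accumulated
        return "fail"
    if total >= n * b + h2:     # enough total life accumulated
        return "pass"
    return "continue"

# The chapter's example: theta_s = 500 h, theta_d = 1000 h,
# producer's risk alpha = 0.05, consumer's risk beta = 0.10
decision = sprt_exponential([95, 110, 140], 500, 1000, 0.05, 0.10)
```

With only three early failures the accumulated test time falls between the two decision lines, so the sketch returns "continue", matching the graphical rule 693n − 2890 ≤ Σti < 693n + 2251.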
INTERFERENCE (TAIL) TESTING
Interference demonstration testing can sometimes be used when the stress and
strength distributions are accurately known. If a random sample of the population
is obtained, it can be tested at a point stress that corresponds to a specific percentile
of the stress distribution. By knowing the stress and strength distributions, the
required reliability, the desired confidence level, and the number of allowable failures, it is possible to determine the sample size required.
RELIABILITY VISION
Reliability is valued by the organization and is a primary consideration in all decision
making. Reliability techniques and disciplines are integrated into system and component planning, design, development, manufacturing, supply, delivery, and service
processes. The reliability process is tailored to fit individual business unit requirements and is based on common concepts that are focused on producing reliable
products and systems, not just components.
RELIABILITY BLOCK DIAGRAMS
Reliability block diagrams are used to break down a system into smaller elements and
to show their relationship from a reliability perspective. There are three types of reliability block diagrams: series, parallel, and complex (combination of series and parallel).
1. A typical series block diagram is shown in Figure 7.2 with each of the
three components having R1, R2, and R3 reliability respectively.
FIGURE 7.3 A parallel reliability block diagram. [R1, R2, and R3 in parallel.]
The system reliability for the series is
Rtotal = (R1) (R2) (R3) … (Rn)
EXAMPLE
If the reliability for R1 = .80, R2 = .99, and R3 = .99, the system reliability is Rtotal =
(.80)(.99)(.99) ≈ .78. Please notice that the total reliability is no more than that of the
weakest component in the system. In this case, the total reliability is less than R1.
2. A parallel reliability block diagram shows a system that has built-in
redundancy. A typical parallel system is shown in Figure 7.3.
The system reliability is
Rtotal = 1 − [(1 − R1(t))(1 − R2(t))(1 − R3(t)) … (1 − Rn(t))]
EXAMPLE
If the reliability for R1 = .80, R2 = .90, and R3 = .99, the system reliability is: Rtotal =
1 − [(1 − .80)(1 − .90)(1 − .99)] = .9998. Please notice that the total reliability is more
than that of the strongest component in the system. In this case, the total reliability
is more than R3.
3. Complex reliability block diagrams show systems that combine both series
and parallel situations. A typical complex system is shown in Figure 7.4.
The system reliability for this system is calculated in two steps:
Step 1. Calculate the parallel reliability.
Step 2. Calculate the series reliability — which becomes the total reliability.
EXAMPLE
If the reliability for R1 = .80, R2 = .90, R3 = .95, R4 = .98, and R5 = .99, what is
the total reliability for the system?
FIGURE 7.4 A complex reliability block diagram. [R1 and R2 in series, followed by R3 and R4 in parallel, then R5.]
Step 1. The parallel reliability for R3 and R4 is

R3&4 = 1 − [(1 − R3)(1 − R4)] = 1 − [(1 − .95)(1 − .98)] = .999

Step 2. The series reliability for R1, R2, (R3 & R4), and R5 is

Rtotal = (R1)(R2)(R3&4)(R5) = (.80)(.90)(.999)(.99) ≈ .712
Please notice that the parallel reliability was actually converted into a single reliability
and that is why it is used in the series as a single value.
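The three block-diagram cases can be checked with a few lines of Python (a sketch; the function names are mine):

```python
def series(*rs):
    # Series system: product of the component reliabilities
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel(*rs):
    # Parallel (redundant) system: 1 minus the product of unreliabilities
    q = 1.0
    for r in rs:
        q *= (1 - r)
    return 1 - q

# Figure 7.4 example: R1 and R2 in series, R3 parallel with R4, then R5
r34 = parallel(0.95, 0.98)              # the parallel pair collapses to 0.999
total = series(0.80, 0.90, r34, 0.99)   # then it enters the series as one value
```

The last line mirrors the two-step procedure above: the parallel branch is first reduced to a single reliability, then multiplied through the series.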
WEIBULL DISTRIBUTION — INSTRUCTIONS FOR PLOTTING AND ANALYZING
FAILURE DATA ON A WEIBULL PROBABILITY CHART
This technique is useful for analyzing test data and graphically displaying it on
Weibull probability paper. The technique provides a means to estimate the percent
failed at specific life characteristics together with the shape of the failure distribution.
The following procedure presents a manual method of conducting the analysis, but
many computer programs can do the same calculations and also plot the Weibull
curve. Weibull analysis is one of the simpler analytical methods, but it is also one
of the most beneficial. The technique is useful for more than just analyzing failure
data: it can be used to compare two or more sets of data, such as different
designs, materials, or processes. Following are the steps for conducting a Weibull
analysis.
1. Gather the failure data (it can be in miles, hours, cycles, number of parts
produced on a machine, etc.), then list in ascending order. For example:
We conduct an experiment and the following failures (sample size of 10
failures) are identified (actual hours to failure): 95, 110, 140, 165, 190,
205, 215, 265, 275, and 330.
2. Using the table of median ranks (Table 7.2), find the column corresponding to the number of failures in the sample tested. In our example we
have a sample size of ten, so we use the “sample size 10” column. The
“% Median Ranks” are then read directly from the table.
3. Match the hours (or some other failure characteristic that is measured)
with the median ranks from the sample size selected. For example:
Actual Hours to Failure    % Median Ranks
 95                         6.7
110                        16.2
140                        25.9
165                        35.5
190                        45.2
205                        54.8
215                        64.5
265                        74.1
275                        83.8
330                        93.3
(Sample size of 10 failures)
4. In constructing the Weibull plot, label the “Life” on the horizontal log
scale on the Weibull graph in the units in which the data were measured.
Try to center the life data close to the center of the horizontal scale
(Figure 7.5).
5. Plot each pair of “actual hours to failure” (on the horizontal logarithmic
scale) and “% median rank” (on the vertical axis, which is a log-log scale)
on the graph. The matching points are shown as dots (“•”) on Figure 7.5.
Draw a “line of best fit” (generally a straight line) as close to the data
pairs as possible. Half the data points should be on one side of the line,
and the other half should be on the other side. No two people will generate
the exact same line, but analysts should keep in mind that this is a visual
estimate. (If the line is computer generated, it is actually calculated based
on the “best fit” regression line.)
6. After the line of “best fit” is drawn, the life at a specific point can be
found by going vertically to the “Weibull line” then going horizontally to
the “Cumulative % Failed.” In other words, this is the percent that is
expected to fail at the life that was selected. In the example, 100 was
selected as the life, then going up to the line and then across, we can see
the expected % failed to be 10%. In this case, the life at 100 hours is also
known as the B10 life (or 90% reliability) and is the value at which we
would expect 10% of the parts to fail when tested under similar conditions.
(Please note that there is nothing secret about the B10 life. Any Bx life can
be identified. It just happens that the B10 is the conventional life that most
engineers are accustomed to using.) In addition, we can plot the 5% and
the 95% confidence using Tables 7.3 and 7.4 respectively.
The confidence lines are drawn for our example in Figure 7.5. The reader
will notice that the confidence lines are not straight. That is because as we
move toward the fringes of the distribution, we are less confident about the results.
FIGURE 7.5 The Weibull distribution for the example. [Weibull probability chart: life on the horizontal log scale, cumulative percent failed on the vertical scale, with a Weibull slope scale in the upper left corner.]
TABLE 7.3
Five Percent Rank Table

                                Sample Size (n)
 j       1       2       3       4       5       6       7       8       9      10
 1   5.000   2.532   1.695   1.274   1.021   0.851   0.730   0.639   0.568   0.512
 2          22.361  13.535   9.761   7.644   6.285   5.337   4.639   4.102   3.677
 3                  36.840  24.860  18.925  15.316  12.876  11.111   9.775   8.726
 4                          47.237  34.259  27.134  22.532  19.290  16.875  15.003
 5                                  54.928  41.820  34.126  28.924  25.137  22.244
 6                                          60.696  47.820  40.031  34.494  30.354
 7                                                  65.184  52.932  45.036  39.338
 8                                                          68.766  57.086  49.310
 9                                                                  71.687  60.584
10                                                                          74.113

                                Sample Size (n)
 j      11      12      13      14      15      16      17      18      19      20
 1   0.465   0.426   0.394   0.366   0.341   0.320   0.301   0.285   0.270   0.256
 2   3.332   3.046   2.805   2.600   2.423   2.268   2.132   2.011   1.903   1.806
 3   7.882   7.187   6.605   6.110   5.685   5.315   4.990   4.702   4.446   4.217
 4  13.507  12.285  11.267  10.405   9.666   9.025   8.464   7.969   7.529   7.135
 5  19.958  18.102  16.566  15.272  14.166  13.211  12.377  11.643  10.991  10.408
 6  27.125  24.530  22.395  20.607  19.086  17.777  16.636  15.634  14.747  13.955
 7  34.981  31.524  28.705  26.358  24.373  22.669  21.191  19.895  18.750  17.731
 8  43.563  39.086  35.480  32.503  29.999  27.860  26.011  24.396  22.972  21.707
 9  52.991  47.267  42.738  39.041  35.956  33.337  31.083  29.120  27.395  25.865
10  63.564  56.189  50.535  45.999  42.256  39.101  36.401  34.060  32.009  30.195
11  76.160  66.132  58.990  53.434  48.925  45.165  41.970  39.215  36.811  34.693
12          77.908  68.366  61.461  56.022  51.560  47.808  44.595  41.806  39.358
13                  79.418  70.327  63.656  58.343  53.945  50.217  47.003  44.197
14                          80.736  72.060  65.617  60.436  56.112  52.420  49.218
15                                  81.896  73.604  67.381  62.332  58.088  54.442
16                                          82.925  74.988  68.974  64.057  59.897
17                                                  83.843  76.234  70.420  65.634
18                                                          84.668  77.363  71.738
19                                                                  85.413  78.389
20                                                                          86.089
TABLE 7.4
Ninety-five Percent Rank Table

                                Sample Size (n)
 j       1       2       3       4       5       6       7       8       9      10
 1  95.000  77.639  63.160  52.713  45.072  39.304  34.816  31.234  28.313  25.887
 2          97.468  86.465  75.139  65.741  58.180  52.070  47.068  42.914  39.416
 3                  98.305  90.239  81.075  72.866  65.874  59.969  54.964  50.690
 4                          98.726  92.356  84.684  77.468  71.076  65.506  60.662
 5                                  98.979  93.715  87.124  80.710  74.863  69.646
 6                                          99.149  94.662  88.889  83.125  77.756
 7                                                  99.270  95.361  90.225  84.997
 8                                                          99.361  95.898  91.274
 9                                                                  99.432  96.323
10                                                                          99.488

                                Sample Size (n)
 j      11      12      13      14      15      16      17      18      19      20
 1  23.840  22.092  20.582  19.264  18.104  17.075  16.157  15.332  14.587  13.911
 2  36.436  33.868  31.634  29.673  27.940  26.396  25.012  23.766  22.637  21.611
 3  47.009  43.811  41.010  38.539  36.344  34.383  32.619  31.026  29.580  28.262
 4  56.437  52.733  49.465  46.566  43.978  41.657  39.564  37.668  35.943  34.366
 5  65.019  60.914  57.262  54.000  51.075  48.440  46.055  43.888  41.912  40.103
 6  72.875  68.476  64.520  60.928  57.744  54.835  52.192  49.783  47.580  45.558
 7  80.042  75.470  71.295  67.497  64.043  60.899  58.029  55.404  52.997  50.782
 8  86.492  81.898  77.604  73.641  70.001  66.663  63.599  60.784  58.194  55.803
 9  92.118  87.715  83.434  79.393  75.627  72.140  68.917  65.940  63.188  60.641
10  96.668  92.813  88.733  84.728  80.913  77.331  73.989  70.880  67.991  65.307
11  99.535  96.954  93.395  89.595  85.834  82.223  78.809  75.604  72.605  69.805
12          99.573  97.195  93.890  90.334  86.789  83.364  80.105  77.028  74.135
13                  99.606  97.400  94.315  90.975  87.623  84.366  81.250  78.293
14                          99.634  97.577  94.685  91.535  88.357  85.253  82.269
15                                  99.659  97.732  95.010  92.030  89.009  86.045
16                                          99.680  97.868  95.297  92.471  89.592
17                                                  99.699  97.989  95.553  92.865
18                                                          99.715  98.097  95.783
19                                                                  99.730  98.193
20                                                                          99.744
7. The graph can be used for estimating the cumulative % failure at a
specified life, or it can be used for determining the estimated life at a
cumulative % failure. In the example, we would expect 63.2% of the test
units to fail at 222 hours. This value at 63.2% is also known as the
characteristic life or the mean time between failures (MTBF) for the
example distribution. Or looking at the chart another way, we would like
to estimate the failure hours at a specified % failure. For example at 95%
cumulative % failed, the hours to failure are 325 hours. Once the Weibull
plot is determined, an analyst can go either way.
8. The Weibull graph can also be used to estimate the reliability at a given
life, using the equation R(t) = 1 – F(t). A designer who wishes to
estimate the reliability at 200 hours would go vertically to the
Weibull line, then go horizontally to 52%, which is the percent expected
to fail. The estimated reliability at 200 hours would be 1 – 0.52 = 0.48
or 48%. At 80 hours it would be 1 – 0.056 = 0.944 or 94.4%. The slope
is obtained by drawing a line parallel to the Weibull line on the Weibull
slope scale that is in the upper left corner of the chart.
9. If a computer program is used, the calculation for the line of best fit is
determined by the computer. Some programs draw the graph and show
the paired points, the line of best fit (using the least squares method or
the maximum likelihood method), the reliability at a specified hour (or
other designated parameter), and the slope of the line.
10. One of the interesting observations regarding the Weibull graph is the
interpretations that can be made about the distribution by the portrayal of
the slope. When the slope is:
• Less than 1, this indicates a decreasing failure rate, early life, or infant
mortality
• Approximately 1, the distribution indicates a nearly constant failure
rate (useful life or a multitude of random failures)
• Exactly 1, the distribution has an exponential pattern
• Greater than 1, the start of wear out
• Approximately 3.55, a normal distribution pattern
11. Weibull plots can be made if test data also include test samples that have
not failed. Parts that have not failed (for whatever reason during the
testing) can be included in the calculations together with the failed parts
or assemblies. The non-failed data are referred to as suspended items. The
method of determining the Weibull plot is shown in the next set of
instructions.
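Step 9 above mentions computer-generated fits. A minimal least-squares version of the procedure can be sketched in Python, with Benard's approximation standing in for the Table 7.2 median ranks (the names and structure are mine, not the book's):

```python
import math

def median_ranks(n):
    # Benard's approximation to the median ranks of Table 7.2
    return [(j - 0.3) / (n + 0.4) for j in range(1, n + 1)]

def weibull_fit(times):
    """Least-squares fit on Weibull probability paper:
    ln(-ln(1 - F)) = beta * ln(t) - beta * ln(theta)."""
    ts = sorted(times)
    fs = median_ranks(len(ts))
    xs = [math.log(t) for t in ts]
    ys = [math.log(-math.log(1 - f)) for f in fs]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    beta = sxy / sxx                      # slope of the Weibull line
    theta = math.exp(xbar - ybar / beta)  # characteristic (63.2%) life
    return beta, theta

beta, theta = weibull_fit([95, 110, 140, 165, 190, 205, 215, 265, 275, 330])
```

Run on the example data, this regression returns a slope and characteristic life close to the visual readings in the text (a slope in the mid-2s and a characteristic life near 222 hours); a hand-drawn line will differ slightly, as the text notes.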
INSTRUCTIONS FOR PLOTTING FAILURE AND SUSPENDED ITEMS DATA ON A WEIBULL PROBABILITY CHART

1. Gather the failure and suspended items data, then, including the suspended
items, list in ascending order.
Item Number   Hours to Failure or Suspension   Failure or Suspension Code(a)
 1             95                              F1
 2            110                              F2
 3            140                              F3
 4            165                              F4
 5            185                              S1
 6            190                              F5
 7            205                              F6
 8            210                              S2
 9            215                              F7
10            265                              F8
11            275                              F9
12            330                              F10
13            350                              S3

Sample size 13: 10 failures, 3 suspensions.
(a) Code items as failed (F) or suspended (S).
2. Calculate the mean order number of each failed unit. The mean order
numbers before the first suspended item are the respective item numbers
in the order of occurrence, i.e., 1, 2, 3, and 4. The mean order numbers
after the suspended items are calculated by the following equations.
Mean order number = (previous mean order number) + (new increment)

where

new increment = [(N + 1) − (previous mean order number)] / [1 + (number of items beyond present suspended item)]

and N = total sample size.

For example, to compensate for S1 (first suspended item), new increment =
[(13 + 1) − 4]/(1 + 8) = 1.111, and the mean order number of F5 (fifth
failed item) = 4 + 1.111 = 5.111.

Note: Only one new increment is found each time a suspended item is encountered. Mean order number of F6 = 5.111 + 1.111 = 6.222.

New increment for mean order number of F7 = [(13 + 1) − 6.222]/(1 + 5) =
1.296.
Then, the mean order number of F7 (seventh failed item) is 6.222 + 1.296 =
7.518 (and so on for F8, F9, and F10).
Applying the same procedure gives the full set of mean order numbers:
Item Number   Hours to Failure or Suspension   Failure or Suspension Code   Mean Order Number
 1             95                              F1                            1
 2            110                              F2                            2
 3            140                              F3                            3
 4            165                              F4                            4
 5            185                              S1                            —
 6            190                              F5                            5.111
 7            205                              F6                            6.222
 8            210                              S2                            —
 9            215                              F7                            7.518
10            265                              F8                            8.815
11            275                              F9                           10.111
12            330                              F10                          11.407
13            350                              S3                            —
3. A rough check on the calculations can be made by adding the last increment to the final mean order number. If the value is close to the total
sample size, the numbers are correct. In our example, 11.407 + [11.407 –
10.111] = 11.407 + 1.296 = 12.702, which is a close approximation to
the sample size of 13.
4. Using the table of median ranks for a sample size of 13 we can determine
the median rank for the first four failures, or we can use the approximate
median rank formula

Median rank = (J − .3)/(N + .4)

where J = mean order number and N = total sample size. For example, the
median rank of F5 is (5.111 − .3)/(13 + .4) = 0.359, and for the remainder
of the failures: (6.222 − .3)/(13 + .4) = 0.442, (7.518 − .3)/(13 + .4) = 0.539,
and so on.
Item Number   Hours to Failure or Suspension   Failure or Suspension Code   Mean Order Number   % Median Rank
 1             95                              F1                            1                   5.2
 2            110                              F2                            2                  12.6
 3            140                              F3                            3                  20.0
 4            165                              F4                            4                  27.5
 5            185                              S1                            —                   —
 6            190                              F5                            5.111              35.9
 7            205                              F6                            6.222              44.2
 8            210                              S2                            —                   —
 9            215                              F7                            7.518              53.9
10            265                              F8                            8.815              63.5
11            275                              F9                           10.111              73.2
12            330                              F10                          11.407              82.9
13            350                              S3                            —                   —
5. Label the “Life” on the horizontal log scale on the Weibull graph in the
units in which the data were measured. Try to center the life data close
to the center of the horizontal scale.
6. Plot each pair of “actual hours to failure” (on the horizontal scale) and
“% median rank” (on the vertical scale) on the graph. Draw a “line of
best fit” (generally a straight line) as close to the data pair as possible.
Half the data points should be on one side of the line, and the other half
should be on the other side.
7. Once the line is drawn, the life at a specific point can be found by going
vertically to the “Weibull line” then going horizontally to the “Cumulative
% failed.” In other words, this is the percent that is expected to fail at the
life that was selected. In the example, 200 hours was selected as the life,
then going up to the line and then across, we can see the expected %
failed to be 40%.
8. Other reliability parameters that can be read from the Weibull plot are:

MTBF = 240 hours
B10 = 105 hours
B (slope) = 2.5
Reliability at 100 hours is 1 − 0.09 = 0.91 reading from the graph, or using
the Weibull equation

R = exp[−(t/MTBF)^B] = exp[−(100/240)^2.5] ≈ 0.894
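Evaluating this Weibull equation numerically is a quick check; exp[−(100/240)^2.5] works out to about 0.894, in line with the graph reading of roughly 0.91:

```python
import math

# Reliability at 100 h from the fitted Weibull
# (characteristic life 240 h and slope B = 2.5 are the chart readings)
r_100 = math.exp(-((100 / 240) ** 2.5))   # about 0.894
```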
9. Comparing the two examples shows that the analysis with suspended items
results in slightly higher reliability characteristics, using the same
failure data plus the three suspended items.
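The mean-order-number bookkeeping in steps 1 through 4 above can be sketched as follows (a hypothetical helper of my own, not the book's program):

```python
def mean_order_numbers(codes, N=None):
    """codes: list of 'F' (failed) or 'S' (suspended), in ascending hours.
    Returns the mean order number for each failure (None for suspensions)."""
    N = N or len(codes)
    out, prev, inc = [], 0.0, 1.0
    for i, c in enumerate(codes):
        if c == 'S':
            # one new increment each time a suspension is encountered
            remaining = len(codes) - (i + 1)
            inc = (N + 1 - prev) / (1 + remaining)
            out.append(None)
        else:
            prev += inc
            out.append(prev)
    return out

def median_rank(j, N):
    # Benard's approximation: (J - .3)/(N + .4)
    return (j - 0.3) / (N + 0.4)

codes = ['F', 'F', 'F', 'F', 'S', 'F', 'F', 'S', 'F', 'F', 'F', 'F', 'S']
orders = mean_order_numbers(codes)   # 1, 2, 3, 4, None, 5.111, 6.222, ...
```

For the 13-item example this reproduces the table values 5.111, 6.222, 7.518, 8.815, 10.111, and 11.407, and median_rank(5.111, 13) returns about 0.359.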
ADDITIONAL NOTES ON THE USE OF THE WEIBULL
1. Weibull plotting is an invaluable tool for analyzing life data; however,
some precautions should be taken. Goodness-of-fit is one concern. This
can be tested with various tests such as the Kolmogorov-Smirnov or Chi-square. The use of an adequate sample size is another concern. Generally
a sample size should be greater than ten, but if the failure rate is in a tight
pattern (with relatively low variability), this generality may be relaxed.
Be suspicious of a curved line that best fits the data. This may indicate a
mixed sample of failures or inappropriate sampling.
2. If the Weibull plot is made and a curvilinear relation develops for the
connecting points, it usually indicates that two or more distributions are
making up the data. This may be due to infant mortality failures being
mixed with the data, failures due to components from two different
machines or assembly operations, or some other underlying cause. If a
curved relationship is indicated, the analyst should revisit the data and try
to determine if the data are made up of two or more distributions and then
manage each distribution separately.
3. There is another parameter in the Weibull analysis that was not discussed.
Besides the shape or slope (β) of the Weibull line and the scale or characteristic
life (the mean life or MTBF at the 63.2% cumulative percentage), there is
the “location parameter.” In most cases it is zero and should be of
little concern. In effect, it states that the distribution of failure times starts at
zero time, which is most often the case because it is difficult to imagine
otherwise. The characteristic life splits the distribution into two areas: 0.632
before and 0.368 after ( R(θ) = e^−(θ/θ)^β = e^−1 = .368 ).
4. One of the advantages of using the Weibull is that it is very flexible in its
interpretations. A wealth of information can be derived from it. If the
Weibull slope is equal to one, the distribution is the same as the exponential, or a constant failure rate. If the slope is in the vicinity of 3.5, it is a
“near normal distribution.” If the slope is greater than one, the plot starts
to represent a wear out distribution, or an increasing hazard rate. A slope
less than one generally indicates a decreasing hazard rate, or an infant
mortality distribution.
5. Analysts should be careful about extrapolating beyond the data when
making predictions. Remember that the failure points fall within certain
bounds and that the analyst should have a valid reason when venturing
beyond these bounds. When making projections over and above these
confines, sound engineering judgment, statistical theory, and experience
should all be taken into consideration.
6. The three-parameter Weibull is a distribution with non-zero minimum life.
This means that the population of products goes for an initial period of
time without failure. The reliability function for the three-parameter
Weibull is given by

R(t) = exp{−[(t − δ)/(θ − δ)]^β},  t ≥ δ
where t = time to failure (t ≥ δ); δ = minimum life parameter (δ ≥ 0); β =
Weibull slope (β > 0); and θ = characteristic life (θ ≥ δ).
For a given reliability R,

t = δ + (θ − δ) × [ln(1/R)]^(1/β)

and the B10 life is

B10 = δ + (θ − δ) × [ln(1/0.90)]^(1/β)
DESIGN OF EXPERIMENTS
IN RELIABILITY APPLICATIONS
Certainly we can use DOE in passive observation of the covariates in the tested
components. We can also use DOE in directed experimentation as part of our
reliability improvement. Covariates are usually called factors in the experimentation
framework. Two main technical problems arise in the reliability area, however, when
standard methods of experimental design are employed.
1. Failure time data are rarely normally distributed, so standard analysis tools
that rely on symmetry, e.g., normal plots, do not work too well.
2. Censoring.
The first problem can be overcome by considering a transformation of the fail
times to make them approximately normal — the log transformation is usually a
good choice. The exact form of the fail time distribution is not important because
we are looking for effects that improve reliability, rather than exact predictions of
the reliability itself.
The second problem of censoring is a little bit trickier but can be dealt with by
iteration as follows:
1. Choose a basic model to fit to the data.
2. Fit the model to the data, treating the censor times as failure times.
3. Using this model, make a conditional prediction for the unobserved fail
times for each censored observation. The prediction is conditional because
the actual failure time must be consistent with the censoring mechanism.
4. Replace censor times with the fail time predictions from step 3.
5. Go back to step 2.
Eventually this process will converge, i.e., the predictions for the fail times of
the censorings will stop changing from one iteration to the next. If necessary, the
process can be tried with several model choices for step 1. In fact, the algorithm of
the five steps leads to the same results as maximum likelihood estimation.
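The five-step iteration can be sketched with a normal working model on log failure times (an illustrative EM-style sketch under that assumed model; the names are mine). The conditional prediction in step 3 becomes the truncated-normal mean E[X | X > c]:

```python
import math
import statistics

def phi(z):    # standard normal pdf
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def Phi(z):    # standard normal cdf
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def fit_censored(fails, censors, iters=50):
    """Iterative imputation for right-censored data, normal model on log t.
    fails: observed failure times; censors: right-censor (suspension) times."""
    logs_f = [math.log(t) for t in fails]
    imputed = [math.log(c) for c in censors]   # step 2: treat censors as failures
    mu = sd = None
    for _ in range(iters):
        data = logs_f + imputed                # step 4: censors replaced by predictions
        mu, sd = statistics.mean(data), statistics.pstdev(data)
        # step 3: conditional prediction E[X | X > c] for each censor time
        imputed = [mu + sd * phi((math.log(c) - mu) / sd)
                        / (1 - Phi((math.log(c) - mu) / sd))
                   for c in censors]
    return mu, sd

mu, sd = fit_censored([95, 110, 140, 165, 190, 205, 215, 265, 275, 330],
                      [185, 210, 350])
```

After a few dozen loops the imputed values stop changing, which is the convergence behavior described above.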
RELIABILITY IMPROVEMENT
THROUGH PARAMETER DESIGN
Two special categories of covariates in any parameter design are design parameters
(or control factors) and error variables (or noise factors). The terms in parentheses
are the equivalent terms within the context of robustness, which we have already
discussed in Volume V of this series.
The achievement of higher reliability can also be viewed as an improvement to
robustness. Robustness is defined as reduced sensitivity to noise factors. In most
industries, noise factors have five main categories:
1. Piece to piece variation
2. Changes to component characteristics over time
3. Customer duty cycle
4. Environmental conditions
5. Interfacing (environment created by neighboring components in the system)
Typically, noises in categories 3, 4, and 5 can induce noises in category 2. If
the function of the component can be made robust to noises in category 2, then the
component will, by definition, be more reliable. Often, noise category 1 contributes
to infant mortality, category 2 to degradation, and categories 3, 4, and 5 to useful
life problems. Recognizing this pattern of noises, we can relate them to the bathtub
curve (see Figure 7.1) for the hazard function.
Often, knowing the type of failure rate that is acting on our component can give
a clue as to the offending noise factor and hence lead to a root cause analysis of the
failure mechanism. Components can be made robust to noises by experimenting
with control factors. The idea (as in robustness generally) is to look for interactions
between control and noise factors. The reliability connection is made if there is a
“time lag” between the extremes of the noise space, denoted N– and N+, say — see
Figure 7.6.
Note that the functional measure is not failure time, but some ideal function of
the system. C1 and C2 represent two settings of a control factor. A design with C2
is more robust to noise than one with C1 and is therefore more reliable. Note: A
closely related area to robustness in reliability studies is Accelerated Degradation
Testing (ADT), which is closely associated with Accelerated Life Testing (ALT).
A parameter design layout in reliability applications follows the pattern for
parameter design studies, as in the example shown in Figure 7.7.
FIGURE 7.6 Control factors and noise interactions. [Functional measure plotted against time; curves for control settings C1 and C2 between the noise extremes N− and N+.]
FIGURE 7.7 An example of a parameter design in reliability usage. [An eight-run array: configurations 1–8 set the control factors A, B, C, …, G at + and − levels; each configuration is run at the noise extremes N− (new) and N+ (old), yielding responses Y1−, …, Y8− and Y1+, …, Y8+.]
The idea of experimental layouts of this type is to look for interactions between
control factors and noise factors, which lead to configurations with minimum difference between the y values.
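Selecting such a configuration can be sketched in a few lines (the response numbers below are made up purely for illustration):

```python
# Pick the configuration whose response changes least between the
# noise extremes N- and N+ (i.e., the most robust control setting)
runs = {
    # configuration: (y_minus, y_plus) -- illustrative numbers only
    1: (12.1, 15.9),
    2: (13.0, 13.4),
    3: (11.5, 16.2),
    4: (12.8, 13.1),
}
best = min(runs, key=lambda k: abs(runs[k][1] - runs[k][0]))
```

Here configuration 4 would be chosen, since its y values differ by only 0.3 across the noise extremes.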
DEPARTMENT OF DEFENSE RELIABILITY
AND MAINTAINABILITY — STANDARDS
AND DATA ITEMS
Table 7.5 provides very useful information about reliability and maintainability
(R&M) standards and data items used in reliability.
TABLE 7.5
Department of Defense Reliability and Maintainability — Standards and
Data Items
Standard                  Explanation

General Design Standards
MIL-STD-454M              Standard General Requirements for Electronic Equipment
MIL-HDBK-727              Design Guidance for Producibility
MIL-STD-810E              Environmental Test Methods & Engineering Guidelines
MIL-STD-1629A             Procedures for Performing a Failure Mode Effects & Criticality Analysis
MIL-STD-1686A             Electrostatic Discharge Control Program for Protection of Electrical & Electronic Parts, Assemblies & Equipment
MIL-E-4158E-(USAF)        General Specification for Ground Electronic Equipment
MIL-E-5400T               General Specification for Aerospace Electronic Equipment
MIL-HDBK-251              Reliability/Design Thermal Applications
MIL-HDBK-263A             Electrostatic Discharge Handbook for Protection of Electrical & Electronic Parts, Assemblies & Equipment
MIL-HDBK-338A             Electronic Reliability Design Handbook

Reliability Standards
MIL-STD-721C              Definitions of Terms for Reliability & Maintainability
MIL-STD-756B              Reliability Modeling & Prediction
MIL-STD-781D              Reliability Testing for Engineering Development, Qualification & Production
MIL-STD-785B              Reliability Program Systems & Equipment Development & Production
MIL-STD-1543B-(USAF)      Reliability Program Requirements for Space & Launch Vehicles
MIL-STD-2155-(AS)         Failure Reporting Analysis & Corrective Action System
MIL-STD-2164-(EC)         Environmental Stress Screening Process for Electronic Equipment
MIL-Q-9858A               Quality Program Requirements
MIL-HDBK-189              Reliability Growth Management
MIL-HDBK-217F             Reliability Prediction of Electronic Equipment
MIL-HDBK-781              Reliability Test Methods, Plans & Environments for Engineering Development, Qualification & Production
DoD-HDBK-344-(USAF)       Environmental Stress Screening of Electronic Equipment

Maintainability Standards
MIL-STD-470B              Maintainability Program for Systems & Equipment
MIL-STD-471A              Maintainability Demonstration
MIL-STD-2084-(AS)         General Requirements for Maintainability
MIL-STD-2165              Testability Program for Electronic Systems & Equipment
MIL-HDBK-472              Maintainability Prediction

Major Parts Standards
MIL-STD-198E              Selection & Use of Capacitors
MIL-STD-199E              Selection & Use of Resistors
MIL-STD-202E              Test Methods for Electronic & Electrical Component Parts
MIL-STD-701N              Lists of Standard Semiconductor Devices
MIL-STD-750C              Test Methods for Semiconductor Devices
MIL-STD-790E              Reliability Assurance Program for Electronic Part Specifications
MIL-STD-883D              Test Methods & Procedures for Microelectronics
MIL-STD-965A              Parts Control Program
MIL-STD-983               Substitution List for Microcircuits
MIL-STD-1546A-(USAF)      Parts, Materials & Processes Control Program for Space & Launch Vehicles
MIL-STD-1547A-(USAF)      Electronic Parts, Materials & Processes for Space & Launch Vehicles
MIL-STD-1556B             Government/Industry Data Exchange Program (GIDEP) Contractor Participation Requirements
MIL-STD-1562W             Lists of Standard Microcircuits
MIL-STD-1772B             Certification Requirements for Hybrid Microcircuit Facility & Lines
MIL-S-19500H + QPL        General Specification for Semiconductor Devices
MIL-M-38510J + QPL        General Specification for Microcircuits
MIL-H-38534A + QML        General Specification for Hybrid Microcircuits
MIL-I-38535A + QML        General Specification for Integrated Circuits (Microcircuits) Manufacturing
MIL-HDBK-339-(USAF)       Custom LSI Circuit Development & Acquisition for Space Vehicles
MIL-HDBK-780A             Standardized Military Drawings
MIL-BUL-103J              List of Standardized Military Drawings (SMDs)
Reliability Analysis Center Publications
DSR           Discrete Semiconductor Device Reliability
FMD           Failure Mode/Mechanism Distributions
FTA           Fault Tree Analysis
MFAT-1        Microelectronics Failure Analysis Techniques — A Procedural Guide
MFAT-2        GaAs Characterization & Failure Analysis Techniques
NONOP-1       Nonoperating Reliability Data
NPRD          Nonelectronic Parts Reliability Data
NPS-1         Analysis Techniques for Mechanical Reliability
PRIM          A Primer for DoD Reliability, Maintainability, Safety and Logistics Standards
RDSC-1        Reliability Sourcebook
RMST          Reliability and Maintainability Software Tools
SOAR-2        Practical Statistical Analysis for the Reliability Engineer
SOAR-4        Confidence Bounds for System Reliability
SOAR-6        ESD Control in the Manufacturing Environment
SOAR-7        A Guide for Implementing Total Quality Management
SOAR-8        Process Action Team (PAT) Handbook
VZAP          Electrostatic Discharge Susceptibility Data

Computer Formats
NPRD-P        Nonelectronic Parts Reliability Data (IBM PC database)
NRPS          Nonoperating Reliability Prediction Software (includes NONOP-1)
VZAP-P        VZAP Data (IBM PC database)
SL3151Ch07Frame Page 340 Thursday, September 12, 2002 6:07 PM
340
Six Sigma and Beyond
TABLE 7.5 (continued)
Department of Defense Reliability and Maintainability — Standards and
Data Items
Rome Laboratory Technical Reports
Rome Laboratory (formerly Rome Air Development Center) has published hundreds of useful R&M
technical reports that are available from the Defense Technical Information Center and the National
Technical Information Service. Call RAC for a list. [Address at publication time: Reliability Analysis
Center * 201 Mill Street * Rome, NY 13440–6916 * Telephone: 315.337.0900]
Data Item Descriptions
MIL-STD-756 Reliability Modeling and Prediction
DI-R-7081
R Mathematical Model(s)
DI-R-7082
R Predictions Report(s)
DI-R-7094
R Block Diagrams & Math. Models Report
DI-R-7095
R Predict. & Doc. of Support. Material
DI-R-7100
R Report for Explor. Advanced Develop.
MIL-STD-781 Reliability Test Methods, Plans, and Environments for Engineering Development,
Qualification, and Production
DI-RELI-80247
Thermal Survey Report
DI-RELI-80248
Vibration Survey Report
DI-RELI-80249
ESS Report
DI-RELI-80250
R Test Plan
DI-RELI-80251
R Test Procedures
DI-RELI-80252
R Test Report
DI-RELI-80253
Failed Item Analysis Report
DI-RELI-80254
Corrective Action Plan
DI-RELI-80255
Failure Summary and Analysis Report
MIL-STD-785 Reliability Program for Systems and Equipment Development and Production and
MIL-STD-1543 Reliability Program Requirements for Space and Launch Vehicles
DI-R-7079
R Program Plan
DI-R-7084
Elect. Parts/Circuits Tol. Analysis Report
DI-R-7086
FMECA Plan
DI-A-7088
Conference Agenda
DI-A-7089
Conference Minutes
DI-QCIC-80125
ALERT/SAFE ALERT
DI-QCIC-80126
Response to ALERT/SAFE ALERT
DI-RELI-80249
ESS Report
DI-RELI-80250
Test Plan
DI-RELI-80251
Test and Demo. Procedures
DI-RELI-80252
Test Reports
DI-RELI-80253
Failed Item Analysis Report
DI-RELI-80255
Report, Failure Summary and Analysis
DI-RELI-80685
Critical Item List
DI-RELI-80686
Allocat., Assess. & Analysis Report
DI-RELI-80687
Report, FMECA
MIL-STD-2155 FRACA System
DI-E-2178
Computer Software Trouble Report
DI-R-21597
FRACA System Plan
DI-R-21598
Failure Report
DI-R-21599
Develop. & Product. Failure Summary Report
MIL-STD-2164 ESS Process for Electronic Equipment
DI-ENVR-80249
Environmental Stress Screening Report
DOD-HDBK-344 ESS of Electronic Equipment
DI-ENVR-80249
Environmental Stress Screening Report
MIL-STD-810 Environmental Test Methods and Engineering Guidelines
DI-ENVR-80859
Environmental Management Plan
DI-ENVR-80860
Life Cycle Environmental Profile
DI-ENVR-80861
Environmental Design Test Plan
DI-ENVR-80862
Operational Environment Verif. Plan
DI-ENVR-80863
Environmental Test Report
MIL-STD-1629 Procedures for Performing a FMECA
DI-R-7085
FMECA Report
DI-R-7086
FMECA Plan
MIL-STD-1686 ESD Control Program for Protection of Electrical and Electronic Parts, Assemblies
and Equipment
DI-RELI-80669
ESD Control Program Plan
DI-RELI-80670
Reporting Results of ESD Sensitivity Tests of Electrical & Electronic Parts
DI-RELI-80671
Handling Procedure for ESD Sensitive Items
MIL-STD-1546 Parts, Materials, and Processes Control Program for Space and Launch Vehicles
DI-A-7088
Conference Agenda
DI-A-7089
Conference Minutes
DI-MISC-80526
Parts Control Program Plan
DI-MISC-80072
Program Parts Selection List (PPSL)
DI-MISC-80071
Part Approval Requests
MIL-STD-1556 GIDEP Contractor Participation Requirements
DI-QCIC-80125
ALERT/SAFE-ALERT
DI-QCIC-80126
Response to an ALERT/SAFE-ALERT
DI-QCIC-80127
GIDEP Annual Progress Report
MIL-STD-470 Maintainability Program for Systems and Equipment
DI-R-2129
M Demo. Plan (MIL-STD-470A, Task 301 only)
DI-R-7085
FMECA Report
DI-MNTY-80822
Program Plan
DI-MNTY-80823
M Status Report
DI-MNTY-80824
Data Collect., Anal. & Correct. Action System
DI-MNTY-80825
M Modeling Report
DI-MNTY-80826
M Allocations Report
M Predictions Report
DI-MNTY-80827
M Analysis Report
DI-MNTY-80828
M Design Criteria Plan
DI-MNTY-80829
Inputs to the Detailed Maintenance Plan & LSA
DI-MNTY-80830
M Testability Demo. Test Plan
DI-MNTY-80831
M Testability Demo. Test Report
DI-MNTY-80832
MIL-STD-471 Maintainability Demonstration
DI-R-2129
M Demonstration Plan
DI-MNTY-80831
M Testability Demo. Test Plan
DI-MNTY-80832
M Testability Demonstration Report
DI-MNTY-81188
Verif., Demo., Assess. & Evaluation Plan
DI-QCIC-81187
Quality Assessment Report
MIL-STD-2165 Testability Program for Electronic Systems and Equipments
DI-E-5423
Design Review Data Package
DI-T-7198
Testability Program Plan
DI-T-7199
Testability Analysis Report
DI-MNTY-80824
Data Collect., Anal. & Correct. Act. System Plan
DI-MNTY-80831
M/Testability Demo. Test Plan
DI-MNTY-80832
M/Testability Demo. Report
MIL-HDBK-472 Maintainability Prediction
DI-MNTY-80827
M Predictions Report
Note: Only data items specified in the Contract Data Requirements List (CDRL) are deliverable.
8 Reliability and Maintainability
As the world moves towards building more competitive products, it is important to
put additional emphasis on reliability and maintainability (R&M), which support
reduction of inventories and “build to schedule” targets.
The Quality Systems Requirements, Tooling & Equipment (TE) Supplement to
QS-9000 was developed by Chrysler, Ford, General Motors, and Riviera Die & Tool
to enhance quality systems while eliminating redundant requirements, facilitating
consistent terminology, and reducing costs. It is important that everyone involved
in the design or purchase of machinery be aware of this supplement and their
responsibilities as outlined in the QS-9000 process. It is also important that everyone
understand that the TE supplement defines machinery as tooling and equipment
combined. Machinery is a generic term for all hardware, including necessary operational software, that performs a manufacturing process.
The TE goal is to improve the quality, reliability, maintainability, and durability
of products through development and implementation of a fundamental quality
management system. The supplement communicates additional common system
requirements unique to the manufacturers of tooling and equipment as applied to
the QS-9000 requirements. This particular chapter will emphasize the reliability and
maintainability areas. Quality operating systems (QOS) and durability are equally
important subjects but are beyond the scope of this work. The reader is encouraged
to review Volume IV — the material on machine acceptance.
WHY DO RELIABILITY AND MAINTAINABILITY?
Due to a lack of confidence in the performance of our equipment, we have traditionally purchased excessive facilities and tooling in order to meet production objectives. It is estimated that, in the automotive industry for example, approximately 73% of the total cost of a program from development through launch is in this area.
Additionally, capital spent on “insurance-type” spare tooling hidden for unplanned
breakdowns shows a lack of confidence in production equipment. Operational effects
of production shortfall and the inability to predict downtime are countless. They
include unplanned overtime, unplanned and increasing maintenance requirements
and costs, and excessive work in process around constraint operations.
The R&M process builds confidence in predicting performance of machinery,
and, through this process, we can improve the expected and demonstrated levels of
machinery performance. Properly predicting and improving performance contributes
to lower total cost and improved profits for the organization.
The R&M process consists of five phases that form a continuous loop. The five
phases are: (1) concept; (2) design and development; (3) machinery build and
installation; (4) machinery operation, continuous improvement, and performance
analysis; and (5) conversion to the concept of the next cycle. As the loop continues,
each generation of machinery improves.
In this chapter we will concentrate on the first three phases of the loop, not
because they are more important, but because they are the major focus of this
planning effort of the design for six sigma (DFSS) campaign. The last two phases
should be well documented in each organization for they are facility dependent.
OBJECTIVES
The emphasis of all R&M is focused on three objectives:
Reliability — The probability that machinery and equipment can perform
continuously, without failure, for a specified interval of time (when operating under stated conditions)
Maintainability — A characteristic of design, installation, and operation, usually expressed as the probability that a machine can be retained in, or
restored to, specified operable conditions within a specified interval of time
(when maintenance is performed in accordance with prescribed procedures)
Durability — Ability to perform intended function over a specified period
(under normal use with specified maintenance) without significant deterioration
MAKING RELIABILITY AND
MAINTAINABILITY WORK
Machinery reliability and maintainability should be considered an integral part of
all facilities and tooling (F&T) purchases. However, the appropriate degree of time
and effort dedicated to R&M engineering must be individually applied for each
unique application and purchase situation. Each project engineering manager should
consider the value proposition of applying varying degrees of R&M engineering for
the unique circumstances surrounding each equipment purchase.
For example, we may choose to apply a large amount of R&M engineering
resources to a project that includes a large quantity of single design machines. The
value proposition would show that investing up-front resources on a single design
that can be leveraged beyond a single application would offer a large payoff. We
would also consider applying high-level R&M engineering to equipment critical to
a continuous operation. On the other hand, we may choose to apply a minimal level
of R&M engineering on a purchase of equipment that has a mature design and
minimally demonstrated field problems.
Some of the issues to consider when determining appropriate levels of R&M
engineering for a project include:
1. Review the availability of existing machines in the organization that may
be idle. This is a good opportunity for reusability.
2. How many units are we ordering with identical or leverageable design?
3. What is the condition of the existing machinery that will be rehabilitated?
4. What is the status of the operating conditions? Are they extremely
demanding?
5. What is the cycle plan for the machinery? Does it require continuous or
intermittent duty? For how many years is the equipment expected to
produce?
6. Where is the machinery in the manufacturing process? Is it a constraint
(bottleneck) operation?
7. How well documented and complete is the root cause analysis for the
design? Will it decrease up-front work?
8. How much data exist to support known design problems?
WHO’S RESPONSIBLE?
Full realization of R&M benefits requires consistent application of the process.
Simultaneous engineering (SE) teams, together with the plants and the supply base,
must align their efforts and objectives to provide quality machinery designed for
R&M. Reliability and maintainability engineering is the responsibility of everyone
involved in machinery design, as much as the collection and maintenance of operational data are the responsibility of those operating and maintaining the equipment
day to day.
The R&M process places responsibility on the groups possessing the skills or
knowledge necessary to efficiently and accurately complete a given set of tasks. It
turns out that much of the expertise is in the supply base, and as such, the suppliers
must take the lead role and responsibility in R&M efforts. The R&M process
encourages the organization and suppliers to lock into budget costs based on Life
Cycle Costing (LCC) analysis of options and cost targets. Warranty issues should
be considered in the LCC analysis so that design helps decrease excessive warranty
costs after installation. The focus places responsibility for correcting design defects
on the machinery designers.
Facility and tooling producers who practice R&M will ultimately reduce the
cost (such as warranty) of their product and will become more competitive over
time. Further, suppliers that practice R&M will qualify as QS-9000 certified, preferred, global sourcing partners. Engineers and program managers who practice and
encourage R&M will reduce operational costs over time. In doing so, they will meet
manufacturing and cost objectives for their projects or programs.
TOOLS
There are many R&M tools. The ones mentioned here are required in the Design
and Development Planning (4.4.2) section of the TE Supplement. Many others
beyond the few that are addressed here are available and can improve reliability.
Mean Time Between Failure (MTBF) is defined as the average time between
failure occurrences. It is simply the sum of the operating time of a machine divided
by the total number of failures. For example, if a machine runs for 100 hours and
breaks down four times, the MTBF is 100 divided by 4 or 25 hours. As changes are
made to the machine or process, we can measure the success by comparing the new
MTBF with the old MTBF and quantify the action that has been taken.
Mean Time to Repair (MTTR) is defined as the average time to restore machinery
or equipment to its specified conditions. This is accomplished by dividing the total
repair time by the number of failures. It is important to note that the MTTR
calculation is based on repairing one failure and one failure only. The length of time
it takes to repair each failure directly affects up-time, up-time %, and capacity. For
example, if a machine runs 100 hours and has eight failures recorded with a total
repair time of four hours, the MTTR for this machine would be four hours divided
by eight failures or .5 hours. This is the mean time it takes to repair each failure.
Fault Tree Analysis (FTA) is an effect-and-cause diagram. It is a method used
to identify the root causes of a failure mode using symbols developed in the defense
industry. The FTA is a great prescriptive method for determining the root causes
associated with failures and can be used as an alternative to the Ishikawa Fish Bone
Diagram. It complements the Machinery Failure Mode and Effects Analysis
(MFMEA) by representing the relationship of each root cause to other failure-mode
root causes. Some feel the FTA is better suited than the FMEA to providing an
understanding of the layers and relationships of causes. An FTA also aids in establishing
a troubleshooting guide for maintenance procedures. It is a top-down approach.
Life Cycle Costs (LCC) are the total costs of ownership of the equipment or
machinery during its operational life. A purchased system must be supported during
its total life cycle. The importance of life cycle costs related to R&M is based on
the fact that up to 95% of the total life cycle costs are determined during the early
stages of the design and development of the equipment. The first three phases of
the equipment’s life cycle are typically identified as non-recurring costs. The remaining two phases are associated with the equipment’s support costs.
SEQUENCE AND TIMING
The R&M process is a generic model of logically sequenced events that guides the
simultaneous engineering team through the main drivers of good design for R&M
engineering. The amount of time budgeted for each activity or task should vary
depending on the circumstances surrounding the equipment or processes in design.
However, regardless of the unique conditions, all of the steps in the R&M process
need to be considered in their logical sequence and applied as needed.
In Table 8.1, we identify different activities that you may consider in the first
three phases of the R&M process. These phases are divided into main areas for
consideration; then, various activities are listed for each area. This list is not complete, but it focuses the reader on the type of activities that should occur during each
time period. This list also helps identify the sequence in which these activities may
be completed, depending on the project.
TABLE 8.1
Activities in the First Three Phases of the R&M Process
Concept: bookshelf data, manufacturing process selection, R&M and production needs analysis
Design/Development: R&M planning, process design for R&M, machinery FMEA, design review
Build and Installation: equipment run-off, operation of machinery
To determine timing for the R&M process, you may use the following procedure:
1. Determine deadline dates to meet production requirements.
2. Check relevance of R&M activities with regard to achieving program/project targets.
3. Plan relevant R&M activities by working backwards from deadline dates,
estimating time required for completion of each activity.
4. Set appropriate start dates for each activity/stage based on requirements
and timing.
5. Determine and assign responsibility for stage-based deliverables.
6. Continually track progress of your plan, within and at the conclusion of
each stage.
CONCEPT
BOOKSHELF DATA
Activities associated with the bookshelf data stage include:
1. Identify good design practices.
2. Collect machinery things gone right/things gone wrong (TGR/TGW).
3. Document successful machinery R&M features.
4. Collect similar machinery history of mean time between failures (MTBF).
5. Collect similar standardized component history of mean time between failures (MTBF).
6. Collect similar machinery history of mean time to repair (MTTR).
7. Collect similar machinery history of overall equipment effectiveness (OEE).
8. Collect similar machinery history of reliability growth.
9. Collect similar machinery history of root cause analyses.
At this point it is important to ask and answer this question: Have we collected
all of the relevant historical data from similar operations or designs and documented
them for use during the process selection and design stages?
MANUFACTURING PROCESS SELECTION
Activities associated with the manufacturing process selection stage include:
1. Identify general life cycle costs to drive the manufacturing process selection.
2. Establish OEE targets including availability, quality, and performance
efficiency numbers that drive the manufacturing process selection.
3. Establish broad R&M target ranges that drive the manufacturing process
selection.
4. Establish manufacturing assumptions based on cycle plan, including volumes and dollar targets.
5. Identify simultaneous engineering (SE) partners for project.
6. Select manufacturing process based on demonstrated performance and
expected ability to meet established targets.
7. Search for other surplus equipment to be considered for reuse.
8. If surplus machinery has not been identified for reuse, identify a supplier,
based on manufacturing process selection (evaluate R&M capability).
9. Generate detailed life cycle costing analysis on selected manufacturing
process.
At this point it is important to ask and answer these questions: Have broad, high
level R&M targets been set to drive detailed process trade-off decisions? Is the life
cycle cost analysis complete for the selected manufacturing process? Do the projections support the budget per the affordable business structure?
R&M AND PREVENTIVE MAINTENANCE (PM) NEEDS ANALYSIS
Activities associated with the R&M and PM needs analysis stage include:
1. Establish a clear definition of failure by using all known operating conditions and unique circumstances surrounding the process.
2. Establish R&M requirements for the unique operating conditions surrounding the chosen manufacturing process.
3. Establish/issue R&M engineering requirements for the project to the
designers of the machinery.
4. Identify PM requirements for maintainability.
At this point it is important to ask and answer this question: Have specific R&M
targets been set to support the unique operating conditions and PM program objectives?
DEVELOPMENT AND DESIGN
R&M PLANNING
Activities associated with the R&M planning stage include:
1. Conduct process concept review.
2. Identify design effects for other related equipment (automation, integration, processing, etc.).
3. Standardize fault diagnostics (controls, software, interfaces, level of diagnosis, etc.).
4. Develop R&M/PM plan (process/machinery FMEA, mechanical/electrical derating, materials compatibility, thermal analyses, finite element analysis to support machine condition signature analysis, R&M predictions,
R&M simulations, design for maintainability, etc.).
5. Establish R&M/PM testing requirements (burn-in testing, voltage cycling,
probability ratio sequential testing, design of experiments for process optimization, environmental stress screening, life testing, test-analyze-fix, etc.).
At this point it is important to ask and answer these questions: Does the R&M
plan address each project target? Is the R&M plan sufficient to meet project targets?
PROCESS DESIGN FOR R&M
Activities associated with the process design for R&M stage include:
1. Conduct process design review.
2. Develop process flow chart.
3. Develop process simulation model.
4. Conduct process design simulation for multiple scenarios by analyzing operational effects of various R&M design trade-offs.
5. Develop life cycle costing analysis on process-related equipment.
6. Review process FMEA.
7. Complete final process review and simultaneous engineering team input.
At this point it is important to ask and answer this question: Is the process FMEA
complete, and have causes of potentially common failure modes been addressed and
redesigned?
MACHINERY FMEA DEVELOPMENT
Activities associated with the machinery FMEA development stage include:
1. Develop plant floor computer data collection system (activity tracking,
downtime, reliability growth curves).
2. Establish machinery data feedback plan (crisis maintenance, MTBF,
MTTR, tool lives, OEE, production report, etc.).
3. Verify completion of machinery FMEA on all critical machinery. Confirm
design actions, maintenance burdens, things gone wrong, root cause analyses, etc.
4. Develop fault diagnostic strategy (built in test equipment, rapid problem
diagnosis, control measures).
5. Review equipment and material handling layouts (panels, hydro, coolant
systems).
At this point it is important to ask and answer these questions: Is the machinery
FMEA complete, and have causes of potentially common failure modes been
addressed and redesigned? Is the data collection plan complete?
DESIGN REVIEW
Activities associated with the design review stage include:
1. Conduct machinery design review (field history, machinery FMEA, test
or build problems, R&M simulation and reliability predictions, maintainability, thermal/mechanical/electrical analyses, etc.).
2. Provide R&M requirements to tier two suppliers (levels, root cause analyses, standardized component applications, testing, etc.).
At this point it is important to ask and answer this question: Have the R&M
plan requirements been incorporated in the machinery design?
BUILD AND INSTALL
EQUIPMENT RUN-OFF
Activities associated with the equipment run-off stage include:
1. Conduct machinery run-off (perform root cause analysis, Failure Reporting, Analysis, and Corrective Action System [FRACAS], complete testing,
verify R&M and TPM requirements, validate diagnostic logic and data
collection).
2. Complete preventive maintenance/predictive maintenance manuals and
review maintenance burden.
At this point it is important to ask and answer this question: Has the plant
maintenance department devised a maintenance plan based on expected machine
performance?
OPERATION OF MACHINERY
Activities associated with the operation of machinery stage include:
1. Implement and utilize machinery data feedback plan.
2. Implement and utilize FRACAS.
3. Evaluate PM program.
4. Update FMEA and reliability predictions.
5. Conduct reliability growth curve development and analysis.
At this point it is important to ask and answer this question: Have design practices
been documented for use by the next generation design teams? (Also note that as
the machinery begins to operate, the continuous improvement cycle phases begin to
lead the R&M effort in phases four and five.)
OPERATIONS AND SUPPORT
After the equipment has been installed and the run-off has been performed, the
Durability phase of the cycle begins. The PM program now begins to utilize the
R&M team member more as a team leader than a participant. Durability, as defined
in the TE supplement, is the “ability to perform intended function over a specified
period (under normal use with specified maintenance) without significant deterioration.” As the machinery begins to acquire additional operating hours, PM personnel
identify issues and take corrective action. These issues and corrections are fed back
to FMEA personnel and R&M planners as lessons learned for the next generation
of machinery. Whether these corrections involve the design of the machinery or the
maintenance schedule/tasks, each must be incorporated into the continuous improvement loop.
CONVERSION/DECOMMISSION
Conversion is one of the key elements of the investment efficiency loop. The R&M
process for reuse of equipment is very similar to the purchase of new equipment
except that the existing equipment places more limitations on the concept of the new
process. The data are collected and phase one is repeated, often with more specific
direction, since the current equipment may limit some of the other concepts.
Although decommissioning is the process of equipment disposal, it is still necessary
to verify and record R&M data from this equipment to help identify the best design
practices. It is also important to make note of those design practices that did not
work as well as planned.
As plans for decommission become firm, it is important to generate forecasts
for equipment availability. These forecasts should then be entered into a database
for future forecasted and available machinery and equipment. Maintenance data,
including condition, operation description, and reason for availability, should be
included. This will assist engineers evaluating surplus machinery and equipment for
reuse in their programs.
TYPICAL R&M MEASURES
R&M MATRIX
Perhaps the most important document in the R&M process is the R&M matrix. This
matrix identifies the requirements of the customer on a per phase basis. Three major
categories of tasks are usually identified. They are:
R&M programmatic tasks
Engineering tasks
R&M continuous improvement
RELIABILITY POINT MEASUREMENT
This may be expressed by:

R(t) = e^(−t/MTBF)

where R(t) = reliability point estimate during a constant failure rate period; e =
the base of the natural logarithm, 2.718281828…; t = schedule time or mission time
of the equipment or machinery; and MTBF = mean time between failure.
Special note: This calculation may be performed only when the machine has
reached the bottom of the bathtub curve.
EXAMPLE
A water pump is scheduled (mission time) to operate for 100 hours. The MTBF for
this pump is also rated at 100 hours and the MTTR is 2 hours. The probability that
the pump will not fail during the mission is:
R(t) = e^(−t/MTBF) = e^(−100/100) = e^(−1) = .37 or 37%.
This means that the pump will have a 37% chance of not breaking down during the
100-hour mission time.
Conversely, the unreliability of the pump can be calculated as:
Unreliability = 1 − R(t) = 1 − .37 = .63 or 63%.
This means that the pump has a 63% chance of failing during the 100 hour mission.
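The pump example can be reproduced in a few lines of Python. This is a minimal sketch; the function name is our own, not part of any standard library.

```python
import math

def reliability(t, mtbf):
    """Reliability point estimate R(t) = e^(-t/MTBF).

    Valid only during the constant-failure-rate period, i.e., once the
    machine has reached the bottom of the bathtub curve.
    """
    return math.exp(-t / mtbf)

# Pump example: 100-hour mission time, MTBF rated at 100 hours
r = reliability(100, 100)
print(round(r, 2))      # 0.37 -> 37% chance of completing the mission
print(round(1 - r, 2))  # 0.63 -> 63% chance of failing during it
```

Note that when mission time equals MTBF, the reliability is always e^(−1) ≈ 37%, regardless of the actual hours involved.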
MTBE
Mean time between event can be calculated as:
MTBE = Total Operating Time/N
where Total Operating Time = the total scheduled production time when machinery
or equipment is powered and producing parts and N = the total number of downtime
events, scheduled and unscheduled.
EXAMPLE
The total operating time for a machine is 550 hours. In addition, the machine
experiences 2 failures, 2 tool changes, 2 quality checks, 1 preventive maintenance
meeting, and 5 lunch breaks. What is the MTBE?
MTBE = Total Operating Time/N = 550/12 = 45.8 hours
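The MTBE arithmetic can be sketched as follows (the function name is illustrative). The key point the code makes explicit is that every downtime event counts, scheduled or unscheduled:

```python
def mtbe(total_operating_time, downtime_events):
    """Mean time between events: total operating time divided by the
    number of downtime events, scheduled and unscheduled."""
    return total_operating_time / downtime_events

# 2 failures + 2 tool changes + 2 quality checks + 1 PM + 5 lunch breaks = 12 events
print(round(mtbe(550, 12), 1))  # 45.8 hours
```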
MTBF
Mean time between failure is the average time between failure occurrences and is
calculated as:
MTBF = Operating Time/N
where Operating Time = scheduled production time and N = total number of failures
observed during the operating period.
EXAMPLE
If machinery is operating for 400 hours and there are eight failures, what is the
MTBF?
MTBF = Operating Time/N = 400/8 = 50 hours. (Special note: Sometimes C (cycles)
is substituted for T. In that case, we calculate the MCBF. The steps are identical to
those of the MTBF calculation.)
FAILURE RATE
Failure rate estimates the number of failures in a given unit of time, events, cycles, or
number of parts. It is the probability of failure within a unit of time. It is calculated as:
Failure rate = 1/MTBF
EXAMPLE
The failure rate of a pump that experiences one failure within an operating time
period of 2000 hours is:
Failure rate = 1/MTBF = 1/2000 = .0005 failures per hour.
This means that there is a .0005 probability that a failure will occur with every hour
of operation.
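The interval metrics defined so far can be sketched together in a short script (an illustration only; the helper names are ours, and the inputs come from the examples above):

```python
def mtbe(total_operating_time, n_events):
    """Mean time between events: counts all downtime events, scheduled and unscheduled."""
    return total_operating_time / n_events

def mtbf(operating_time, n_failures):
    """Mean time between failures: counts failures only."""
    return operating_time / n_failures

# 550 hours with 12 downtime events (2 failures, 2 tool changes,
# 2 quality checks, 1 preventive maintenance, 5 lunch breaks)
MTBE = mtbe(550, 2 + 2 + 2 + 1 + 5)   # ~ 45.8 hours
MTBF = mtbf(400, 8)                   # 50 hours
failure_rate = 1 / mtbf(2000, 1)      # .0005 failures per hour
```

The difference between the two denominators is the point: MTBE divides by every downtime event, while MTBF divides by failures alone.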
MTTR
Mean time to repair is a calculation based on failures, and failures only. The longer each failure takes to repair, the more the equipment’s cost of ownership goes up. Additionally, MTTR directly affects uptime, uptime percent, and capacity. It is calculated as:
MTTR = Σt/N
where Σt = total repair time and N = total number of repairs.
EXAMPLE
A pump operates for 300 hours. During that period there were four failure events
recorded. The total repair time was 5 hours. What is the MTTR?
MTTR = Σt/N = 5/4 = 1.25 hours
AVAILABILITY
Availability is the measure of the degree to which machinery or equipment is in an
operable and committable state at any point in time. Availability is dependent upon
(a) breakdown loss, (b) setup and adjustment loss, and (c) other factors that may
prevent machinery from being available for operation when needed. When calculating this metric, it is assumed that maintenance starts as soon as the failure is reported.
(Special note: Think of the measurement of R&M in terms of availability. That is,
MTBF is reliability and MTTR is maintainability.) Availability is calculated as:
Availability = MTBF/(MTBF + MTTR)
EXAMPLE
What is the availability for a system that has an MTBF of 50 hours and an MTTR
of 1 hour?
Availability = MTBF/(MTBF + MTTR) = 50/(50 + 1) = .98 or 98%
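The MTTR and availability examples can be reproduced with the same kind of sketch (again, the function names are ours):

```python
def mttr(total_repair_time, n_repairs):
    """Mean time to repair: total repair time averaged over repairs."""
    return total_repair_time / n_repairs

def availability(mtbf, mttr):
    """Steady-state availability: reliability (MTBF) against maintainability (MTTR)."""
    return mtbf / (mtbf + mttr)

MTTR = mttr(5, 4)         # 1.25 hours
A = availability(50, 1)   # ~ .98
```

This form makes the R&M trade-off explicit: availability rises either by lengthening MTBF or by shortening MTTR.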
OVERALL EQUIPMENT EFFECTIVENESS (OEE)
Overall equipment effectiveness (OEE) is a measure of three variables. They are:
1. Availability = percent of time a machine is available to produce
2. Performance efficiency = actual speed of the machine as related to the
design speed of the machine
3. Quality rate = percent of resulting parts that are within specifications
A good OEE is considered to be 85% or higher.
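The three variables multiply together, which a short sketch makes concrete (the figures below are hypothetical, not from the book):

```python
def oee(avail, performance_efficiency, quality_rate):
    """Overall equipment effectiveness: the product of the three OEE factors."""
    return avail * performance_efficiency * quality_rate

# Hypothetical machine: 95% available, running at 95% of design speed,
# 98% of parts within specification
score = oee(0.95, 0.95, 0.98)   # ~ .884, above the 85% benchmark
```

Because the factors multiply, even three individually respectable numbers can combine into a mediocre OEE; 90% on each factor, for instance, yields only about 73%.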
LIFE CYCLE COSTING (LCC)
Life cycle costing (LCC) is the total cost over the life of the machine or equipment.
It is calculated based on the following:
LCC = Acquisition costs (A) + Operating costs (O) + Maintenance costs (M) ± Conversion and/or decommission costs (C)
TABLE 8.2
Cost Comparison of Two Machines

Costs                                        Machine A     Machine B
Acquisition costs (A)                        $2,000.00     $1,520.00
Operating costs (O)                          $9,360.00     $10,870.00
Maintenance costs (M)                        $7,656.00     $9,942.00
Conversion and/or decommission costs (C)     —             —
Total LCC                                    $19,016.00    $22,332.00
EXAMPLE
What is the LCC for the two machines shown in Table 8.2, and which one is the better deal?
The reader should notice that all costs should be evaluated before the decision is made. In this case, machine A has a higher acquisition cost than machine B, but it turns out that machine A has a lower LCC than machine B. Therefore, machine A is the better deal.
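The comparison in Table 8.2 can be scripted directly (a sketch; no conversion or decommission costs are listed for either machine, so that term defaults to zero):

```python
def lcc(acquisition, operating, maintenance, conversion=0.0):
    """Life cycle cost: A + O + M plus any conversion/decommission costs."""
    return acquisition + operating + maintenance + conversion

machine_a = lcc(2000.00, 9360.00, 7656.00)    # 19,016.00
machine_b = lcc(1520.00, 10870.00, 9942.00)   # 22,332.00
better_deal = "A" if machine_a < machine_b else "B"
```

Ranking on acquisition cost alone would have picked machine B; ranking on total LCC reverses the decision.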
TOP 10 PROBLEMS AND RESOLUTIONS
This list allows the designer to see the major sources of downtime associated with
the current equipment. Once the list items are identified, a root cause analysis or
problem resolution should be conducted on each of the failures. If the design is
known, the designer can then modify the design to reflect the changes. (Sometimes
the top ten problems are based on historical data and must be adjusted to reflect
current design considerations.)
THERMAL ANALYSIS
This analysis is conducted to help the designer develop the appropriate and applicable heat transfer calculations (see Table 8.3). The actual analysis is conducted by following these six steps:
1. Develop a list of all electrical components in the enclosure.
2. Identify the wattage rating for each component located in the enclosure.
3. Sum the total wattage for the enclosure.
4. Add in any external heat generating sources.
5. Calculate the surface area of the enclosure that will be available for cooling.
6. Calculate the thermal rise above ambient.
EXAMPLE
The electrical enclosure is 5 ft. tall by 4 ft. wide by 2 ft. deep. The surface area for this enclosure is calculated as follows:
TABLE 8.3
Thermal Calculation Values

Component Name            Quantity   Individual Wattage   Total Wattage
                                     (Maximum)
Internal
Relay                     4          2.5                  10.0
A18 contactor             1          1.7                  1.7
A25 contactor             2          2                    4.0
PS27 power supply         1          71                   71.0
Monochrome monitor        1          85                   85.0
Subtotal wattage                                          171.7
External
Servo transformer         1          450                  63.0
Subtotal wattage                                          63.0
Total enclosure wattage                                   234.7

Note: The servo transformer is mounted externally and next to the enclosure. Therefore, only 14% of its total wattage is estimated to radiate into the enclosure.
Front and back = 5 ft. × 4 ft. × 2 = 40 sq. ft.
Sides = 2 ft. × 5 ft. × 2 = 20 sq. ft.
Enclosure top = 2 ft. × 4 ft. = 8 sq. ft.
The bottom is ignored due to the fact that heat rises.
Total surface area = 40 + 20 + 8 = 68 sq. ft.
To calculate the thermal rise (∆T), we use the following formula:
Thermal rise (∆T) = Thermal resistance, cabinet to ambient (θCA) × Power (W)
θCA = 1/(Thermal conductivity × Cooling area)
The thermal conductivity value is found in the catalog of the National Electrical Manufacturers Association (NEMA); for a NEMA 12 enclosure it is .25 W/degree F per sq. ft. Thus:
θCA = 1/(.25 × 68) = .0588
If the equipment inside the enclosure generates 234.7 watts, then the thermal rise is
∆T = θCA × wattage = .0588 × 234.7 = 13.8°F.
If the ambient temperature is 100°F, then the enclosure temperature will reach
113.8°F. If the enclosure temperature is specified as 104°F, then the design exceeds
the specification by approximately 9.8°F. The enclosure must be increased in size,
the load must be reduced, or active cooling techniques need to be applied. (Special note: Remember that a 10% rise in temperature decreases the reliability by about 50%. Also, the method just described is not valid for enclosures that have other means of heat dissipation such as fans, for enclosures made of heavier metal, or for a change of enclosure material. This specific calculation assumes that the heat is transferred by convection to the outside air.)
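Steps 5 and 6 of the thermal analysis can be sketched as follows (the function name and default conductivity are ours; the .25 value applies to the NEMA 12 case in this example only):

```python
def thermal_rise(watts, cooling_area_sqft, conductivity=0.25):
    """Thermal rise in degrees F: theta_CA x W, with theta_CA = 1/(conductivity x area)."""
    theta_ca = 1.0 / (conductivity * cooling_area_sqft)
    return theta_ca * watts

delta_t = thermal_rise(234.7, 68)   # ~ 13.8 degrees F
enclosure_temp = 100 + delta_t      # ~ 113.8 degrees F at 100 degrees F ambient
```

As the text warns, this sketch holds only for passive convective cooling; fans, heavier gauge metal, or a different enclosure material invalidate the constant.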
ELECTRICAL DESIGN MARGINS
Design margins in electrical engineering of the equipment are referred to as derating.
On the other hand, mechanical design margins are referred to as safety margins. A
rule of thumb for derating is about 20% for electrical components. However, the
actual calculation is
% derating = 1 – (IT/IS)
where IT = total circuit current draw and IS = total supply current.
EXAMPLE
During a design review, the question arose as to whether the 24 V power supply for
a motor was adequately derated. The power supply takes 480 VAC three phase with
a 2 A circuit breaker and has a rated output of 10 A. An examination of the system reveals that 24 V power is delivered to the load through three circuit breakers (A = .477 A, B = .73 A, and C = 5.53 A); the total draw for the three circuits is therefore 6.737 A. When the ratings of these circuit breakers are combined, 11 A of current could flow to the load. This situation may not happen, but further investigation is required.
% derating = 1 – (IT/IS) = 1 – (6.737/10.0) = .3263 or 32.63%
This means that in this case the power supply will not be overloaded and the circuit breakers are generously oversized. In other words, the circuit breakers should not be tripped due to false triggers.
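The derating check is a one-liner worth scripting during design reviews (a sketch; the function name is ours):

```python
def percent_derating(total_draw, supply_rating):
    """Derating as 1 - IT/IS, expressed as a percentage."""
    return (1 - total_draw / supply_rating) * 100

total_draw = 0.477 + 0.73 + 5.53               # the three breaker circuits, 6.737 A
derating = percent_derating(total_draw, 10.0)  # ~ 32.63%
```

A result above the 20% rule of thumb, as here, suggests the supply has adequate headroom.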
SAFETY MARGINS (SM)
For mechanical components, the SM is generally defined as the margin of a mechanical component’s strength over the applied stress. A rule of thumb for SM with a normally distributed stress-load relationship is that the safety margin should always be greater than or equal to three. However, the actual calculation for the SM is
SM = (USTRENGTH – USTRESS)/√(Sv² + Lv²)
where SM = safety margin; USTRENGTH = mean strength; USTRESS = mean stress (load); Sv² = strength variance; and Lv² = stress (load) variance.
EXAMPLE
A robot’s arm has a mean strength of 80 kg. The maximum allowable stress applied by the end-of-arm tooling is 50 kg. The strength standard deviation (Sv) is 8 kg and the stress standard deviation (Lv) is 7 kg. What is the SM?
SM = (USTRENGTH – USTRESS)/√(Sv² + Lv²) = (80 – 50)/√(8² + 7²) = 30/√113 = 2.822
(A low SM may indicate the need to assign another size robot or redesign the tooling
material.)
INTERFERENCE
Once the SM is calculated, it can be used to calculate the interference and reliability
of the components under investigation. Interference may be thought of as the overlap
between the stress and the strength distributions. In more formal terms, it is the
probability that a random observation from the load distribution exceeds a random
observation from the strength distribution. To calculate interference, we use the SM equation and treat the result as a standard normal z value:
Z = (USTRENGTH – USTRESS)/√(Sv² + Lv²)
EXAMPLE
If we use the answer from the previous example (z = 2.822), we can use the z table (in this case, the tail area beyond z = 2.822 is .0024). This means that there exists a .0024, or .24%, probability of failure.
Reliability, on the other hand, may be calculated as
R = 1 – interference or R = 1 – α
R = 1 – .0024 = .9976 or 99.76%.
This means that even though the strength and the load distributions overlap, the probability of failure is very low (.24%), and the reliability of the system is very high at 99.76%.
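The safety margin, interference, and reliability calculations can be chained together in one sketch; instead of a printed z table, it uses the standard normal upper-tail area computed from the complementary error function (the function names are ours):

```python
import math

def safety_margin(mean_strength, mean_stress, sd_strength, sd_stress):
    """SM = (U_strength - U_stress) / sqrt(Sv^2 + Lv^2)."""
    return (mean_strength - mean_stress) / math.sqrt(sd_strength**2 + sd_stress**2)

def interference(z):
    """Upper-tail area of the standard normal distribution beyond z."""
    return 0.5 * math.erfc(z / math.sqrt(2))

z = safety_margin(80, 50, 8, 7)   # ~ 2.822
alpha = interference(z)           # ~ .0024
R = 1 - alpha                     # ~ .9976
```

The erfc-based tail area replaces the table lookup and agrees with it to the precision shown in the example.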
TABLE 8.4
Guidelines for the Duane Model

β          Recommended Actions
0 to .2    No priority is given to reliability improvement; failure data not analyzed; corrective action taken for important failure modes, but with low priority
.2 to .3   Routine attention to reliability improvement; corrective action taken for important failure modes
.3 to .4   Priority attention to reliability improvement; normal (typical stresses) environment utilization; well-managed analysis and corrective action for important failure modes
.4 to .6   Eliminating failures takes top priority; immediate analysis and corrective action for all failures
CONVERSION OF MTBF TO FAILURE RATE AND VICE VERSA
The relationship between these two metrics is:
MTBF = 1/FR and FR = 1/MTBF
RELIABILITY GROWTH PLOTS
This plot is an effective method to track continual improvement for R&M as well
as to predict reliability growth of machinery from one machine to the other. The
steps to generate this plot are:
Step 1. Collect data on the machine and calculate the cumulative MTBF value
for the machine.
Step 2. Plot the data on log–log paper. (An increasing slope indicates reliability growth; a flat slope indicates that the machine has achieved its inherent level of MTBF and cannot get any better.)
Step 3. Calculate the slope, using regression analysis or a best-fit line. Once the slope (the beta value) is calculated, we can apply the Duane model interpretation; the guidelines for the interpretation are given in Table 8.4.
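Step 3 amounts to a least-squares fit on log-transformed data, which can be sketched as follows (the growth-test data below are hypothetical, invented for illustration):

```python
import math

def duane_beta(cum_hours, cum_mtbf):
    """Least-squares slope of log10(cumulative MTBF) vs. log10(cumulative hours)."""
    xs = [math.log10(t) for t in cum_hours]
    ys = [math.log10(m) for m in cum_mtbf]
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den

# Hypothetical data: cumulative MTBF roughly doubles per decade of operating time
beta = duane_beta([100, 300, 1000, 3000], [20, 28, 40, 56])
```

A beta near .3, as here, would fall in the "priority attention to reliability improvement" band of Table 8.4.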
MACHINERY FMEA
Machinery FMEA is a systematic approach that applies the tabular method to aid
the thought process used by simultaneous engineering teams to identify the
machine’s potential failure modes, potential effects, and potential causes and to
develop corrective action plans that will remove or reduce the impact of the failure
modes. Perhaps the most important use of the machinery FMEA is to identify and
correct all safety issues. A more detailed discussion will be given in Chapter 6.
KEY DEFINITIONS IN R&M
The following terms are commonly encountered in R&M:
Accelerated life testing — Verification of machine and equipment design reliability much sooner than if the machinery were operated at typical rates. Intended especially for new technology, design changes, and ongoing development.
Derating — The practice of limiting stresses that may be applied to a component to levels below the specified maxima in order to enhance reliability.
Derating values of electrical stress are expressed as ratios of applied stress
to rated maximum stress. The applied stress is taken as the maximum likely
to be applied during worst-case operating conditions. Thermal derating is
expressed as a temperature value.
Design of experiments (DOE) — A technique that focuses on identifying factors
that affect the level or magnitude of a product/process response, examining
the response surface, and forming the mathematical prediction model.
Design review — A review providing in-depth detail relative to the evolving
design supported by drawings, process flow descriptions, engineering analyses, reliability design features, and maintainability design considerations.
Dry run — The rehearsal or cycling of machinery, normally with the intent
of not processing the work piece, to verify function, clearances, and construction stability.
Durability — Ability to perform intended function over a specified period
under normal use with specified maintenance, without significant deterioration.
Equipment — The portion of process machinery that is not specific to a
component or sub assembly.
Failure — An event when machinery/equipment is not available to produce
parts under specified conditions when scheduled or is not capable of producing parts or performing scheduled operations to specifications. For every
failure, an action is required.
Failure mode and effects analysis (FMEA) — A technique to identify each
potential failure mode and its effect on machinery performance.
Failure reporting, analysis, and corrective action system (FRACAS) — An orderly system for recording and transmitting failure data from the supplier’s plant to the end user’s plant into a unitary database. The database allows identification of pattern failures and rapid resolution of problems through rigorous failure analysis.
Fault tree analysis (FTA) — A top down approach to failure analysis starting
with an undesirable event and determining all the ways it can happen.
Feasibility — A determination that a process, design, procedure, or plan can
be successfully accomplished in the required time frame.
Finite element analysis (FEA) — A computational structure analysis technique that quantifies a structure’s response to applied loading conditions.
Total productive maintenance (TPM) — Natural cross-functional groups
working together in an optimal balance to improve the overall effectiveness
of their equipment and processes within their work areas. TPM implementation vigorously benchmarks, measures, and corrects all losses resulting
from inefficiencies.
Life cycle — The sequence through which machinery and equipment pass
from conception through decommission.
Life cycle costs (LCC) — The sum of all cost factors incurred during the
expected life of machinery.
Machine condition signature analysis (MCSA) — An application that
applies mechanical signature (vibration) analysis techniques to characterize
machinery and equipment on a systems level to significantly improve reliability and maintainability.
Machinery — Tooling and equipment combined. A generic term for all hardware (including necessary operational software) that performs a manufacturing process.
Maintainability — A characteristic of design, installation, and operation,
usually expressed as the probability that a machine can be retained in, or
restored to, specified operable condition within a specified interval of time
when maintenance is performed in accordance with prescribed procedures.
Mean time between failures (MTBF) — The average time between failure
occurrences. The sum of the operating time of a machine divided by the
total number of failures. Predominantly used for repairable equipment.
Mean time to failure (MTTF) — The average time to failure for a specific
equipment design. Used predominantly for non-repairable equipment.
Mean time to repair (MTTR) — The average time to restore machinery or
equipment to specified conditions.
Overall equipment effectiveness (OEE) — Percentage of the time the
machinery is available (Availability) × how fast the machinery is running
relative to its design cycle (Performance efficiency) × percentage of the
resulting product within quality specifications (Yield).
Perishable tooling — Tooling which is consumed over time during a manufacturing operation.
Plant floor information system (PFIS) — An information gathering system
used on the plant floor to gather data relating to plant operations including
maintenance activities.
Predictive maintenance (PdM) — A portion of scheduled maintenance dedicated to inspection for the purpose of detecting incipient failures.
Preventive maintenance (PM) — A portion of scheduled maintenance dedicated to taking planned actions for the purpose of reducing the frequency
or severity of future failures, including lubrication, filter changes, and part
replacement dictated by analytical techniques and predictive maintenance
procedures.
Probability ratio sequential testing (PRST) — A reliability qualification test
to demonstrate if the machinery/equipment satisfies a specified MTBF
requirement and is not lower than an acceptable MTBF (MIL-STD-781).
Process — Any operation or sequence of operations that contributes to the
transformation of raw material into a finished part or assembly.
Product — In relation to tooling and equipment suppliers, the term “product”
refers to the end item produced (e.g., machine, tool, die, etc.).
Production — In relation to tooling and equipment suppliers, the term “production” refers to the process required to produce the product.
R&M plan — A reliability and maintainability (R&M) plan shall establish a
clear implementation strategy for design assurance techniques, reliability
testing and assessment, and R&M continuous improvement activities during
the machinery/equipment life cycle.
R&M targets — The range of values that MTBF and MTTR are expected to
fall between plus an improvement factor that leads to MTBF and MTTR
requirements.
Reliability — The probability that machinery and equipment can perform
continuously, without failure, for a specified interval of time when operating
under stated conditions.
Reliability growth — Machine reliability improvement as a result of identifying and eliminating machinery or equipment failure causes during
machine testing and operations.
Root cause analysis (RCA) — A logical, systematic approach to identifying
the basic reasons (causes, mechanisms, etc.) for a problem, failure, nonconformance, process error, etc. The result of root cause analysis should
always be the identification of the basic mechanism by which the problem
occurs and a recommendation for corrective action.
Simultaneous engineering (SE) — Product engineering that optimizes the
final product by the proper integration of requirements, including product
function, manufacturing and assembly processing, service engineering, and
disposal.
Things gone right/things gone wrong (TGR/TGW) — An evolving program-level compilation of lessons learned that capture successful and
unsuccessful manufacturing engineering activity and equipment/performance for feedback to an organization and its suppliers for continuous
improvement.
Tooling — The portion of the process machinery that is specific to a component or subassembly.
DFSS AND R&M
R&M’s goal is to make sure that the machinery/tool delivered to the customer meets
or exceeds its requirements. DFSS, on the other hand, is the methodology that
controls the process for satisfying the customer’s expectations early on in the product
development cycle. This is very important since in R&M the reliability matrix actually
attempts to quantify the initial product vision with the customer’s requirements.
Having said that, we must also recognize that quite often in product development
we do not have all the answers. In fact, quite often we are at the fuzzy front end.
This is where DFSS offers its greatest contribution. That is, with the process knowledge of DFSS, the engineer not only will be aware but also will make sure that the
appropriate design fits within both the customer’s and the organization’s goals.
DFSS may be applied in an original design, which involves elaborating original
solutions for a given task; adaptive design, which involves adapting a known system
to a changed task or evolving a significant subsystem of a current product; variant
design, which involves varying parameters of certain aspects of a product to develop
a new or more robust design; and redesign, which implies any of the items just
mentioned. A redesign is not a variant design, rather it implies that a product already
exists that is perceived to fall short in some criteria, and a new solution is needed.
The new solution can be developed through any of the above approaches. In fact, it
is often difficult to argue against the maxim that all design is redesign (Otto and
Wood, 2001).
REFERENCES
Otto, K. and Wood, K., Product Design, Prentice Hall, Upper Saddle River, NJ, 2001.
SELECTED BIBLIOGRAPHY
Anon., Reliability and Maintainability Guideline for Manufacturing Machinery and Equipment, M-110.2, 2nd ed., Society of Automotive Engineers, Inc., Warrendale, PA and
National Center for Manufacturing Sciences, Inc., Ann Arbor, MI, 1999.
Anon., ISO/TS16949. International Automotive Task Force. 2nd ed. AIAG. Southfield, MI,
2002.
Automotive Industry Action Group, Potential Failure Mode and Effect Analysis, 3rd ed.,
Chrysler Corp., Ford Motor Co., and General Motors. Distributed by AIAG, Southfield, MI, 2001.
Blanchard, B.S., Logistics Engineering and Management, 3rd ed., Prentice Hall, Englewood
Cliffs, NJ, 1986.
Chrysler, Ford, and GM, Quality System Requirements: QS-9000, distributed by Automotive
Industry Action Group, Southfield, MI, 1995.
Chrysler, Ford, and GM, Quality System Requirements: Tooling and Equipment Supplement,
distributed by Automotive Industry Action Group, Southfield, MI, 1996.
Creveling, C.M., Tolerance Design: A Handbook for Developing Optimal Specifications,
Addison Wesley Longman, Reading, MA, 1997.
Hollins, B. and Pugh, S., Successful Product Design, Butterworth Scientific. London, 1990.
Kapur, K.C. and Lamberson, L.R., Reliability in Engineering Design, Wiley, New York, 1977.
Nelson, W., Graphical analysis of system repair data, Journal of Quality Technology, 20,
24–35, 1988.
Stamatis, D.H., Implementing the TE Supplement to QS-9000, Quality Resources, New York,
1998.
9
Design of Experiments
SETTING THE STAGE FOR DOE
Design of Experiments (DOE) is a way to efficiently plan and structure an investigatory testing program. Although DOE is often perceived to be a problem-solving
tool, its greatest benefit can come as a problem avoidance tool. In fact, it is this
avoidance that we emphasize in design for six sigma (DFSS).
This chapter is organized into nine sections. The user who is looking for a basic DOE introduction in order to participate with some understanding in a problem-solving group is urged to study and understand the first two sections or to go back and review Volume V of this series. The remaining sections discuss more complex topics, including problem avoidance in product and process design, more advanced experimental layouts, and understanding the analysis in more detail.
WHY DOE (DESIGN OF EXPERIMENTS) IS A VALUABLE TOOL
DOE is a valuable tool because:
1. DOE helps the responsible group plan, conduct, and analyze test programs
more efficiently.
2. DOE is an effective way to reduce cost.
Usually the term DOE brings to mind only the analysis of experimental data.
The application of DOE necessitates a much broader approach that encompasses the
total process involved in testing. The skills required to conduct an effective test
program fall into three main categories:
1. Planning/organizational
2. Technical
3. Analytical/statistical
The planning of the experiment is a critical phase. If the groundwork laid in the
planning phase is faulty, even the best analytic techniques will not salvage the
disaster. The tendency to run off and conduct tests as soon as a problem is found,
without planning the outcome, should be resisted. The benefits from up-front planning almost always outweigh the small investment of time and effort. Too often,
time and resources are wasted running down blind alleys that could have been
avoided. Section 2 of this chapter contains a more detailed discussion of planning
and the techniques used to ensure a well-planned experiment.
TABLE 9.1
One Factor at a Time
The group tests configurations containing the following combinations of the factors:

         Level of Factor
         (1 and 2 Indicate the Different Levels)     Results
Test
Number    A   B   C   D   E   F   G                  a       b
1         1   1   1   1   1   1   1                271.4   266.3
2         2   1   1   1   1   1   1                215.0   211.2
3         1   2   1   1   1   1   1                275.3   271.1
4         1   2   2   1   1   1   1                235.2   231.5
5         1   2   1   2   1   1   1                296.6   301.6
6         1   2   1   2   2   1   1                305.2   301.1
7         1   2   1   2   2   2   1                278.8   275.3
8         1   2   1   2   2   1   2                251.9   254.3
DOE can be a powerful tool in situations where the effect on a measured output
of several factors, each at two or more levels, must be determined. In the traditional
“one factor at a time” approach, each test result is used in a small number of
comparisons. In DOE, each test is used in every comparison. A simplified example
follows.
EXAMPLE
A problem-solving brainstorming group suspects 7 factors (named A, B, C, D, E, F,
and G), each at two levels (level 1 and level 2), of influencing a critical, measurable
function of the design. The group wants to determine the best settings of these factors
to maximize the measured test results — see Table 9.1. Two evaluations (a and b)
are run at each test configuration rather than a single evaluation in order to attain a
higher confidence in the difference between factor levels (this assumes no need for
a “tie breaker”). The group makes comparisons as shown in Table 9.2. Sixteen total
tests are run, and four tests are used to determine the difference between levels for
each factor. The best combination of factors is (1, 2, 1, 2, 2, 1, 1) for factors A
through G.
However, using DOE the group runs test configurations as shown in Table 9.3.
The group makes comparisons as shown in Table 9.4. Eight total tests are run, and
eight tests are used to determine the difference between levels for each factor. This
can be done because each level of every factor equally impacts the determination of
the average response at all levels of all of the other factors (i.e., of the four tests run
at A = 1, two were run at B = 1 and two were run at B = 2; this is also true of the
four tests run at A = 2). This relationship is called orthogonality. This concept is
very important, and the reader should work through the relationships between the
levels of at least two other factors to better understand the use of orthogonality in
this testing matrix. The best level is [1, (1 or 2), 1, 2, (1 or 2), 1, 1] for A through
G. Factors B and E are not significant and may be set to the least expensive level.
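The orthogonal-array comparison can be reproduced with a short script; the array and responses come from Table 9.3, and the resulting differences match Table 9.4 to within rounding (the data layout and helper names are ours):

```python
# L8 orthogonal array (factor levels for A-G) and responses from Table 9.3
runs = [
    ((1, 1, 1, 1, 1, 1, 1), 270.7),
    ((1, 1, 1, 2, 2, 2, 2), 223.8),
    ((1, 2, 2, 1, 1, 2, 2), 158.2),
    ((1, 2, 2, 2, 2, 1, 1), 263.1),
    ((2, 1, 2, 1, 2, 1, 2), 129.3),
    ((2, 1, 2, 2, 1, 2, 1), 175.1),
    ((2, 2, 1, 1, 2, 2, 1), 195.4),
    ((2, 2, 1, 2, 1, 1, 2), 194.6),
]

def effect(factor_index):
    """Mean response at level 1 minus mean response at level 2 for one factor."""
    lvl1 = [y for levels, y in runs if levels[factor_index] == 1]
    lvl2 = [y for levels, y in runs if levels[factor_index] == 2]
    return sum(lvl1) / len(lvl1) - sum(lvl2) / len(lvl2)

# Every factor's effect uses all eight tests -- the payoff of orthogonality
effects = {name: effect(i) for i, name in enumerate("ABCDEFG")}
```

Positive differences (A, C, F, G) favor level 1, the large negative difference (D) favors level 2, and the near-zero differences (B, E) mark the factors the text sets to the least expensive level.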
TABLE 9.2
Test Numbers for Comparison

          Test Numbers Used to Determine:     Difference
Factor    Level 1     Level 2                 Level 1 – Level 2
A         1a, 1b      2a, 2b                  55.8
B         1a, 1b      3a, 3b                  –4.4
C         3a, 3b      4a, 4b                  39.9
D         3a, 3b      5a, 5b                  –25.7
E         5a, 5b      6a, 6b                  –4.3
F         6a, 6b      7a, 7b                  26.1
G         6a, 6b      8a, 8b                  50.1
TABLE 9.3
The Group Runs Using DOE Configurations

         Level of Factor
         (1 and 2 Indicate the Different Levels)
Test
Number    A   B   C   D   E   F   G      Result
1         1   1   1   1   1   1   1      270.7
2         1   1   1   2   2   2   2      223.8
3         1   2   2   1   1   2   2      158.2
4         1   2   2   2   2   1   1      263.1
5         2   1   2   1   2   1   2      129.3
6         2   1   2   2   1   2   1      175.1
7         2   2   1   1   2   2   1      195.4
8         2   2   1   2   1   1   2      194.6
TABLE 9.4
Comparisons Using DOE

          Test Numbers Used to Determine:     Difference
Factor    Level 1        Level 2              Level 1 – Level 2
A         1, 2, 3, 4     5, 6, 7, 8           55.4
B         1, 2, 5, 6     3, 4, 7, 8           –3.1
C         1, 2, 7, 8     3, 4, 5, 6           39.7
D         1, 3, 5, 7     2, 4, 6, 8           –25.8
E         1, 3, 6, 8     2, 4, 5, 7           –3.3
F         1, 4, 5, 8     2, 3, 6, 7           26.3
G         1, 4, 6, 7     2, 3, 5, 8           49.6
TABLE 9.5
Comparison of the Two Means

                        Number      Estimate at        Confidence Interval
                        of Tests    the Best Levels    at 90% Confidence
One factor at a time    16          301.1              ± 3.7
DOE                     8           299.6              ± 3.3
For a comparison of the two methods, see Table 9.5. Half as many tests are
required using a DOE approach and the estimate at each level is better (four tests
per factor level versus two). This is almost like getting something for nothing. The
only thing that is required is that the group plan out what is to be learned before
running any of the tests. The savings in time and testing resources can be significant.
Direct benefits include reduced product development time, improved problem correction response, and more satisfied customers. And that is exactly what DFSS should
be aiming at.
This approach to DOE is also very flexible and can accommodate known or
suspected interactions and factors with more than two levels. A properly structured
experiment will give the maximum amount of information possible. An experiment
that is less well designed will be an inefficient use of scarce resources.
TAGUCHI’S APPROACH
Here it is appropriate to summarize Dr. Taguchi’s approach, which is to minimize
the total cost to society. He uses the “Loss Function” (Section 4) to evaluate the
total cost impact of alternative quality improvement actions. In Dr. Taguchi’s view,
we all have an important societal responsibility to minimize the sum of the internal
cost of producing a product and the external cost the customer incurs in using the
product. The customer’s cost includes the cost of dissatisfaction. This responsibility
should be in harmony with every company’s objectives when the long-term view of
survival and customer satisfaction is considered. Profits may be maximized in the
short run by deceiving today’s customers or trading away the future.
Traditionally, the next quarter’s or next year’s “bottom line” has been the driving
force in most corporations. Times have changed, however. Worldwide competition
has grown, and customers have become more concerned with the total product cost.
In this environment, survival becomes a real issue, and customer satisfaction must
be a part of the cost equation that drives the decision process.
Dr. Taguchi uses the signal-to-noise (S/N) ratio as the operational way of incorporating the loss function into experimental design. Experiment S/N is analogous
to the S/N measurement developed in the audio/electronics industry. S/N is used to
ensure that designs and processes give desired responses over different conditions
of uncontrollable “noise” factors. S/N is introduced in Section 4 and developed in
examples in later sections.
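Although the book develops S/N formally in Section 4, the two most common forms can be sketched now. These are the standard Taguchi larger-the-better and smaller-the-better ratios; the repeated measurements below are hypothetical:

```python
import math

def sn_larger_the_better(values):
    """Taguchi S/N ratio (dB) when larger responses are better."""
    return -10 * math.log10(sum(1 / v**2 for v in values) / len(values))

def sn_smaller_the_better(values):
    """Taguchi S/N ratio (dB) when smaller responses are better."""
    return -10 * math.log10(sum(v**2 for v in values) / len(values))

# Hypothetical pair of repeated measurements at one experimental configuration
sn = sn_larger_the_better([270.7, 266.3])
```

Computed per experimental configuration, such ratios let the team pick factor levels that hold the desired response steady across noise conditions rather than merely maximizing the mean.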
There are three basic types of product design activity in Dr. Taguchi’s approach:
1. System design
2. Parameter design
3. Tolerance design
System design involves basic research to understand nature. System design involves scientific principles, their extension to unknown situations, and the development of highly structured basic relationships. Parameter and tolerance design involve optimizing the system design using empirical methods. Taguchi’s methods are most useful in parameter and tolerance design. The rest of this chapter will discuss these applications.
Parameter design optimizes the product or process design to reach the target
value with the minimum possible variability using the cheapest available components.
Note the emphasis on striving to satisfy the requirements in the least costly manner.
Parameter design is discussed in Section 8.
Tolerance design only occurs if the variability achieved with the least costly
components is too large to meet product goals. In tolerance design, the sensitivity
of the design to changes in component tolerances is investigated. The goal is to
determine which components should be more tightly controlled and which are not
as crucial. Again, the driving force is cost. Tolerance design is discussed in Section 9.
Problem resolution might appear to be another type of product design. If targets
are set correctly, however, and parameter and tolerance design occur, there will be
little need for problem resolution. When problems do arise, they are attacked using
elements of both parameter and tolerance design, as the situation warrants.
MISCELLANEOUS THOUGHTS
A tremendous opportunity exists when the basic relationships between components
are defined in equation form in the system design phase. This occurs in electrical
circuit design, finite element analysis, and other situations. In these cases, once the
equations are known, testing can be simulated on a computer and the “best” component values and appropriate tolerances obtained. It might be argued that the true
best values would not be located using this technique; only the local maxima would
be obtained. The equations involved are generally too complex to solve to the true
best values using calculus. Determining the local best values in the region that the
experienced design engineer considers most promising is generally the best available
approach. It definitely has merit over choosing several values and solving for the
remaining ones. The cost involved is computation time, and the benefit is a robust
design using the widest possible tolerances.
Those readers who have some experience in classical statistics may wonder
about the differences between the classical and Taguchi approaches. Although there
are some operational differences, the biggest difference is in philosophical
emphasis — see Volume V of this series. Classical statistics emphasizes the producer’s risk. This means a factor’s effect must be shown to be significantly different
from zero at a high confidence level to warrant a choice between levels. Taguchi
uses percent contribution as a way to evaluate test results from a consumer’s risk
standpoint. The reasoning is that if a factor has a high percent contribution, more
often than not it is worth pursuing. In this respect, the Taguchi approach is less
conservative than the classical approach. Dr. Taguchi uses orthogonal arrays extensively in his approach and has formulated them into a “cookbook” approach that is
relatively easy to learn and apply. Classical statistics has several different ways of
designing experiments including orthogonal arrays. In some cases, another approach
may be more efficient than the orthogonal array. However, the application of these
methods may be complex and is usually left to statisticians. Dr. Taguchi also
approaches uncontrollable “noise” differently. He emphasizes developing a design
that is robust over the levels of noise factors. This means that the design will perform
at or near target regardless of what is happening with the uncontrollable factors.
Classical statistics seeks to remove the noise factors from consideration by “blocking” the noise factors.
In certain cases, the approaches Taguchi recommends may be more complicated
than other statistical approaches or may be questioned by classical statisticians. In
these cases, alternative approaches are presented as supplemental information at the
end of the appropriate section. Additional analysis techniques are also presented in
section supplements.
The reader is encouraged to thoroughly analyze the data using all appropriate
tools. Incomplete analysis can result in incorrect conclusions.
PLANNING THE EXPERIMENT
The purpose of this section is to:
1. Impress upon the reader the importance of planning the experiment as a
prerequisite to achieving successful results
2. Present some tools to use and points to consider during the planning phase
3. Demonstrate DOE applications via simple examples
BRAINSTORMING
The first steps in planning a DOE are to define the situation to be addressed, identify
the participants, and determine the scope and the goal of the investigation. This
information should be written down in terms that are as specific as possible so that
everyone involved can agree on and share a common understanding and purpose.
The experts involved should pool their understanding of the subject. In a brainstorming session, each participant is encouraged to offer an opinion of which factors cause
the effect. All ideas are recorded without question or discussion at this stage. To aid
in the organization of the proposed factors, a branching (fishbone) format is often
used, where each main branch is a main aspect of the effect under investigation
(e.g., material, methods, machine, people, measurement, environment). The construction of a cause-and-effect (fishbone or Ishikawa) diagram in a brainstorming
session provides a structured, efficient way to ensure that pertinent ideas are collected
[Figure: a fishbone diagram whose main branches include cooling, engine control hardware, calibration, fuel, piston and rings design, and engine manufacturing hardware, with causes such as spark scatter, injectors, bore distortion, and piston ring scuff/power loss on the sub-branches.]

FIGURE 9.1 An example of a partially completed fishbone diagram.
and considered and that the discussion stays on track. An example of a partially
completed cause-and-effect diagram is shown in Figure 9.1.
After the participants have expressed their ideas on possible causes, the factors
are discussed and prioritized for investigation. Usually, a three-level (high, moderate,
and low) rating system is used to indicate the group consensus on the level of
suspected contribution. Quite often, the rating will be determined by a simple vote
of the participants. In situations where several different areas of contributing expertise are represented, participants’ votes outside of their areas of expertise may not
have the importance of the expert’s vote. Handling this situation becomes a management challenge for the group leader and is beyond the scope of this document —
the reader may need to review Volume II of this series.
During the brainstorming and prioritization process, the participants should
consider the following:
1. The situation — What is the present state of affairs and why are we
dissatisfied?
2. The goal — When will we be satisfied (at least in the short term)?
3. The constraints — How much time and resources can we use in the
investigation?
4. The approach — Is DOE appropriate right now or should we do other
research first?
5. The measurement technique and response — What measurement technique will be used and what response will be measured?
CHOICE OF RESPONSE
The choice of measurement technique and response is an important point that is
sometimes not given much thought. The obvious response is not always the best.
[Figure: response plotted against the low and high levels of Factor 1, with one line for Factor 2 = Low and a line of different slope for Factor 2 = High.]

FIGURE 9.2 An example of interaction.
As an example, consider the gap between two vehicle body panels. At first thought,
that gap could be used as the response in a DOE aimed at achieving a target gap.
However, the gap can be a symptom of more basic problems with the:
• Width of the panels
• Location holes in the panels
• Location of the attachment points on the body frame
All of these must be right for the gap to be as intended. If the goal of the
experiment is to identify which of these has the biggest impact on the gap, the choice
of the gap as a response is appropriate. If the purpose is to minimize the deviation
from the target gap, the gap may not be the right response. A more basic investigation
of the factors that contribute to the underlying cause is required. Do not confuse the
symptom with the underlying causes. This thought process is very similar to the
thought process used in SPC and failure mode and effect analysis (FMEA) and
draws heavily upon the experience of experts to frame the right question. In DOE,
the choice of an improper response could result in an inconclusive experiment or in
a solution that might not work as things change due to interactions between the
factors. An interaction occurs when the change in the response due to a change in
the level of a factor is different for the different levels of a second factor. An example
is shown in Figure 9.2.
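The definition can be made concrete with a small numeric sketch; the response values here are invented for illustration.

```python
# Hypothetical responses for the four combinations of two two-level factors.
resp = {("low", "low"): 10, ("high", "low"): 14,    # Factor 2 = low
        ("low", "high"): 10, ("high", "high"): 11}  # Factor 2 = high

# Effect of moving Factor 1 from low to high, at each level of Factor 2:
effect_at_f2_low = resp[("high", "low")] - resp[("low", "low")]     # +4
effect_at_f2_high = resp[("high", "high")] - resp[("low", "high")]  # +1

# Unequal effects are exactly what Figure 9.2 depicts: an interaction.
print(effect_at_f2_low, effect_at_f2_high)
```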
The choice of the proper response characteristic will usually result in few
interactions being significant. Since there is a limitation as to how much information
can be extracted from a given number of experiments, choosing the right response
will allow the investigation of the maximum number of factors in the minimum
number of tests without interactions between factors blurring the factor main effect.
Interactions will be discussed in more detail in Section 3. The proper setup of an
experiment is not only a statistical task. Statistics serve to focus the technical
expertise of the participating experts into the most efficient approach.
In summary, the response should:
1. Relate to the underlying causes and not be a symptom
2. Be measurable (if possible, a continuous response should be chosen)
3. Be repeatable for the test procedure
The prioritization process continues until the most critical factors that can be
addressed within the resources of the test program are identified. The next step is
to determine:
1. Are the factors controllable or are some of them “noise” beyond our
control?
2. Do the factors interact?
3. What levels of each factor should be considered?
4. How do these levels relate to production limits or specs?
5. Who will supply the parts, machines, and testing facilities, and when will
they be available?
6. Does everyone agree on the statement of the problem, goal, approach,
and allocation of roles?
7. What kind of test procedure will be used?
When all of these questions have been answered, the person who is acting as
the statistical resource for the group can translate the answers into a hypothesis and
experimental setup to test the hypothesis. The following example illustrates how the
process can work:
EXAMPLE
A particular bracket has started to fail in the field with a higher than expected
frequency. Timothy, the design engineer, and Christine, the process engineer, are
alerted to the problem and agree to form a problem-solving team to investigate the
situation. Timothy reviews the design FMEA, while Christine reviews the process
FMEA. The information relating to the previously anticipated potential causes of
this failure and SPC charts for the appropriate critical characteristics are brought to
the first meeting. The team consists of Timothy, Christine, Cary (the machine operator), Stephen (the metallurgist), and Eric (another manufacturing engineer who has
taken a DOE course and has agreed to help the group set up the DOE).
In the first meeting, the group discussed the applicable areas from the FMEAs,
reviewed the SPC charts, and began a cause-and-effect listing for the observed failure
mode. At the conclusion of the meeting, Timothy was assigned to determine if the
loads on the bracket had changed due to changes in the components attached to it;
Christine was asked to investigate if there had been any change to the incoming
material; Stephen was asked to consider the testing procedure that should be used
to duplicate field failure modes and the response that should be measured, and all
[Figure: a fishbone diagram with the effect “Bracket Breaks” and the suspected causes C1 through C17 distributed among branches for machine, operator/machine interface, material, process, and design.]

FIGURE 9.3 Example of cause-and-effect diagram.
TABLE 9.6
The Test Matrix for the Seven Factors

Test      Levels for Each Suspected Factor for Each of Eight Tests
Number    C1    C2    C7    C11    C13    C15    C16
   1       1     1     1     1      1      1      1
   2       1     1     1     2      2      2      2
   3       1     2     2     1      1      2      2
   4       1     2     2     2      2      1      1
   5       2     1     2     1      2      1      2
   6       2     1     2     2      1      2      1
   7       2     2     1     1      2      2      1
   8       2     2     1     2      1      1      2
of the group members were asked to consider additions to the cause-and-effect list.
At the second meeting, the participants reported on their assignments and continued
constructing the cause-and-effect (C & E) diagram. Their cause-and-effect diagram
is shown in Figure 9.3 with the specific causes shown as “C1, C2, …” rather than
the actual descriptions that would appear on a real C & E diagram.
The group easily reached the consensus that seven of the potential causes were
suspected of contributing to the field problem. Eric agreed to set up the experiment
assuming two levels for each factor, and the others determined what those levels
should be to relate the experiment to the production reality. Eric returned to the group
and announced that he was able to use an L8 orthogonal array to set up the experiment
and that eight tests were all that were needed at this time. The test matrix for the
seven suspected factors is shown in Table 9.6.
Eric explained that this matrix would allow the group to determine if a difference
in test responses existed for the two levels of each factor and would prioritize the
TABLE 9.7
Test Results

Test Number    Result
     1           10
     2           13
     3           15
     4           17
     5           14
     6           16
     7           19
     8           21

[Figure: for each factor (C1, C2, C7, C11, C13, C15, C16), the average response (on a scale of roughly 13 to 18) is plotted at level 1 and at level 2.]

FIGURE 9.4 Plots of averages (higher responses are better).
within-factor differences. Since the two levels of each factor represented an actual
situation that existed in production during the time the failed parts were produced,
this information could be used to correct the problem. By now, Stephen had identified
a test procedure and response that seemed to fit the requirements outlined in this
section.
Two weeks were required to gather all the material and parts for the experiment
and to run the experiment. The test results are shown in Table 9.7. While Eric entered
the data into the computer for analysis, Timothy and Christine plotted the data to
see if anything was readily apparent. The factor level plots are shown in Figure 9.4.
Part 1 (5 factors) → Part 2 (2 factors) → Part 3 (5 factors) → Part 4 (3 factors) → Part 5 (6 factors)

FIGURE 9.5 A linear example of a process with several factors.
When Eric finished with the computer, he reported that of all the variability observed
in the data, 53.65% was due to the change in factor C2; 33.38% was due to the
change in factor C1; and 11.92% was due to the change in factor C11. The remaining
1.04% was due to the other factors and experimental error. The large percentage
variability contribution, coupled with the fact that the differences between the levels
of the three factors are significant from an engineering standpoint, indicates that these
three factors may indeed be the culprits. The computer analysis indicated that the
best estimate for a test run at C1 = 2, C2 = 2, and C11 = 2 is 21.4. One of the eight
tests in the experiment was run at this condition and the result was 21. Two confirmatory tests were run and the results were 21 and 20. The group then moved into a
second phase of the investigation to identify what the specs limits should be on C1,
C2, and C11. In the second round of testing, eight tests were required to investigate
three levels for each of the three factors. The setup for the second round of testing
involved an advanced procedure (idle column method) that will be presented later
in this chapter, so the example will be concluded for now.
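Eric’s numbers can be reproduced from Tables 9.6 and 9.7. The sketch below assumes the standard Taguchi percent-contribution calculation, with each factor’s sum of squares corrected by an error variance pooled from the four minor factors; the variable names are mine.

```python
# Percent-contribution analysis for the L8 bracket experiment.
# The array is Table 9.6; the responses are Table 9.7.
l8 = [[1,1,1,1,1,1,1],
      [1,1,1,2,2,2,2],
      [1,2,2,1,1,2,2],
      [1,2,2,2,2,1,1],
      [2,1,2,1,2,1,2],
      [2,1,2,2,1,2,1],
      [2,2,1,1,2,2,1],
      [2,2,1,2,1,1,2]]
factors = ["C1", "C2", "C7", "C11", "C13", "C15", "C16"]
y = [10, 13, 15, 17, 14, 16, 19, 21]

mean = sum(y) / len(y)
ss_total = sum((v - mean)**2 for v in y)

def ss_factor(col):
    # Sum of squares for a two-level column in an eight-run array.
    s1 = sum(y[i] for i in range(8) if l8[i][col] == 1)
    s2 = sum(y[i] for i in range(8) if l8[i][col] == 2)
    return (s2 - s1)**2 / 8.0

ss = {f: ss_factor(j) for j, f in enumerate(factors)}

# Pool the four minor factors as the error estimate (one df each).
pooled = ["C7", "C13", "C15", "C16"]
v_e = sum(ss[f] for f in pooled) / len(pooled)

for f in ["C1", "C2", "C11"]:
    rho = 100 * (ss[f] - v_e) / ss_total  # percent contribution
    print(f, round(rho, 2))               # C1 33.38, C2 53.65, C11 11.92
```

These match the percentages Eric reported, with the remaining 1.04% attributed to the other factors and experimental error.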
In summary, the group in the example took the following actions:
1. Gathered appropriate backup data
2. Called together the right experts
3. Made a list of the possible causes for the problem
4. Prioritized the possible causes
5. Determined the proper test procedure and response to be measured
6. Reached agreement prior to running any tests
7. Approached the investigation in a structured manner
8. Asked and addressed one question at a time
Obviously, there are many ways to approach a particular DOE. In a situation
where testing or material is very expensive, the most efficient experimental layout
must be used. In the following sections, techniques are introduced that help the
experimenter optimize the experimental design. Additional opportunities to optimize
the experiment should be examined. Consider the situation where there is a five-part
process. A brainstorming group has constructed a cause-and-effect diagram for a
particular process problem. The number of suspected factors for each part of the
process is shown in Figure 9.5.
The obvious approach would be to set up the experiment with 21 factors. An
alternative approach would be to consider only seven factors for the first round of
testing. These would be the six factors within part 5 plus one factor for the best and
worst input to part 5. If the difference in input to part 5 is significant, then the
investigation is expanded upstream. The decision to approach a problem in this
manner is dependent upon the beliefs of the experts. If the experts have a strong
TABLE 9.8
An Example of Contrasts

          Average at    Average at    Contrast
Factor    Level One     Level Two     (Level 2 Avg. – Level 1 Avg.)
C1          13.75         17.50          3.75
C2          13.25         18.00          4.75
C7          15.75         15.50         –0.25
C11         14.50         16.75          2.25
C13         15.50         15.75          0.25
C15         15.50         15.75          0.25
C16         15.50         15.75          0.25
prior belief that a factor in part 1, for instance, is significant, then a different approach
should be used. This approach is also dependent upon the structure of the situation.
The above example is presented to illustrate the point that the experimenter
should be alert for ways to test more efficiently and effectively.
MISCELLANEOUS THOUGHTS
An additional useful method of looking at the data is to plot the contrasts on
normal probability paper. For a two-level factor, the contrast is the average of all
the tests run at one level subtracted from the average of the tests run at the other
level. For the example in this section, the contrasts are shown in Table 9.8.
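The table can be reproduced directly from Tables 9.6 and 9.7. A minimal sketch; the variable names are mine.

```python
# Contrast = (average response at level 2) - (average response at level 1).
l8 = [[1,1,1,1,1,1,1],
      [1,1,1,2,2,2,2],
      [1,2,2,1,1,2,2],
      [1,2,2,2,2,1,1],
      [2,1,2,1,2,1,2],
      [2,1,2,2,1,2,1],
      [2,2,1,1,2,2,1],
      [2,2,1,2,1,1,2]]                  # Table 9.6
factors = ["C1", "C2", "C7", "C11", "C13", "C15", "C16"]
y = [10, 13, 15, 17, 14, 16, 19, 21]    # Table 9.7

def contrast(col):
    lvl1 = [y[i] for i in range(8) if l8[i][col] == 1]
    lvl2 = [y[i] for i in range(8) if l8[i][col] == 2]
    return sum(lvl2) / len(lvl2) - sum(lvl1) / len(lvl1)

for j, f in enumerate(factors):
    print(f, contrast(j))   # C1 3.75, C2 4.75, C7 -0.25, C11 2.25, rest 0.25
```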
These contrasts are plotted on normal probability paper versus median ranks.
The values for median ranks are available in many statistics and reliability books
and are used in Weibull reliability plotting. For this example, the normal contrast
plot is shown in Figure 9.6.
To plot the contrasts on normal paper, the contrasts are ranked in numerical
order, here from –0.25 (C7) to 4.75 (C2). The contrasts are then plotted against the
median ranks or, in this case, against the rank number shown on the left margin of
the plot. Factors that are significant have contrasts that are relatively far from zero
and do not lie on a line roughly defined by the rest of the factors. These factors can
lie off the line on the right side (level 2 higher) or on the left side (level 1 higher).
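The plotting positions can be sketched as follows, assuming Benard’s approximation for median ranks; the tabulated median ranks the book refers to may differ slightly in the third decimal.

```python
# Rank the contrasts and pair each with its median-rank plotting position.
contrasts = {"C1": 3.75, "C2": 4.75, "C7": -0.25, "C11": 2.25,
             "C13": 0.25, "C15": 0.25, "C16": 0.25}   # Table 9.8

n = len(contrasts)
ranked = sorted(contrasts.items(), key=lambda kv: kv[1])
for i, (factor, c) in enumerate(ranked, start=1):
    median_rank = (i - 0.3) / (n + 0.4)   # Benard's approximation
    print(i, factor, c, round(median_rank, 3))
```

Factors falling well off the line through the near-zero contrasts (here C11, C1, and C2, off to the right) are the candidates for significance.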
In the example, two separate lines seem to be defined by the contrasts. This could
be due to either of these situations:
• C1, C2, and C11 are significant and the others are not.
• There may be one or more bad data points that occur when C1, C2, and
C11 are at one level and the other factors are set at the other level.
In this example, C1, C2, C11, and C16 were at level 2 and the other factors
were set at level 1 for run number eight. Depending upon the situation, it would be
worthwhile either to rerun that test or to investigate the circumstances that accompanied that test (e.g., was the test hard to run because of the factor settings, or
[Figure: the seven contrasts plotted against numerical rank corresponding to median rank probability. C7, C15, C13, and C16 lie near zero on one line, while C11, C1, and C2 fall progressively farther to the right.]

FIGURE 9.6 Contrasts shown in a graphical presentation.
did something else change that was not in the experiment?). In the example, this
combination of factors represented the best observed outcome, and the confirmation
runs supported the results of the original test.
Plotting contrasts is a way of better understanding the data. It helps the experimenter visualize what is happening with the data. Sometimes, information that
might be lost in a table of data will be crystal clear on a plot.
SETTING UP THE EXPERIMENT
This section discusses:
1. The choice of the number of levels for each factor
2. Fitting a linear graph to the experiment
3. Special applications to reduce the number of tests
4. How to handle noise factors in an experiment
CHOICE OF THE NUMBER OF FACTOR LEVELS
To review:
A factor is a unique component or characteristic about which a decision will be made.
A factor level is one of the choices of the factor to be evaluated (e.g., if the
screw speed of a machine is the factor to be investigated, two factor levels might
be 1200 and 1400 rpm).
Investigating a larger number of levels for a factor requires more tests than
investigating a smaller number. There is usually a trade-off between the amount of information needed to be confident in the results and the time and resources available. If testing and material are cheap
and time is available, evaluate many levels for each factor. Usually, this is not the
case, and two or three levels for each factor are recommended. An exception to this
occurs when the factor is non-continuous, and several levels are of interest. Examples
of this type of factor include the evaluation of a number of suppliers, machines, or
types of material. This situation will be discussed later in this section.
The first round of testing is usually designed to screen a large number of factors.
To accomplish this in a small number of tests, two levels per factor are usually tested.
The choice of the levels depends upon the question to be addressed. If the question is
“Have we specified the right spec limits?” or “What happens to the response in the
worst possible situation?” then the choice of levels should be clear.
A more complicated question to address is “How will the distribution in production affect the response?” As suppliers become capable of maintaining low
variability about a target value, testing at the spec limits will not give a good answer
to this question. There are at least two approaches that can be used:
1. Test at the production limits, as a worst case.
2. Test at other points that put less emphasis on the tails of the distribution
where few parts are produced and more emphasis on the bulk of the
distribution. It is a difficult choice to pick two points to represent an entire
distribution. If this approach is being used, a rule of thumb is to choose
a level that encompasses approximately 70% of that distribution (mean ±
1 standard deviation).
The main point of this discussion is that the choice of levels is an integral part
of the experimental definition and should be carefully considered by the group setting
up the experiment.
The second and subsequent rounds of testing are usually designed to investigate
particular factors in more detail. Generally, three levels per factor are recommended.
Using two levels allows the estimation of a linear trend between the points tested.
The testing of three levels gives an indication of non-linearity of the response across
the levels tested. This non-linearity can be used in determining specification limits
to optimize the response. Although this concept will be explored in more detail in
a later section on tolerance design, its application can be illustrated as follows:
First round of testing — Level B of factor 1 gives a response that is more
desirable than that given by level A. See Figure 9.7.
Second round of testing — Level B gives a response that is more desirable
than those given by either C or D. However, the differences are not great.
Spec limits are set at C and D with B as the nominal. See Figure 9.8.
[Figure: response plotted for levels A and B of factor 1; level B gives the more desirable response.]

FIGURE 9.7 First round testing.

[Figure: response plotted for levels C, B, and D of factor 1; level B remains best, but the differences are small.]

FIGURE 9.8 Second round testing.
In a manner similar to the two-level-per-factor situation, the choice of the specific
three levels to be tested depends upon the question under investigation. Testing at
three levels can be used by the experimenter to focus on a particular area of the
possible factor settings to optimize the response over as large a range as possible.
If three levels of a factor are used to gain understanding for an entire distribution,
a rule of thumb is to choose the levels at the mean and mean ± 1.225 standard
deviations that encompass approximately 78% of the distribution. These rules of
thumb will be used in tolerance design.
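Both rules of thumb can be checked against the normal distribution; a quick sketch using only the standard library:

```python
# Fraction of a normal distribution falling within mean +/- k standard deviations.
from math import erf, sqrt

def coverage(k):
    return erf(k / sqrt(2))

print(round(coverage(1.0), 3))    # ~0.683, the "approximately 70%" rule
print(round(coverage(1.225), 3))  # ~0.779, the "approximately 78%" rule
```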
LINEAR GRAPHS
After the number of levels has been determined for each factor, the next step is to
decide which experimental setup to use. Dr. Taguchi uses a tool called “linear graphs”
to aid the experimenter in this process. Linear graphs are provided in the Appendix
of Volume V for several situations. Typical designs, however, are:
1. All factors at two levels (L4, L8, L12, L16, L32)
2. All factors at three levels (L9, L27)
3. A mix of two- and three-level factors (L18, L36)
DEGREES OF FREEDOM
In the orthogonal array designation, the number following the L indicates how many
testing setups are involved. This number is also one more than the degrees of freedom
available in the setup. Degrees of freedom are the number of pair-wise comparisons
that can be made. In comparing the levels of a two-level factor, one comparison is
made and one degree of freedom is expended. For a three-level factor, two comparisons are made as follows: first, compare A and B, then compare whichever is “best”
with C to determine which of the three is “best.” Two degrees of freedom are
expended in this comparison. Once the number of levels for each factor is determined, the degrees of freedom required for each factor are summed. This sum plus
one becomes the lower limit on the size of the orthogonal array to choose.
The degrees of freedom for an interaction are determined by multiplying the
degrees of freedom for the factors involved in the interaction.
A two-level factor interacting with a two-level factor requires one degree of
freedom (df) (1 × 1 = 1).
A three-level factor interacting with a three-level factor requires 4 df (2 × 2 = 4).
A three-level factor interacting with a two-level factor requires 2 df (2 × 1 = 2).
Although the test response should be chosen to minimize the occurrence of
interactions, there will be times when the experts know or strongly suspect that
interactions occur. In these cases, linear graphs allow the interaction to be readily
included in the experiment.
If more than one test is run for each test setup, the total df is the total number of
tests run minus one. The dfs used for assigning factors remain the same as without the
repetition. The other dfs are used to estimate the non-repeatability of the experiment.
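The bookkeeping above is simple enough to script; a sketch, with helper names of my own choosing:

```python
# Degrees of freedom: pair-wise comparisons needed for factors and interactions.
def factor_df(levels):
    return levels - 1   # a two-level factor expends 1 df, a three-level factor 2

def interaction_df(levels_a, levels_b):
    return factor_df(levels_a) * factor_df(levels_b)

# The bracket example: seven two-level factors need 7 df, so the smallest
# usable array has 7 + 1 = 8 runs -- the L8.
total_df = 7 * factor_df(2)
print(total_df + 1)   # 8
```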
USING ORTHOGONAL ARRAYS AND LINEAR GRAPHS
In an orthogonal array, the number of rows corresponds to the number of tests to
be run and, in fact, each row describes a test setup. The factors to be investigated
are each assigned to a column of the array. The value that appears in that column
for a particular test (row) tells to what level that factor should be set for that test.
As an example, consider an L4 test setup — Table 9.9. If factor A was assigned to
column 1 and factor B was assigned to column 2, then test number 3 would be set
up with A at level 2 and B at level 1.
The sum of the degrees of freedom required for each column (a two-level column
requires 1 df; a three-level column requires 2 df) equals the sum of the available dfs
in the setup. Another property of the arrays is that orthogonality is maintained among
the columns. Orthogonality, mentioned earlier, is the property that allows each level
of every factor to equally impact the average response at each level of all other
factors. Using the L4 as an example, in the tests where column 1 (factor A) is at
level 1, column 2 (factor B) is tested at the low level and at the high level an
equal number of times. The same is true in the tests where column 1 is at level 2. In fact, orthogonality is
maintained for all three columns. The reader is invited to study the L4 and verify
this statement.
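That verification is easy to mechanize; a small sketch over the L4 of Table 9.9:

```python
# For every pair of columns in the L4, each level of one column sees each
# level of the other column an equal number of times (orthogonality).
from itertools import combinations

l4 = [[1, 1, 1],
      [1, 2, 2],
      [2, 1, 2],
      [2, 2, 1]]

for a, b in combinations(range(3), 2):
    for level_a in (1, 2):
        rows = [r for r in l4 if r[a] == level_a]
        ones = sum(1 for r in rows if r[b] == 1)
        twos = sum(1 for r in rows if r[b] == 2)
        assert ones == twos   # balanced at every slice
print("all three column pairs are balanced")
```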
TABLE 9.9
L4 Setup

         Column
Row    1    2    3
 1     1    1    1
 2     1    2    2
 3     2    1    2
 4     2    2    1

[Figure: two dots labeled 1 and 2 joined by a line labeled 3.]

FIGURE 9.9 Linear graph for L4.
Generally, near the orthogonal array are line-and-dot figures that look a little
like “stick” drawings. These are linear graphs. The dots represent the factors that
can be assigned to the orthogonal array, and the lines represent the possible interaction of the two dots joined by the line. The numbers next to the dots and lines
correspond to the column numbers in the orthogonal array. For example, the linear
graph for the L4 is shown in Figure 9.9.
The interpretation of this linear graph is that if a factor is assigned to column 1
and a factor is assigned to column 2, column 3 can be used to evaluate their
interaction. If the interaction is not suspected of influencing the response, another
factor can be assigned to column 3. If no other factor remains, column 3 is left
unassigned and becomes an estimator of experimental error or non-repeatability.
This will be explained in more detail later in this chapter. The interrelationships
between the columns are such that there are many ways of writing the linear graphs.
COLUMN INTERACTION (TRIANGULAR) TABLE
Also shown in Volume V near the orthogonal array is the column interaction table
for that particular array. This table shows in which column(s) the interaction would
be located for every combination of two columns. The linear graphs have been
constructed using this information. The L8 column interaction table is shown in
Table 9.10.
The interaction between two factors can be assigned by finding the intersection in
the column interaction table of the orthogonal array columns to which those factors
have been assigned. As an example, suppose that a factor was assigned to column 3
and another factor was assigned to column 5. If the brainstorming group suspects that
the interaction of these two factors is a significant influence and includes that interaction
in the analysis, that interaction must be assigned to column 6 in the orthogonal array.
(Note that the interaction of two two-level factors [one degree of freedom each] can be assigned to one column, which has one degree of freedom [1 × 1 = 1].)
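For the two-level arrays as numbered in this book, there is a convenient cross-check on such assignments: the interaction of columns i and j falls in column i XOR j. This is a mnemonic for the L4 and L8 numbering used here, not a substitute for the published tables.

```python
# Interaction-column lookup for the standard two-level arrays (L4, L8):
# the exclusive-OR of the two column numbers gives the interaction column.
def interaction_column(i, j):
    return i ^ j

assert interaction_column(1, 2) == 3   # the L4 linear graph
assert interaction_column(3, 5) == 6   # the example above
assert interaction_column(1, 4) == 5   # from Table 9.10
print("matches the published interaction tables")
```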
TABLE 9.10
The L8 Interaction Table

Column   1    2    3    4    5    6    7
  1     (1)   3    2    5    4    7    6
  2          (2)   1    6    7    4    5
  3               (3)   7    6    5    4
  4                    (4)   1    2    3
  5                         (5)   3    2
  6                              (6)   1

FACTORS WITH THREE LEVELS
The orthogonal arrays, linear graphs, and column interaction tables for factors with
three levels are similar to the two-level situation. Since a three-level factor requires
two degrees of freedom, the three-level orthogonal array columns use two of the
available dfs. The interaction of two three-level factors requires 4 dfs (2 × 2). In the
linear graphs and column interaction table, an interaction is shown with two column
numbers. If an interaction is being investigated, it must be assigned to two columns.
The L9 orthogonal array, linear graph, and column interaction table are presented
in Figure 9.10.
INTERACTIONS AND HARDWARE TEST SETUP
The orthogonal array specifies the hardware setup for each test. To set up the
hardware for a particular test in the orthogonal array, the experimenter should
disregard the interaction columns and use only the columns assigned to single factors.
If an interaction is included in the experiment, its level will be based solely upon
the levels of the interacting factors. The interaction will come into consideration
during the analysis of the data. An example will demonstrate the use of the linear
graph and the layout of a simple experiment.
EXAMPLE
A brainstorming group has constructed a cause-and-effect diagram and determined
that four factors (A through D) are suspected of being contributors to the problem.
In addition, two interactions are suspected (B × D and C × D). The group has decided
to use two levels for each factor. The experiment is laid out as follows:
1. Determine the df requirement.
Four dfs are required for the main factors (one for each two level factor).
Two dfs are required for the interactions (one for each interaction of two level factors).
Six dfs are required in total.
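The degree-of-freedom bookkeeping in step 1 can be sketched in a few lines (an illustrative helper, not part of the text):

```python
# Degree-of-freedom bookkeeping for the example: four two-level
# factors (A through D) and two suspected interactions (BxD, CxD).
factor_levels = {"A": 2, "B": 2, "C": 2, "D": 2}
interactions = [("B", "D"), ("C", "D")]

df_factors = sum(levels - 1 for levels in factor_levels.values())
df_interactions = sum((factor_levels[f1] - 1) * (factor_levels[f2] - 1)
                      for f1, f2 in interactions)
df_total = df_factors + df_interactions
min_tests = df_total + 1  # one extra run for the overall mean
print(df_total, min_tests)  # 6 7 -> the L8 is a likely candidate
```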
Orthogonal Array

Row    Column
       1  2  3  4
 1     1  1  1  1
 2     1  2  2  2
 3     1  3  3  3
 4     2  1  2  3
 5     2  2  3  1
 6     2  3  1  2
 7     3  1  3  2
 8     3  2  1  3
 9     3  3  2  1

Linear Graph

[Figure: dots for columns 1 and 2 joined by a line labeled 3, 4; the interaction of columns 1 and 2 occupies columns 3 and 4.]

Column Interaction Table

Column    2      3      4
  1      3,4    2,4    2,3
  2             1,4    1,3
  3                    1,2

A. [Figure: the linear graph required for the experiment; factor D is joined to B (for B × D) and to C (for C × D), and factor A stands alone.]

B. [Figure: a matching L8 linear graph; dots at columns 1, 2, and 4 with connecting lines 3 (1-2), 5 (1-4), and 6 (2-4), and column 7 standing alone.]

FIGURE 9.10 The orthogonal array (OA), linear graph (LG), and column interaction for L9.
2. Determine a likely orthogonal array.
Since 6 dfs + 1 = 7 tests minimum and all factors have two levels, the L8 array is
a likely place to start.
3. Draw the linear graph required for the experiment.
The linear graph required for the experiment is shown in Figure 9.10A.
4. Compare the linear graph(s) of the orthogonal array to the linear graph required
for the experiment.
One of the linear graphs for the L8 that could fit is shown in Figure 9.10B.
5. Assign factors to the orthogonal array columns.
Make the column assignments shown in Figure 9.10C.
C. Column assignments:

Column    Factor
  1       D
  2       B
  3       B × D
  4       C
  5       C × D
  6       A
  7       unassigned

where B × D indicates the interaction between B and D.
D. [Figure: an L8 linear graph in which columns 1 and 2 and their connecting interaction line 3 are enclosed to form a four-level factor; columns 4, 5, 6, and 7 remain.]
E. Determining the four-level factor from columns 1 and 2:

Column
1    2    Four-Level Factor
1    1    1
1    2    2
2    1    3
2    2    4
F. The modified L8 array (columns 1 and 2 set to zero; column 3 carries the four levels):

Test      Columns
Number  1  2  3  4  5  6  7
  1     0  0  1  1  1  1  1
  2     0  0  1  2  2  2  2
  3     0  0  2  1  1  2  2
  4     0  0  2  2  2  1  1
  5     0  0  3  1  2  1  2
  6     0  0  3  2  1  2  1
  7     0  0  4  1  2  2  1
  8     0  0  4  2  1  1  2

FIGURE 9.10 (continued)
CHOICE OF THE TEST ARRAY
For a particular experiment, the test response should be chosen to minimize interaction, and the smallest orthogonal array that fits the situation should be used. The
emphasis should be on assigning factors to as many columns as possible. This allows
the question posed by the situation to be answered using a minimum number of tests.
G. [Figure: an L8 linear graph showing the closed triangle of columns 1, 2, and 4 with sides 3, 6, and 5, plus column 7.]
H. Determining the eight-level factor from columns 1, 2, and 4:

Column
1    2    4    Eight-Level Factor
1    1    1    1
1    1    2    2
1    2    1    3
1    2    2    4
2    1    1    5
2    1    2    6
2    2    1    7
2    2    2    8
I. [Figure: an L27 linear graph in which columns 1 and 2 and their interaction columns 3 and 4 are enclosed to form a nine-level factor; the remaining dots and lines (5; 6, 7; 8; 9, 10; 11; 12, 13) are unchanged.]

FIGURE 9.10 (continued)
Whether an interaction exists or not is an important issue that must be addressed
in setting up the experiment. If an interaction does exist and provision is not made
for it in the experimental setup, its effect becomes “mixed up” or confounded with
the effect of the factor assigned to the column where the interaction would be
assigned. The analysis will not be able to separate the two. This is an important
reason why confirmatory runs are necessary. Confirmatory runs should be made with
the nonsignificant factors set to their different levels, just to make sure.
Another way to minimize the effect of interactions is to use an L12, L18, or
L36 orthogonal array. These arrays have a special property that some, or all, of the
interactions between columns are spread across all columns more or less equally
instead of being concentrated in a column. This property can be used by the experimenter to rank the contribution of factors without worrying about interactions. There
are times when this can be a valuable tool for the experimenter. The linear graphs
for those arrays tell which interactions can be estimated and which cannot.
FACTORS WITH FOUR LEVELS
A factor with four levels can easily be assigned to a two-level orthogonal array. A
four-level factor requires 3 dfs. Since a two-level column has 1 df, three two-level
columns are used for the four-level factor. The three columns chosen must be
represented in the linear graph by two dots and the connecting interaction line. One
of the L8 linear graphs is shown in Figure 9.10D.
The line enclosing the column 1, 2, 3 designators indicates that these columns
will be used for a four-level factor. The particular level of the four-level factor for
each run can be determined by taking any two of the three columns that are to be
combined and assigning the four combinations to the four levels of the factor. As
an example, consider columns 1 and 2 (see Figure 9.10E).
Although column 3 is not used in determining the level of the four-level factor,
its df is used and no other factor can be assigned to it.
In the orthogonal array, one of the columns used for the four-level factor is set
to the levels of the four-level factor and the other two columns are set to zero for
each test. For the L8 example, the modified array would be Figure 9.10F.
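The column-combining rule can be sketched in a few lines (array values taken from the standard L8; the level mapping follows Figure 9.10E):

```python
# Combine L8 columns 1 and 2 into one four-level factor; column 3's
# degree of freedom is consumed and must stay unassigned.
l8_col1 = [1, 1, 1, 1, 2, 2, 2, 2]
l8_col2 = [1, 1, 2, 2, 1, 1, 2, 2]

level_map = {(1, 1): 1, (1, 2): 2, (2, 1): 3, (2, 2): 4}
four_level = [level_map[pair] for pair in zip(l8_col1, l8_col2)]
print(four_level)  # [1, 1, 2, 2, 3, 3, 4, 4]
```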
FACTORS WITH EIGHT LEVELS
In a similar manner, a factor with eight levels requires 7 dfs and takes up seven two-level
columns. The particular columns are chosen by taking a closed triangle in the linear graph
plus the column for the interaction of one of the triangle's points with
the opposite base. One example is shown in Figure 9.10G.
The column interaction table indicates that the interaction of columns 1 and 6
will be in column 7. The actual factor level for each test is determined by looking
at the combinations of the three columns that make up the corners of the triangle
(see Figure 9.10H).
None of the seven columns which are used for the eight-level factor can be
assigned to another factor. In the orthogonal array, one of the columns used for the
eight-level factor is set to the levels of the eight-level factor and the other six columns
are set to zero for each test.
TABLE 9.11
An L9 with a Two-Level Column

Test      Columns
Number  1  2  3  4
  1     1  1  1  1
  2     1  2  2  2
  3     1  3  3  3
  4     2  1  2  3
  5     2  2  3  1
  6     2  3  1  2
  7     1  1  3  2
  8     1  2  1  3
  9     1  3  2  1

FACTORS WITH NINE LEVELS
A factor with nine levels is handled in a similar manner to a four-level factor. The
nine-level factor requires 8 dfs, which are available in four three-level columns. Two
three-level columns and their two interaction columns are used. One of the L27
linear graphs is shown in Figure 9.10I.
The line enclosing the column 1, 2, 3, 4 designators indicates that these four
columns will be used for the nine-level factor. The level of the nine-level factor to
be used in a particular test can be determined by taking any two of the four columns
that are to be combined and assigning their nine combinations to the nine levels of
the factor. This is left to the reader to demonstrate.
In the orthogonal array, one of the columns used for the nine-level factor is set
to the levels of the nine-level factor and the other three columns are set to zero.
USING FACTORS WITH TWO LEVELS IN A THREE-LEVEL ARRAY
Dummy Treatment
Often, the situation calls for a mix of factors with two and three levels. A two-level
factor can be assigned to a three-level column by using one of the two levels as the
third level in the test determination. Consider using a two-level factor in an L9
array — see Table 9.11.
In column 1, the second set of 1s (in experiments 7, 8, and 9) is the dummy
treatment. In the analysis, the average at level one of the factor assigned to column
1 is determined with more accuracy than the average at level two since more tests
are run at level one. The level that is of more interest to the experimenter should be
the one used for the dummy treatment.
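A dummy treatment is just a relabeling of one level; a sketch using column 1 of the L9 (level 1 re-used as the third level, as in Table 9.11):

```python
# Dummy-treat a two-level factor into a three-level L9 column by
# repeating the level of greater interest (level 1) wherever the
# column calls for level 3.
l9_col1 = [1, 1, 1, 2, 2, 2, 3, 3, 3]
dummy = {1: 1, 2: 2, 3: 1}
treated = [dummy[v] for v in l9_col1]
print(treated)  # [1, 1, 1, 2, 2, 2, 1, 1, 1]
```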
Combination Method
Two two-level factors can be assigned to a single three-level column. This is done by
assigning three of the four combinations of the two two-level factors to the three-level
TABLE 9.12
Combination Method

Factor A   Factor B   Three-Level Column
   1          1              1
   1          2              2
   2          1              3
factor and not testing the fourth combination. As an example, two two-level factors
are assigned to a three-level column as in Table 9.12. Note that the combination
A2B2 is not tested. In this approach, information about the AB interaction is not available,
and many ANOVA (analysis of variance) computer programs are not able to break apart
the effect of A and B. A way of doing that manually will be presented later.
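The combination method amounts to a lookup from the three-level column back to the two factor settings; a sketch following Table 9.12 (the example column values are illustrative):

```python
# Combination method: factors A and B share one three-level column;
# the A2B2 combination is deliberately never run.
level_to_ab = {1: (1, 1), 2: (1, 2), 3: (2, 1)}  # Table 9.12

l9_column = [1, 2, 3, 1, 2, 3, 1, 2, 3]          # a three-level column
ab_settings = [level_to_ab[v] for v in l9_column]
print(ab_settings[:3])  # [(1, 1), (1, 2), (2, 1)]
```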
USING FACTORS WITH THREE LEVELS IN A TWO-LEVEL ARRAY
A factor with three levels requires 2 dfs. Although it would seem that two two-level
columns combined would give the required dfs, the interaction of those two columns
is confounded with the three-level factor. The approach used to assign one three-level factor to a two-level array is to construct a four-level column and use the
dummy treatment approach to assign the three-level factor to the four-level column.
Assigning more than one three-level factor to a two-level array uses a variation
of this approach. Recall that in constructing a four-level column, three two-level
columns are used. These three must be shown in the linear graph as two dots
connected by an interaction line. Any two of these columns are used to determine
the level to be tested. The third column’s df is used up in assigning a four-level
factor. In assigning a three-level factor, the third column's df is not used for the
three-level factor since it requires only 2 dfs. However, the third column is confounded
with the three-level factor and should not be assigned to another factor. That column
is said to be “idle.” When two or more three-level factors are assigned to a two-level
array, the three-level factors can share the same idle column. An example of assigning
two three-level factors to an L8 array is shown in Figure 9.11.
Here column 1 would be idle (a factor cannot be assigned to column 1), columns
2 and 3 would be used to determine the levels of a three-level factor, columns 4 and
5 would be used to determine the levels of the second three-level factor, and columns
6 and 7 are available for two-level factors. The modified orthogonal array for this
experiment is shown in Table 9.13 (level 2 is the dummy treatment in both cases).
The idle column approach cannot be used with four-level factors. If it were
attempted, insufficient degrees of freedom would exist and the four-level factors
would be confounded.
OTHER TECHNIQUES
There are other techniques for setting up an experiment that will be mentioned here
but will not be discussed in detail. The user is invited to read the chapter on pseudo-factor design in Quality Engineering — Product and Process Design Optimization,
[Figure: an L8 linear graph with column 1 idle; columns 2 and 3 determine one three-level factor, columns 4 and 5 the other, and columns 6 and 7 remain free.]

FIGURE 9.11 Three-level factors in an L8 array.
TABLE 9.13
Modified L8 Array

Test      Columns
Number  1  2  3  4  5  6  7
  1     1  0  1  0  1  1  1
  2     1  0  1  0  2  2  2
  3     1  0  2  0  1  2  2
  4     1  0  2  0  2  1  1
  5     2  0  3  0  2  1  2
  6     2  0  3  0  3  2  1
  7     2  0  2  0  2  2  1
  8     2  0  2  0  3  1  2
by Yuin Wu and Dr. Willie Hobbs Moore or to consult with a statistician to use these
techniques.
Nesting of Factors
Occasionally, levels of one factor have meaning only at a particular level of another
factor. Consider the comparison of two types of machine. One is electrically operated
and the other is hydraulically operated. The voltage and frequency of the electrical
power source and the temperature and formulation of the hydraulic fluid are factors
that have meaning for one type of machine but not the other. These factors are nested
within the machine level and require a special setup and analysis which is discussed
in the reference given above.
Setting Up Experiments with Factors with Large Numbers of Levels
Experiments with factors with large numbers of levels can be assigned to an experimental layout using combinations of the techniques that have been covered in this
booklet.
TABLE 9.14
An L8 with an L4 Outer Array

                              L4 (on side)
                              1  1  2  2
                              1  2  1  2
                              1  2  2  1
L8
Test    Columns
No.   1  2  3  4  5  6  7     Test Results
 1    1  1  1  1  1  1  1     x1   x2   x3   x4
 2    1  1  1  2  2  2  2     x5   x6   x7   x8
 3    1  2  2  1  1  2  2     .    .    .    .
 4    1  2  2  2  2  1  1     .    .    .    .
 5    2  1  2  1  2  1  2     .    .    .    .
 6    2  1  2  2  1  2  1     .    .    .    .
 7    2  2  1  1  2  2  1     .    .    .    .
 8    2  2  1  2  1  1  2     x29  x30  x31  x32
Note: The x values refer to experimental test results.
INNER ARRAYS AND OUTER ARRAYS
Factors are generally divided into three basic types:
1. Control factors are the factors that are to be optimized to attain the
experimental goal.
2. Noise factors represent the uncontrollable elements of the system. The
optimum choice of control factor levels should be robust over the noise
factor levels.
3. Signal factors represent different inputs into the system for which system
response should be different. For example, if several micrometers were
to be compared, the standard thickness to be measured would be levels
of a signal factor. The optimum micrometer choice would be the one that
operated best at all the standard thicknesses. Signal factors are discussed
in more detail on pages 430–441.
Control and noise factors are usually handled differently from one another in
setting up an experiment. Control factors are entered into an orthogonal array called
an inner array. The noise factors are entered into a separate array called an outer
array. These arrays are related so that every test setup in the inner array is evaluated
across every noise setup in the outer array. As an example, consider an L8 inner
(control) array with an L4 outer (noise) array, as shown in Table 9.14.
The purpose of this relationship is to equally and completely expose the control
factor choices to the uncontrollable environment. This ensures that the optimum
factor will be robust. A signal-to-noise (S/N) ratio can be calculated for each of the
control factor array test situations. This allows the experimenter to identify the
control factor level choices that meet the target response consistently.
RANDOMIZATION OF THE EXPERIMENTAL TESTS
In the orthogonal arrays, each test setup is identified by a test number. Generally,
the tests should not be run in the order of test number. If the tests were run in that
order, all the tests with the factor assigned to column one at level one would be run
before any of the tests with that factor at level two. A quick glance at an orthogonal
array will confirm this relationship. In fact, the columns toward the left of the array
change less often than the columns toward the right of the array. If an uncontrolled
noise factor changes during the testing process, the effect of that noise factor could
be mixed in with one or more of the factor effects. This could result in an erroneous
conclusion. The possibility of this occurring can be minimized by randomizing the
order of the experiment runs. If the order of the tests is randomized, the effect of
the changing uncontrolled noise factor will be more or less spread evenly over all
the levels of the controlled factors and although the experimental error will be
increased, the effects of the controlled factors will still be identifiable. Randomization can be done as simply as writing the test numbers on slips of paper and drawing
them out of a hat.
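Drawing test numbers from a hat is simply a shuffle; a minimal sketch (the fixed seed is only for reproducibility):

```python
import random

# Randomize the run order of an eight-test (L8) experiment.
test_numbers = list(range(1, 9))
run_order = test_numbers.copy()
random.Random(42).shuffle(run_order)
print(run_order)  # every test number appears once, in random order
```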
There are two situations where randomization may not be possible or where its
importance is lessened.
1. If it is very expensive, difficult, or time-consuming to change the level of
a factor, all tests at one level of a factor may have to be run before the
level of that factor can be changed. In this case, noise factors should be
chosen for the outer array that represent the possible variation in the uncontrolled environment as much as possible.
2. If the noise factors in the outer array are properly chosen, the confident
experimenter may elect to dispense with randomization. In most cases,
the purpose of the experiment is to learn more about the situation, and
the experimenter does not have complete confidence. Therefore, the test
order should be randomized whenever the circumstances permit.
MISCELLANEOUS THOUGHTS
Dr. Taguchi stresses evaluating as many main factors as possible and filling up the
available columns. If it turns out that the experimental design will result in unassigned columns, some column assignment schemes are better than others in a few
situations. The rationale behind these choices is that they minimize the confounding
of unsuspected two-factor interactions with the main factors. A detailed discussion
is beyond the scope of this chapter. The user is invited to read Chapter 12 of Statistics
for Experimenters, by G. Box, W. Hunter, and J.S. Hunter to learn more about this
concept.
Consider an L8 for which there are to be four two-level factors assigned. This
implies that there will be three columns that will not be assigned to a main factor.
There are 35 ways in which the four factors can be assigned to the seven columns.
The recommended assignment is to use columns 1, 2, 4, and 7 for the main factors.
The interactions to be evaluated, the linear graphs, and the column interaction table
TABLE 9.15
Recommended Factor Assignment by Column

Number
of Factors   L8 Array     L16 Array                    L32 Array
 4           1, 2, 4, 7   1, 2, 4, 8                   —
 5           a            1, 2, 4, 8, 15               1, 2, 4, 8, 16
 6           a            1, 2, 4, 8, 11, 13           1, 2, 4, 8, 16, 31
 7           a            1, 2, 4, 7, 8, 11, 13        1, 2, 4, 8, 15, 16, 23
 8           —            1, 2, 4, 7, 8, 11, 13, 14    1, 2, 4, 8, 15, 16, 23, 27
 9           —            a                            1, 2, 4, 8, 15, 16, 23, 27, 29
10           —            a                            1, 2, 4, 8, 15, 16, 23, 27, 29, 30
11           —            a                            1, 2, 4, 7, 8, 11, 13, 14, 16, 19, 21
12           —            a                            1, 2, 4, 7, 8, 11, 13, 14, 16, 19, 21, 22
13           —            a                            1, 2, 4, 7, 8, 11, 13, 14, 16, 19, 21, 22, 25
14           —            a                            1, 2, 4, 7, 8, 11, 13, 14, 16, 19, 21, 22, 25, 26
15           —            a                            1, 2, 4, 7, 8, 11, 13, 14, 16, 19, 21, 22, 25, 26, 28

a No recommended assignment scheme.
determine if the recommended column assignments are usable for a particular experiment. The recommended column assignments are given in Table 9.15.
Some of the linear graphs may be found in the Appendix of Volume V. However,
the user will find that the linear graphs in other books and reference materials may
not make these assignments available. There are many equally valid ways that linear
graphs for the larger arrays can be constructed from the column interaction table. It
is not feasible for any one book to list all the possibilities. An excellent source is
Taguchi and Konishi (1987).
In many cases, the brainstorming group may not have a good feel for whether
interactions exist or not. In these cases, two alternatives are usually considered:
1. Design an experiment that allows all two-factor interactions to be estimated.
2. Design an experiment in which no factor is assigned to a column that also
contains the interaction of two other factors, although pairs of two-factor
interactions may be assigned to the same column. The recommended
factor assignments given in Table 9.15 are examples of this approach.
The second approach is based on the assumption that few of the interactions will
be significant and that later testing can be used to investigate them in more detail. The
reader is urged to seek statistical assistance in approaching this type of experiment.
Sometimes, the response is not related to the input factors in a linear fashion.
Testing each factor at two levels allows only a linear relationship to be defined and,
in this more complex situation, can give misleading results. A detailed statistical
analysis tool called response surface methodology can be used to investigate the
complex relationship of the input factors to the response in these cases.
All of this seems to indicate that DOEs must be lengthy and complicated when
interactions or nonlinear relationships are suspected. In most situations, time and
resources are not available to run a large experiment. Sometimes, a transformation
of the measured data or of a quantitative input factor can allow a linear model to fit
within the region covered by the input factors. The linear model requires fewer data
points than a curvilinear model and is easier to interpret. Unfortunately, unless
multiple observations are made at each inner array setup, the choice of transformation
is guided mainly by the experience of the experimenter or by trying several transformations and seeing which one fits best.
The choice of the proper transformation to use is related to the choice of the
proper response. As an example, two common measures of fuel usage are “miles
per gallon” and “liters per kilometer.” With the multiplication of a constant, these
two measures are inverses of each other. A model that is linear in mi/gal will be
definitely non-linear in L/km. Which measurement is correct? There is no easy
answer. The experimenter should evaluate several different transformations to determine the best model. Some transformations that are useful are:
y = Y^1/2 (useful for count data, Poisson distributed, such as the number of flaws in a painted surface)
y = log(Y) or ln(Y) (useful for comparing variances)
y = Y^-1/2
y = 1/Y
When there are several observations at each inner array test setup either through
replication or through testing with an outer array, another guide to choosing the right
transformation can be used. For the ANOVA to work correctly, the variances at all test
points should be equal. The observed variances should be compared as follows:
1. Calculate the average (X̄) and the standard deviation (s) for each inner array test setup.
2. Take the log or ln of each X̄ and s.
3. Plot log s (y-axis) versus log X̄ (x-axis) and estimate the slope.
4. Use the estimated slope as a rough guide to determine which transformation to use:

Slope   Transformation
0.0     no transformation
0.5     y = Y^1/2
1.0     y = log(Y) or ln(Y)
1.5     y = Y^-1/2
2.0     y = 1/Y
It should be noted that the addition or subtraction of a constant before plotting
will not affect the standard deviation but will affect the relative spacing of the log X̄ values and hence the slope of the line. This approach can be used to improve the fit of
the transformation. With the widespread use of computers, data analysis of this type
Loss
In $
Scrap
Rework
Spec
Limit
Spec
Limit
FIGURE 9.12 Traditional approach.
should be easy and should be pursued as a means to get the most information out
of the data.
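The slope guide described above can be sketched as follows; the observations are invented so that s grows in proportion to the mean, which points to the log transformation:

```python
import math

# Mean and standard deviation at each inner-array setup
# (made-up data where s is proportional to the mean).
setups = [[9, 10, 11], [18, 20, 22], [36, 40, 44]]

points = []
for obs in setups:
    mean = sum(obs) / len(obs)
    s = math.sqrt(sum((x - mean) ** 2 for x in obs) / (len(obs) - 1))
    points.append((math.log10(mean), math.log10(s)))

# Least-squares slope of log s versus log mean
mx = sum(x for x, _ in points) / len(points)
my = sum(y for _, y in points) / len(points)
slope = (sum((x - mx) * (y - my) for x, y in points) /
         sum((x - mx) ** 2 for x, _ in points))
print(round(slope, 1))  # 1.0 -> try y = log(Y)
```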
Examples of this approach will be given later in the chapter. The reader is invited
to refer to Statistics for Experimenters by G. Box, W. Hunter, and J.S. Hunter to learn
more about the use of transformations in analyzing data.
LOSS FUNCTION AND SIGNAL-TO-NOISE
This section discusses:
1. The Taguchi loss function and its cost-oriented approach to product design
2. A comparison of the loss function and the traditional approach to calculating loss
3. The use of the loss function in evaluating alternative actions
4. A comparison of the loss function and Cpk and the appropriate use of each
5. The relationship of the loss function and the signal-to-noise (S/N) calculation that Dr. Taguchi uses in design of experiments
LOSS FUNCTION AND THE TRADITIONAL APPROACH
In the traditional approach — see Figure 9.12 — to considering company loss, parts
produced within the spec limits perform equally well, and parts outside of the spec
limits are equally bad. This approach has a fallacy in that it assumes that parts
produced at the target and parts just inside the spec limit perform the same and that
parts just inside and just outside the spec limits perform differently.
Statistical Process Control (SPC) and process capability calculations (Cpk) have
brought to the manufacturing floor an awareness of the importance of reducing
process variability and centering around the target. However, the question still remains,
“How can this thought process carry over into product and process decisions?”
The loss function provides a way of considering customer satisfaction in a
quantitative manner during the development of a product and its manufacturing
process. The loss function is the cornerstone of the Taguchi philosophy. The basic
premise of the loss function is that there is a particular target value for each critical
characteristic that will best satisfy all customer requirements. Parts or systems that
are produced farther away from the target will not satisfy the customer as well. The
level of satisfaction decreases as the distance from the target increases. The loss
function approximates the total cost to society, including customer dissatisfaction,
of producing a part at a particular characteristic value.
Taken for a whole production run, the total cost to society is based on the
variability of the process and the distance of the distribution mean to the target.
Decisions that affect process variability and centering or the range over which the
customer will be satisfied can be evaluated using the common measurement of loss
to society.
The loss function can be used when considering the expenditure of resources.
Customer dissatisfaction is very difficult to quantify and is often ignored in the
traditional approach. Its inclusion in the decision process via the loss function
highlights a gold mine in customer-perceived quality and repeat purchases that would
be hidden otherwise. This gold mine is often available at a relatively minor expense
applied to improving the product or process.
Note: Use of the loss function implies a total system that starts with the determination of targets that reflect the greatest level of customer satisfaction. Calculation
of losses using nominals that were set using other methods may yield erroneous
results.
CALCULATION OF THE LOSS FUNCTION
Dr. Taguchi uses a quadratic equation to describe the loss function. A quadratic form
was chosen because:
1. It is the simplest equation that fulfills the requirement of increasing as it
moves away from the target.
2. Taguchi believes that, historically, costs behave in this fashion.
3. The quadratic form allows direct conversion from signal-to-noise ratios
and decomposition used in analysis of experimental results.
The general form for the loss function is:
L(x) = k(x − m)²
where L(x) is the loss associated with producing a part at “x” value; k is a unique
constant determined for each situation; x is the measured value of the characteristic;
and m is the target of the characteristic.
When the general form is extended to a production of “n” items, the average
loss is:
L = (k/n) Σ(x − m)²
[Figure: the nominal-the-best loss curve, a parabola in cost centered on the target m and reaching cost A0 at m − ∆ and m + ∆.]

FIGURE 9.13 Nominal the best.
This can be simplified to:

L = k[σ² + (µ − m)²]
where σ² is the population piece-to-piece variance; µ is the population mean; and (µ − m)
is the offset of the population mean from the target.
In the Nominal-the-Best (NTB) situation shown in Figure 9.13, A0 is the cost
incurred in the field by the customer or warranty when a part is produced ∆ from
the target. ∆ is the point at which 50% of the customers would have the part repaired
or replaced. A0 and ∆ define the shape of the loss function and the value of “k.”
The loss resulting from producing a part at m − ∆ is:

L(m − ∆) = k(m − ∆ − m)²
A0 = k∆²
k = A0/∆²

In general, the loss per piece is:

L(x) = (A0/∆²)(x − m)²
The loss for the population is:

L = (A0/∆²)(σ² + offset²)
EXAMPLE
A particular component is manufactured at an internal supplier, shipped to an assembly plant, and assembled into a vehicle. If this component deviates from its target
of 300 units by 10 or more, the average customer will complain, and the estimated
warranty cost will be $150.00. In this case,
k = $150.00/(10 units)²
  = $1.50 per unit²

SPC records indicate that the process average is 295 units and the standard deviation is eight
units. The present total loss is:
L = k[σ² + (µ − 300)²]
  = $1.50[8² + (295 − 300)²]
  = $133.50 per part
Fifty thousand parts are produced per year. The total yearly loss (and opportunity
for improvement) is $6.7 million.
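The arithmetic above can be checked in a few lines (a sketch; the helper name is illustrative, not from the text):

```python
def average_loss(k, sigma, mean, target):
    """Taguchi average loss: L = k * (sigma^2 + (mean - target)^2)."""
    return k * (sigma ** 2 + (mean - target) ** 2)

A0, delta = 150.00, 10.0   # warranty cost, and the deviation at which it occurs
k = A0 / delta ** 2        # $1.50 per unit^2
L = average_loss(k, sigma=8, mean=295, target=300)
print(L)                   # 133.5 dollars per part
print(L * 50_000)          # 6675000.0 -> about $6.7 million per year
```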
Situation 1
It is estimated that a redesign of the system would make the system more robust,
and the average customer would complain if the component deviated by 15 units or
more from 300. In this case:
k = $150/(15 units)²
  = $0.67 per unit²
The total loss would be:
L = $0.67[8² + (295 − 300)²]
  = $59.63 per part
The net yearly improvement due to redesigning the system would be:
Improvement = ($133.50 − $59.63) × 50,000
            = $3,693,500
This cost should be balanced against the cost of the redesign.
Situation 2
It is estimated that a new machine at the component manufacturing plant would
improve the mean of the distribution to 297 units and the process standard deviation
to 6 units. In this case, the total loss would be:
L = $1.50[6² + (297 − 300)²]
  = $67.50 per part
The net yearly improvement due to using the new machine would be:
Improvement = ($133.50 − $67.50) × 50,000
            = $3,300,000
This cost should be balanced against the cost of the new machine.
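The two situations can be compared side by side (a sketch; rounding k to $0.67 mirrors the text's figures):

```python
runs = 50_000
k = 150 / 10 ** 2                        # $1.50 per unit^2
base = k * (8 ** 2 + (295 - 300) ** 2)   # $133.50 per part

# Situation 1: redesign widens the customer tolerance to 15 units.
k1 = round(150 / 15 ** 2, 2)             # $0.67 per unit^2
sit1 = k1 * (8 ** 2 + (295 - 300) ** 2)  # $59.63 per part
print(round((base - sit1) * runs))       # 3693500

# Situation 2: new machine, mean 297 and standard deviation 6.
sit2 = k * (6 ** 2 + (297 - 300) ** 2)   # $67.50 per part
print(round((base - sit2) * runs))       # 3300000
```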
From these situations, it is apparent that the quality of decisions using the loss
function is heavily dependent upon the quality of the data that goes into the loss
function. The loss function emphasizes making a decision based on quantitative total
cost data. In the traditional approach, decisions are difficult because of the unknowns
and differing subjective interpretations. The loss function approach requires investigation to remove some of the unknowns. Subjective interpretations become numeric
assumptions and analyses, which are easier to discuss and can be shown to be based
on facts.
In the smaller-the-better (STB) situation illustrated in Figure 9.14, the loss
function reduces to:
L = k[(1/n) Σ x²]
[Figure: the smaller-the-better loss curve, rising from zero at x = 0 toward cost A0 at x0.]

FIGURE 9.14 Smaller the better.
For the larger-the-better (LTB) situation illustrated in Figure 9.15, the loss
function reduces to:
L = k[(1/n) Σ 1/x²]
FIGURE 9.15 Larger the better.
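Both reduced forms can be computed directly. The sketch below is illustrative; the data values are invented, and k is taken as $2 per unit² purely for the example.

```python
def stb_loss(k, data):
    """Smaller-the-better loss: L = k * (1/n) * sum(x^2); the target is zero."""
    return k * sum(x ** 2 for x in data) / len(data)

def ltb_loss(k, data):
    """Larger-the-better loss: L = k * (1/n) * sum(1/x^2); the target is infinity."""
    return k * sum(1.0 / x ** 2 for x in data) / len(data)

shrinkage = [1.0, 2.0, 3.0]      # hypothetical STB characteristic
strength = [10.0, 20.0, 25.0]    # hypothetical LTB characteristic

print(stb_loss(2, shrinkage))    # k * (1 + 4 + 9)/3 = 28/3
print(ltb_loss(2, strength))
```

Note that in the LTB form, large responses drive 1/x² (and hence the loss) toward zero, which is why the same expression reappears, negated and logged, in the LTB signal-to-noise ratio later in this chapter.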
COMPARISON OF THE LOSS FUNCTION AND Cpk
The loss function can be used to evaluate process performance. It provides an
emphasis on both reducing variability and centering the process, since those actions
have a net effect of reducing the value of the loss function. Process performance is
normally evaluated using Cpk. Cpk is calculated using the following equation:
Cpk = minimum [ (upper spec limit − X̄) / (3 × standard deviation), (X̄ − lower spec limit) / (3 × standard deviation) ]

where X̄ = the average of the process.
Both the loss function and Cpk emphasize minimizing the variability and centering the process on the target. The relative benefits of the two can be summarized
as follows:
Loss function
• Provides more emphasis on the target
• Relates to customer costs
• Can be used to prioritize the effect of different processes
Cpk
• Is easier to understand and use
• Is based only on data from the process and specifications
• Is normalized for all processes
The loss function represents the type of thinking that must go into making
strategic management decisions regarding the product and process for critical characteristics. Cpk is an easily used tool for monitoring actual production processes.
[Figure: five process distributions plotted against the specification scale 16----20----24]

Case                   1      2      3      4      5
Average                20     18     17.2   20     20
Sigma                  1.33   0.67   0.4    2.82   0.67
Cpk                    1      1      1      0.47   2
Loss (assume k = 2)    3.56   8.89   16     16     0.89

FIGURE 9.16 A comparison of Cpk and loss function.
Figure 9.16 shows Cpk and the value of the loss function for five different cases.
In each of these cases, the specification is 20 ± 4 and the value of k in the loss
function is $2 per unit2.
Both Cpk and the loss function emphasize reducing the part-to-part variability
and centering the process on target. The use of Cpk is recommended in production
areas to monitor process performance because of the ease of understanding the clear
relationship of Cpk and the other SPC tools. Management decisions regarding the
location of distributions with small variability within a large specification tolerance
should be based on a loss function approach. (See cases 2 and 5 in Figure 9.16.)
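The five cases in Figure 9.16 can be checked numerically. This sketch recomputes Cpk and the loss for each case from its mean and sigma; small differences from the figure (e.g., 3.54 vs. 3.56 for case 1) come only from the sigma values being rounded to two decimals.

```python
def cpk(mean, sigma, lsl, usl):
    """Process capability: the lesser of the two one-sided capability ratios."""
    return min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))

def ntb_loss(k, sigma, mean, target):
    """Nominal-the-best loss per part."""
    return k * (sigma ** 2 + (mean - target) ** 2)

LSL, USL, TARGET, K = 16, 24, 20, 2   # specification 20 +/- 4, k = $2 per unit^2
cases = [(20, 1.33), (18, 0.67), (17.2, 0.4), (20, 2.82), (20, 0.67)]

for i, (mean, sigma) in enumerate(cases, start=1):
    print(i, round(cpk(mean, sigma, LSL, USL), 2),
          round(ntb_loss(K, sigma, mean, TARGET), 2))
```

Cases 2 and 5 illustrate the point made above: both are comfortably inside the specification, but the loss function still distinguishes the off-target case 2 from the well-centered case 5.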
The loss function approach should be used to determine the target value and to
evaluate the relative merits of two or more courses of action because of the emphasis
on cost and on including customer satisfaction as a factor in making basic product
and process decisions. These questions also lend themselves to the use of design of
experiments. The relationship of the loss function to the signal-to-noise DOE calculations used by Dr. Taguchi will now be discussed.
SIGNAL-TO-NOISE (S/N)
Signal-to-Noise is a calculated value that Dr. Taguchi recommends to analyze DOE
results. It incorporates both the average response and the variability of the data. S/N
is a measure of the strength of the signal relative to the strength of the noise (variability). The goal
is always to maximize the S/N. S/N ratios are so constructed that if the average
response is far from the target, re-centering the response has a greater effect on the
S/N than reducing the variability. When the average response is close to the target,
reducing the variability has a greater effect. There are three basic formulas used for
calculating S/N, as shown in Table 9.16.
S/N for a particular testing condition is calculated by considering all the data
that were run at that particular condition across all noise factors. Actual analysis
techniques will be covered later.
TABLE 9.16
Formulas for Calculating S/N

Situation                  Signal-to-Noise (S/N)            Loss Function (L)
Smaller the better (STB)   −10 log₁₀ [(1/n) Σ x²]           L = k [(1/n) Σ x²]
Larger the better (LTB)    −10 log₁₀ [(1/n) Σ (1/x²)]       L = k [(1/n) Σ (1/x²)]
Nominal the best (NTB)     10 log₁₀ [(1/n)(Sm − V)/V]       L = k (σ² + offset²)

where Sm = (Σ x)² / n and V = (Σ x² − Sm) / (n − 1)
The relationships between S/N and loss function are obvious for STB and LTB.
The expressions contained in brackets are the same. When S/N is maximized, the
loss function will be minimized. For the NTB situation, the total analysis procedure
of looking at both the raw data for location effects and S/N data for dispersion effects
parallels the loss function approach. Examples of these analysis techniques are given
in the next section. S/N is used in DOE rather than the loss function because it is
more understandable from an engineering standpoint and because it is not necessary
to compute the value of k when comparing two alternate courses of action.
S/N calculations are also used in DOE to search for “robust” factor values. These
are values around which production variability has the least effect on the response.
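The three S/N formulas in Table 9.16 can be written directly as functions. This is a sketch; the check value uses a data set that appears later in this chapter (Table 9.17, test B, which is also run 5 of the inner-array example).

```python
import math

def sn_stb(data):
    """Smaller-the-better: S/N = -10 log10((1/n) * sum(x^2))."""
    n = len(data)
    return -10 * math.log10(sum(x ** 2 for x in data) / n)

def sn_ltb(data):
    """Larger-the-better: S/N = -10 log10((1/n) * sum(1/x^2))."""
    n = len(data)
    return -10 * math.log10(sum(1.0 / x ** 2 for x in data) / n)

def sn_ntb(data):
    """Nominal-the-best: S/N = 10 log10((1/n) * (Sm - V)/V)."""
    n = len(data)
    sm = sum(data) ** 2 / n                        # sum-squared of the mean term
    v = (sum(x ** 2 for x in data) - sm) / (n - 1) # sample variance
    return 10 * math.log10((sm - v) / (n * v))

print(round(sn_ntb([15, 11, 12, 14]), 2))  # 17.03, matching Table 9.17
```

For NTB data whose mean is large relative to its spread, this S/N is approximately 20 log₁₀(mean/s), which is why re-centering a far-off-target response moves it more than variance reduction does.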
MISCELLANEOUS THOUGHTS
Many statisticians disagree with the use of the previously defined S/N ratios to
analyze DOE data. They do not recognize the need to analyze both location effects
and dispersion (variance) effects but use other measures. Dr. George Box’s 1987
report is recommended to the reader who wishes to learn more about this disagreement and some of the other methods that are available.
In brief, Dr. Box disagrees with the STB and LTB S/N calculations and finds
the NTB S/N to be inefficient. The approach that he supports is to calculate the log
(or ln) of the standard deviation of the data, log(s), at each inner array setup in place
of the S/N ratio. The log is used because the standard deviation tends to be lognormally distributed. The raw data should be analyzed (with appropriate transformations) to determine which factors control the average of the response, and the
log(s) should be analyzed to determine which factors control the variance of the
response. From these two analyses, the experimenter can choose the combination
of factors that gives the response that best fills the requirements.
The data in Table 9.17 illustrate some of the concerns with the NTB S/N ratio. The
first three tests (A through C) have the same standard deviation but very different S/N,
TABLE 9.17
Concerns with NTB S/N Ratio

Test   Raw Data (4 Reps.)       Standard Deviation   NTB S/N
A      1, 2, 4, 5               1.83                 3.89
B      15, 11, 12, 14           1.83                 17.03
C      18, 21, 19, 22           1.83                 20.78
D      24, 24, 28.12, 28.12     2.38                 20.78
E      42.55, 42.8, 50, 50      4.23                 20.78
while the last three tests (C through E) have the same S/N but very different standard
deviations. The NTB S/N ratio places emphasis on getting a higher response value.
This approach might lead to difficulties in tuning the response to a specific target.
It should be noted that Taguchi does discuss other S/N measures in some of his
works that have not been widely available in English. An alternate NTB S/N ratio
is available in the computer program ANOVA-TM, which is distributed by Advanced
Systems and Designs, Inc. (ASD) of Farmington Hills, Michigan and is based on
Taguchi’s approach. This S/N ratio is:
NTB′ S/N = −10 log(s²) = −20 log(s)
Maximizing this S/N is equivalent to minimizing log(s). Examples using this
S/N ratio will be developed later.
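The contrast between the two NTB measures is easy to demonstrate on the Table 9.17 data. The sketch below computes both the classical NTB S/N and the −20 log(s) alternative; tests C, D, and E are nearly indistinguishable on the former but separate cleanly on the latter.

```python
import math
import statistics

def sn_ntb(data):
    """Classical NTB S/N: 10 log10((1/n)(Sm - V)/V)."""
    n = len(data)
    sm = sum(data) ** 2 / n
    v = (sum(x ** 2 for x in data) - sm) / (n - 1)
    return 10 * math.log10((sm - v) / (n * v))

def sn_ntb_alt(data):
    """Alternate NTB S/N: -10 log10(s^2) = -20 log10(s)."""
    return -20 * math.log10(statistics.stdev(data))

c = [18, 21, 19, 22]
d = [24, 24, 28.12, 28.12]
e = [42.55, 42.8, 50, 50]

print([round(sn_ntb(x), 2) for x in (c, d, e)])      # all come out 20.78
print([round(sn_ntb_alt(x), 2) for x in (c, d, e)])  # spreads them apart
```

Because the alternate ratio depends on the standard deviation alone, it leaves the raw-data averages free for tuning the response to target, which is the two-step NTB analysis used later in this chapter.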
ANALYSIS
The purpose of this section is to:
1. Introduce graphical and numerical analysis of experimental data
2. Present a method for estimating a response value and assigning a confidence interval for it
3. Discuss the use and interpretation of signal-to-noise (S/N) ratio calculations
GRAPHICAL ANALYSIS
In the example in Section 2, Timothy and Christine calculated and plotted the average
response at each factor level. Since the experimental design they used (an L8) is
orthogonal, the average at each level of a factor is equally impacted by the effect
of the levels of the other factors. This allows the graphical approach to have direct
usage. This example from section 2 is shown in Table 9.18. The factor level plots
are shown in Figure 9.17.
Factors C1, C2 and C11 clearly have a different response for each of their two
levels. The difference between levels is much smaller for the other factors. If the
TABLE 9.18
L8 with Test Results
Levels for Each Suspected Factor for Each of 8 Tests

Test Number   C1   C2   C7   C11   C13   C15   C16   Test Result
1             1    1    1    1     1     1     1     10
2             1    1    1    2     2     2     2     13
3             1    2    2    1     1     2     2     15
4             1    2    2    2     2     1     1     17
5             2    1    2    1     2     1     2     14
6             2    1    2    2     1     2     1     16
7             2    2    1    1     2     2     1     19
8             2    2    1    2     1     1     2     21
Note: The C numbers (e.g., C11, C13) are factor names.
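Because the L8 is orthogonal, each level average plotted in Figure 9.17 is simply the mean of the four runs at that level. A sketch of the computation from Table 9.18:

```python
# Rows of the L8 from Table 9.18; columns are C1, C2, C7, C11, C13, C15, C16.
l8 = [
    (1, 1, 1, 1, 1, 1, 1), (1, 1, 1, 2, 2, 2, 2),
    (1, 2, 2, 1, 1, 2, 2), (1, 2, 2, 2, 2, 1, 1),
    (2, 1, 2, 1, 2, 1, 2), (2, 1, 2, 2, 1, 2, 1),
    (2, 2, 1, 1, 2, 2, 1), (2, 2, 1, 2, 1, 1, 2),
]
y = [10, 13, 15, 17, 14, 16, 19, 21]
factors = ["C1", "C2", "C7", "C11", "C13", "C15", "C16"]

averages = {}
for j, name in enumerate(factors):
    for level in (1, 2):
        vals = [y[i] for i, row in enumerate(l8) if row[j] == level]
        averages[(name, level)] = sum(vals) / len(vals)

print(averages[("C2", 1)], averages[("C2", 2)])  # 13.25 18.0
```

The large gaps for C1, C2, and C11 (and the near-equal pairs for the other factors) are exactly what the plots show.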
[Figure: seven panels, one per factor C1, C2, C7, C11, C13, C15, C16, each plotting the average response (scale roughly 13 to 18) at level 1 and level 2]
FIGURE 9.17 Plots of averages (higher responses are better).
goal of the experiment was to identify situations that minimize or maximize the
response, C1, C2 and C11 are important while the others are not.
Graphical analysis is a valid, powerful technique that is especially useful in the
following situations:
1. When computer analysis programs are not available
2. When a quick picture of the experimental results is desired
3. As a visual aid in conjunction with computer analysis
TABLE 9.19
ANOVA Table

Column   Source           df    SS       MS       F Ratio   S′       %
1        C1               1     28.125   28.125   225       28.000   33.38
2        C2               1     45.125   45.125   361       45.000   53.65
3        C7               1*    0.125    0.125
4        C11              1     10.125   10.125   81        10.000   11.92
5        C13              1*    0.125    0.125
6        C15              1*    0.125    0.125
7        C16              1*    0.125    0.125
         Error
         (pooled error)   4     0.500    0.125              0.875    1.04
         Total            7     83.875   11.982             83.875
Note: df = degrees-of-freedom; MS = mean square; SS = sum of squares.
Once the experiment has been set up correctly, the graphical analysis can be
easily used and can point the way to improvements.
ANALYSIS OF VARIANCE (ANOVA)
As was mentioned earlier, mathematical calculations and detailed discussions will
not be included in this chapter. The interested reader should consult Volume V of
this series or references listed in the Bibliography for rigorous mathematical discussions. The approach given here will focus on the interpretation of the ANOVA
analysis.
ANOVA is a matrix analysis procedure that partitions the total variation measured in a set of data. These partitions are the portions that are due to the difference
in response between the levels of each factor. The number of degrees of freedom
(df) associated with an experimental setup is also the maximum number of partitions
that can be made. Consider the L8 experiment from section 2 that was illustrated
previously in the graphical analysis section. Table 9.19, which is an ANOVA table,
summarizes the analysis.
The column number shows to what column of the orthogonal array the source
(factor) was assigned. Normally, the column number is not shown in an ANOVA
table. The df column shows the df(s) associated with the factor in the source column.
The SS column contains the sums of squares. The SS is a measure of the spread of
the data due to that factor. The total SS is the sum of the SS due to all of the sources.
The MS or mean square column shows the SS/df for each source. The MS is also
known as the variance.
The row with “error” in the source column is left blank in this experiment. If
one of the columns had not been assigned or if the experiment had been replicated,
then the unassigned dfs would have been used to estimate error. Error is the nonrepeatability of the experiment with everything held as constant as possible. The
ANOVA technique compares the variability contribution of each factor to the variability
due to error. Factors that do not demonstrate much difference in response over the
levels tested have a variability that is not much different from the error estimate.
The df and SS from these factors are pooled into the error term. Pooling is done by
adding the df and SS into the error df and SS. Pooling the insignificant factors into
the error can provide a better estimate of the error.
Initially, no estimate of error was made in the L8 example because no unassigned
columns or repetitions were present. Because of this, a true estimate of the error
could not be made. However, the purpose of the experiment was to identify the
factors that have a usable difference in response between the levels. In this experiment, the factors with relatively small MS were pooled and called “error.” Pooling
requires that the experimenter judge which differences are significant from an operational standpoint. This judgment is based on the prior knowledge of the system
being studied. In the example, factors C7, C13, C15, and C16 have much lower MS
than do the other factors and are pooled to construct an error estimate. The * next
to a df indicates that the df and SS for that factor were pooled into the error term.
The F ratio column contains the ratio of the MS for a source to the MS for the
pooled error. This ratio is used to statistically test whether the variance due to that
factor is significantly different from the error variance. As a quick rule of thumb, if
the F ratio is greater than three, the experimenter should suspect that there is a
significant difference. Dr. Taguchi does not emphasize the use of the F ratio statistical
test in his approach to DOE. A detailed description of the use of the F test can be
found in Box, Hunter, and Hunter (1978), and a practical explanation is included in
Volume V of this series.
In the determination of the SS of a factor, the non-repeatability of the experiment
is still present. The number in the S′ column is an attempt to totally remove the
SS due to error and leave the “pure” SS that is due only to the source factor. The
error MS times the df is subtracted from the SS to leave the pure SS or S′ value for
a factor. The amount that is subtracted from each non-pooled factor is then added
to the pooled error SS and the total is entered as the error S′. In this way the total
SS remains the same.
The % column contains the S′ value divided by the total SS times 100%. This
gives the percent contribution by that factor to the total variation of the data. This
information can be used directly in prioritizing the factors. In the experiment that
has been discussed, C2 makes the greatest contribution, C1 contributes less, and
C11 contributes still less. It can be argued that the graphical analysis can display
those conclusions quite well. In more complicated experiments with many factors
and factors with a large number of levels, however, the ANOVA table can display
the analysis in a more concise form and quickly lead the experimenter to the most
important factors.
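For a two-level orthogonal column, the sum of squares is (T1 − T2)²/N, where T1 and T2 are the response totals at each level and N is the total number of runs. The sketch below rebuilds the substance of Table 9.19 (SS, pooled error, F ratio, S′, and % contribution) for the L8 example; the pooling threshold of 1.0 is an arbitrary cutoff chosen here to pick out the four small-MS factors, standing in for the experimenter's judgment described above.

```python
l8 = [
    (1, 1, 1, 1, 1, 1, 1), (1, 1, 1, 2, 2, 2, 2),
    (1, 2, 2, 1, 1, 2, 2), (1, 2, 2, 2, 2, 1, 1),
    (2, 1, 2, 1, 2, 1, 2), (2, 1, 2, 2, 1, 2, 1),
    (2, 2, 1, 1, 2, 2, 1), (2, 2, 1, 2, 1, 1, 2),
]
y = [10, 13, 15, 17, 14, 16, 19, 21]
factors = ["C1", "C2", "C7", "C11", "C13", "C15", "C16"]
n = len(y)

# SS for each single-df factor: (T1 - T2)^2 / N
ss = {}
for j, name in enumerate(factors):
    t1 = sum(y[i] for i in range(n) if l8[i][j] == 1)
    t2 = sum(y[i] for i in range(n) if l8[i][j] == 2)
    ss[name] = (t1 - t2) ** 2 / n

total_ss = sum(ss.values())                       # 83.875 (saturated design)
pooled = [f for f in factors if ss[f] < 1.0]      # factors judged insignificant
error_df = len(pooled)                            # 4
error_ms = sum(ss[f] for f in pooled) / error_df  # 0.125

table = {}
for f in factors:
    if f in pooled:
        continue
    s_prime = ss[f] - error_ms                    # "pure" SS for a 1-df factor
    table[f] = (ss[f] / error_ms,                 # F ratio
                s_prime,
                100 * s_prime / total_ss)         # % contribution
```

Running this reproduces the non-pooled rows of Table 9.19: C2 dominates, C1 is next, and C11 contributes least of the three.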
ESTIMATION AT THE OPTIMUM LEVEL
The ANOVA table is used to identify important factors. The experimenter refers to
the average response at each level of the important factors to choose the best
combination of factor levels. All of the best levels can be combined to estimate the
responses at the best factor combination. Consider the case where the second level
of factor A (A2), the third level of factor B (B3), the first level of factor C (C1),
and the interaction of C1 and D1 are determined to be the best combination of
factors. An estimate of the response at these conditions can be made using the
equation:
μ̂opt = T + (A2 − T) + (B3 − T) + (C1 − T) + [(C1D1 − T) − (C1 − T) − (D1 − T)]

where T = the average response of all the data; A2 = the average of the data run at
A2; B3 = the average of the data run at B3; C1 = the average of the data run at C1;
D1 = the average of the data run at D1; and C1D1 = the average of the data run at
both C1 and D1.
Each factor that is a significant contributor appears in a manner similar to A2,
B3 and C1 above. The term in brackets [ ] addresses the optimum level of the CD
interaction and is an example of the way in which interactions are handled.
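A minimal numeric sketch of this estimate follows. All of the level averages here are hypothetical, chosen only to show the bookkeeping; in practice each one comes from the experiment.

```python
# Hypothetical level averages (not from the chapter's data).
t_bar = 50.0             # T: grand average of all responses
a2, b3, c1, d1 = 54.0, 52.0, 47.0, 49.0
c1d1 = 45.0              # average of the runs made at both C1 and D1

# Predicted response at A2, B3, C1 with the C1-D1 interaction included.
mu_opt = (t_bar
          + (a2 - t_bar)
          + (b3 - t_bar)
          + (c1 - t_bar)
          + ((c1d1 - t_bar) - (c1 - t_bar) - (d1 - t_bar)))
print(mu_opt)  # 52.0
```

Note how the bracketed interaction term subtracts the two main-effect contributions back out, so that C1's effect is not counted twice.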
CONFIDENCE INTERVAL AROUND THE ESTIMATION
A 90% confidence interval can be calculated for confirmatory tests using the equation:
μ̂opt ± √[ F(1, dfe, .05) × MSe × (1/ne + 1/nr) ]
where F1,dfe,.05 is a value from an F statistical table. The F values are based on two
different degrees of freedom and the desired confidence. In this case, the first degree
of freedom is always 1 and the second is the degree of freedom of the pooled error
(dfe). The significance level is .05 since .05 in each direction (±) sums to a 10%
risk, which corresponds to 90% confidence. MSe is the mean square of the pooled error term; nr is the number of
confirmatory tests to be run; and ne is the effective number of replications and is
calculated as follows:
ne = (Total Number of Experiments) / (Sum of the dfs of all the factors and interactions that are significant and appear in the equation, plus 1 df for the mean)
For the µ̂opt that was just considered, ne is calculated as follows:
Source   df
A        1
B        2
C        1
CD       1
Mean     1
Total    6
[Figure: three panels plotting response against factor levels 1, 2, 3 — one showing a linear-only relationship, one quadratic only, and one both linear and quadratic]
FIGURE 9.18 ANOVA decomposition of multi-level factors.

[Figure: response plotted against suppliers 1, 2, 3; supplier 2 departs from the dotted straight line through suppliers 1 and 3]
FIGURE 9.19 Factors not linear.
Consider that an L36 was run with no repetitions.
ne = 36/6 = 6.0
INTERPRETATION AND USE
The confidence interval about the estimated value is used as a check when verification
runs are made. If the average of the verification runs does not fall within the interval,
there is strong reason to believe that a very important factor may have been left out
of the experiment.
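The interval computation can be sketched numerically. Note one reconstruction caveat: the half-width formula is written here with the square root that the standard Taguchi confirmation-run interval uses. The F critical value (4.35 for F at 1 and 20 df, α = .05) and the MSe of 2.0 below are hypothetical inputs; in practice the F value comes from a table or from scipy.stats.f.ppf.

```python
import math

def ci_half_width(f_crit, ms_error, n_eff, n_conf):
    """Half-width of the 90% CI: sqrt(F * MSe * (1/ne + 1/nr))."""
    return math.sqrt(f_crit * ms_error * (1 / n_eff + 1 / n_conf))

# Continuing the L36 example: ne = 36 / (1 + 2 + 1 + 1 + 1) = 6.0.
# Suppose (hypothetically) MSe = 2.0 with dfe = 20, and 3 confirmation runs.
half_width = ci_half_width(4.35, 2.0, 6.0, 3)
print(round(half_width, 2))  # the verification average should fall within
                             # mu_opt +/- this value
```

If the average of the confirmation runs lands outside μ̂opt ± half_width, the experimenter should suspect a missing factor, per the paragraph above.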
ANOVA DECOMPOSITION OF MULTI-LEVEL FACTORS
When a factor is tested at two levels, an estimate of the linear change in response
between the two levels can be made. When a factor is tested at more than two levels,
more complex relationships must be investigated. With a three-level factor, both the
linear and quadratic relationships can be investigated. These relationships are demonstrated in Figure 9.18.
This relationship is important to consider even when the factor levels are not
continuous (e.g., different machines or suppliers). Consider the situation in
Figure 9.19. The dotted line is the linear response and indicates no significant
difference. However, Supplier 2 is different from Suppliers 1 and 3. This difference
can be found only if the quadratic relationship is considered.
The number of higher order relationships that can be investigated is determined
by the degrees of freedom of the source — see Table 9.20.
TABLE 9.20
Higher Order Relationships

Levels of a Factor   df   Relationships
2                    1    Linear
3                    2    Linear, quadratic
4                    3    Linear, quadratic, cubic
5                    4    Linear, quadratic, cubic, quartic
etc.
TABLE 9.21
Inner OA (L8) with Outer OA (L4) and Test Results

                                          L4 (on side)
                                      Z:  1    2    2    1
                                      Y:  1    2    1    2
                                      X:  1    1    2    2
L8 Test No.   A   B   C   D   E   F   G   Test Results
1             1   1   1   1   1   1   1   25   27   30   26
2             1   1   1   2   2   2   2   25   27   21   19
3             1   2   2   1   1   2   2   18   21   19   22
4             1   2   2   2   2   1   1   26   23   27   28
5             2   1   2   1   2   1   2   15   11   12   14
6             2   1   2   2   1   2   1   18   15   17   18
7             2   2   1   1   2   2   1   20   17   21   18
8             2   2   1   2   1   1   2   19   20   20   17
In the ANOVA table, the number of relationships that should be investigated is
the same as the df. The total SS for a factor is decomposed into parts with one df each.
These parts are the linear, quadratic, cubic, etc. parts of the relationship. Each part
can then be treated separately, and the parts with small MS are pooled into the error
term. The type of relationship that remains as significant can guide the experimenter
in investigating the level averages.
S/N CALCULATIONS AND INTERPRETATIONS
Control factors and noise factors were introduced in Section 3. Control factors appear
in an orthogonal array called an inner array. Noise factors that represent the uncontrolled or uncontrollable environment are entered into a separate array called an
outer array. The following example of an L8 inner (control) array with an L4 outer
(noise) array was first presented in Section 3. Actual responses and factor names
are added here — see Table 9.21 — in the development of the example.
This type of experimental setup and analysis evaluates each of the control factor
choices (L8 array factors) over the expected range of the uncontrollable environment
(L4 array factors). This assures that the optimal factor levels from the L8 array will
be robust. An S/N can be calculated for each test situation. These S/N ratios are
then used in an ANOVA to identify the situation that maximizes the S/N.

TABLE 9.22
The STB ANOVA Table

Source           df    SS       MS       F Ratio   S′       %
A                1     18.487   18.487   84.803    18.269   61.53
B                1     0.864    0.864    3.963     0.646    2.18
C                1     4.232    4.232    19.413    4.014    13.53
D                1     1.295    1.295    5.940     1.077    3.63
E                1*    0.223    0.223
F                1*    0.213    0.213
G                1     4.362    4.362    20.009    4.144    13.96
Error
(pooled error)   2     0.436    0.218              1.526    5.14
Total            7     29.676   4.239
Smaller-the-Better (STB)
The following S/N ratios are calculated for the STB situation using the equations
given in Section 4 and assuming that the optimum value is zero and that the responses
shown represent deviations from that target:
Test Number   1        2        3        4        5        6        7        8
STB S/N       −28.65   −27.32   −26.05   −28.32   −22.34   −24.63   −25.61   −25.59
The S/N ratios for testing situations are then analyzed using an ANOVA table.
The STB ANOVA table for the example is shown in Table 9.22. The ANOVA table
indicates that factors A, G, and C are the most significant contributors. Inspection
of the level averages shows that the highest S/N values (least negative), in order
of contribution, occur at A2, G2, C2, D1, B1. Estimation of the S/N at the optimal
levels can be made from the S/N level averages using the technique discussed
earlier in this section. Likewise, estimation of the raw data average response at
the optimal level can be made from the response level averages at the optimal S/N
factor levels.
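Each run's STB S/N pools that run's four noise-condition responses. A sketch that reproduces the values above from the Table 9.21 results:

```python
import math

def sn_stb(data):
    """Smaller-the-better S/N: -10 log10((1/n) * sum(x^2))."""
    return -10 * math.log10(sum(x ** 2 for x in data) / len(data))

# Responses for each L8 run across the four L4 noise conditions (Table 9.21).
results = [
    [25, 27, 30, 26], [25, 27, 21, 19], [18, 21, 19, 22], [26, 23, 27, 28],
    [15, 11, 12, 14], [18, 15, 17, 18], [20, 17, 21, 18], [19, 20, 20, 17],
]
sn = [round(sn_stb(r), 2) for r in results]
print(sn)  # [-28.65, -27.32, -26.05, -28.32, -22.34, -24.63, -25.61, -25.59]
```

Run 5, whose responses are the smallest, yields the least negative S/N, consistent with the A2 (second-level A) half of the array being preferred.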
TABLE 9.23
The LTB ANOVA Table

Source           df    SS       MS       F Ratio   S′       %
A                1     18.292   18.292   55.442    17.966   58.99
B                1     1.121    1.121    3.397     0.791    2.60
C                1     4.160    4.160    12.605    3.830    12.58
D                1     1.271    1.271    3.852     0.941    3.09
E                1*    0.396    0.396
F                1*    0.264    0.264
G                1     4.947    4.947    14.991    4.617    15.16
Error
(pooled error)   2     0.660    0.330              2.310    7.59
Total            7     30.454   4.351
Larger-the-Better (LTB)
The same data will be used to demonstrate the LTB notation. In this case, the
optimum value is infinity. Examples of this include strength or fuel economy. The
following S/N ratios are calculated using the LTB equation given in Section 4.
Test Number   1       2       3       4       5       6       7       8
LTB S/N       28.57   26.98   25.94   28.23   22.08   24.54   25.48   25.52
The S/N ratios for testing situations are then analyzed using an ANOVA table.
The LTB ANOVA table for the example is shown in Table 9.23. Inspection of the
ANOVA table and the level averages shows that the highest S/N values occur at A1,
G1, C1, D2, B2. Interpretation of the LTB analysis is similar to that of the STB
analysis.
Nominal the Best (NTB)
Analysis of the NTB experiment is a two-part process. Again, the same data will be
used to illustrate this approach. The target value will be assumed to be 24 in this case.
TABLE 9.24
The NTB ANOVA Table

Source           df    SS       MS       F Ratio   S′       %
A                1*    0.193    0.193
B                1     9.618    9.618    54.339    9.441    23.10
C                1*    0.006    0.006
D                1*    0.333    0.333
E                1     17.816   17.816   100.655   17.639   43.16
F                1     2.477    2.477    13.994    2.300    5.63
G                1     10.424   10.424   58.893    10.247   25.07
Error
(pooled error)   3     0.532    0.177              1.240    3.03
Total            7     40.867   5.838
The S/N values are analyzed. The following S/N are calculated:

Test Number   1       2       3       4       5       6       7       8
NTB S/N       21.93   15.96   20.78   21.60   17.03   21.59   20.33   22.56
The S/N ratios for testing situations are then analyzed using an ANOVA table.
The NTB ANOVA table for the example is shown in Table 9.24.
The ANOVA table and the level averages indicate that E1, G1, B2, F1 are the
optimal choices from an S/N standpoint. These are the factor choices that
should result in the minimum variance of the response.
The ANOVA analysis and level averages of the raw data are then investigated
to determine if there are other factors that have significantly different
responses at their different levels but are not significant in the S/N analysis.
These factors can be used to tune the average response to the desired value
but do not appreciably affect the variability of the response. The ANOVA
table of the raw data is shown in Table 9.25. From this ANOVA table, it
can be seen that the significant contributors to the observed variability of
the data averages are the factors A, G, C, D, and F. This can be combined
with the S/N analysis and interpreted as follows:
a. Factors that influence variability only — B, E
b. Factors that influence both variability and average response — G
c. Factors that influence the average only — A, C
d. Factors that have little or no influence on either variability or average
response — D, F
TABLE 9.25
Raw Data ANOVA Table

Source           df    SS        MS        F Ratio   S′        %
A                1     392.000   392.000   84.940    387.385   53.95
B                1*    8.000     8.000
C                1     72.000    72.000    15.601    67.385    9.39
D                1     18.000    18.000    3.900     13.385    1.86
E                1*    2.000     2.000
F                1     18.000    18.000    3.900     13.385    1.86
G                1     98.000    98.000    21.235    93.385    13.01
X                1*    0.125     0.125
Y                1*    3.125     3.125
Z                1*    0.000     0.000
Error            21    106.750   5.083
(pooled error)   26    120.000   4.615               143.075   19.93
Total            31    718.000   23.161
The results from this experiment indicate that factors B, E, and G should be set
to the levels with the highest S/N. Factor G should be set to the level with the highest
S/N rather than using it to tune the average since its relative contribution to S/N
variability is greater than its contribution to the variability of raw data. This decision
might change based on cost implications and the ability to use factors A and C to
tune the average response. Factors A and C should be investigated to determine if
they can be set to levels that will allow the target value of 24 to be attained. This
may be possible with factors that have continuous values. Factors with discrete
choices such as supplier or machine number cannot be interpolated. Factors D and
F should be set to the levels that are least expensive. A series of confirmation runs
should be made when the optimum levels have been determined. The average
response and S/N should be compared to the predicted values.
COMBINATION DESIGN
Combination design was mentioned in Section 3 as a way of assigning two two-level factors to a single three-level column. This is done by assigning three of the
four combinations of the two two-level factors to the three-level factor and not testing
the fourth combination. As an example, two two-level factors are assigned to a three-level column as in Table 9.26.
Note that the combination A1B2 is not tested. In this approach, information about
the A.B interaction is not available, and many ANOVA computer programs are not
able to break apart the effect of A and B.
The sum of squares (SS) in the ANOVA table that is due to factor A.B contains
both the SS due to factor A and the SS due to factor B. These two SSs are not
additive since the factors A and B are not orthogonal. This means:
TABLE 9.26
Combination Design

Factor A   Factor B   Three-Level Column Combined Factor (A.B)
1          1          1
2          1          2
2          2          3
SSAB ≠ SSA + SSB
The SS of A and B can be calculated separately as follows:
SSA = (TAB1 − TAB2)² / (2 × r)

SSB = (TAB2 − TAB3)² / (2 × r)
where TAB1 = the sum of all responses run at the first level of AB; TAB2 = the sum
of all responses run at the second level of AB; TAB3 = the sum of all responses run
at the third level of AB; and r = the number of data points run at each level of AB.
The MS of A and B then can be separately compared to the error MS to determine
if either or both factors are significant. The df for both A and B is 1. If one of the
factors is significant and the other is not, the ANOVA should be rerun with the
significant factor shown with a dummy treatment and the other factor excluded from
the analysis.
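The decomposition is a two-line computation. The sketch below uses the level sums from the L9 example that follows (34, 85, 77, with r = 6 data points per level of A.B):

```python
def combination_ss(t_ab1, t_ab2, t_ab3, r):
    """Decompose the combined column's SS: A is tested by levels 1 vs. 2,
    B by levels 2 vs. 3 (the untested A1B2 cell makes A and B non-orthogonal)."""
    ssa = (t_ab1 - t_ab2) ** 2 / (2 * r)
    ssb = (t_ab2 - t_ab3) ** 2 / (2 * r)
    return ssa, ssb

ssa, ssb = combination_ss(34, 85, 77, 6)
print(ssa, ssb)  # 216.75 and about 5.33
```

Note that ssa + ssb does not equal the combined column's SS of 250.778, which is the non-additivity the text warns about.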
EXAMPLE
The following factors will be evaluated using an L9 orthogonal array:
Factor   Number of Levels
A        2
B        2
C        3
D        3
E        3
A and B will be combined into a single three-level column. The test array and results
are shown in Table 9.27.
The sum of the data at each level of AB is: for AB = 1, the sum is 17 + 9 + 8 =
34; for AB = 2, the sum is 40 + 28 + 17 = 85; for AB = 3, the sum is 28 + 22 + 27 = 77.
TABLE 9.27
L9 OA with Test Results

Test   A   B   A.B   C   D   E   Test Results   Sum of the Test Results
1      1   1   1     1   1   1   7, 10          17
2      1   1   1     2   2   2   3, 6           9
3      1   1   1     3   3   3   5, 3           8
4      2   1   2     1   2   3   22, 18         40
5      2   1   2     2   3   1   13, 15         28
6      2   1   2     3   1   2   9, 8           17
7      2   2   3     1   3   2   12, 16         28
8      2   2   3     2   1   3   12, 10         22
9      2   2   3     3   2   1   15, 12         27
TABLE 9.28
ANOVA Table

Source           df    SS          MS        F Ratio   S′        %
A.B              2     250.778     125.389   31.347    242.778   53.50
(A)              1     (216.750)   216.750   54.188
(B)              1     (5.333)     5.333     1.333
C                2     100.778     50.389    12.597    92.778    20.45
D                2     33.778      16.889    4.222     25.778    5.68
E                2     32.444      16.222    4.056     24.444    5.39
Error            9     36.000      4.000
(pooled error)   9     36.000      4.000               68.000    14.99
Total            17    453.778     26.693
SSA = (34 − 85)² / (2 × 6) = 216.75

SSB = (85 − 77)² / (2 × 6) = 5.33
The ANOVA table is for the data shown — see Table 9.28. The decomposed SS for
A and B are shown in parentheses and are not added into the total SS.
TABLE 9.29
Second Run of ANOVA

Source           df    SS        MS        F Ratio   S′        %
A                1     245.444   245.444   59.386    241.311   53.18
C                2     100.778   50.389    12.192    92.512    20.39
D                2     33.778    16.889    4.086     25.512    5.62
E                2     32.444    16.222    3.925     24.178    5.33
Error            10    41.334    4.133
(pooled error)   10    41.334    4.133               70.265    15.48
Total            17    453.778   26.693
The F ratio for factor B indicates that the effect of the change in factor B on the
response is insignificant. Factor B is excluded from the analysis and factor A is
analyzed with a dummy treatment. The ANOVA table for this analysis is shown in
Table 9.29. The analysis continues using the techniques described in this section.
MISCELLANEOUS THOUGHTS
The purpose of most DOEs is to predict what the response will be at the optimum
condition. Confirmatory tests should be run to assure the experimenter that the
projected results are valid. Sometimes, the results of the confirmatory tests are
significantly different from the projected results. This can be due to one or more of
the following:
• There was an error in the basic assumptions made in setting up the
experiment.
• Not all of the important factors were controlled in the experiment.
• The factors interacted in a manner that was not accounted for.
• The response that was measured was not the proper response or was only
a symptom of something more basic (see Section 2).
• An important noise factor was not included in the experiment (e.g., the
experimental tests were run on sunny days while the confirmatory tests
were run on a rainy day).
• The experimental test equipment is not capable of providing consistent,
repeatable test results.
• A mistake was made in setting up one or more of the experimental tests.
The experimenter who is faced with data that does not support the prediction is
forced to ask which of these problems affected the results. It is important that all
of these problems be considered and investigated, if appropriate. If two or more of
these problems coexisted, correcting only one problem may not improve the experimental results.
Even though it may seem that the experiment was a failure, that is not necessarily
true. Experimentation should be considered an organized approach to uncovering a
SL3151Ch09Frame Page 419 Thursday, September 12, 2002 6:05 PM
Design of Experiments
419
TABLE 9.30
L8 with Test Results and S/N Values

Outer (noise) array over the four test-result columns:
Z:  1  2  2  1
Y:  1  2  1  2
X:  1  1  2  2

L8 Test   A  B  C  D  E  F  G
No.       1  2  3  4  5  6  7    Test Results       s      –20 log(s)
1         1  1  1  1  1  1  1    25  27  30  26     2.16     –6.690
2         1  1  1  2  2  2  2    25  27  21  19     3.65    –11.249
3         1  2  2  1  1  2  2    18  21  19  22     1.83     –5.229
4         1  2  2  2  2  1  1    26  23  27  28     2.16     –6.690
5         2  1  2  1  2  1  2    15  11  12  14     1.83     –5.229
6         2  1  2  2  1  2  1    18  15  17  18     1.41     –3.010
7         2  2  1  1  2  2  1    20  17  21  18     1.83     –5.229
8         2  2  1  2  1  1  2    19  20  20  17     1.41     –3.010
working knowledge about a situation. The “failed” experiment does provide new
knowledge about the situation that should be used in setting up the next iteration of
experimental testing.
The prior statement may sound too idealistic for the “real” world where deadlines
are very important. A failed experiment may cause some people to doubt the usefulness of the DOE approach and extol the virtues of traditional one-factor-at-a-time
testing. However, all of the problems listed above that could cause a DOE to fail
will also cause a one-factor-at-a-time experiment to fail. In DOE, the problem will
be found fairly early since relatively few tests are run. In one-factor-at-a-time testing,
the problem may not surface until many tests have been run, or the problem may
not even be identified in the testing program. In this case, the problem may not show
up until production or field use.
The importance of meeting real-world deadlines makes the planning stage of
the experiment critical. Proper planning, including consideration of the experience
and knowledge of experts, will enable the experimenter to avoid many of the possible
problems. Deadlines are never a good excuse for not taking the time to adequately
plan an experiment.
AN EXAMPLE
The data used to demonstrate the S/N calculations in this section will be analyzed
here using the approach NTBII S/N = –10 log(s^2) = –20 log(s). This approach was
discussed earlier in this chapter. The data set is repeated in Table 9.30.
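As a sketch, the per-run standard deviation and NTBII S/N can be computed directly from the four repeated test results of each L8 run (the values and the grouping of results into runs follow Table 9.30 as printed):

```python
import math

# NTBII signal-to-noise: S/N = -10 log10(s^2) = -20 log10(s),
# applied to the four repeated results of each of the eight runs.
runs = [
    [25, 27, 30, 26],  # run 1
    [25, 27, 21, 19],  # run 2
    [18, 21, 19, 22],  # run 3
    [26, 23, 27, 28],  # run 4
    [15, 11, 12, 14],  # run 5
    [18, 15, 17, 18],  # run 6
    [20, 17, 21, 18],  # run 7
    [19, 20, 20, 17],  # run 8
]

def ntbii_sn(values):
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / (len(values) - 1)  # sample variance
    return -20 * math.log10(math.sqrt(var))

sn = [ntbii_sn(r) for r in runs]
```

These eight S/N values match the –20 log(s) column of Table 9.30 (run 1 gives about –6.690, run 2 about –11.249, and so on).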
The S/N ratios for the testing situations are then analyzed using an ANOVA table.
The NTBII ANOVA table for the example is shown in Table 9.31. To help interpret
the ANOVA table, the level standard deviation averages and the level S/N averages
are shown for the significant factors in Table 9.32.
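The level S/N averages in Table 9.32 are simple averages of the run S/N values over the runs at each factor level. A minimal sketch for factor A (L8 column 1: runs 1 through 4 at level 1, runs 5 through 8 at level 2):

```python
# Level-average S/N for one factor column of the L8, reproducing
# the Table 9.32 entries for factor A.
sn = [-6.690, -11.249, -5.229, -6.690, -5.229, -3.010, -5.229, -3.010]
a_col = [1, 1, 1, 1, 2, 2, 2, 2]   # factor A assignment, runs 1-8

def level_avg(col, level, values):
    picked = [v for c, v in zip(col, values) if c == level]
    return sum(picked) / len(picked)

a1 = level_avg(a_col, 1, sn)   # about -7.465
a2 = level_avg(a_col, 2, sn)   # about -4.120
```

The same helper applied to the B, C, and E columns reproduces the remaining S/N entries of Table 9.32.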
To give a visual impact of the spread of the data and what the above table really
means, it would be wise to plot the data for each factor level. The plots of the average
standard deviation by factor level are shown in Figure 9.20.
TABLE 9.31
ANOVA Table for Data from Table 9.30

Source           df    SS       MS       F Ratio   S'       %
A                 1    22.379   22.379   24.746    21.474   44.90
B                 1     4.531    4.531    5.010     3.627    7.58
C                 1     4.531    4.531    5.010     3.627    7.58
D                 1*    0.313    0.313
E                 1    13.670   13.670   15.117    12.766   26.69
F                 1*    1.200    1.200
G                 1*    1.200    1.200
(pooled error)    3     2.713    0.904              6.330   13.24
Total             7    47.823    6.832

* Pooled into the error term.
TABLE 9.32
Significant Figures from Table 9.31

Factor   Level   Average Standard Deviation   NTBII S/N
A        1       2.36                         –7.465
A        2       1.61                         –4.120
B        1       2.12                         –6.545
B        2       1.79                         –5.039
C        1       2.12                         –6.545
C        2       1.79                         –5.039
E        1       1.67                         –4.485
E        2       2.26                         –7.099
The ANOVA table and the level average standard deviations indicate that
A2B2C2E1 are the optimal choices from an NTBII S/N standpoint. The analysis of
the raw data remains the same as shown in the chapter. The average level of the
response should be targeted using the results of the raw data analysis. This is true
regardless of whether the goal is as small as possible, as large as possible, or to meet
a specific value. The variance should be minimized by maximizing the NTBII S/N.
The experimenter must make the trade-off between the choice of factor levels that
adjust the response average and the choice of factor levels that minimize the variance
of the response.
A comparison of the results of the two methods shows clear differences. As an
example, for the situation where a specific value is targeted (NTB), the factor level
choices are:
• NTB — B2E1G1 to minimize variability; A and C set to achieve the target.
• NTBII — B2E1 to minimize variability; G set to achieve the target. If the target
is attainable using factor G, use A2C2 to minimize variability; otherwise, set C
and/or A to achieve the target.
FIGURE 9.20 Plots of the average standard deviation by factor level (factors A, B, C, and E, levels 1 and 2; vertical axis: standard deviation).
There is no complete agreement among statisticians and DOE practitioners as
to which approach gives better results. As a general rule, the reader is encouraged to:
1. Plot the data including raw and/or transformed values, level averages and
standard deviations, and any other information that seems appropriate.
One picture is worth a thousand words.
2. Analyze the data using the appropriate analysis techniques.
3. Compare the results to the data plots in order to determine which set of
results makes the most sense. Perform this comparison fairly and resist
the temptation to choose the results solely on whether they support convenient conclusions.
4. Run confirmation tests.
DOE is a powerful tool that can help the experimenter get the most out of scarce
testing resources. However, as with any powerful tool, care must be taken to understand how to use the tool and how to interpret the results.
ANALYSIS OF CLASSIFIED DATA
The purpose of this section is to:
1. Discuss the classified attribute analysis and classified variable analysis
approaches to analyzing classified responses.
2. Present examples of how these techniques are used.
TABLE 9.33
Observed Versus Cumulative Frequency

Class       Observed Frequency   Cumulative Frequency
Class I     2                    2
Class II    1                    3
Class III   1                    4
CLASSIFIED RESPONSES
Some experimental responses cannot be measured on a continuous scale although
they can be divided into sequential classes. Examples include appearance and performance ratings. In these situations, three to five rating classes are generally the
optimum number because this number allows major differences in the responses to
be identified and yet does not require the rater to identify differences that are too
subtle. Two related techniques are used to analyze classified responses:
1. Classified attribute analysis is used when the total number of items rated
is the same for every test matrix setup.
2. Classified variable analysis is used when the total number of items rated
is not the same for every test matrix setup.
Three to five responses at each experimental setup are recommended to give a
good evaluation of the class distribution of responses at that setup. As with continuous measurements, more responses at each setup allow smaller differences to be
identified.
CLASSIFIED ATTRIBUTE ANALYSIS
This technique converts the observed frequency in each class into a cumulative
frequency for the classes. As an example, if there are three classes, the observed
and cumulative frequencies might be as shown in Table 9.33.
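The conversion is simply a running sum across the ordered classes; a one-line sketch using the Table 9.33 values:

```python
from itertools import accumulate

# Convert observed class frequencies (Classes I, II, III) into
# cumulative frequencies, as in Table 9.33.
observed = [2, 1, 1]
cumulative = list(accumulate(observed))   # [2, 3, 4]
```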
It is assumed that the user will use a computer program to analyze the classified
data. The specific input format will depend on the computer program used. The
mathematical derivations and philosophies of this approach will not be presented
here. For more information see Volume V of this series as well as Taguchi (1987)
and Wu and Moore (1985).
EXAMPLE
Three grades are used to evaluate paint appearance of a product. They are “Bad,”
“OK,” and “Good.” Seven factors (A through G), each at two levels, are evaluated
to determine the combination of factor levels that optimizes paint appearance. Five
products are evaluated at each testing situation in an L8 orthogonal array. Test results
are shown in Table 9.34.
TABLE 9.34
Attribute Test Setup and Results

                              Frequency in Each Grade
A   B   C   D   E   F   G     Bad   OK   Good
1   1   1   1   1   1   1      2     3    0
1   1   1   2   2   2   2      3     2    0
1   2   2   1   1   2   2      4     1    0
1   2   2   2   2   1   1      0     2    3
2   1   2   1   2   1   2      0     4    1
2   1   2   2   1   2   1      1     3    1
2   2   1   1   2   2   1      0     3    2
2   2   1   2   1   1   2      0     1    4
TABLE 9.35
ANOVA Table (for Cumulative Frequency)

Source           df    SS       MS      F Ratio   S'       %
A                 2    11.668   5.834   7.820     10.179   12.72
B                 2     6.678   3.339   4.476      5.186    6.48
C                 2*    0.125   0.063
D                 2*    3.668   1.834
E                 2*    2.259   1.130
F                 2     7.935   3.968   5.319      6.443    8.05
G                 2*    2.259   1.130
Error            64    45.409   0.710
(pooled error)   72    53.720   0.746             58.196   72.75
Total            78    80.000   1.026

* Pooled into the error term.
The ANOVA analysis for this set of data is shown in Table 9.35. Note that the
degrees of freedom are calculated differently from the non-classified situation. The
df of each source is:

df = (the number of levels of that factor − 1) * (the number of classes − 1)

In this example, the number of levels of each factor is two and the number of classes
is three. For each factor,

df = (2 − 1) * (3 − 1) = 2

The total df = (the total number of rated items − 1) * (the number of classes − 1).
Thus, the total df for this example is:
TABLE 9.36
The Effect of the Significant Factors

        Observed Frequency   % Rate of Occurrence (R.O.)   Cumulative Frequency   Cumulative % R.O.
        Bad   OK   Good      Bad   OK   Good               Bad   OK   Good        Bad   OK   Good
A1       9     8    3         45    40   15                 9    17    20          45    85   100
A2       1    11    8          5    55   40                 1    12    20           5    60   100
B1       6    12    2         30    60   10                 6    18    20          30    90   100
B2       4     7    9         20    35   45                 4    11    20          20    55   100
F1       2    10    8         10    50   40                 2    12    20          10    60   100
F2       8     9    3         40    45   15                 8    17    20          40    85   100
Total   10    19   11                                                              25    73   100
FIGURE 9.21 Factor effects (cumulative rate of occurrence, %, by factor level A-1 through F-2; one curve per grade: Bad, OK, Good).
df = (40 − 1) * (3 − 1) = 78
The error df is the total df minus the df of each of the factors.
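As a quick check, the df bookkeeping for this example (seven two-level factors, three rating classes, forty rated items) can be sketched as:

```python
# Degrees-of-freedom bookkeeping for classified attribute analysis.
n_levels, n_classes, n_items, n_factors = 2, 3, 40, 7

factor_df = (n_levels - 1) * (n_classes - 1)     # 2 per factor
total_df = (n_items - 1) * (n_classes - 1)       # 78
error_df = total_df - n_factors * factor_df      # 64, as in Table 9.35
```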
From the ANOVA table, factors A, B, and F are identified as significant. The
effects of these factors are shown in Table 9.36 and Figure 9.21.
Although the interpretation and use of the ANOVA table in classified attribute
analysis are the same as for the non-classified situation, a significant difference does
FIGURE 9.22 Factor effects (cumulative rate of occurrence, %, by factor level A-1 through C-3; one curve per class: Class 1, Class 2, Class 3).
exist in estimating the cumulative rate of occurrence for each class under the optimum condition.
Percentages near 0% or 100% are not additive. The cumulative rate of occurrence
can be transformed using the omega method to obtain values that are additive. In
the omega method, the cumulative percentage (p) is transformed to a new value (Ω)
as follows:
Ω = −10 log10 (1/p − 1)    [the units of Ω are decibels (db).]
Using this transformation, the estimated cumulative rate of occurrence for each
class at the optimum condition (A2B2F1) is calculated as follows:
db of µ̂ = db of T + (db of A2 − db of T) + (db of B2 − db of T) + (db of F1 − db of T)
The estimated cumulative rate of occurrence for each class for the optimum
condition is:
Class 1

db of µ̂ = db of .25 + (db of .05 − db of .25) + (db of .20 − db of .25)
          + (db of .10 − db of .25)
        = −4.77 + (−12.79 + 4.77) + (−6.02 + 4.77) + (−9.54 + 4.77)
        = −18.81

µ̂ = 1%
TABLE 9.37
Rate of Occurrence at the Optimum Settings

Class   Cumulative Rate of Occurrence   Rate of Occurrence
Bad     1%                              1%
OK      27%                             26%
Good    100%                            73%
Class 2

db of µ̂ = db of .73 + (db of .60 − db of .73) + (db of .55 − db of .73)
          + (db of .60 − db of .73)
        = −4.25

µ̂ = 27%
These results are summarized in Table 9.37.
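The omega transform and its inverse are easy to express directly. The following sketch (the helper names are ours) reproduces the Class 1 estimate at the optimum condition A2B2F1, using the cumulative rates from Table 9.36:

```python
import math

def db(p):
    """Omega transform: cumulative rate p (0 < p < 1) to decibels."""
    return -10 * math.log10(1 / p - 1)

def inv_db(omega):
    """Inverse omega transform: decibels back to a cumulative rate."""
    return 1 / (1 + 10 ** (-omega / 10))

# Class 1 ("Bad") cumulative rates: overall T = 25%, A2 = 5%, B2 = 20%, F1 = 10%.
t = db(0.25)
mu_db = t + (db(0.05) - t) + (db(0.20) - t) + (db(0.10) - t)   # about -18.81
mu = inv_db(mu_db)                                             # about 0.013, i.e., 1%
```

Substituting the Class 2 rates (.73, .60, .55, .60) into the same expression gives about −4.25 db, or roughly 27%, matching the calculation above.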
CLASSIFIED VARIABLE ANALYSIS
Classified variable analysis is used when the number of items evaluated is not the
same for all test matrix setups. As with classified attribute analysis, the computer
analyzes the cumulative frequencies.
EXAMPLE
Four factors (A, B, C and D) are suspected of influencing door closing efforts for a
particular car model. An experiment was set up that evaluated each of these factors
at three levels. An L9 orthogonal array was used to evaluate the factor levels. Door
closing effort ratings were made by a group of typical customers. Each customer
was asked to evaluate the doors on a scale of one to three as follows:
Class   Description of Effort
1       Unacceptable
2       Barely acceptable
3       Very good feel
The experimental setup and test results are shown in Table 9.38 and Figure 9.22.
The ANOVA analysis for this set of data is shown in Table 9.39.
From the ANOVA table, factors A, B and C are identified as significant. The
effects of these factors are shown in Table 9.40.
TABLE 9.38
Door Closing Effort: Test Setup and Results

              Number       Ratings        Class % Rate        Class Cumulative
A  B  C  D    of Ratings   by Class       of Occurrence       Frequency (%)
                           1   2   3      1    2     3        1    2    3
1  1  1  1    5            1   3   1      20   60    20       20   80   100
1  2  2  2    4            2   1   1      50   25    25       50   75   100
1  3  3  3    5            2   3   0      40   60     0       40  100   100
2  1  2  3    4            0   0   4       0    0   100        0    0   100
2  2  3  1    4            0   1   3       0   25    75        0   25   100
2  3  1  2    4            0   1   3       0   25    75        0   25   100
3  1  3  2    5            3   2   0      60   40     0       60  100   100
3  2  1  3    5            4   1   0      80   20     0       80  100   100
3  3  2  1    4            3   1   0      75   25     0       75  100   100
TABLE 9.39
ANOVA Table for Door Closing Effort

Source            df      SS         MS        F Ratio   S'        %
A                    4     871.296   217.824   447.277   869.348   48.30
B                    4      34.404     8.601    17.661    32.456    1.80
C                    4      25.182     6.296    12.928    23.234    1.29
D                    4*      4.827     1.207
Error             1782     864.291     0.485
(pooled error)    1786     869.118     0.487              874.962   48.61
Total             1798    1800.000     1.001

* Pooled into the error term.
The choice of the optimum levels is clear for factors A and B. A2 and B1 are the
best choices. Two different choices are possible for factor C, depending on the overall
goal of the design. If the goal is to minimize the occurrence of unacceptable efforts,
C1 is the best choice. If the goal is to maximize the number of customer ratings of
“very good,” then C2 is the best choice. For this example, C1 will be chosen as the
preferred factor setting. The estimated rate of occurrence for each class for the optimum
setting, A2B1C1, can be calculated using the omega method. The estimated rates are
shown in Table 9.41. The df for the factors are calculated in the same way as with the
Classified Attribute Analysis, i.e., df = (the number of levels of that factor – 1) * (the
number of classes – 1).
In Classified Variable Analysis, the total number of items evaluated at each condition
is not equal. To “normalize” these sample sizes, percentages are analyzed and the
“sample size” for each test setup becomes 100 (for 100%). The total df is (the number
of “sample sizes” – 1) * (the number of classes – 1). For this example, the total df is:
TABLE 9.40
The Effects of the Door Closing Effort

Factor     % Rate of Occurrence           Cumulative % Rate of Occurrence
& Level    Class 1   Class 2   Class 3    Class 1   Class 2   Class 3
A1         36.7      48.3      15.0       36.7       85.0     100
A2          0        16.7      83.3        0         16.7     100
A3         71.7      28.3       0         71.7      100.0     100
B1         26.7      33.3      40.0       26.7       60.0     100
B2         43.3      23.3      33.3       43.3       66.6     100
B3         38.3      36.7      25.0       38.3       75.0     100
C1         33.3      35.0      31.7       33.3       68.3     100
C2         41.7      16.7      41.7       41.7       58.4     100
C3         33.3      41.7      25.0       33.3       75.0     100
Total      36.1      31.1      32.8       36.1       67.2     100
TABLE 9.41
Rate of Occurrence at the Optimum Settings

Class                   Cumulative Rate of Occurrence   Rate of Occurrence
1 (unacceptable)        0%                              0%
2 (barely acceptable)   13.4%                           13.4%
3 (very good feel)      100%                            86.6%

df = (900 − 1) * (3 − 1) = 1798
The error df is the total df minus the df of each of the factors.
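The same bookkeeping, sketched with this example's numbers (nine setups normalized to a "sample size" of 100 each, three classes, four three-level factors):

```python
# Degrees-of-freedom bookkeeping for classified variable analysis,
# where each test setup is normalized to a sample size of 100.
n_setups, n_classes, n_levels, n_factors = 9, 3, 3, 4

total_df = (n_setups * 100 - 1) * (n_classes - 1)   # (900 - 1) * 2 = 1798
factor_df = (n_levels - 1) * (n_classes - 1)        # 4 per factor
error_df = total_df - n_factors * factor_df         # 1782, as in Table 9.39
```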
DISCUSSION OF THE DEGREES OF FREEDOM
In both classified attribute analysis and classified variable analysis, the total degrees
of freedom are much greater than the number of items evaluated. The interpretation
of the F ratios and the calculation of a confidence interval are complicated by the
large number of degrees of freedom and will not be addressed here. The analysis
techniques for classified responses are not as completely developed as are the
techniques for the analysis of continuous data. In Dr. Taguchi’s approach, the emphasis is on using the percent contribution to prioritize alternative choices. Although
better statistical techniques may be developed to handle classified data, classified
attribute and classified variable analyses can be used to identify the large contributors
to variation in classified responses.
MISCELLANEOUS THOUGHTS
As we just mentioned in the discussion of the degrees of freedom, there is no
consensus among statisticians regarding the best method to use to analyze classified
data. A method that is an alternate to the ones described in this section is to transform
the