Slide 1 - Great Factory Great Management

What is Six Sigma?
It is a business process that allows companies to
drastically improve their bottom line by designing and
monitoring everyday business activities in ways that
minimize waste and resources while increasing customer
satisfaction.
Mikel Harry, Richard Schroeder
What Six Sigma Can Do For Your Company?
[Chart residue removed: plot of sigma level (2 to 6) versus years of implementation, contrasting a company following the MAIC roadmap with the average company, which stays near 4 sigma.]
What Six Sigma Can Do For Your Company?
THE COST OF QUALITY

SIGMA LEVEL   DEFECTS PER MILLION OPPORTUNITIES    COST OF QUALITY
2             308,537 (Noncompetitive companies)   Not applicable
3             66,807                               25-40% of sales
4             6,210 (Industry average)             15-25% of sales
5             233                                  5-15% of sales
6             3.4 (World class)                    < 1% of sales

Each sigma shift provides a 10 percent net income improvement
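The DPMO column follows from the sigma level under the usual Six Sigma convention of a 1.5-sigma long-term mean shift. A minimal Python sketch (not part of the original material) that reproduces the table values:

```python
from scipy.stats import norm

def dpmo(sigma_level, shift=1.5):
    # Long-term defect rate for a one-sided spec at `sigma_level`,
    # assuming the conventional 1.5-sigma mean shift used in Six Sigma tables.
    return norm.sf(sigma_level - shift) * 1_000_000

for s in (2, 3, 4, 5, 6):
    print(s, round(dpmo(s), 1))
# Approximately: 2 -> 308,538   3 -> 66,807   4 -> 6,210   5 -> 233   6 -> 3.4
```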
The Cost of Quality (COQ)

Traditional Cost of Poor Quality (COQ): 5-8% of sales
• Inspection
• Warranty
• Rejects
• Rework

Less Obvious Cost of Quality (COQ): 15-20% of sales
• Lost sales
• Paperwork sent to the wrong place
• Long production lead times
• Time value of money
• Long set-up times
• Expediting costs
• Lost opportunity
• Customer dissatisfaction
• Incorrect data
• A different way of doing business
• Freight charges
• Telephone charges
• Late deliveries
• Safety
• Installation
• Excessive purchase orders
• Ordering more raw material than necessary

Note: % of sales
DMAIC : The Yellow Brick Road
CORE PHASE - Breakthrough & People

DEFINE (Champion) - Definition of Opportunity
Steps:
1. Project Definition
2. Determine Impact & Priority
3. Collect Baseline Metric Data
4. Savings/Cost Assessment
5. Establish Planned Timeline
6. Search Library
7. Identify Project Authority
Deliverables:
1. Problem Statement
2. Goals/Objectives
3. Projected Business Benefits
4. Financial Value
5. Key Metrics
6. Team Assignment
Tollgate: P1 (not validated)

MEASURE (Black Belt) - Assess the Current Process
Steps:
1. Map the Process
2. Determine the Baseline
3. Prioritize the Inputs to Assess
4. Assess the Measurement System
5. Capability Assessment
6. Short Term
7. Long Term
8. Determine Entitlement
9. Process Improvement
10. Financial Savings
Tools:
1. Macro / Micro Process Charts
2. Rolling Throughput Yield
3. Fishbone, Cause & Effect Matrix
4. GR&R Study
5. Establish Sigma Score
6. Apply 'Shift & Drift'
7. Baseline vs Entitlement
8. Translate to $$$
Tollgate: P1 (validated)

ANALYZE (Black Belt) - Confirm f(x) for Y
Steps:
1. Determine the Vital Variables Affecting the Response, f(x) = Y
2. Confirm Relationships and Establish the KPIVs
Tools:
1. Multi-Vari Studies
2. Correlation Analysis
3. Regression Analysis (Single / Multiple)
4. Hypothesis Testing (Mean Testing (t, Z), Variation (Std Dev) (F, etc.), ANOVA)
Tollgate: P5 Reviewed

IMPROVE (Black Belt) - Optimize f(x) for Y
Steps:
1. Determine the Best Combination of 'Xs' for Producing the Best 'Y'
Tools:
1. Design of Experiments (Full Factorial, Fractional Factorial, Blocking, Custom Methods, RSM)
Tollgate: P5 Reviewed

CONTROL (Black Belt) - Maintain Improvements
Steps:
1. Establish Controls for KPIVs and their 'settings'
2. Establish Reaction Plans
Tools:
1. Process Control Plan
2. SPC Charting (x-Bar & R, Pre-Control, etc.)
3. Gauge Control Plans
Tollgate: P5 Reviewed

REALIZATION (Finance Rep & Process Owner) - Sustain the Benefit
Steps:
1. Financial Assessment and Input Actual Savings
2. Functional Manager/Process Owner - Monitor
3. Control/Implementation
Tools:
1. Monthly Benefit Update
Tollgate: P8 (Sign Off)
Define

What is my biggest problem?
• Customer complaints
• Low performance metrics
• Too much time consumed

What needs to improve?
• Big budget items
• Poor performance

Where are there opportunities to improve?
• How do I affect corporate and business group objectives?
• What's in my budget?
Define : The Project
• Projects DIRECTLY tie to department and/or business unit objectives
• Projects are suitable in scope
• BBs are "fit" to the project
• Champions own and support project selection
Define : The Defect
• High Defect Rates
• Rework
• Low Yields
• Customer Complaints
• Excessive Cycle Time
• Excessive Test and Inspection
• Excessive Machine Down Time
• Constrained Capacity with High Anticipated Capital Expenditures
• High Maintenance Costs
• High Consumables Usage
• Bottlenecks
Define : The Chronic Problem
[Chart: Reject Rate over Time, showing occasional Special Cause spikes (problems that occur only once in a while) above a chronic, embedded problem level that sits well above the Optimum Level.]
Define : The Persistent Problem
[Chart residue removed: two trend charts of weekly data plotted by work week (WW01-WW12), illustrating a persistent problem.]

Is process in control?
[Chart residue removed: the same weekly data (WW01-WW12) plotted as control charts to ask whether the process is in statistical control.]
Define : Refine The Defect
[Pareto chart of Assembly Yield Loss (% Yield Loss, the KPOV) by defect category: a1 PSA, a2 RSA, a3 Gram Load, a4 Bent Gimbal, a5 Solder Defect, a6 Contam, a7 Damper Defect. Refined Defect = a1.]
MAIC --> Identify Leveraged KPIV's
[Funnel diagram: the list of input variables is narrowed phase by phase.]

Phase     Input Variables                                        Tools
Measure   30-50 Potential Key Process Input Variables            Process Map, C&E Matrix and FMEA, Gage R&R, Capability
Analyze   10-15 Input Variables (KPIVs)                          Multi-Vari Studies, Correlations, T-Test, ANOM, ANOVA
Improve   8-10 KPIVs, narrowed to 4-8 Optimized KPIVs            Screening DOE's, DOE's, RSM
Control   3-6 Key Leverage KPIVs                                 Quality Systems, SPC, Control Plans
Measure
The Measure phase serves to validate the problem, translate the practical problem into a statistical problem, and begin the search for root causes.
Measure : Tools
To validate the problem
• Measurement System Analysis
To translate the practical problem into a statistical problem
• Process Capability Analysis
To search for the root cause
• Process Map
• Cause and Effect Analysis
• Failure Mode and Effect Analysis
Workshop #1:
• Our product is the distance achieved by each catapult shot.
• Product specs are +/- 4 cm for both the X and Y axes.
• Shoot the ball for at least 30 trials, then collect the yield.
• Prepare to report your results.
Measure : Measurement System Analysis
Objectives:
• Validate the Measurement / Inspection System
• Quantify the effect of the Measurement System variability on the process variability
Measure : Measurement System Analysis
Attribute GR&R : Purpose
• To determine if inspectors across all shifts, machines, lines, etc. use the same criteria to discriminate "good" from "bad"
• To quantify the ability of inspectors or gages to accurately repeat their inspection decisions
• To identify how well inspectors/gages conform to a known master (possibly defined by the customer), which includes:
  - How often operators decide to over reject
  - How often operators decide to over accept
Measure : Measurement System Analysis
% Appraiser Score
• % REPEATABILITY OF OPERATOR # 1 = 16/20 = 80%
• % REPEATABILITY OF OPERATOR # 2 = 13/20 = 65%
• % REPEATABILITY OF OPERATOR # 3 = 20/20 = 100%
% Attribute Score
• % UNBIAS OF OPERATOR # 1 = 12/20 = 60%
• % UNBIAS OF OPERATOR # 2 = 12/20 = 60%
• % UNBIAS OF OPERATOR # 3 = 17/20 = 85%
% Screen Effective Score
• % REPEATABILITY OF INSPECTION = 11/20 = 55%
% Attribute Screen Effective Score
• % UNBIAS OF INSPECTION = 10/20 = 50%
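A hedged sketch of how scores like these can be computed from raw inspection records, assuming each operator judges the same 20 parts twice against a known master; the scoring convention, function names and sample data below are illustrative, not taken from the course material:

```python
# Hedged sketch (not from the course material): attribute GR&R scores from
# two repeated inspections per operator plus a known master reference.

def pct(matches, total):
    return 100.0 * matches / total

def appraiser_scores(trial1, trial2, master):
    """trial1/trial2: one operator's accept/reject calls on the same parts; master: reference calls."""
    n = len(master)
    repeat = sum(a == b for a, b in zip(trial1, trial2))                   # consistent with self
    unbias = sum(a == b == m for a, b, m in zip(trial1, trial2, master))   # consistent and correct
    return pct(repeat, n), pct(unbias, n)

# Example with made-up calls for 20 parts (1 = accept, 0 = reject):
master = [1] * 10 + [0] * 10
op1_t1 = master[:]                       # hypothetical first-pass calls
op1_t2 = master[:15] + [1, 0, 0, 0, 0]   # one disagreement on the second pass
print(appraiser_scores(op1_t1, op1_t2, master))   # (95.0, 95.0)
```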
Measure : Measurement System Analysis
Variable GR&R : Purpose
• A study of your measurement system will reveal the relative amount of variation in your data that results from measurement system error.
• It is also a great tool for comparing two or more measurement devices or two or more operators.
• MSA should be used as part of the criteria for accepting a new piece of measurement equipment to manufacturing.
• It should be the basis for evaluating a measurement system which is suspected of being deficient.
Measure : Measurement System Analysis

Observed Variation
• Actual Variation
  - Long-Term Process Variation
  - Short-Term Process Variation
  - Variation Within Sample
• Measurement Variation
  - Variation due to Gage
  - Precision (Repeatability, Reproducibility)
  - Accuracy (Stability, Linearity)
Measure : Measurement System Analysis
• Resolution?
• "Precision" (R&R)?
• Bias?
• Calibration?
• Linearity?
• Stability?
Measurement System Metrics
Measurement System Variance:

σ²meas = σ²repeatability + σ²reproducibility

To determine whether the measurement system is "good" or "bad" for a certain application, you need to compare the measurement variation to the product spec or the process variation.
• Comparing σ²meas with Tolerance:
  - Precision-to-Tolerance Ratio (P/T)
• Comparing σ²meas with Total Observed Process Variation (P/TV):
  - % Repeatability and Reproducibility (%R&R)
  - Discrimination Index
Uses of P/T and P/TV (%R&R)
• The P/T ratio is the most common estimate of measurement
system precision
– Evaluates how well the measurement system can perform
with respect to the specifications
– The appropriate P/T ratio is strongly dependent on the
process capability. If Cpk is not adequate, the P/T ratio
may give a false sense of security.
• The P/TV (%R&R) is the best measure for Analysis
– Estimates how well the measurement system performs with
respect to the overall process variation
– %R&R is the best estimate when performing process
improvement studies. Care must be taken to use samples
representing full process range.
Number of Distinct Categories
Automobile Industry Action Group (AIAG) recommendations:

Categories   Remarks
< 2          System cannot discern one part from another
= 2          System can only divide data into two groups, e.g. high and low
= 3          System can only divide data into three groups, e.g. low, middle and high
>= 4         System is acceptable
Measure : Measurement System Analysis
Variable GR&R : Decision Criterion

              %Bias     %Linearity   DR       %P/T      %Contribution
BEST          < 5       < 5          > 10     < 10      < 2
ACCEPTABLE    5 - 10    5 - 10       5 - 10   10 - 30   2 - 7.7
REJECT        > 10      > 10         < 5      > 30      > 7.7

Note : Stability is analyzed by control chart
Example: Minitab
• Enter the data and tolerance information into Minitab.
  - Stat > Quality Tools > Gage R&R Study (Crossed)
• Enter Gage Info and Options (see next page).
• The ANOVA method is preferred.
• File: Gageaiag.mtw
Gage R&R Output
[Minitab Gage R&R (ANOVA) graphical output for Response, with Gage name, Date of study, Reported by, Tolerance and Misc fields. Panels: Components of Variation (%Contribution, %Study Var, %Tolerance for Gage R&R, Repeat, Reprod, Part-to-Part); R Chart by Operator (R-bar = 0.03833, UCL = 0.1252, LCL = 0); Xbar Chart by Operator (Mean = 0.8075, UCL = 0.8796, LCL = 0.7354); Response By Part; Response By Operator; Operator*Part Interaction.]
Gage R&R Output
Gage R&R, Variation Components
Variance due to the measurement system (broken down into repeatability and reproducibility), variance due to the parts, and the total variance:

Source              VarComp    %Contribution (of VarComp)
Total Gage R&R      0.004437    10.67
  Repeatability     0.001292     3.10
  Reproducibility   0.003146     7.56
    Operator        0.000912     2.19
    Operator*PartID 0.002234     5.37
Part-To-Part        0.037164    89.33
Total Variation     0.041602   100.00

Standard deviation for each variance component:

Source              StdDev (SD)   Study Var (5.15*SD)   %Study Var (%SV)   %Tolerance (SV/Toler)
Total Gage R&R      0.066615      0.34306               32.66              22.87
  Repeatability     0.035940      0.18509               17.62              12.34
  Reproducibility   0.056088      0.28885               27.50              19.26
    Operator        0.030200      0.15553               14.81              10.37
    Operator*PartID 0.047263      0.24340               23.17              16.23
Part-To-Part        0.192781      0.99282               94.52              66.19
Total Variation     0.203965      1.05042              100.00              70.03
Gage R&R, Results
From the variance components table above, the key ratios are:

%Contribution = σ²MS / σ²total = 0.004437 / 0.041602 = 0.1067

P/TV = Study Var (measurement) / Study Var (total) = 0.3430 / 1.0504 = 0.3266

P/T = (5.15 * σMS) / Tolerance = (5.15 * σMS) / (USL - LSL) = 0.3430 / 1.5 = 0.2287
Question: What is our conclusion about the measurement system?
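A short Python sketch (not from the course material) that reproduces the three ratios from the variance components reported above:

```python
import math

# Values taken from the Minitab Gage R&R output above.
var_ms = 0.004437        # Total Gage R&R variance component
var_total = 0.041602     # Total variation
tolerance = 1.5          # USL - LSL used in this study

contribution = var_ms / var_total                   # %Contribution, as a fraction
p_tv = math.sqrt(var_ms) / math.sqrt(var_total)     # P/TV, i.e. %R&R
p_t = 5.15 * math.sqrt(var_ms) / tolerance          # P/T ratio (5.15-sigma spread)

print(f"%Contribution = {contribution:.4f}")  # ~0.1067
print(f"P/TV (%R&R)   = {p_tv:.4f}")          # ~0.3266
print(f"P/T           = {p_t:.4f}")           # ~0.2287
```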
Measure : Process Capability Analysis
• Process capability is a measure of how well the process is currently behaving with respect to the output specification.
• Process capability is determined by the total variation that comes from common causes, which is the minimum variation that can be achieved after all special causes have been eliminated.
• Thus, capability represents the performance of the process itself, as demonstrated when the process is being operated in a state of statistical control.
Measure : Process Capability Analysis
Translate the practical problem to a statistical problem
[Characterization sketches against LSL/USL: a distribution with large variation, an off-target distribution, and a distribution with outliers.]
Measure : Process Capability Analysis
Two measures of process capability
• Process Potential
  - Cp
• Process Performance
  - Cpu
  - Cpl
  - Cpk
  - Cpm
Measure : Process Capability Analysis
Process Potential

Cp = Engineering Tolerance / Natural Tolerance = (USL - LSL) / 6σ
Measure : Process Capability Analysis
• The Cp index compares the allowable spread (USL - LSL) against the process spread (6σ).
• It fails to take into account whether the process is centered between the specification limits.
[Figures: a process that is centered versus a process that is not centered.]
Measure : Process Capability Analysis
Process Performance
• The Cpk index relates the scaled distance between the process mean and the nearest specification limit.

Cpu = (USL - μ) / 3σ
Cpl = (μ - LSL) / 3σ
Cpk = Minimum(Cpu, Cpl) = |NSL - μ| / 3σ, where NSL is the nearest specification limit
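A minimal Python sketch of these formulas, using illustrative data and spec limits rather than the catapult workshop values:

```python
import statistics

def cp_cpk(data, lsl, usl):
    # Process potential (Cp) and performance (Cpk) per the formulas above.
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)          # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cpu = (usl - mu) / (3 * sigma)
    cpl = (mu - lsl) / (3 * sigma)
    return cp, min(cpu, cpl)

# Illustrative measurements and spec limits (not from the course material):
shots = [49.8, 50.2, 50.1, 49.7, 50.0, 50.3, 49.9, 50.1, 49.8, 50.2]
print(cp_cpk(shots, lsl=46.0, usl=54.0))
```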
Measure : Process Capability Analysis
There are two kinds of variation: short-term variation and long-term variation.

Capability Studies          Entitlement (Short Term)           Performance (Long Term)
Type of Variability         Only common cause                  All causes
# of Data Points            25-50 subgroups                    200 points
Production Example          1 lot of raw mat'l;                3 to 4 lots of raw mat'l;
(Lumen Output)              1 shift, 1 set of people;          all shifts, all people;
                            single "set-up"                    over several "set-ups"
Commercial Example          "Best" Cust. Serv. Rep.;           All Cust. Serv. Reps;
(Response Time)             1 customer (i.e., Grainger);       all customers;
                            1 month in the summer              several months including Dec/Jan
Rule of Thumb               Poor Man's: "Best 2 weeks"         Historical data
Process                     Running like it was designed       Running like it actually does!
                            or intended!
Measure : Process Capability Analysis
Short Term vs Long Term (Cp vs Pp, or Cpk vs Ppk)
Measure : Process Capability Analysis
Process Potential vs Process Performance (Cp vs Cpk)
1. If Cp > 1.5, the standard deviation is suitable.
2. If Cp is not equal to Cpk, the process mean is off-center.
Workshop #3
1. Design the appropriate check sheet
2. Define the subgroup
3. Shoot the ball for at least 30 trials per subgroup
4. Perform process capability analysis; use Cp, Cpk, Pp and Ppk to translate the practical problem into a statistical problem
5. Report your results.
Measure : Process Map
A Process Map is a graphical representation of the flow of an "as-is" process. It contains all the major steps and decision points in a process.
It helps us understand the process better, identify the critical or problem areas, and identify where improvements can be made.
Measure : Process Map
OPERATION - All steps in the process where the object undergoes a change in form or condition.
TRANSPORTATION - All steps in the process where the object moves from one location to another, outside of the Operation.
STORAGE - All steps in the process where the object remains at rest, in a semi-permanent or storage condition.
DELAY - All incidences where the object stops or waits for an operation, transportation, or inspection.
INSPECTION - All steps in the process where the object is checked for completeness or quality, outside of the Operation.
DECISION
Measure : Process Map
[Example process map with good/bad decision branches, a rework loop, scrap, and a warehouse step.]
• How many Operational Steps are there?
• How many Decision Points?
• How many Measurement/Inspection Points?
• How many Re-work Loops?
• How many Control Points?
Measure : Process Map
High Level Process Map
[Diagram: a chain of major steps, each with its KPIVs feeding in and its KPOVs coming out.]
These KPIVs and KPOVs can then be used as inputs to the Cause and Effect Matrix.
Workshop #2 : Do the process map and report the
process steps and KPIVs that may be the cause
Measure : Cause and Effect Analysis
A visual tool used to identify, explore and graphically display, in increasing detail, all the possible causes related to a problem or condition, in order to discover root causes.
• To discover the most probable causes for further analysis
• To visualize possible relationships between causes for any problem, current or future
• To pinpoint conditions causing customer complaints, process errors or nonconforming products
• To provide focus for discussion
• To aid in development of technical or other standards or process improvements
Measure : Cause and Effect Matrix
There are two types of Cause and Effect Matrix
1. Fishbone Diagram - traditional approach to brainstorming and
diagramming cause-effect relationships. Good tool when
there is one primary effect being analyzed.
2. Cause-Effect Matrix - a diagram in table form showing the
direct relationships between outputs (Y’s) and inputs (X’s).
Measure : Cause and Effect Matrix
Fishbone Diagram
[Fishbone diagram: Methods, Materials, Machinery and Manpower branches feed the Problem / Desired Improvement; each cause is labeled C/N/X.]
C = Control Factor
N = Noise Factor
X = Factor for DOE (chosen later)
Measure : Cause and Effect Matrix
Cause and Effect Matrix
[Blank Cause and Effect Matrix template: columns for up to 15 customer requirements, each with a Rating of Importance to Customer; rows for up to 20 process steps / process inputs, with Lower Spec, Target and Upper Spec fields; each cell scores the relationship between an input and a requirement, and the row totals rank the inputs.]
Workshop #4:
Team brainstorming to create the fishbone diagram
Measure : Failure Mode and Effect Analysis
FMEA is a systematic approach used to examine potential
failures and prevent their occurrence. It enhances an
engineer’s ability to predict problems and provides a system
of ranking, or prioritization, so the most likely failure modes
can be addressed.
Measure : Failure Mode and Effect Analysis
Measure : Failure Mode and Effect Analysis
RPN = S x O x D
= Severity (how severe the effect is) x Occurrence (how likely the failure is to occur) x Detection (how likely it is to be detected)
Measure : Failure Mode and Effect Analysis
The Vital Few (a small number of important causes) versus the Trivial Many.
Workshop # 5 :
Team Brainstorming to create FMEA
Measure : Measure Phase's Output
• Check and fix the measurement system
• Determine "where" you are
  - Rolled throughput yield, DPPM
  - Process Capability
  - Entitlement
• Identify potential KPIVs
  - Process Mapping / Cause & Effect / FMEA
  - Determine their likely impact
Analyze
The Analyze phase serves to validate the KPIVs, and to study the
statistical relationship between KPIVs and KPOVs
Analyze : Tools
To validate the KPIVs
• Hypothesis Test
  - 2-sample t test
  - Analysis Of Variance
  - etc.
To reveal the relationship between KPIVs and KPOVs
• Regression analysis
• Correlation
Analyze : Hypothesis Testing
The Null Hypothesis
 Statement generally assumed to be true unless sufficient
evidence is found to the contrary
 Often assumed to be the status quo, or the preferred outcome.
However, it sometimes represents a state you strongly want to
disprove.
 Designated as H0
Analyze : Hypothesis Testing
The Alternative Hypothesis
 Statement generally held to be true if the null hypothesis is
rejected
 Can be based on a specific engineering difference in a
characteristic value that one desires to detect
 Designated as HA
Analyze : Hypothesis Testing
NULL HYPOTHESIS: Nothing has changed:
• For tests of the process mean: H0: μ = μ0
• For tests of the process variance: H0: σ² = σ0²
ALTERNATE HYPOTHESIS: Change has occurred:

INEQUALITY    MEAN            VARIANCE
NEW ≠ OLD     Ha: μ ≠ μ0      Ha: σ² ≠ σ0²
NEW > OLD     Ha: μ > μ0      Ha: σ² > σ0²
NEW < OLD     Ha: μ < μ0      Ha: σ² < σ0²
Analyze : Hypothesis Testing
State the practical problem

      Common Language            Statistical Language
Ho    A is the same as B         A = B
Ha    A is not the same as B     A ≠ B (or A > B, or A < B)

Collect and Analyze Data (in Minitab)

Result
P-Value ≥ 0.05    Do not reject Ho
P-Value < 0.05    Reject Ho

Conclusion about the claim:
• If A is the same as B, then ______
• If A is NOT better than B, then ______
Actions to be taken: ______
Analyze : Hypothesis Testing
See Hypothesis Testing Roadmap
Example: Single Mean Compared to Target
• The example includes 10 measurements of a random sample:
  55  56  57  55  58  54  54  54  53  53
• The question is: Is the mean of the sample representative of a target value of 54?
• The Hypotheses:
  Ho: μ = 54
  Ha: μ ≠ 54
  Ho can be rejected if p < .05
Single Mean to a Target - Using Minitab
Stat > Basic Statistics > 1-Sample t

One-Sample T: C1
Test of mu = 54 vs mu not = 54

Variable   N    Mean     StDev   SE Mean
C1         10   54.900   1.663   0.526

Variable   95.0% CI             T      P
C1         (53.710, 56.090)     1.71   0.121
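The same test can be sketched in Python with scipy; this is an illustrative equivalent, not part of the original Minitab exercise:

```python
from scipy import stats

# The ten measurements from the example above, tested against a target of 54.
data = [55, 56, 57, 55, 58, 54, 54, 54, 53, 53]
t_stat, p_value = stats.ttest_1samp(data, popmean=54)
print(f"T = {t_stat:.2f}, P = {p_value:.3f}")   # approximately T = 1.71, P = 0.121
```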
Our Conclusion Statement
Because the p value was greater than our critical confidence level
(.05 in this case), or similarly, because the confidence interval on
the mean contained our target value, we can make the following
statement:
“We have insufficient evidence to reject the null hypothesis.”
Does this say that the null hypothesis is true (that the true
population mean = 54)? No!
However, we usually then choose to operate under the assumption
that Ho is true.
Single Std Dev Compared to Standard
• A study was performed in order to evaluate the effectiveness of two devices for improving the efficiency of gas home-heating systems. Energy consumption in houses was measured after one of two devices (Damper = 1 or Damper = 2) was installed. The energy consumption data (BTU.In) are stacked in one column with a grouping column (Damper) containing identifiers or subscripts to denote the population. You are interested in comparing the variances of the two populations to the current standard (σ = 2.4).
• © 2000 Minitab, Inc. All Rights Reserved.
Example: Single Std Dev Compared to Standard
(Data: Furnace.mtw, Use “BTU_in”)
Note: Minitab does not provide an
individual c2 test for standard
deviations. Instead, it is necessary to
look at the confidence interval on the
standard deviation and determine if
the CI contains the claimed value.
Example: Single Standard Deviation
Stat > Basic Statistics > Display Descriptive Statistics
Running the Statistics...
Descriptive Statistics - Variable: BTU.In, Damper: 1

Anderson-Darling Normality Test: A-Squared = 0.475, P-Value = 0.228
Mean = 9.90775, StDev = 3.01987, Variance = 9.11960
Skewness = 0.707524, Kurtosis = 0.783953, N = 40
Minimum = 4.0000, 1st Quartile = 7.8850, Median = 9.5900, 3rd Quartile = 11.5550, Maximum = 18.2600
95% Confidence Interval for Mu: (8.9419, 10.8736)
95% Confidence Interval for Sigma: (2.4738, 3.8776)
95% Confidence Interval for Median: (8.6170, 10.3212)
Running the Statistics...
Descriptive Statistics - Variable: BTU.In, Damper: 2

Anderson-Darling Normality Test: A-Squared = 0.190, P-Value = 0.895
Mean = 10.1430, StDev = 2.7670, Variance = 7.65640
Skewness = -9.9E-02, Kurtosis = -2.7E-01, N = 50
Minimum = 2.9700, 1st Quartile = 8.1275, Median = 10.2900, 3rd Quartile = 12.2125, Maximum = 16.0600
95% Confidence Interval for Mu: (9.3566, 10.9294)
95% Confidence Interval for Sigma: (2.3114, 3.4481)
95% Confidence Interval for Median: (8.7706, 11.2363)
Two Parameter Testing
Step 1: State the Practical Problem
• Means: 2-Sample t-test
• Sigmas: Homogeneity of Variance
• Medians: Nonparametrics
• Failure Rates: 2 Proportions
Step 2: Are the data normally distributed?
Step 3: State the Null Hypothesis:
• For σ: Ho: σpop1 = σpop2
• For μ: Ho: μpop1 = μpop2 (normal data); Ho: M1 = M2 (non-normal data)
State the Alternative Hypothesis:
• For σ: Ha: σpop1 ≠ σpop2
• For μ: Ha: μpop1 ≠ μpop2 (normal data); Ha: M1 ≠ M2 (non-normal data)
Two Parameter Testing (Cont.)
Step 4: Determine the appropriate test statistic
• F (calc) to test Ho: σpop1 = σpop2
• t (calc) to test Ho: μpop1 = μpop2 (normal data)
Step 5: Find the critical value from the appropriate distribution and alpha
Step 6: If calculated statistic > critical statistic, then REJECT Ho.
Or: If P-Value < 0.05 (P-Value < Alpha), then REJECT Ho.
Step 7: Translate the statistical conclusion into process terms.
Comparing Two Independent Sample Means
• The example will make a comparison between two group means
• Data in Furnace.mtw (BTU_in)
• Are the means of the two groups the same?
• The Hypotheses:
  Ho: μ1 = μ2
  Ha: μ1 ≠ μ2
• Reject Ho if t > t(α/2) or t < -t(α/2), with n1 + n2 - 2 degrees of freedom
t-test Using Stacked Data
Stat >Basic Statistics > 2-Sample t
t-test Using Stacked Data
Descriptive Statistics Graph: BTU.In by Damper

Two-Sample T-Test and CI: BTU.In, Damper
Two-sample T for BTU.In

Damper   N    Mean    StDev   SE Mean
1        40   9.91    3.02    0.48
2        50   10.14   2.77    0.39

Difference = mu (1) - mu (2)
Estimate for difference: -0.235
95% CI for difference: (-1.464, 0.993)
T-Test of difference = 0 (vs not =): T-Value = -0.38  P-Value = 0.704  DF = 80
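For reference, a hedged Python sketch of an equivalent comparison; it assumes the BTU.In values have already been split by Damper into two lists (the data themselves live in Minitab's Furnace.mtw and are not reproduced here):

```python
from scipy import stats

# Assumes btu_damper1 and btu_damper2 are lists of BTU.In values by Damper group.
def compare_means(damper1, damper2):
    # Welch's t-test (unequal variances not assumed equal), comparable to the 2-Sample t above.
    t_stat, p_value = stats.ttest_ind(damper1, damper2, equal_var=False)
    return t_stat, p_value

# For the furnace data this should return roughly (-0.38, 0.70), matching the output above.
```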
2 variances test
Stat >Basic Statistics > 2 variances
2 variances test
Test for Equal Variances for BTU.In
[Plot: 95% confidence intervals for sigmas by factor level, with boxplots of the raw data.]
F-Test: Test Statistic = 1.191, P-Value = 0.558
Levene's Test: Test Statistic = 0.000, P-Value = 0.996
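A similar hedged sketch for the equal-variances question, again assuming the two samples are available as lists:

```python
from scipy import stats

# Hedged sketch of equal-variance tests comparable to Minitab's "2 Variances".
# Assumes btu_damper1 and btu_damper2 are lists of the BTU.In values by Damper.
def compare_variances(damper1, damper2):
    levene_stat, levene_p = stats.levene(damper1, damper2)       # robust to non-normality
    bartlett_stat, bartlett_p = stats.bartlett(damper1, damper2)  # assumes normal data
    return (levene_stat, levene_p), (bartlett_stat, bartlett_p)
```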
Characteristics About Multiple Parameter Testing
• One type of analysis is called Analysis of Variance (ANOVA).
– Allows comparison of two or more process means.
• We can test statistically whether these samples represent a single population,
or if the means are different.
• The OUTPUT variable (KPOV) is generally measured on a continuous scale
(Yield, Temperature, Volts, % Impurities, etc...)
• The INPUT variables (KPIV’s) are known as FACTORS. In ANOVA, the
levels of the FACTORS are treated as categorical in nature even though they
may not be.
• When there is only one factor, the type of analysis used is called “One-Way
ANOVA.” For 2 factors, the analysis is called “Two-Way ANOVA. And “n”
factors entail “n-Way ANOVA.”
General Method
Step 1: State the Practical Problem
Step 2: Do the assumptions for the model hold?
• Response means are independent and normally distributed
• Population variances are equal across all levels of the factor
–Run a homogeneity of variance analysis--by factor level—first
Step 3: State the hypothesis
Step 4: Construct the ANOVA Table
Step 5: Do the assumptions for the errors hold (residual analysis)?
• Errors of the model are independent and normally distributed
Step 6: Interpret the P-Value (or the F-statistic) for the factor effect
• P-Value < 0.05, then REJECT Ho
• Otherwise, operate as if the null hypothesis is true
Step 7: Translate the statistical conclusion into process terms
Step 2: Do the Assumptions for the Model Hold?
• Are the means independent and normally
distributed
– Randomize runs during the experiment
– Ensure adequate sample sizes
– Run a normality test on the data by level
• Minitab:
Stat > Basic Stats > Normality Test
• Population variances are equal for each factor level (run a homogeneity of variance analysis first)
  For σ:
  Ho: σpop1 = σpop2 = σpop3 = σpop4 = ...
  Ha: at least two are different
Step 3: State the Hypotheses
Mathematical Hypotheses:
Ho: all treatment effects τ = 0
Ha: at least one τk ≠ 0
Conventional Hypotheses:
Ho: μ1 = μ2 = μ3 = μ4
Ha: At least one μk is different
Step 4: Construct the ANOVA Table
One-Way Analysis of Variance

Analysis of Variance for Time
Source     DF   SS      MS     F      P
Operator   3    149.5   49.8   4.35   0.016
Error      20   229.2   11.5
Total      23   378.6

SOURCE    SS            df    MS
Between   SStreatment   g-1   MStreatment = SStreatment / (g-1)
Within    SSerror       N-g   MSerror = SSerror / (N-g)
Total     SStotal       N-1

Where: g = number of subgroups, n = number of readings per subgroup

Test Statistic: F = MStreatment / MSerror
What is important is the probability that the Operator variation in means could have happened by chance.
Steps 5 - 7
Step 5: Do the assumptions for the errors hold (residual analysis)?
• Errors of the model are independent and normally distributed
  - Randomize runs during the experiment
  - Ensure adequate sample size
  - Residual Analysis: plot a histogram of the error terms, run a normality check on the error terms, plot error against run order (I-Chart), plot error against model fit
Step 6: Interpret the P-Value (or the F-statistic) for the factor effect
• P-Value < 0.05, then REJECT Ho.
• Otherwise, operate as if the null hypothesis is true.
Step 7: Translate the statistical conclusion into process terms
Example, Experimental Setup
• Twenty-four animals receive one of four diets.
• The type of diet is the KPIV (factor of
interest).
• Blood coagulation time is the KPOV
• During the experiment, diets were assigned
randomly to animals. Blood samples taken
and tested in random order. Why ?
DIET A: 62  60  63  59
DIET B: 63  67  71  64  65  66
DIET C: 68  66  71  67  68  68
DIET D: 56  62  60  61  63  64  63  59
Example, Step 2
• Do the assumptions for the model hold?
• Populations by level are normally distributed
  - Won't show significance for small # of samples
• Variances are equal across all levels of the factor
  - Stat > ANOVA > Test for Equal Variances
Ho: _____________
Ha: _____________
Test for Equal Variances for Coag_Time
[Plot: 95% confidence intervals for sigmas by factor level (diets 1-4).]
Bartlett's Test: Test Statistic = 1.668, P-Value = 0.644
Levene's Test: Test Statistic = 0.649, P-Value = 0.593
Example, Step 3
• State the Null and Alternate Hypotheses
Ho: µdiet1 = µdiet2 = µdiet3 = µdiet4 (or) Ho: all diet effects τ = 0
Ha: at least two diets differ from each other (or) Ha: at least one τ ≠ 0
• Interpretation of the null hypothesis: the average blood
coagulation time of each diet is the same (or) what you
eat will NOT affect your blood coagulation time.
• Interpretation of the alternate hypothesis: at least one
diet will affect the average blood coagulation time
differently than another (or) what type of diet you keep
does affect blood coagulation time.
Example, Step 4
• Construct the ANOVA Table (using Minitab):
Stat > ANOVA > One-way ...
Hint: Store
Residuals &
Fits for later
use
Example, Step 4
One-way Analysis of Variance

Analysis of Variance for Coag_Tim
Source     DF   SS       MS      F       P
Diet_Num   3    228.00   76.00   13.57   0.000
Error      20   112.00   5.60
Total      23   340.00

Level   N   Mean    StDev
1       4   61.00   1.826
2       6   66.00   2.828
3       6   68.00   1.673
4       8   61.00   2.619

Pooled StDev = 2.366

[Individual 95% CIs for each mean, based on the pooled StDev, plotted on a scale from about 59.5 to 70.0.]
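A minimal Python sketch reproducing this one-way ANOVA from the coagulation data listed earlier:

```python
from scipy import stats

# Blood coagulation times from the diet example above.
diet_a = [62, 60, 63, 59]
diet_b = [63, 67, 71, 64, 65, 66]
diet_c = [68, 66, 71, 67, 68, 68]
diet_d = [56, 62, 60, 61, 63, 64, 63, 59]

f_stat, p_value = stats.f_oneway(diet_a, diet_b, diet_c, diet_d)
print(f"F = {f_stat:.2f}, P = {p_value:.4f}")   # approximately F = 13.57, P = 0.0000
```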
Example, Step 5
Do the assumptions for the errors hold?
Best way to check is through a “residual analysis”
Stat > Regression > Residual Plots ...
• Determine if residuals are normally distributed
• Ascertain that the histogram of the residuals looks
normal
• Make sure there are no trends in the residuals
(it’s often best to graph these as a function of the
time order in which the data was taken)
• The residuals should be evenly distributed about
their expected (fitted) values
Example, Step 5
[Residual plots from Minitab:]
• Individual residuals: any trends or outliers?
• Normal plot: how normal are the residuals?
• Histogram: bell curve? Ignore for small data sets (<30).
• Residuals versus run order: this graph investigates how the residuals behave across the experiment. This is probably the most important graph, since it will signal that something outside the experiment may be operating. Nonrandom patterns are warnings. Random about zero, without trends?
• Residuals versus fits: this graph investigates whether the mathematical model fits equally well for low to high values of the fits.
Example, Step 6
• Interpret the P-Value (or the F-statistic) for the factor effect
  - Assuming the residual assumptions are satisfied:
  - If P-Value < 0.05, then REJECT Ho. Otherwise, operate as if the null hypothesis is true.

Analysis of Variance for Coag_Tim
Source     DF   SS       MS      F       P
Diet_Num   3    228.00   76.00   13.57   0.000
Error      20   112.00   5.60
Total      23   340.00

When group sizes are equal, the pooled variance is
σ²Pooled = (s1² + s2² + s3² + s4²) / 4

The F-test is close to 1.00 when the group means are similar. In this case the F-test is much greater, and P is less than 5%, so we reject the hypothesis that all the group means are equal: at least one Diet mean is different. An F-test this large could happen by chance, but less than one time out of 2000. This would be like getting 11 heads in a row from a fair coin.
Workshop #6:
Run hypothesis tests to validate your KPIVs from the Measure phase
Analyze : Analyze Phase's output
Refine: KPOV = F(KPIV's)
• Which KPIV's cause mean shifts?
• Which KPIV's affect the standard deviation?
• Which KPIV's affect yield or proportion?
• How do KPIV's relate to KPOV's?
Improve
The Improve phase serves to optimize the KPIV's and study the possible actions or ideas to achieve the goal.
Improve : Tools
To optimize KPIV's in order to achieve the goal
• Design of Experiment
• Evolutionary Operation
• Response Surface Methodology
Improve : Design Of Experiment
Factorial Experiments
• The GOAL is to obtain a mathematical relationship which characterizes: Y = F(X1, X2, X3, ...).
• Mathematical relationships allow us to identify the most important or critical factors in any experiment by calculating the effect of each.
• Factorial Experiments allow investigation of multiple factors at multiple levels.
• Factorial Experiments provide insight into potential "interactions" between factors. This is referred to as factorial efficiency.
Improve : Design Of Experiment
• Factors: A factor (or input) is one of the controlled or uncontrolled variables whose influence on a response (output) is being studied in the experiment. A factor may be quantitative, e.g., temperature in degrees, time in seconds. A factor may also be qualitative, e.g., different machines, different operators, clean or not clean.
Improve : Design Of Experiment
• Level: The “levels” of a factor are the values of the factor being
studied in the experiment. For quantitative factors, each chosen value
becomes a level, e.g., if the experiment is to be conducted at two
different temperatures, then the factor of temperature has two “levels”.
Qualitative factors can have levels as well, e.g for cleanliness , clean
vs not clean; for a group of machines, machine identity.
• “Coded” levels are often used,e.g. +1 to indicate the “high level” and
-1 to indicate the “low level” . Coding can be useful in both
preparation & analysis of the experiment
Improve : Design Of Experiment
• k1 x k2 x k3 ... Factorial: Description of the basic design. The number of "k's" is the number of factors. The value of each "k" is the number of levels of interest for that factor.
  Example: A 2 x 3 x 3 design indicates three input variables. One input has two levels and the other two each have three levels.
• Test Run (Experimental Run): A single combination of factor levels that yields one or more observations of the output variable.
Center Point
• Center points are a method to check the linearity of the model.
• A center point is a treatment in which every quantitative factor is set at the center of its range.
• The result is interpreted through the "curvature" term in the ANOVA table.
• If the center point's P-value is greater than the α level, we can analyze the data with the center points excluded from the model (linear model).
• If the center point's P-value is less than the α level, the linear equation reported by the software cannot be used as the model (non-linear).
• There is no rule specifying how many center points to take per replicate; the decision is based on how difficult the settings are to set and control.
Sample Size by Minitab
• In Minitab, sample size is found under Stat > Power and Sample Size.

Sample Size By Minitab
• Specify the number of factors in the experimental design.
• Specify the number of runs per replicate.
• Enter the process sigma.
• Enter the power value, 1-β (more than one value may be entered), and the effect, which is the critical difference you would like to detect (δ).
Center Point case
Exercise: DOECPT.mtw
"0" in the design columns indicates that those treatments are center point treatments.
Center Point Case
Estimated Effects and Coefficients for Weight (coded units)

Term       Effect    Coef      StDev Coef   T        P
Constant             2506.25   12.77        196.29   0.000
A          123.75    61.87     12.77        4.85     0.017
B          -11.25    -5.62     12.77        -0.44    0.689
C          201.25    100.62    12.77        7.88     0.004
D          6.25      3.12      12.77        0.24     0.822
A*B        120.00    60.00     12.77        4.70     0.018
A*C        20.00     10.00     12.77        0.78     0.491
A*D        -17.50    -8.75     12.77        -0.69    0.542
B*C        -22.50    -11.25    12.77        -0.88    0.443
B*D        7.50      3.75      12.77        0.29     0.788
C*D        12.50     6.25      12.77        0.49     0.658
A*B*C      16.25     8.13      12.77        0.64     0.570
A*B*D      -11.25    -5.63     12.77        -0.44    0.689
A*C*D      -18.75    -9.38     12.77        -0.73    0.516
B*C*D      3.75      1.88      12.77        0.15     0.893
A*B*C*D    -22.50    -11.25    12.77        -0.88    0.443
Ct Pt                -33.75    28.55        -1.18    0.322

H0: Model is linear
Ha: Model is non-linear
The P-value of Ct Pt (center point) is greater than the α level, so we can exclude the Center Point from the model.
Reduced Model
• Referring to the effects table, we can exclude factors that show no statistical significance by removing those terms from the analysis.
• From the previous page, we can exclude the 3-way and 4-way interactions, because none of those terms has a P-value less than the α level.
• We can exclude the 2-way interactions except the A*B term, because the P-value of A*B is less than the α level.
• For the main effects, we cannot remove B even though its P-value is greater than the α level, because we need to keep the A*B term in the analysis.
Center Point Case
Fractional Factorial Fit: Weight versus A, B, C
Estimated Effects and Coefficients for Weight (coded units)

Term       Effect    Coef      SE Coef   T        P
Constant             2499.50   8.636     289.41   0.000
A          123.75    61.87     9.656     6.41     0.000
B          -11.25    -5.62     9.656     -0.58    0.569
C          201.25    100.62    9.656     10.42    0.000
A*B        120.00    60.00     9.656     6.21     0.000

The final equation for the model is:
Weight = 2499.5 + 61.87A - 5.62B + 100.62C + 60AB
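A small sketch evaluating the fitted equation at coded factor settings; the settings chosen are only an illustration:

```python
def predicted_weight(a, b, c):
    # Fitted model from the reduced analysis above, in coded units (-1 to +1).
    return 2499.5 + 61.87 * a - 5.62 * b + 100.62 * c + 60.0 * a * b

# Example: A and C at the high level, B at the low level (coded values).
print(predicted_weight(a=1, b=-1, c=1))   # 2499.5 + 61.87 + 5.62 + 100.62 - 60 = 2607.61
```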
DOE for Standard Deviations
• The basic approach involves taking “n”
replicates at each trial setting
• The response of interest is the standard
deviation (or the variance) of those n values,
rather than the mean of those values
• There are then three analysis approaches:
– Normal Probability Plot of log(s2) or log(s)*
– Balanced ANOVA of log(s2) or log(s)*
– F tests of the s2 (not shown in this package)
* log transformation permits normal distribution analysis approach
Standard Deviation Experiment
The following represents the results from two different 2^3 experiments, where 24 replicates were run at each trial combination:

A    B    C    Expt1 s^2   Expt2 s^2
-1   -1   -1   0.823       0.596
 1   -1   -1   1.187       1.55
-1    1   -1   3.186       2.025
 1    1   -1   2.34        2.242
-1   -1    1   0.651       3.212
 1   -1    1   1.477       2.882
-1    1    1   2.048       3.847
 1    1    1   1.516       6.265

File: Sigma DOE.mtw
Std Dev Experiment Analysis Set Up
After putting this into the proper format as a designed experiment:
Stat > DOE > Factorial > Analyze Factorial Design
Under the Graph option / Effects Plots: Normal

The response analyzed is ln(s^2):

A    B    C    Expt1 s^2   Expt2 s^2   Expt1 ln(s^2)   Expt2 ln(s^2)
-1   -1   -1   0.823       0.596       -0.1942         -0.51693
 1   -1   -1   1.187       1.55         0.17162         0.43811
-1    1   -1   3.186       2.025        1.15888         0.70565
 1    1   -1   2.34        2.242        0.84997         0.80715
-1   -1    1   0.651       3.212       -0.42921         1.16704
 1   -1    1   1.477       2.882        0.38995         1.05863
-1    1    1   2.048       3.847        0.71679         1.34727
 1    1    1   1.516       6.265        0.41602         1.83501
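A hedged Python sketch of the same set-up: compute ln(s^2) for Experiment 1 and estimate each main effect as the average response at the high level minus the average at the low level:

```python
import math

# Rows of the 2^3 design above: (A, B, C, Expt1 s^2)
rows = [(-1, -1, -1, 0.823), (1, -1, -1, 1.187), (-1, 1, -1, 3.186), (1, 1, -1, 2.340),
        (-1, -1,  1, 0.651), (1, -1,  1, 1.477), (-1, 1,  1, 2.048), (1, 1,  1, 1.516)]

ln_s2 = [math.log(s2) for *_, s2 in rows]   # log transform of the variances

def main_effect(factor_index):
    hi = [y for row, y in zip(rows, ln_s2) if row[factor_index] == 1]
    lo = [y for row, y in zip(rows, ln_s2) if row[factor_index] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

for name, idx in (("A", 0), ("B", 1), ("C", 2)):
    print(name, round(main_effect(idx), 3))
# B stands out as the dominant effect on ln(s^2), consistent with the normal plot and ANOVA below.
```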
Normal Probability Plots
• Plot all the effects of a 23 on a normal
probability plot
– Three main effects: A, B and C
– Three 2-factor interactions: AB, AC and BC
– One 3-factor interaction: ABC
• If no effects are important, all the points should
lie approximately on a straight line
• Significant effects will lie off the line
– Single significant effects should be easily
detectable
– Multiple significant effects may make it hard
to discern the line.
Probability Plot: Experiment 1
Results from Experiment 1 Using ln(s^2)
[Normal Probability Plot of the Effects (response is Expt 1, Alpha = .10): normal score versus effect, factors A, B, C.]
Note: Minitab does not identify these points unless they are very significant. You need to look at Minitab's Session Window to identify them.
The plot shows one of the points, corresponding to the B main effect, outside of the rest of the effects.
ANOVA Table: Experiment 1
Results from Experiment 1 Using ln(s^2)

Analysis of Variance for Expt 1
Source   DF   SS       MS       F      P
A        1    0.0414   0.0414   0.30   0.611
B        1    1.2828   1.2828   9.39   0.037
C        1    0.0996   0.0996   0.73   0.441
Error    4    0.5463   0.1366
Total    7    1.9701
Sample Size Considerations
• The sample size computed for experiments involving standard deviations should be based on α and β, as well as the critical ratio that you want to detect, just as it is for hypothesis testing.
• The Excel program "Sample Sizes.xls" can be used for this purpose.
• If "m" is the sample size for each level (computed by the program), and the experiment has k treatment combinations, then the number of replicates per treatment combination is

  n = 1 + 2(m - 1) / k
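A one-line sketch of that replicate rule; the m and k values are illustrative only:

```python
def replicates_per_combination(m, k):
    # n = 1 + 2(m - 1)/k, from the sample-size rule above.
    return 1 + 2 * (m - 1) / k

# Illustrative numbers: m = 97 samples per level, k = 8 runs in a 2^3 design.
print(replicates_per_combination(m=97, k=8))   # 25.0 replicates per treatment combination
```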
Workshop #7: Run a DOE to optimize the validated KPIVs to get the desired KPOV
Improve : Improve Phase's output
• Which KPIV's cause mean shifts?
• Which KPIV's affect the standard deviation?
• Levels of the KPIV's that optimize process performance
Control
The Control phase serves to establish the action to ensure
that the process is monitored continuously for consistency
in quality of the product or service.
Control: Tools
To monitor and control the KPIV's
• Error Proofing (Poka-Yoke)
• SPC
• Control Plan
Control: Poka-Yoke
Why Poka-Yoke?
Strives for zero defects
Leads to Quality Inspection Elimination
Respects the intelligence of workers
Takes over repetitive tasks/actions that depend on
one’s memory
Frees an operator’s time and mind to pursue more
creative and value added activities
Control: Poka-Yoke
Benefit of Poka-Yoke?
Enforces operational procedures or sequences
Signals or stops a process if an error occurs or a defect is created
Eliminates choices leading to incorrect actions
Prevents product damage
Prevents machine damage
Prevents personal injury
Eliminates inadvertent mistakes
Control: SPC
• SPC is the basic tool for observing variation and using statistical signals to monitor and/or improve performance. This tool can be applied to nearly any area:
  - Performance characteristics of equipment
  - Error rates of bookkeeping tasks
  - Dollar figures of gross sales
  - Scrap rates from waste analysis
  - Transit times in material management systems
• SPC stands for Statistical Process Control. Unfortunately, most companies apply it to finished goods (Y's) rather than process characteristics (X's).
• Until the process inputs become the focus of our effort, the full power of SPC methods to improve quality, increase productivity, and reduce cost cannot be realized.
Types of Control Charts
The quality of a product or process may be assessed by
means of
• Variables :actual values measured on a continuous scale
e.g. length, weight, strength, resistance, etc
• Attributes :discrete data that come from classifying units
(accept/reject) or from counting the number
of defects on a unit
If the quality characteristic is measurable
• monitor its mean value and variability
(range or standard deviation)
If the quality characteristic is not measurable
• monitor the fraction (or number) of defectives
• monitor the number of defects
Defectives vs Defects
• Defective or Nonconforming Unit
• a unit of product that does not satisfy one or
more of the specifications for the product
– e.g. a scratched media, a cracked casing, a
failed PCBA
• Defect or Nonconformity
• a specific point at which a specification is not
satisfied
– e.g. a scratch, a crack, a defective IC
Shewhart Control Charts - Overview
Walter A Shewhart
Shewhart Control Charts for Variables
Control: SPC
Choosing The Correct Control Chart Type
[Decision flowchart:]
• Type of data?
  - Attributes: counting defects or defectives?
    · Defects: is the area of opportunity constant from sample to sample? Yes: c chart. No: u chart.
    · Defectives: is the sub-group size constant? Yes: p or np chart. No: p chart.
  - Variables: individual measurements or sub-groups?
    · Sub-groups: data tends to be normally distributed because of the central limit theorem; use X-bar, R or X-bar, s (use of modified control chart rules is okay on the x-bar chart).
    · Individuals: normally distributed data? Yes: X, mR. No: interested primarily in sudden shifts in the mean? Yes: X, mR. No: MA, EWMA or CUSUM (more effective in detecting gradual, long-term changes).
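For the X-bar, R branch, a hedged Python sketch of how the control limits are typically computed from subgrouped data, using the standard chart constants for subgroups of size 5; the data are illustrative, not from the workshop:

```python
# Hedged sketch: X-bar and R control limits for subgrouped data.
# Constants below are the standard Shewhart chart factors for subgroup size n = 5.
A2, D3, D4 = 0.577, 0.0, 2.114

def xbar_r_limits(subgroups):
    xbars = [sum(g) / len(g) for g in subgroups]
    ranges = [max(g) - min(g) for g in subgroups]
    xbar_bar = sum(xbars) / len(xbars)
    r_bar = sum(ranges) / len(ranges)
    return {
        "Xbar": (xbar_bar - A2 * r_bar, xbar_bar, xbar_bar + A2 * r_bar),  # (LCL, CL, UCL)
        "R": (D3 * r_bar, r_bar, D4 * r_bar),
    }

# Illustrative subgroups of 5 catapult distances:
data = [[50.1, 49.8, 50.3, 49.9, 50.0],
        [50.2, 50.4, 49.7, 50.1, 49.9],
        [49.6, 50.0, 50.2, 49.8, 50.1]]
print(xbar_r_limits(data))
```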
Control: Control Phase’s output
Y is monitored with suitable tools
X is controlled by suitable tools
Manage the INPUTS and good OUTPUTS will follow
Breakthrough Summary
The DMAIC roadmap ("The Yellow Brick Road") is repeated here as a summary:
• DEFINE (Champion) - Definition of Opportunity. Steps: Project Definition; Determine Impact & Priority; Collect Baseline Metric Data; Savings/Cost Assessment; Establish Planned Timeline; Search Library; Identify Project Authority. Deliverables: Problem Statement, Goals/Objectives, Projected Business Benefits, Financial Value, Key Metrics, Team Assignment. Tollgate: P1 (not validated).
• MEASURE (Black Belt) - Assess the Current Process: Map the Process; Determine the Baseline; Prioritize the Inputs to Assess; Assess the Measurement System; Capability Assessment (Short Term / Long Term); Determine Entitlement; Process Improvement; Financial Savings. Tools: Macro/Micro Process Charts, Rolling Throughput Yield, Fishbone and Cause & Effect Matrix, GR&R Study, Establish Sigma Score, Apply 'Shift & Drift', Baseline vs Entitlement, Translate to $$$. Tollgate: P1 (validated).
• ANALYZE (Black Belt) - Confirm f(x) for Y: Determine the Vital Variables Affecting the Response, f(x) = Y; Confirm Relationships and Establish the KPIVs. Tools: Multi-Vari Studies, Correlation Analysis, Regression Analysis (Single/Multiple), Hypothesis Testing (Mean Testing (t, Z), Variation (Std Dev) (F, etc.), ANOVA). Tollgate: P5 Reviewed.
• IMPROVE (Black Belt) - Optimize f(x) for Y: Determine the Best Combination of 'Xs' for Producing the Best 'Y'. Tools: Design of Experiments (Full Factorial, Fractional Factorial, Blocking, Custom Methods, RSM). Tollgate: P5 Reviewed.
• CONTROL (Black Belt) - Maintain Improvements: Establish Controls for KPIVs and their 'settings'; Establish Reaction Plans. Tools: Process Control Plan, SPC Charting (x-Bar & R, Pre-Control), Gauge Control Plans. Tollgate: P5 Reviewed.
• REALIZATION (Finance Rep & Process Owner) - Sustain the Benefit: Financial Assessment and Input Actual Savings; Functional Manager/Process Owner - Monitor; Control/Implementation. Tools: Monthly Benefit Update. Tollgate: P8 (Sign Off).
Hard Savings
Savings which flow to Net Profit
Before Income Tax (NPBIT)
Can be tracked and reported by the
Finance organization
Is usually a reduction in labor,
material usage, material cost, or
overhead
Can also be cost of money for
reduction in inventory or assets
Finance Guidelines - Savings Definitions
• Hard Savings
• Direct Improvement to Company Earnings
• Baseline is Current Spending Experience
• Directly Traceable to Project
• Can be Audited
Hard Savings Example
• Process is Improved, resulting in lower scrap
• Scrap reduction can be linked directly to the
successful completion of the project
Potential Savings
• Savings opportunities which have been documented and validated, but require action before actual savings can be realized.
• An example is capital equipment capacity that has been freed up due to increased efficiencies in the process. Savings cannot be realized because we are still paying for the equipment. It has the potential for generating savings if we could sell it or put it back into use because of increases in schedules.
• Some form of management decision or action is generally required to realize the savings.
Finance Guidelines - Savings Definitions
Potential Savings
• Improve Capability of company Resource
Potential Savings Example
• Process is Improved, resulting in reduced
manpower requirement
• Headcount is not reduced or reduction cannot
be traced to the project
Potential Savings might turn into hard savings if the
resource is productively utilized in the future
Identifying Soft Savings
• Dollars or other benefits exist but they are not directly traceable
• Projected benefits have a reasonable probability (TBD) that they will occur
• Some or all of the benefits may occur outside of the normal 12 month tracking window
• Assessment of the benefit could/should be viewed in terms of strategic value to the company and the amount of baseline shift accomplished
Finance Guidelines - Savings Definitions
Soft Savings
• Benefit Expected from Process Improvement
• Benefit cannot be directly traced to Successful
Completion of Project
• Benefit cannot be quantified
Soft Savings Example
• Process is Improved; decreasing cycle time
• Benefit cannot be quantified