Process Capability Analysis

Process Capability and Advanced Topics
There are a number of ways to think of process capability. When we think of process capability from a
statistical perspective, we think of learning about the characteristic variability inherent in our process
when no special causes are present. Two ways to view this variability would be (1) the instantaneous
variability in our process at the current moment; and (2) the variability in our process over time. To use
the process capability for any kind of predictive purposes, it is necessary for our process to be in a state
of statistical control – meaning that there are no special causes of variation present in the data – and that
the only variation occurring in the process is due to common or random variation. While this is (strictly)
the final aspect of Statistical Process Control (see Figure 1), it is important that continuous improvement
be employed to continually seek out and remove variation. By this process, the firm gains a competitive
advantage over similar firms.
[Figure: timeline of statistical process improvement between the LSL and USL – identify special causes (bad: remove; good: incorporate), head off shifts in location and spread, characterize stable process capability, center the process, reduce variability, and continually improve the system. Process Capability Analysis is performed when NO special causes of variability are present, i.e., when the process is in a state of statistical control.]
Figure 1. The Statistical Process Improvement Process
Natural Tolerance Limits
The natural tolerance limits assume that the process is well-modeled by the Normal Distribution, and that the proportion of output falling within three sigma of the mean represents an acceptable yield. The Upper and Lower Natural Tolerance Limits are derived from the process mean (μ) and standard deviation (σ), and illustrated in Figure 2:
UNTL = μ + 3σ
LNTL = μ − 3σ
  1 : 68.26% of the total area
  2 : 95.46% of the total area
  3 : 99.73% of the total area
[Figure: a Normal curve with the horizontal axis marked from μ − 3σ (or LNTL) to μ + 3σ (or UNTL). The Natural Tolerance Limits cover 99.73% of the process output.]
Figure 2. Natural Tolerance Limits Illustrated.
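As a quick numeric check, here is a minimal Python sketch (hypothetical sample data; numpy and scipy assumed available) that computes the natural tolerance limits and the Normal coverage between them:

```python
import numpy as np
from scipy import stats

# Hypothetical process measurements (assumed in control and roughly Normal)
x = np.array([10.02, 9.98, 10.05, 9.97, 10.01, 10.03, 9.99, 10.00, 9.96, 10.04])

mu = x.mean()
sigma = x.std(ddof=1)          # sample standard deviation

untl = mu + 3 * sigma          # Upper Natural Tolerance Limit
lntl = mu - 3 * sigma          # Lower Natural Tolerance Limit

# Coverage of the fitted Normal model between the limits (should be ~99.73%)
coverage = stats.norm.cdf(untl, mu, sigma) - stats.norm.cdf(lntl, mu, sigma)
print(f"LNTL = {lntl:.4f}, UNTL = {untl:.4f}, coverage = {coverage:.4%}")
```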
Process Capability Analysis
Process Capability Analysis (PCA) is used to characterize a process without regard to the customer's
specification limits. Typically, this analysis is undertaken as part of a quality improvement program. The
major uses of the data include:
1. Specification of equipment performance and selection of process equipment
2. Selection of vendors (more commonly called product characterization)
3. Prediction of tolerance holding capability
4. Design aids for product/process engineering
5. Selection of sampling intervals for process control
6. Sequencing of production processes to reduce product variability due to interactions
Initially, a PCA should consider the distribution of the process. While a three-sigma process will have a
yield of approximately 99.73%, when the distribution departs from the Normal, the process yield changes
greatly. Two methods are employed in a PCA: (1) constructing a process histogram and looking for a
Normal Distribution, and (2) performing a Normal Probability Plot and assessing the normality and
estimating the process parameters.
Histogram and Normal Probability Plot
In order to use the histogram to characterize the process distribution, we would like to have a fairly fine
resolution for our view of the curve. This will require about 20 columns, so we will need at least 100 data
points. The histogram is constructed as described in Section 1, and we look to verify that the shape is
approximately Normally distributed – being symmetric about a single mode and bell-shaped. The
histogram itself provides a good first check on the process capability. If the specification limits are
overlaid on the histogram, a visual feel for the process yield/fallout can be obtained. This visual method
also gives the analyst a good idea about how the process must be changed to adjust the location of the
distribution relative to the specification limits, and whether or not the process spread must be improved to
obtain the desired yield.
Since it is very hard to distinguish between a Normal Distribution and a t-Distribution from a histogram alone, it is usually a good idea to investigate the normality of the distribution with a Normal Probability Plot. Construction of the plot was described in Section 2, and we use the plot for analysis in the following manner:
• The “fat pencil” test is used to check for a reasonable approximation of Normality. See Figure 3 for some qualitative examples of non-Normal patterns.
• The mid-point (50th percentile) of the distribution is used to estimate the location of the process mean.
• The slope of the “best fit” line is an estimate for the standard deviation (choose the 20th and 80th percentile points to calculate the slope: change in y divided by change in x).
[Figure: three Normal Probability Plots (Cum Freq vs. X): (a) Normally Distributed; (b) Fails “Fat Pencil” Test; (c) Fails Linearity (Fat Tail).]
Figure 3. Normal and Non-Normal Probability Plot Patterns.
We could have used the histogram in a similar fashion to estimate the mean and standard deviation, and in fact it is not a bad idea to use this as a check:
• The mode (most frequently observed value) should be approximately equal to the mean in a Normal distribution.
• The difference between the 84th and the 50th percentiles (17th column center point and the boundary between the 10th and 11th columns) should be approximately equal to the standard deviation, as should the range when it is divided by the factor d2 from the Control Chart Factors Table in Section 3 (see the sketch below).
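A short sketch of these percentile-based checks, covering both the probability-plot slope and the histogram percentiles (hypothetical simulated data; numpy and scipy assumed available):

```python
import numpy as np
from scipy import stats

# Hypothetical process data, roughly Normal
x = np.random.default_rng(3).normal(10.0, 0.05, size=200)

mean_est = np.percentile(x, 50)                          # 50th percentile locates the mean
sigma_84 = np.percentile(x, 84) - np.percentile(x, 50)   # ~1 sigma for a Normal distribution

# Slope-based estimate using the 20th and 80th percentile points of the plot
z20, z80 = stats.norm.ppf([0.20, 0.80])
sigma_slope = (np.percentile(x, 80) - np.percentile(x, 20)) / (z80 - z20)
print(mean_est, sigma_84, sigma_slope)
```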
Provided that we have a pretty Normal distribution, and as long as our process has no special causes of
variation present, then we can use these estimated parameters to compute the expected process
yield/fallout as described at the start of Section 2.
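For instance, a brief sketch of the expected yield/fallout computation (hypothetical parameter estimates and specification limits; scipy assumed available):

```python
from scipy import stats

# Hypothetical estimates from the plot, plus the customer's specification limits
mu, sigma = 10.00, 0.05
lsl, usl = 9.88, 10.12

yield_frac = stats.norm.cdf(usl, mu, sigma) - stats.norm.cdf(lsl, mu, sigma)
fallout_ppm = (1 - yield_frac) * 1e6    # expected fallout in parts per million
print(f"Yield = {yield_frac:.4%}, fallout = {fallout_ppm:.1f} ppm")
```

This hypothetical process has Cp = 0.80, and the computed fallout of about 16,395 ppm agrees with the corresponding row of Figure 4 below.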
Another way to communicate our process capability is through the use of Process Capability Indices.
Unlike the natural tolerance limits, these methods relate the process spread and location to the product
specifications. These indices are described below.
Process Capability Indices – Cp
The first process capability index is Cp. This index measures the ratio of the difference between the
specification limits to the natural tolerance limits of the production process. If there is an upper and a
lower specification limit (the USL and LSL, respectively), then:
Cp = (USL − LSL) / (6σ)
If there is only a single specification limit employed (i.e. tensile strength must be greater than a LSL, or
bacteria count must be lower than a USL), then the distance from the process mean to the appropriate
specification limit is employed, leading to:
Cpu = (USL − μ) / (3σ)

where there is a single USL, or

Cpl = (μ − LSL) / (3σ)

where there is a single LSL.
One way to think of the Cp ratio is as the potential process capability, provided that the process is appropriately centered within the specification limits. If the process is Normally distributed and has no special causes of variation present, then Figure 4 lists the process fallout in parts per million (ppm) for various Cp ratios.
Cp Ratio    Two-Sided Specification    One-Sided Specification
            Fallout (ppm)              Fallout (ppm)
0.50        133614                     66807
0.60         71861                     35931
0.80         16395                      8198
1.00          2700                      1350
1.20           318                       159
1.40            27                        14
1.50             7                         4
1.60             2                         1
1.80             0.06                      0.03
2.00             0.0018                    0.0009

Figure 4. Cp Ratio – Process Fallout Relationship
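Since a centered process with ratio Cp has its specification limits at ±3·Cp standard deviations from the mean, these fallout figures can be reproduced with a short sketch (scipy assumed available):

```python
from scipy import stats

def fallout_ppm(cp, two_sided=True):
    """Expected fallout (ppm) for a centered Normal process with capability cp."""
    z = 3.0 * cp                       # distance from mean to each spec limit, in sigmas
    tail = stats.norm.sf(z)            # one-sided tail area beyond the limit
    return (2 * tail if two_sided else tail) * 1e6

for cp in (0.50, 1.00, 1.50, 2.00):
    print(cp, round(fallout_ppm(cp), 4), round(fallout_ppm(cp, two_sided=False), 4))
```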
Process Capability Indices – Cpk
Much of the time, the process is not completely centered within the specification limits. A common ratio
for reporting under this case is Cpk, which takes into account the fact that sometimes it is not possible to
center the process due to limitations on the process equipment - such as operating equipment at the very
edges of current technical capability. In this case, one specification limit will likely be closer to the mean
process value than the other, and the process will have more fallout than Cp would report. In these
circumstances, the ratio Cpk is given by:
Cpk = min(Cpu, Cpl)

where Cpu and Cpl are the corresponding one-sided Cp ratios.
At best, when the process is properly centered, Cpu = Cpl and thus Cpk = Cp. Thus, it is commonly said
that Cpk measures the actual process capability (compared with the potential capability). This index also
assumes the process is Normally distributed and has no special causes of variation present.
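A minimal sketch (hypothetical specification limits and process parameters) of the Cp and Cpk calculations:

```python
def cp(usl, lsl, sigma):
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mu, sigma):
    cpu = (usl - mu) / (3 * sigma)   # one-sided ratio against the USL
    cpl = (mu - lsl) / (3 * sigma)   # one-sided ratio against the LSL
    return min(cpu, cpl)

# Hypothetical off-center process: mean shifted toward the USL
print(cp(10.12, 9.88, 0.04))          # potential capability: 1.0
print(cpk(10.12, 9.88, 10.04, 0.04))  # actual capability: ~0.667
```

With the mean shifted toward the USL, the actual capability (about 0.67) falls well below the potential capability (1.0), reflecting the extra fallout at the nearer limit.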
Process Capability Indices – Cpm, Cpkm
The final two process capability ratios attempt to account for the fact that the process spread affects the
interpretation of the capability indices as well. It is good practice to reduce the amount of variation in the
process as part of our ongoing process improvement efforts. However, if a process with less variation is
not centered, it can have the same Cpk value as a centered but more variable process (see Figure 5).
[Figure: two Normal curves between the LSL and USL – one centered but with wide spread, one off-center with narrow spread – both yielding the same Cpk.]
Figure 5. Two Processes With Identical Cpk.
The first ratio is Cpm, which takes the target location for the process center (halfway between the
specification limits) into account. This ratio is computed as:
Cpm = (USL − LSL) / (6√(σ² + (μ − T)²))

where T is the target location for centering: T = (USL + LSL) / 2
The other process capability ratio is Cpkm, which has the added value of increased sensitivity to
departures from the desired target for the process mean:
Cpkm = Cpk / √(1 + ((μ − T) / σ)²)
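Continuing with the same hypothetical off-center process as in the previous sketch, the target-aware ratios penalize the departure from T:

```python
import math

def cpk(usl, lsl, mu, sigma):
    # actual capability (min of the one-sided ratios), as defined earlier
    return min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))

def cpm(usl, lsl, mu, sigma):
    target = (usl + lsl) / 2                      # T: midpoint of the specification limits
    return (usl - lsl) / (6 * math.sqrt(sigma**2 + (mu - target)**2))

def cpkm(usl, lsl, mu, sigma):
    target = (usl + lsl) / 2
    return cpk(usl, lsl, mu, sigma) / math.sqrt(1 + ((mu - target) / sigma)**2)

# Same hypothetical off-center process
print(cpm(10.12, 9.88, 10.04, 0.04))   # ~0.707
print(cpkm(10.12, 9.88, 10.04, 0.04))  # ~0.471
```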
Beyond the process capability ratios, the measurement system used to evaluate the process itself is often
a subject of interest. It makes no sense to improve a system if you can’t measure the improvement – and
this gives rise to the statistical study of the measurement system capability, described in two ways, below.
Measurement System Capability
It is important to note that part of the variability observed in the process is due to variability in the
measurement system itself. This variability may be due to differences in the operators and/or the gaging
system itself. The total variability of the process may be expressed as:
σ²total = σ²product + σ²gage
Since we generally have estimates of the total system variability (that is what we are plotting in our R-charts, and what we can compute from our process samples), we can compute the true product variability if we can construct a test to determine the variability in our gaging system.
One way to test our gaging system is to set up X-bar and R-Charts for our measurement system. If we
pick our rational sample to be a single unit of the product, and if we measure each sample twice, then the
only variability we should observe in our R-Chart should be due to the variation in the gaging system.
Any out-of-control points in the R-Chart then would be due to problems in measurement. The magnitude
of the gage variation is given by the centerline of the R-Chart, and out-of-control points would indicate
that the system operator is having difficulty in using the gage.
By the same token, the out-of-control points in the x-bar chart would be interpreted as the discrimination
capability of our gaging system - the ability to detect different units of the product - since the X-bar Chart
measures variation between samples (individual units of product in this case). See Figure 6 for an
illustration of these interpretations.
[Figure: X-bar and R control charts side by side. On the X-bar Chart, out-of-control points indicate the ability to distinguish between product samples (good); on the R-Chart, out-of-control points indicate an inability of operators to use the gaging system (bad).]
Figure 6. Use of X-bar and R-Charts for Measurement System Capability Analysis.
It is a common engineering practice to compare the gage capability to the width of the specifications. If
we use the R-Chart centerline as an estimate for the standard deviation for the gage:
σgage = R̄ / d2

where d2 is the factor from the table of control chart factors (Section 3),
then we can estimate the precision-to-tolerance ratio:
P/T = 6σgage / (USL − LSL)
A value of 0.1 or smaller would indicate that the gage is adequate for measurement. This is sometimes
referred to as the "rule of ten" - which states that any measurement system should be at least ten times
more accurate than the required measurement accuracy.
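A short sketch of the precision-to-tolerance calculation (hypothetical R-chart data and specification limits; d2 = 1.128 for subgroups of size 2):

```python
import numpy as np

# Hypothetical ranges from repeated (n = 2) measurements of the same units
ranges = np.array([0.004, 0.003, 0.005, 0.002, 0.004, 0.003, 0.004, 0.005])

d2 = 1.128                          # control chart factor for subgroup size 2
sigma_gage = ranges.mean() / d2     # R-bar / d2

usl, lsl = 10.12, 9.88
pt_ratio = 6 * sigma_gage / (usl - lsl)
print(f"sigma_gage = {sigma_gage:.5f}, P/T = {pt_ratio:.3f}")
# P/T <= 0.1 would indicate an adequate gage (the "rule of ten")
```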
A further element of a gage capability study is to investigate the two components of the measurement
error (gage): (1) the repeatability – the inherent precision of the gage, and (2) the reproducibility – or the
variability of the gage under different conditions (environment, operator, time periods, etc.). Thus, the
mathematical expression of the gage capability is:
σ²gage = σ²repeatability + σ²reproducibility
To perform the study, we obtain 20 – 25 parts and take a random sample of the operators (or all of the operators if their ranks are small enough to use the entire population). As before, we have each operator take two measurements on each part, and we calculate the mean and the range for each part-operator combination.
To estimate the repeatability of the gage, we use only the range data, since it represents the variability
due to the instrument (and not the operator). We compute the mean of all the range data across all the
operators, and divide by the factor d2 = 1.128 (because each operator took a sample size of 2 readings)
to obtain the estimate of the repeatability:
σ̂repeatability = R̄ / d2
To estimate the reproducibility, or the variation due to the operators, we utilize the mean value computed
over the 20 – 25 parts that each operator measured. Since the operators all measured the same parts,
we calculate the range of operator means (maximum operator mean – minimum operator mean), and
divide by the factor d2 that corresponds to the number of operators tested, obtaining the estimated
reproducibility:
σ̂reproducibility = Rx̄ / d2

where Rx̄ is the range of the operator means.
Interpreting the variabilities would allow us to identify the component of the gaging system that
contributes the most variation. If the gaging system is not capable, then the improvement effort should be
directed at the component with the most variation – perhaps better operator training if reproducibility is
the larger problem, or a better measurement device if repeatability is the bigger issue.
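A compact sketch of the repeatability/reproducibility estimates (hypothetical simulated data with three operators; for brevity it uses 5 parts rather than the recommended 20 – 25, and the d2 values 1.128 and 1.693 come from a standard control chart factors table):

```python
import numpy as np

# Hypothetical measurements: meas[operator][part][trial], 3 operators x 5 parts x 2 trials
rng = np.random.default_rng(1)
true_parts = np.array([9.96, 10.01, 10.03, 9.99, 10.05])
meas = true_parts[None, :, None] + rng.normal(0, 0.003, size=(3, 5, 2))

# Repeatability: mean range across all part-operator cells, divided by d2 for n = 2
cell_ranges = meas.max(axis=2) - meas.min(axis=2)
sigma_repeat = cell_ranges.mean() / 1.128

# Reproducibility: range of operator means, divided by d2 for n = 3 (three operators)
op_means = meas.mean(axis=(1, 2))
sigma_repro = (op_means.max() - op_means.min()) / 1.693

sigma_gage = np.sqrt(sigma_repeat**2 + sigma_repro**2)
print(sigma_repeat, sigma_repro, sigma_gage)
```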
Tolerance Stacking
Additive processes are a common occurrence in manufacturing, with the majority of steps being
sequential and cumulative in nature. In a wafer processing plant, the final thickness is the result of
adding several coatings in a linear fashion. If each of the coating processes is normally and
independently distributed, then the final thickness dimension should be normally distributed with a
variance equal to the sum of each layer variance:
σ²final = Σ(i = 1 to n) σi²

where i is the index for each layer in the sequence.
Gage Capability Analysis Using Designed Experiments
The gage capability data in the study described above is really a full factorial designed experiment.
Another way of analyzing the data, called Analysis of Variance (or ANOVA) allows us to see which
components of the measuring system have a significant impact on our measurements. Ideally, only the
gage itself should impact our system, but it is possible that the operator(s) and/or the interaction between
the operator(s) and the parts to be measured may play a significant role. It is an important aspect of our
measurement system to address these issues.
If our individual measurement (yijk) consists of the true size of the part (μ), plus the variation components from the parts (τi), operators (βj), part-operator interactions ((τβ)ij), and random measurement errors (εijk), then we could model it as:

yijk = μ + τi + βj + (τβ)ij + εijk
where there are:
i = 1, 2, ..., a parts
j = 1, 2, ..., b operators
k = 1, 2, ..., n measurements
Since our measurement is the (linear) sum of these components (similar to our tolerance stacking
process), the variance of any individual measurement is:
V(yijk) = στ² + σβ² + στβ² + σε²
The total variability in our experiment can be represented as the sum of squares (SStotal), and is the sum of the sum of squares for each component (SSparts, SSoperators, SSpXo interaction, and SSerror), so that:

SStotal = SSparts + SSoperators + SSpXo interaction + SSerror

where, for any component x with m observations,

SSx = Σ(l = 1 to m) (xl − x̄)²
For an ANOVA, we look at the mean of the sum of squares (called mean squares or MS), where for any
component x, the mean square is:
MSx = SSx / (degrees of freedom for x)
Since to estimate any sum of squares we lose one degree of freedom in estimating the mean, and we
have a observations on the parts component:
MSparts = SSparts / (a − 1)
Similarly, there are b observations in our data on the operators, so:
MSoperators = SSoperators / (b − 1)
and there are (a)(b) observations on the part-operator interactions; and (a) (b) (n) observations left on the
random measurement error, so:
MSpXo interaction = SSpXo interaction / ((a − 1)(b − 1))

and

MSerror = SSerror / (ab(n − 1))
Since the errors in measurement should be random, we can construct an F-test to see how significant each of the remaining component variations (mean squares) is relative to the variation of the measurement error (MSerror). The form of these tests is:

Fx = MSx / MSerror
and we check to see if Fx > Fα,ν1,ν2, where ν1 and ν2 are the numerator and denominator degrees of freedom, respectively. We perform this test for each of the components, and compare them for significance at some low level of α (usually .05). A tabular format is common for an ANOVA, and it helps keep the calculations and terms in order (see Figure 7).
Source            DF             SS                  MS                                F                           Critical Region
Parts             a − 1          SSparts             SSparts/(a − 1)                   MSparts/MSerror             F.05, a − 1, ab(n − 1)
Operators         b − 1          SSoperators         SSoperators/(b − 1)               MSoperators/MSerror         F.05, b − 1, ab(n − 1)
PxO Interaction   (a − 1)(b − 1) SSpXo interaction   SSpXo interaction/((a − 1)(b − 1)) MSpXo interaction/MSerror  F.05, (a − 1)(b − 1), ab(n − 1)
Error             ab(n − 1)      SSerror             SSerror/(ab(n − 1))
Total             abn − 1

Figure 7. ANOVA Table Formulation
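A sketch of the full ANOVA computation (hypothetical simulated gage study data; scipy assumed available for the F critical values):

```python
import numpy as np
from scipy import stats

# Hypothetical gage study data: y[i, j, k] for a parts, b operators, n measurements
rng = np.random.default_rng(7)
a, b, n = 5, 3, 2
parts = rng.normal(10.0, 0.05, size=a)                 # true part sizes
y = parts[:, None, None] + rng.normal(0, 0.005, size=(a, b, n))

grand = y.mean()
ss_total = ((y - grand) ** 2).sum()
ss_parts = b * n * ((y.mean(axis=(1, 2)) - grand) ** 2).sum()
ss_ops = a * n * ((y.mean(axis=(0, 2)) - grand) ** 2).sum()
cell_means = y.mean(axis=2)
ss_cells = n * ((cell_means - grand) ** 2).sum()
ss_inter = ss_cells - ss_parts - ss_ops                # interaction SS by subtraction
ss_error = ss_total - ss_cells

ms_parts = ss_parts / (a - 1)
ms_ops = ss_ops / (b - 1)
ms_inter = ss_inter / ((a - 1) * (b - 1))
ms_error = ss_error / (a * b * (n - 1))

for name, ms, df in [("Parts", ms_parts, a - 1), ("Operators", ms_ops, b - 1),
                     ("PxO", ms_inter, (a - 1) * (b - 1))]:
    f_stat = ms / ms_error
    f_crit = stats.f.ppf(0.95, df, a * b * (n - 1))    # F.05 critical value
    print(f"{name:10s} F = {f_stat:8.2f}  critical F.05 = {f_crit:.2f}")
```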
To analyze the ANOVA information, we look to see if the F-statistic falls in the critical region using the F-Distribution Table(s) from Section 2. Analysis of the data would follow this pattern:
• If the F-statistic for the interaction term is non-significant, then we would conclude that there is no problem with our operators applying the gage to different parts.
• If the operator F-statistic is significant, then we would improve the measurement system by improving the operator training. If this F-statistic is not significant, then we have good reproducibility.
• If only the F-statistic for the parts is significant (the ideal situation), then it indicates that we can easily distinguish between the different parts, and our gaging system is capable (and we have good repeatability).
• If none of the F-statistics are significant, then we are using a gaging system that does not have enough capability.
In summary, it is important to note that the Gage Capability Analysis does not address the issue of
accuracy in a measurement. Accuracy is defined as the ability to obtain the true value of a measurement.
The SPC techniques address the issues of variability and their sources – reproducibility and repeatability.
It still takes comparison of the measurement to a physical standard to address the measurement system
accuracy.
Hands On Measurement System Analysis
Utilize the directions below on micrometer measurements to conduct a measurement system analysis.
Reading the micrometer
The micrometer we will use is able to measure a distance of 0 to 1 inch. Holding the micrometer as
depicted in the bottom diagram of Figure 8, you see vertical and horizontal divisions on the inner sleeve.
Every fourth vertical division is labeled with a digit, i.e., 0, 1, 2, …, 9, 0. Each digit represents 0.100 (or 100 thousandths) of an inch. There are four spaces between adjacent digits, each representing 0.025 (25 thousandths) of an inch; therefore, the inner micrometer sleeve divides 1 inch into 0.025-inch (25 thousandths) increments.
The thimble (the outer sleeve that rotates around the inner sleeve) has 25 lines numbered from 0 to 24.
Each line on the thimble represents 0.001 (1 thousandth) of an inch. For example, the 13th thimble line
represents 0.013 (13 thousandths) of an inch. One complete revolution of the thimble is equal to 0.025
(25 thousandths) of an inch, which is also equal to one division on the inner sleeve.
The horizontal lines on the inner sleeve are used to measure increments of 0.0001 (0.1 thousandths) of
an inch. To do this, find the horizontal line on the inner sleeve that lines up with a line on the thimble.
The digit that labels the horizontal line is the number of 0.1 thousandths of an inch to include in the
measurement.
The measured distance of an object will be the total of (0.100 x last digit showing on inner sleeve) +
(0.025 x number of vertical divisions on inner sleeve beyond inner sleeve digit) + (0.001 x the number
showing on the thimble just below the zero horizontal line) + (0.0001 x the number on the horizontal line
on the inner sleeve that lines up with a line on the thimble).
An example of a micrometer reading is shown in Figure 8. We see the edge of the thimble is two vertical divisions beyond 5 on the inner sleeve, so the reading on the sleeve is (0.100 x 5) + (0.025 x 2) = 0.550 inch. The 0 horizontal line on the sleeve is between the numbers 13 and 14 on the thimble, which means the reading on the thimble is (0.001 x 13) = 0.013. Add this to the previous value: 0.550 + 0.013 = 0.563. Now when you look at the horizontal lines on the inner sleeve, you see that the 6 horizontal line lines up with a line on the thimble. The total measurement is then 0.563 + (0.0001 x 6) = 0.5636.
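As a cross-check on the arithmetic, a tiny sketch (the function and its argument names are hypothetical):

```python
def micrometer_reading(sleeve_digit, extra_divisions, thimble_line, vernier_line):
    """Total reading in inches from the four visible micrometer scales."""
    return (0.100 * sleeve_digit         # last labeled digit showing on the inner sleeve
            + 0.025 * extra_divisions    # vertical divisions beyond that digit
            + 0.001 * thimble_line       # thimble line at the zero horizontal line
            + 0.0001 * vernier_line)     # sleeve horizontal line matching a thimble line

print(micrometer_reading(5, 2, 13, 6))   # 0.5636, matching the worked example
```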
Figure 8. Micrometer Reading Illustration.