Statistical Process Control

Statistical Process Control (SPC)
(Continuation)
Components of Shewhart's model in question:
(Diagram: the closed SPC loop)
Process → Product → Observation (sampling) → Data evaluation (data preprocessing) → Diagnostics (failure determination) → Decision-making (action choice for correction) → Action execution → back to the Process
Previous steps:
• Step 1: Process observation (sampling of selected feature(s))
• Step 2: Data evaluation (data preprocessing and analysis via X, R diagrams)
Next step:
• Step 3: Diagnostics (identification of failure origins)
Diagnostics – basic approaches and methods
• Typically applies simple but efficient graphical methods
• Main purpose: to assist in the search for failure origins, usually by combining the following methods:
(1) Scatter diagrams
(2) Pareto diagrams
(3) Cause-and-effect diagrams
1. Scatter diagrams
Support the identification of dependencies between the parameters (2) that characterize quality or efficiency at the process output and the factors (1) that influence that output.
Here:
(1) stands for a process input parameter (material properties, manpower, internal process parameters, etc.)
(2) denotes a process output (quality level, process throughput, process cost per product unit, etc.)
• Scatter diagrams provide quantitative measures of input/output dependencies
Ex. 1: The aspects of x and y correlation:
• Weak or strong? (= ratio between the axes of the point-cloud hull)
• Direction? (= direction of the hull's main axis)
(Figure: three scatter plots of y vs. x)
• Positive correlation: raising x causes an increase of y
• Negative correlation: raising x causes a drop of y
• Uncorrelated: no significant dependency between x and y
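To put a number on the "weak or strong" and "direction" aspects above, the Pearson correlation coefficient is a standard choice. The following is a minimal sketch with made-up data; the pearson helper and the x, y values are illustrative, not from the lecture:

import math

def pearson(x, y):
    # Pearson correlation: covariance of x and y divided by the
    # product of their standard deviations; result lies in [-1, 1].
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

x = [1, 2, 3, 4, 5, 6]        # process input (e.g. a tool parameter), made up
y = [82, 79, 75, 71, 68, 64]  # process output (e.g. a quality level), made up
print(pearson(x, y))
# Close to +1 => strong positive correlation, close to -1 => strong
# negative correlation (as here), close to 0 => uncorrelated.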
Ex. 2: (a negative case)
1. Assume X, R diagrams corresponding to the surface smoothness of a product (a quality feature).
2. Also assume a situation without SPC, so that the X values (the smoothness) may run out of the admissible interval (there is no force keeping X within the given bounds), while the value R (range) can still satisfy the requirements (irrespective of what happens to X, as no SPC loop is closed).
? What may cause this sort of situation?
Explanation: Presume the surface smoothness depends on the status of a particular tool (e.g. its sharpness) used for the production, so the situation might look like:
(Figure: scatter plot of surface smoothness, µm (60–90), vs. tool sharpness, with two separate clusters for “good sharpness” and “poor sharpness”)
• The range of smoothness R within “poor sharpness” or within “good sharpness” is relatively low (the requirements on R are satisfied)
• The smoothness value X is large for both possible states of the tool and thus not compliant with the requirements (deviated mean of X)
Conclusion: It is always necessary to examine the dependency of the observed parameter on the overall system status.
(System status ~ the combination of other input parameters that can influence the system behavior)
□
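A short numeric sketch of this situation, with made-up smoothness values: within each tool state the range R stays low, while the mean of X shifts with the state, so X can leave its admissible interval even though R looks fine.

groups = {
    # hypothetical smoothness measurements (µm), grouped by tool state
    "good sharpness": [61.0, 62.5, 61.8, 62.1],
    "poor sharpness": [86.0, 87.2, 86.5, 87.0],
}
for state, xs in groups.items():
    mean_x = sum(xs) / len(xs)
    r = max(xs) - min(xs)
    print(f"{state}: mean(X) = {mean_x:.1f}, R = {r:.1f}")
# Both ranges stay low (R <= 1.5, requirement on R satisfied), while
# mean(X) jumps between roughly 62 and 87, so X runs out of its
# admissible interval (deviated mean of X).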
2. Pareto diagrams
• Support the selection of events that occur more often than others → these are likely to have more influence on the system behavior
• Help to identify priorities for problem-solving: a targeted selection of the problems with the highest occurrence
• Based on the so-called Pareto principle: if a certain number of occurrences of a particular event originates from multiple sources, it is highly probable that the majority of the occurrences comes from a relatively small number of these sources (possibly even from a single one)
• Identifying these sources of failures provides an efficient way of extinguishing them:
1. Localize the process causing the highest-rate failure (#1)
2. Extinguish failure #1, which also brings improvement with respect to the occurrence of the other failure types
3. Localize the next failure (by its rate) and proceed as in the previous case, etc.
Ex.: Failure rate and application of the Pareto principle
(Figure: bar chart of the number of occurrences per failure type, ordered from Failure #1 down to Failure #3)
□
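A minimal sketch of the procedure above, with hypothetical failure counts: the failure types are sorted by occurrence and their cumulative share is reported, so the highest-rate failure is extinguished first.

# Hypothetical failure counts, not from the lecture.
failures = {"failure #1": 120, "failure #2": 45, "failure #3": 20, "failure #4": 15}

total = sum(failures.values())
cumulative = 0
# Sort failure types by occurrence, most frequent first (Pareto ordering).
for name, count in sorted(failures.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{name}: {count} occurrences, cumulative {100 * cumulative / total:.0f} %")
# The first one or two failure types typically cover most occurrences,
# so they are the priorities for problem-solving.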
3. Cause-and-effect diagrams
• Whereas Pareto diagrams help to indicate priorities for problem-solving, cause-and-effect diagrams support the search for the origins of failures
• Give a basic framework for a systematic search for the causes of various effects (mainly failures)
• Similar to state-space search: consists in a systematic (graphical) buildup of a path to the origin of the problem
• Backward approach, building the graph from effect → cause
Ex.:
Process: Vegetable storage (potatoes)
Problem identified: Stored product decay
(Diagram: cause-and-effect (fishbone) diagram for the effect “potato decay”, with four main cause branches)
• Influence of workers: low-expertise manipulation; low payment; working conditions; insufficient motivation
• Storage method: outdoor storage; natural (cellar); air-conditioned indoor storage
• Manipulation technology: improper technology
• Potato sort: early sort; decay-contaminated; low resistance; other incompatibility
□
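One possible machine-readable form of the diagram above is a nested mapping from the effect to its main cause branches and their contributing causes. The sketch below encodes the potato-decay example and traverses it backward (effect → cause); the data structure is an illustrative choice, not part of the lecture.

# Effect -> main cause branches -> contributing causes.
ishikawa = {
    "potato decay": {
        "influence of workers": ["low-expertise manipulation", "low payment",
                                 "working conditions", "insufficient motivation"],
        "storage method": ["outdoor storage", "natural (cellar)",
                           "air-conditioned indoor storage"],
        "manipulation technology": ["improper technology"],
        "potato sort": ["early sort", "decay-contaminated",
                        "low resistance", "other incompatibility"],
    }
}

# Backward traversal: list every candidate root cause of the effect.
for effect, branches in ishikawa.items():
    for branch, causes in branches.items():
        for cause in causes:
            print(f"{effect} <- {branch} <- {cause}")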
The process of sampling
• The standard approach samples so as to satisfy Shannon's (sampling) theorem (not very efficient in this context)
• An often used method called “rational sampling” takes into account the following features:
o Sample size
o Sampling rate (frequency)
o Sample gathering method
• Examples of improper (wrong) sampling in real production processes:
o Too dense or too sparse a sampling rate
o Periodic sampling, or sample gathering at extraordinary time points
o Sampling from multiple sources (sample stratification & sample mixing)
• The choice of a correct sampling method is the key issue for proper operation of the Shewhart model of SPC.
What is considered correct sampling of a particular process?
The concept of rational sampling: a rational subset of the obtained measurements (a sample) is a set of measurements whose variance originates purely from random variations in a system that is in a balanced state (~ no extraordinary events occur, e.g. transient phenomena, technology crashes, etc.)
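A small sketch of how such a balanced-state check might look in practice. The 2× threshold rule and the numbers are assumptions for illustration only, not part of the lecture: samples whose spread clearly exceeds the typical spread are suspected of containing more than pure random variation.

import statistics

samples = [
    [10.1, 10.0, 9.9, 10.2],   # balanced state: only random variation
    [10.0, 10.1, 9.8, 10.1],   # balanced state: only random variation
    [10.0, 12.5, 9.9, 10.2],   # a transient event contaminates this sample
]

stds = [statistics.stdev(s) for s in samples]
typical = statistics.median(stds)
for i, s in enumerate(stds):
    # Assumed heuristic: flag a sample whose spread is more than twice typical.
    status = "suspect" if s > 2 * typical else "rational"
    print(f"sample {i}: stdev = {s:.2f} -> {status}")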
General criteria for sample choice
Basic motivations:
1. To gain the maximum of information within a single sample (derived e.g. from the sample scatter or the sample range)
2. To maximize the differences between subsequent samples in case an extraordinary event occurred in the meantime (providing an opportunity to detect this event)
(Figure: a series of samples showing standard within-sample variation, with an extraordinary event occurring in between two samples)
The basic rules for rational sample selection are:
1. The sample has to represent only the standard variance in the system:
a. This invokes the requirement for a “small” sample (more sensitive to fast changes of the observed features in the system),
vs.
b. A large sample also includes extraordinary effects, e.g. a mean value shift (a large sample lowers the sensitivity of the SPC model to fast or extraordinary changes).
Page 8/Lect. 11
2. The samples should guarantee a normal distribution of the observed feature's mean value:
a. As the buildup of an X-diagram is based on the normal distribution of mean(X), a larger sample provides a better approximation of the distribution of mean(X).
b. A recommended sample size ≥ 4 is typically satisfactory for most practical cases (see the sketch after this list).
3. The samples should be capable of detecting “extraordinary effects”: the larger the sample → the better the detection of an extraordinary effect (e.g. a mean value drift).
4. The samples should be small enough:
a. To allow the sample gathering at all
b. To satisfy economic requirements (sample price)
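A minimal simulation illustrating rule 2: even when the observed feature itself is far from normal (here uniform), the means of samples of size n = 4 are already approximately normally distributed, which is what the X-diagram relies on. The distribution and sample counts are illustrative assumptions.

import random
import statistics

random.seed(1)
n = 4  # recommended minimum sample size from rule 2
# Draw 10,000 samples of size n from a uniform (non-normal) feature
# and record each sample mean.
means = [statistics.mean(random.uniform(0.0, 1.0) for _ in range(n))
         for _ in range(10_000)]

print(f"mean of sample means:  {statistics.mean(means):.3f}")   # ~0.5
print(f"stdev of sample means: {statistics.stdev(means):.3f}")  # ~0.289/sqrt(4) ~ 0.144
# The histogram of `means` would already look close to a bell curve.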
Sample subset choice
o How to sample at all (under Shannon's theorem and its limits)
o Optimal sampling selection (multiple possibilities):
1. Periodic sample gathering at a certain time point or within a very short time interval (all the measurements forming the sample)
2. Random choice of the time instant for the sample gathering
Consequences:
1. Minimized within-sample variance; a good possibility to detect an extraordinary effect in between the samples
2. Minimized influence of the process's random events (variances), e.g. drifts in mean value, tool swapping and tool lifetime
Ex.: Sampling rate choice: 1 hour ± 15 minutes (a sketch follows below)
□
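A sketch of the “1 hour ± 15 minutes” schedule from the example: a nominal hourly period with a random offset, so the sampling never locks onto a periodic event in the process. The concrete loop is an illustrative assumption.

import random

random.seed(0)
t = 0.0  # minutes since the start of the shift
for _ in range(5):
    # Nominal 60-minute period, randomly jittered by up to +/- 15 minutes.
    t += 60 + random.uniform(-15, 15)
    print(f"take sample at t = {t:.0f} min")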
Sampling rate (frequency)
o Frequent mistake (!) – undersampling causes the X, R diagrams to lose their meaning (this type of principal mistake is hard to identify)
Rules for sampling rate choice:
1. Sampling rate vs. process stability – a procedure to set up an optimal sampling rate and ensure SPC stability.
Initial situation: a process without previous SPC (open loop), or a process that is unstable (oscillations or chaotic behavior):
o Begin with a high sampling rate
o Apply the closed SPC loop, followed by adjustment of the system's behavior (bring the system to stable performance)
o Decrease the sampling rate while SPC is operating and observe the system's and the SPC's responses.
2. Occurrence of particular event frequencies in the process – take into account the possible need to observe occurrences of such events (see the sketch after this list).
o Event ~ new material delivery, shift change, technology adjustment, startup, etc.
o Has to be compliant with the constraints set by Shannon's theorem
3. Costs of sampling
o Cost of samples (proportional to the number of samples)
vs.
o Loss from process failures (or insufficient quality); sparse sampling causes a loss of control
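A sketch of the Shannon constraint from rule 2, with hypothetical event periods: the sampling rate must exceed twice the frequency of the fastest event that should remain observable in the X, R diagrams.

# Hypothetical periods (in hours) of events that should stay observable.
event_periods_h = {"shift change": 8.0, "material delivery": 4.0,
                   "technology adjustment": 2.0}

fastest = min(event_periods_h.values())  # shortest event period, hours
min_rate = 2.0 / fastest                 # Shannon: > 2 samples per period
print(f"required sampling rate > {min_rate:.1f} samples/hour "
      f"(i.e. at most every {60 / min_rate:.0f} minutes)")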
Accumulated vs. distributed sampling
The two approaches behave differently when drifts in the observed feature level appear over a short or a long time interval. (The interval length is defined relative to the interval over which a sample is taken.)
o For a long time interval of the level variation → accumulated sampling is advantageous
o For a short time interval of the level variation → better to apply the distributed sampling approach
Ex. 1:
Equidistant sampling vs. mean variation → good detection of slow
variations:
(Figure: process mean with a slow level variation over time, sampled at equidistant time points (1)–(5))
□
Ex. 2:
Random sampling vs. mean variation → good detection of rapid
variations:
(Figure: process mean with a rapid level variation over time, sampled at randomly distributed time points (1)–(5))
□
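An assumed simulation contrasting the two examples: a rapid periodic variation whose period coincides with the sampling period is invisible to equidistant sampling (every sample hits the same phase), while randomly placed samples expose it. The sinusoidal mean model and all numbers are illustrative.

import math
import random
import statistics

random.seed(2)
period = 60.0  # minutes; the variation period equals the sampling period

def mean_at(t):
    # Process mean with a rapid periodic level variation (illustrative model).
    return 10.0 + 0.5 * math.sin(2 * math.pi * t / period)

# Equidistant sampling every 60 min always hits the same phase of the cycle.
equidistant = [mean_at(60.0 * k) for k in range(20)]
# Randomly jittered sampling (60 min +/- 15 min) hits varying phases.
randomized = [mean_at(60.0 * k + random.uniform(-15, 15)) for k in range(20)]

print(f"equidistant: stdev = {statistics.stdev(equidistant):.3f}")  # ~0, variation missed
print(f"randomized:  stdev = {statistics.stdev(randomized):.3f}")   # clearly > 0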
Some other negative sampling effects
o Some sampling setups combine the outputs of parallel processes (as these are assumed to be identical)
This leads to the problems of:
o Sample stratification (the setup of the situation is known)
o Sample mixing (the setup of the situation is hidden)
Sample stratification:
o Grouping of sample values into level layers (strata); note the resulting shaping of the X, R diagrams
o The origin of each sample measurement is known and can be tracked
Situation:
(Figure: four parallel machines, Machine 1–Machine 4, each contributing one measurement to the stratified sample (subset selection))
The obtained sample offers the possibility of detecting a process mean deviation (in processes 1–4) via an increase of the expected sample range.
Nevertheless, the stratified sample observes the following setup, with all its consequences:
(Figure: the distribution of the sample mean next to each machine's individual distribution (1)–(4); the stratified sample observes the combined range across all four machines)
Moreover, sample stratification causes oscillation-like patterns observable in the X, R diagrams:
(Figure: X and R diagrams under stratification: mean(X) oscillates between the UCL and LCL around avg(X), while range(R) stays near mean(R))
□
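An assumed simulation of the stratified-sample effect: drawing one measurement from each of four machines whose means are shifted inflates the expected sample range R and keeps it almost constant, which also drives the oscillation-like picture in the X, R diagrams. The per-machine means and noise level are hypothetical.

import random
import statistics

random.seed(3)
machine_means = [9.7, 9.9, 10.1, 10.3]  # hypothetical shifted per-machine means

ranges = []
for _ in range(10):
    # Stratified sample: exactly one measurement from each machine.
    sample = [random.gauss(m, 0.05) for m in machine_means]
    ranges.append(max(sample) - min(sample))

print(f"mean R     = {statistics.mean(ranges):.2f}")   # ~0.6-0.7, dominated by the mean shifts
print(f"stdev of R = {statistics.stdev(ranges):.2f}")  # small: R barely varies between samples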
Sample mixing:
o Sampling of measurements after a previous aggregation
o Similar consequences as for sample stratification
o Sample mixing is a hidden process (not apparent when it occurs); the main difference from stratification: it is not possible to track the origin of particular measurements
Situation:
(Figure: four parallel machines, Machine 1–Machine 4, feeding a common measurement buffer (mixing), from which the mixed sample is drawn (subset selection))
□