Graduate Lectures and Problems in Quality Control and Engineering Statistics: Theory and Methods

To Accompany Statistical Quality Assurance Methods for Engineers by Vardeman and Jobe

Stephen B. Vardeman

V2.0: January 2001

© Stephen Vardeman 2001. Permission to copy for educational purposes granted by the author, subject to the requirement that this title page be affixed to each copy (full or partial) produced.
A Useful Probabilistic Approximation

Here we present the general “delta method” or “propagation of error” approximation that stands behind several variance approximations in these notes, as well as much of §5.4 of V&J. Suppose that a $p \times 1$ random vector
$$
X = \begin{pmatrix} X_1 \\ X_2 \\ \vdots \\ X_p \end{pmatrix}
$$
has a mean vector
$$
\mu = \begin{pmatrix} EX_1 \\ EX_2 \\ \vdots \\ EX_p \end{pmatrix}
= \begin{pmatrix} \mu_1 \\ \mu_2 \\ \vdots \\ \mu_p \end{pmatrix}
$$
and $p \times p$ variance-covariance matrix
$$
\Sigma = \begin{pmatrix}
\operatorname{Var} X_1 & \operatorname{Cov}(X_1, X_2) & \cdots & \operatorname{Cov}(X_1, X_{p-1}) & \operatorname{Cov}(X_1, X_p) \\
\operatorname{Cov}(X_1, X_2) & \operatorname{Var} X_2 & \cdots & \operatorname{Cov}(X_2, X_{p-1}) & \operatorname{Cov}(X_2, X_p) \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
\operatorname{Cov}(X_1, X_{p-1}) & \operatorname{Cov}(X_2, X_{p-1}) & \cdots & \operatorname{Var} X_{p-1} & \operatorname{Cov}(X_{p-1}, X_p) \\
\operatorname{Cov}(X_1, X_p) & \operatorname{Cov}(X_2, X_p) & \cdots & \operatorname{Cov}(X_{p-1}, X_p) & \operatorname{Var} X_p
\end{pmatrix}
$$
$$
= \begin{pmatrix}
\sigma_1^2 & \rho_{12}\sigma_1\sigma_2 & \cdots & \rho_{1,p-1}\sigma_1\sigma_{p-1} & \rho_{1p}\sigma_1\sigma_p \\
\rho_{12}\sigma_1\sigma_2 & \sigma_2^2 & \cdots & \rho_{2,p-1}\sigma_2\sigma_{p-1} & \rho_{2p}\sigma_2\sigma_p \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
\rho_{1,p-1}\sigma_1\sigma_{p-1} & \rho_{2,p-1}\sigma_2\sigma_{p-1} & \cdots & \sigma_{p-1}^2 & \rho_{p-1,p}\sigma_{p-1}\sigma_p \\
\rho_{1p}\sigma_1\sigma_p & \rho_{2p}\sigma_2\sigma_p & \cdots & \rho_{p-1,p}\sigma_{p-1}\sigma_p & \sigma_p^2
\end{pmatrix}
= \left( \rho_{ij}\sigma_i\sigma_j \right)
$$
(Recall that if $X_i$ and $X_j$ are independent, $\rho_{ij} = 0$.)
Then for a $k \times p$ matrix of constants
$$
A = (a_{ij})
$$
consider the random vector
$$
\underset{k \times 1}{Y} = \underset{k \times p}{A} \, \underset{p \times 1}{X} .
$$
It is a standard piece of probability that $Y$ has mean vector
$$
\begin{pmatrix} EY_1 \\ EY_2 \\ \vdots \\ EY_k \end{pmatrix} = A \mu
$$
and variance-covariance matrix
$$
\operatorname{Cov} Y = A \Sigma A' .
$$
(The k = 1 version of this for uncorrelated $X_i$ is essentially quoted in (5.23) and (5.24) of V&J.)
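To make the parenthetical concrete, in the k = 1 case $A$ is a single row $(a_1, a_2, \ldots, a_p)$, so that $Y = \sum_i a_i X_i$, and the two displays above become
$$
EY = \sum_{i=1}^{p} a_i \mu_i
\quad \text{and} \quad
\operatorname{Var} Y = \sum_{i=1}^{p} \sum_{j=1}^{p} a_i a_j \rho_{ij} \sigma_i \sigma_j ,
$$
which for uncorrelated $X_i$ reduces to $\operatorname{Var} Y = \sum_{i=1}^{p} a_i^2 \sigma_i^2$.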
The propagation of error method says that if instead of the relationship $Y = AX$, I concern myself with k functions $g_1, g_2, \ldots, g_k$ (each mapping $\mathbb{R}^p$ to $\mathbb{R}$) and define
$$
Y = \begin{pmatrix} g_1(X) \\ g_2(X) \\ \vdots \\ g_k(X) \end{pmatrix} ,
$$
then a multivariate Taylor's Theorem argument and the facts above provide an approximate mean vector and an approximate covariance matrix for $Y$. That is, if the functions $g_i$ are differentiable, let
$$
\underset{k \times p}{D} = \left( \left. \frac{\partial g_i}{\partial x_j} \right|_{\mu_1, \mu_2, \ldots, \mu_p} \right) .
$$
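As a small illustration (not one taken from V&J), suppose $p = 2$, $k = 1$ and the single function of interest is the product $g(x_1, x_2) = x_1 x_2$. Then
$$
D = \begin{pmatrix} \left. \dfrac{\partial g}{\partial x_1} \right|_{\mu} & \left. \dfrac{\partial g}{\partial x_2} \right|_{\mu} \end{pmatrix}
= \begin{pmatrix} \mu_2 & \mu_1 \end{pmatrix} .
$$
This example is carried through at the end of this section.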
A multivariate Taylor approximation says that for each $x_i$ near $\mu_i$,
$$
y = \begin{pmatrix} g_1(x) \\ g_2(x) \\ \vdots \\ g_k(x) \end{pmatrix}
\approx \begin{pmatrix} g_1(\mu) \\ g_2(\mu) \\ \vdots \\ g_k(\mu) \end{pmatrix} + D (x - \mu) .
$$
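Written out for a single function g (the k = 1 case), this is simply the first-order expansion
$$
g(x) \approx g(\mu) + \sum_{j=1}^{p} \left. \frac{\partial g}{\partial x_j} \right|_{\mu} (x_j - \mu_j) .
$$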
So if the variances of the $X_i$ are small (so that with high probability $X$ is near $\mu$, that is, so that the linear approximation above is usually valid) it is plausible
that $Y$ has mean vector
$$
\begin{pmatrix} EY_1 \\ EY_2 \\ \vdots \\ EY_k \end{pmatrix}
\approx \begin{pmatrix} g_1(\mu) \\ g_2(\mu) \\ \vdots \\ g_k(\mu) \end{pmatrix}
$$
and variance-covariance matrix
$$
\operatorname{Cov} Y \approx D \Sigma D' .
$$
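In the k = 1 case this last display is the usual scalar propagation of error formula
$$
\operatorname{Var} g(X) \approx \sum_{i=1}^{p} \sum_{j=1}^{p} \left. \frac{\partial g}{\partial x_i} \right|_{\mu} \left. \frac{\partial g}{\partial x_j} \right|_{\mu} \rho_{ij} \sigma_i \sigma_j ,
$$
which for uncorrelated $X_i$ reduces to $\operatorname{Var} g(X) \approx \sum_{i} \left( \left. \partial g / \partial x_i \right|_{\mu} \right)^2 \sigma_i^2$. For the illustrative product function $g(x_1, x_2) = x_1 x_2$ used above, the approximations are $E(X_1 X_2) \approx \mu_1 \mu_2$ and
$$
\operatorname{Var}(X_1 X_2) \approx \mu_2^2 \sigma_1^2 + \mu_1^2 \sigma_2^2 + 2 \mu_1 \mu_2 \rho_{12} \sigma_1 \sigma_2 .
$$
The short simulation below is not part of the original notes; it assumes Python with numpy is available and uses arbitrary illustrative values for $\mu_1, \mu_2, \sigma_1, \sigma_2, \rho_{12}$. It is one way to check that $D \Sigma D'$ is close to the actual variance of $g(X)$ when the $\sigma_i$ are small relative to the $\mu_i$.

```python
import numpy as np

# Illustrative (hypothetical) parameter values; standard deviations are small
# relative to the means, so the linearization should be accurate.
mu = np.array([10.0, 5.0])
sigma = np.array([0.2, 0.1])
rho12 = 0.3

# Variance-covariance matrix Sigma = (rho_ij * sigma_i * sigma_j)
Sigma = np.array([[sigma[0] ** 2, rho12 * sigma[0] * sigma[1]],
                  [rho12 * sigma[0] * sigma[1], sigma[1] ** 2]])

# Propagation-of-error approximation for g(x1, x2) = x1 * x2:
# D is the 1 x p matrix of partial derivatives of g evaluated at mu.
D = np.array([[mu[1], mu[0]]])
approx_var = (D @ Sigma @ D.T).item()

# Monte Carlo check, taking X multivariate normal with this mean and covariance.
rng = np.random.default_rng(0)
X = rng.multivariate_normal(mu, Sigma, size=200_000)
Y = X[:, 0] * X[:, 1]

print("delta-method approximation D Sigma D':", approx_var)
print("simulated Var(X1 * X2):", Y.var())
```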