ch5.1 (Linear Prediction).ppt

Linear Prediction
Linear Prediction (Introduction):

The object of linear prediction is to estimate the output sequence from a linear combination of input samples, past output samples, or both:

$$\hat{y}(n) = \sum_{j=0}^{q} b(j)\, x(n-j) + \sum_{i=1}^{p} a(i)\, y(n-i)$$

The factors a(i) and b(j) are called predictor coefficients.
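As a minimal numerical sketch of evaluating this predictor (the coefficients and signals below are illustrative, not taken from the slides; assumes NumPy, with samples before n = 0 taken as zero):

```python
import numpy as np

def predict(x, y, b, a):
    """y_hat(n) = sum_{j=0}^{q} b(j) x(n-j) + sum_{i=1}^{p} a(i) y(n-i).

    b is indexed 0..q, a is indexed 1..p; samples before n = 0 are taken as zero.
    """
    y_hat = np.zeros(len(y))
    for n in range(len(y)):
        for j, bj in enumerate(b):            # j = 0, ..., q
            if n - j >= 0:
                y_hat[n] += bj * x[n - j]
        for i, ai in enumerate(a, start=1):   # i = 1, ..., p
            if n - i >= 0:
                y_hat[n] += ai * y[n - i]
    return y_hat

# Illustrative use with arbitrary coefficients:
x = np.random.default_rng(0).standard_normal(16)
y = np.cumsum(x)                              # any test signal
print(predict(x, y, b=[0.5, 0.2], a=[0.9]))
```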
Linear Prediction (Introduction):

Many systems of interest to us are describable by a linear, constant-coefficient difference equation:

$$\sum_{i=0}^{p} a(i)\, y(n-i) = \sum_{j=0}^{q} b(j)\, x(n-j)$$

If Y(z)/X(z) = H(z), where H(z) is a ratio of polynomials N(z)/D(z), then

$$N(z) = \sum_{j=0}^{q} b(j)\, z^{-j} \qquad \text{and} \qquad D(z) = \sum_{i=0}^{p} a(i)\, z^{-i}$$

Thus the predictor coefficients give us immediate access to the poles and zeros of H(z).
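Since N(z) and D(z) have the predictor coefficients as their polynomial coefficients, the poles and zeros follow from a root-finder. A sketch with illustrative coefficients (assumes NumPy):

```python
import numpy as np

# Illustrative coefficients: D(z) = 1 - 1.3 z^-1 + 0.6 z^-2, N(z) = 1 + 0.5 z^-1.
a = [1.0, -1.3, 0.6]   # a(0), a(1), a(2)
b = [1.0, 0.5]         # b(0), b(1)

# Multiplying N(z) by z^q and D(z) by z^p turns them into ordinary polynomials
# in z with the same coefficient lists, so np.roots reads off zeros and poles.
zeros = np.roots(b)    # -0.5
poles = np.roots(a)    # 0.65 +/- 0.42j (inside the unit circle: stable)
print(zeros, poles)
```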
Linear Prediction (Types of System Model):

There are two important variants:

- All-pole model (in statistics, the autoregressive (AR) model): the numerator N(z) is a constant.
- All-zero model (in statistics, the moving-average (MA) model): the denominator D(z) is equal to unity.
- The mixed pole-zero model is called the autoregressive moving-average (ARMA) model.
Linear Prediction (Derivation of LP equations):

Given a zero-mean signal y(n), in the AR model:

$$\hat{y}(n) = -\sum_{i=1}^{p} a(i)\, y(n-i)$$

The error is:

$$e(n) = y(n) - \hat{y}(n) = \sum_{i=0}^{p} a(i)\, y(n-i), \qquad a(0) = 1$$

To derive the predictor we use the orthogonality principle, which states that the desired coefficients are those that make the error orthogonal to the samples y(n-1), y(n-2), ..., y(n-p).
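The orthogonality condition is just the minimization of the mean-squared error written out: with $E = \langle e^2(n) \rangle$ and $\partial e(n)/\partial a(j) = y(n-j)$,

$$\frac{\partial E}{\partial a(j)} = 2\,\big\langle e(n)\, y(n-j) \big\rangle = 0, \qquad j = 1, \ldots, p.$$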
Linear Prediction (Derivation of LP equations):

Thus we require that

$$\langle y(n-j)\, e(n) \rangle = 0 \qquad \text{for } j = 1, 2, \ldots, p$$

or

$$\Big\langle y(n-j) \sum_{i=0}^{p} a(i)\, y(n-i) \Big\rangle = 0$$

Interchanging the operations of averaging and summing, and representing ⟨·⟩ by a sum over n, we have

$$\sum_{i=0}^{p} a(i) \sum_{n} y(n-i)\, y(n-j) = 0, \qquad j = 1, \ldots, p$$

The required predictors are found by solving these equations.
Linear Prediction (Derivation of LP equations):

The orthogonality principle also states that the resulting minimum error is given by

$$E = \langle e^2(n) \rangle = \langle y(n)\, e(n) \rangle$$

or

$$\sum_{i=0}^{p} a(i)\, \langle y(n-i)\, y(n) \rangle = E$$

We can minimize the error over all time:

$$\sum_{i=0}^{p} a(i)\, r_{|i-j|} = 0, \qquad j = 1, 2, \ldots, p$$

$$\sum_{i=0}^{p} a(i)\, r_i = E$$

where

$$r_i = \sum_{n=-\infty}^{\infty} y(n)\, y(n-i)$$
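A minimal numerical sketch of solving these equations (assumes NumPy/SciPy; the signal and order below are illustrative): with a(0) = 1 fixed, the j = 1, ..., p equations form a symmetric Toeplitz system in a(1), ..., a(p), and E follows from the final equation.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lp_autocorrelation(y, p):
    """Solve sum_i a(i) r_|i-j| = 0, j = 1..p, with a(0) = 1; return (a, E)."""
    y = np.asarray(y, dtype=float)
    # Autocorrelation r_0 .. r_p over all time (zero outside the given frame).
    r = np.array([np.dot(y[:len(y)-i], y[i:]) for i in range(p + 1)])
    # With a(0) = 1 the equations read sum_{i=1}^p a(i) r_|i-j| = -r_j, j = 1..p:
    # a symmetric Toeplitz system with first column/row r_0 .. r_{p-1}.
    a_tail = solve_toeplitz((r[:p], r[:p]), -r[1:p+1])
    a = np.concatenate(([1.0], a_tail))
    E = float(np.dot(a, r))          # minimum error: E = sum_i a(i) r_i
    return a, E

# Example: fit a 2nd-order predictor to a synthetic AR(2) signal.
rng = np.random.default_rng(0)
e = rng.standard_normal(4096)
y = np.zeros_like(e)
for n in range(2, len(e)):
    y[n] = 1.3 * y[n-1] - 0.6 * y[n-2] + e[n]   # true a = [1, -1.3, 0.6]
a, E = lp_autocorrelation(y, 2)
print(a)  # close to [1, -1.3, 0.6]
```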
Linear Prediction (Applications):

Autocorrelation matching: we have a signal y(n) with known autocorrelation ryy(n). We model this with the AR system shown below:

[Block diagram: excitation e(n) drives the all-pole filter σ/A(z), producing y(n)]

$$H(z) = \frac{\sigma}{A(z)} = \frac{\sigma}{1 - \sum_{i=1}^{p} a_i z^{-i}}$$
Linear Prediction (Order of Linear Prediction):

The choice of predictor order depends on the analysis bandwidth. The rule of thumb is:

$$p = \frac{2\, BW}{1000} + c$$

where BW is the analysis bandwidth in Hz and c is a small additional constant (a worked example follows the list below).

- For a normal vocal tract, there is an average of about one formant per kilohertz of bandwidth.
- One formant requires two complex-conjugate poles.
- Hence for every formant we require two predictor coefficients, or two coefficients per kilohertz of bandwidth.
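As a worked example (the sampling rate and the value of c are assumed here, not given in the slide): for speech sampled at 8 kHz the analysis bandwidth is BW = 4000 Hz, so p = 2·4000/1000 + c = 8 + c; a small constant such as c = 2, often used to absorb glottal and lip-radiation effects, gives p = 10.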
Linear Prediction (AR Modeling of Speech Signal):

True model:

[Block diagram: a voiced impulse generator (driven by the pitch period DT) and an uncorrelated-noise generator for unvoiced sounds feed a voiced/unvoiced (V/U) switch; the selected excitation, scaled by a gain, passes as volume velocity u(n) through the glottal filter G(z), the vocal tract filter H(z), and the lip-radiation filter R(z) to produce the speech signal s(n).]
Linear Prediction (AR Modeling of Speech Signal):

Using LP analysis:

[Block diagram: the voiced impulse generator (pitch period DT) and a white-noise generator feed the V/U switch; the excitation, scaled by a gain estimate, drives a single all-pole (AR) filter H(z) that produces the speech-signal estimate s(n).]
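A minimal sketch of this simplified synthesis model (assumes NumPy/SciPy; all parameter values are illustrative): voiced frames use a pitch-period impulse train, unvoiced frames white noise, and both drive the same all-pole filter.

```python
import numpy as np
from scipy.signal import lfilter

def synthesize(a, gain, n_samples, voiced, pitch_period=80, seed=0):
    """All-pole LP synthesis: a holds a_1..a_p of A(z) = 1 - sum_i a_i z^-i."""
    if voiced:
        exc = np.zeros(n_samples)
        exc[::pitch_period] = 1.0      # impulse train at the pitch period
    else:
        exc = np.random.default_rng(seed).standard_normal(n_samples)
    denom = np.concatenate(([1.0], -np.asarray(a)))
    return lfilter([gain], denom, exc)  # excitation through H(z) = gain / A(z)

s_voiced = synthesize([1.3, -0.6], gain=0.5, n_samples=800, voiced=True)
s_unvoiced = synthesize([1.3, -0.6], gain=0.5, n_samples=800, voiced=False)
```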