Massachusetts Institute of Technology


Department of Electrical Engineering and Computer Science

6.341: Discrete-Time Signal Processing

OpenCourseWare 2006

Lecture 13

The Levinson-Durbin Recursion

In the previous lecture we looked at all-pole signal modeling, linear prediction, and the stochastic inverse-whitening problem.

Each scenario was related in concept to the problem of processing a signal $s[n]$ by

$$s[n] \;\longrightarrow\; \boxed{\;\frac{1}{A}\Bigl(1 - \sum_{k=1}^{p} a_k z^{-k}\Bigr)\;} \;\longrightarrow\; g[n],$$

such that $e[n] = g[n] - \phi[n]$ was minimized in some sense.

In the deterministic case, we chose to minimize $\mathcal{E} = \sum_n e^2[n]$, and in the stochastic case, we chose to minimize $\mathcal{E} = E\left\{e^2[n]\right\}$.

The causal solution for the parameters $a_k$ was found in each case to be of the form

$$\begin{bmatrix}
\phi_s[0] & \phi_s[1] & \cdots & \phi_s[p-1] \\
\phi_s[1] & \phi_s[0] & \cdots & \phi_s[p-2] \\
\vdots & \vdots & \ddots & \vdots \\
\phi_s[p-1] & \phi_s[p-2] & \cdots & \phi_s[0]
\end{bmatrix}
\begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_p \end{bmatrix}
=
\begin{bmatrix} \phi_s[1] \\ \phi_s[2] \\ \vdots \\ \phi_s[p] \end{bmatrix},
\tag{1}$$

or equivalently,

$$\mathbf{T}_p\,\boldsymbol{\varepsilon}_p = \mathbf{r}_p. \tag{2}$$

In this lecture, we address the issue of how exactly to solve for the parameters $a_k$ (or, equivalently, for the vector $\boldsymbol{\varepsilon}_p$). The fact that $\boldsymbol{\varepsilon}_p$ can be found simply by calculating $\boldsymbol{\varepsilon}_p = \mathbf{T}_p^{-1}\mathbf{r}_p$ is worth a mention, but the focus of this lecture is on an elegant technique for finding $\boldsymbol{\varepsilon}_p$ which is generally less computationally expensive than taking a matrix inversion and which is also recursive in the filter order $p$.

This method for finding $\boldsymbol{\varepsilon}_p$ is called the Levinson-Durbin recursion.

In addition to being recursive in $p$, we'll see that certain intermediate results from the recursion give coefficients for a lattice implementation of an LTI filter $h[n]$ which implements our signal model as

$$H(z) = \hat{S}(z) = \frac{A}{1 - \sum_{k=1}^{p} a_k z^{-k}}.$$

We'll also see that the Levinson-Durbin recursion prescribes a similar implementation for the inverse filter $1/H(z)$.
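Before turning to the recursion, it may help to see the direct approach in code. The following is a minimal Python/NumPy sketch of solving Equation 2 by forming $\mathbf{T}_p$ and $\mathbf{r}_p$ from the autocorrelation and calling a general linear solver; the function name and the sample autocorrelation values are illustrative assumptions, not taken from the notes.

```python
import numpy as np

def solve_normal_equations_direct(phi, p):
    """Directly solve T_p eps_p = r_p (Equation 2).

    phi : sequence with phi[m] = phi_s[m] for m = 0, ..., p (autocorrelation values)
    Returns eps_p = [a_1, ..., a_p].
    """
    phi = np.asarray(phi, dtype=float)
    # (T_p)_{ij} = phi_s[i - j]; phi_s is even, so use |i - j|.
    T_p = np.array([[phi[abs(i - j)] for j in range(p)] for i in range(p)])
    r_p = phi[1:p + 1]                     # [phi_s[1], ..., phi_s[p]]
    return np.linalg.solve(T_p, r_p)       # general solver: O(p^3) operations

# Example with made-up autocorrelation values:
eps_3 = solve_normal_equations_direct([1.0, 0.5, 0.3, 0.1], p=3)
```

A general solver ignores the Toeplitz structure and costs on the order of $p^3$ operations; the recursion developed below exploits that structure to reach the same answer in on the order of $p^2$ operations.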


Let’s first prove the Levinson-Durbin recursion.

The basic idea of the recursion is to find the solution $\boldsymbol{\varepsilon}_{p+1}$ for the $(p+1)$st-order case from the solution $\boldsymbol{\varepsilon}_p$ for the $p$th-order case. We'll see that this hinges on the fact that $\mathbf{T}_p$ is a $p \times p$ Toeplitz matrix, that is, that $\mathbf{T}_p$ is symmetric and all entries along a given diagonal are equal. This property of $\mathbf{T}_p$ is a result of its definition:

$$(\mathbf{T}_p)_{ij} = \phi_s[i - j].$$

To begin the proof, consider what happens in the p = 1 case.

For $p = 1$, Equation 2 becomes

$$\mathbf{T}_1\,\boldsymbol{\varepsilon}_1 = \mathbf{r}_1, \quad\text{or}\quad \phi_s[0]\,a_1 = \phi_s[1],$$

giving the solution

$$a_1 = \frac{\phi_s[1]}{\phi_s[0]}. \tag{3}$$

The recursion will now be developed by evaluating Equation 2 for order $p + 1$:

$$\mathbf{T}_{p+1}\,\boldsymbol{\varepsilon}_{p+1} = \mathbf{r}_{p+1}.$$

Because we will be dealing with Equation 2 for multiple orders, we will adopt the notation $a_k^{(\ell)}$ to refer to the parameter $a_k$ for the $\ell$th-order model, and $\boldsymbol{\varepsilon}_\ell$ for the corresponding solution vector. (Note that $a_k^{(\ell)}$ is generally not equal to $a_k^{(m)}$.) Equation 2 for order $p + 1$ is therefore represented in matrix form as

$$\begin{bmatrix}
\phi_s[0] & \phi_s[1] & \cdots & \phi_s[p-1] & \phi_s[p] \\
\phi_s[1] & \phi_s[0] & \cdots & \phi_s[p-2] & \phi_s[p-1] \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
\phi_s[p] & \phi_s[p-1] & \cdots & \phi_s[1] & \phi_s[0]
\end{bmatrix}
\begin{bmatrix} a_1^{(p+1)} \\ a_2^{(p+1)} \\ \vdots \\ a_{p+1}^{(p+1)} \end{bmatrix}
=
\begin{bmatrix} \phi_s[1] \\ \phi_s[2] \\ \vdots \\ \phi_s[p+1] \end{bmatrix}.$$

The matrix $\mathbf{T}_{p+1}$ relates to $\mathbf{T}_p$ as

$$\mathbf{T}_{p+1} =
\begin{bmatrix}
\mathbf{T}_p & \begin{matrix} \phi_s[p] \\ \phi_s[p-1] \\ \vdots \\ \phi_s[1] \end{matrix} \\
\begin{matrix} \phi_s[p] & \phi_s[p-1] & \cdots & \phi_s[1] \end{matrix} & \phi_s[0]
\end{bmatrix},$$

and the vector $\mathbf{r}_{p+1}$ relates to $\mathbf{r}_p$ as

$$\mathbf{r}_{p+1} = \begin{bmatrix} \mathbf{r}_p \\ \phi_s[p+1] \end{bmatrix}.$$


Defining a new vector $\boldsymbol{\gamma}_p$ as $\mathbf{r}_p$ upside-down, or

$$\boldsymbol{\gamma}_p = \begin{bmatrix} \phi_s[p] \\ \phi_s[p-1] \\ \vdots \\ \phi_s[1] \end{bmatrix},$$

gives a more compact representation of $\mathbf{T}_{p+1}$ in terms of $\mathbf{T}_p$:

$$\mathbf{T}_{p+1} = \begin{bmatrix} \mathbf{T}_p & \boldsymbol{\gamma}_p \\ (\boldsymbol{\gamma}_p)^T & \phi_s[0] \end{bmatrix}.$$

Let's now represent the $(p+1)$st-order parameter vector $\boldsymbol{\varepsilon}_{p+1}$ in terms of the $p$th-order vector $\boldsymbol{\varepsilon}_p$, a correction term $k_{p+1}$, and a correction vector $\boldsymbol{\delta}_p$ as

$$\boldsymbol{\varepsilon}_{p+1} = \begin{bmatrix} \boldsymbol{\varepsilon}_p \\ 0 \end{bmatrix} + \begin{bmatrix} \boldsymbol{\delta}_p \\ k_{p+1} \end{bmatrix}. \tag{4}$$

Equation 2 for order $p + 1$ is therefore represented in terms of the $p$th-order equation as

$$\begin{bmatrix} \mathbf{T}_p & \boldsymbol{\gamma}_p \\ (\boldsymbol{\gamma}_p)^T & \phi_s[0] \end{bmatrix}
\left( \begin{bmatrix} \boldsymbol{\varepsilon}_p \\ 0 \end{bmatrix} + \begin{bmatrix} \boldsymbol{\delta}_p \\ k_{p+1} \end{bmatrix} \right)
= \begin{bmatrix} \mathbf{r}_p \\ \phi_s[p+1] \end{bmatrix},$$

which, sorting the equations out, implies

$$\mathbf{T}_p\,\boldsymbol{\varepsilon}_p + \mathbf{T}_p\,\boldsymbol{\delta}_p + \boldsymbol{\gamma}_p\,k_{p+1} = \mathbf{r}_p \tag{5}$$

and

$$(\boldsymbol{\gamma}_p)^T\boldsymbol{\varepsilon}_p + (\boldsymbol{\gamma}_p)^T\boldsymbol{\delta}_p + \phi_s[0]\,k_{p+1} = \phi_s[p+1]. \tag{6}$$

Substituting Equation 2 for order $p$ into Equation 5 then gives

$$\mathbf{T}_p\,\boldsymbol{\delta}_p\,k_{p+1}^{-1} = -\boldsymbol{\gamma}_p. \tag{7}$$

Note now that since $\mathbf{T}_p$ is Toeplitz, the matrix realization of the causal Yule-Walker equations for order $p$ (Equation 1) also implies

$$\begin{bmatrix}
\phi_s[0] & \phi_s[1] & \cdots & \phi_s[p-1] \\
\phi_s[1] & \phi_s[0] & \cdots & \phi_s[p-2] \\
\vdots & \vdots & \ddots & \vdots \\
\phi_s[p-1] & \phi_s[p-2] & \cdots & \phi_s[0]
\end{bmatrix}
\begin{bmatrix} a_p^{(p)} \\ a_{p-1}^{(p)} \\ \vdots \\ a_1^{(p)} \end{bmatrix}
=
\begin{bmatrix} \phi_s[p] \\ \phi_s[p-1] \\ \vdots \\ \phi_s[1] \end{bmatrix},$$

or

$$\mathbf{T}_p\,\boldsymbol{\rho}_p = \boldsymbol{\gamma}_p,$$


where $\boldsymbol{\rho}_p$ is $\boldsymbol{\varepsilon}_p$ upside-down. Substituting this into Equation 7 then implies

$$\boldsymbol{\delta}_p = -k_{p+1}\,\boldsymbol{\rho}_p. \tag{8}$$

We'll now use this and Equation 6 to find $k_{p+1}$, and thus have all the pieces needed to complete the recursion.

Pre-multiplying both sides of Equation 8 by $(\boldsymbol{\gamma}_p)^T$ gives the scalar relation

$$(\boldsymbol{\gamma}_p)^T\boldsymbol{\delta}_p = -(\boldsymbol{\gamma}_p)^T\boldsymbol{\rho}_p\,k_{p+1}.$$

Note, however, that the scalar relation $(\boldsymbol{\gamma}_p)^T\boldsymbol{\rho}_p = (\mathbf{r}_p)^T\boldsymbol{\varepsilon}_p$ also holds, since $\mathbf{r}_p$ is a re-ordered version of $\boldsymbol{\gamma}_p$ in the same way that $\boldsymbol{\varepsilon}_p$ is a re-ordered version of $\boldsymbol{\rho}_p$.

This allows us to re-write the above equation as

$$(\boldsymbol{\gamma}_p)^T\boldsymbol{\delta}_p = -(\mathbf{r}_p)^T\boldsymbol{\varepsilon}_p\,k_{p+1},$$

which implies from Equation 6

$$(\boldsymbol{\gamma}_p)^T\boldsymbol{\varepsilon}_p - (\mathbf{r}_p)^T\boldsymbol{\varepsilon}_p\,k_{p+1} + \phi_s[0]\,k_{p+1} = \phi_s[p+1],$$

or

$$k_{p+1} = \frac{\phi_s[p+1] - (\boldsymbol{\gamma}_p)^T\boldsymbol{\varepsilon}_p}{\phi_s[0] - (\mathbf{r}_p)^T\boldsymbol{\varepsilon}_p}. \tag{9}$$

Equations 4, 8, and 9 therefore define a recursion formula for finding the $(p+1)$st-order model parameters $a_k^{(p+1)}$ in terms of the previously-obtained $p$th-order solution. Equation 3 gives a closed-form solution for $a_1^{(1)}$, which provides a starting point for the recursion. Since this ends a considerable algebraic detour, we'll now summarize what was concluded.


The Levinson-Durbin recursion

Problem statement: for order $p$, solve

$$\sum_{k=1}^{p} a_k^{(p)}\,\phi_s[i-k] = \phi_s[i], \qquad i = 1, \ldots, p.$$

Definitions:

$$\boldsymbol{\varepsilon}_p = \left[a_1^{(p)},\, a_2^{(p)},\, \ldots,\, a_p^{(p)}\right]^T$$

$$\boldsymbol{\rho}_p = \left[a_p^{(p)},\, a_{p-1}^{(p)},\, \ldots,\, a_1^{(p)}\right]^T$$

$$\mathbf{r}_p = \left[\phi_s[1],\, \phi_s[2],\, \ldots,\, \phi_s[p]\right]^T$$

$$\boldsymbol{\gamma}_p = \left[\phi_s[p],\, \phi_s[p-1],\, \ldots,\, \phi_s[1]\right]^T$$

Solution for $p = 1$:

$$a_1^{(1)} = \frac{\phi_s[1]}{\phi_s[0]}$$

Recursion:

$$k_{p+1} = \frac{\phi_s[p+1] - (\boldsymbol{\gamma}_p)^T\boldsymbol{\varepsilon}_p}{\phi_s[0] - (\mathbf{r}_p)^T\boldsymbol{\varepsilon}_p}$$

$$\boldsymbol{\delta}_p = -k_{p+1}\,\boldsymbol{\rho}_p$$

$$\boldsymbol{\varepsilon}_{p+1} = \begin{bmatrix} \boldsymbol{\varepsilon}_p \\ 0 \end{bmatrix} + \begin{bmatrix} \boldsymbol{\delta}_p \\ k_{p+1} \end{bmatrix}
= \begin{bmatrix} a_1^{(p)} \\ a_2^{(p)} \\ \vdots \\ a_p^{(p)} \\ 0 \end{bmatrix}
- k_{p+1} \begin{bmatrix} a_p^{(p)} \\ a_{p-1}^{(p)} \\ \vdots \\ a_1^{(p)} \\ -1 \end{bmatrix}$$
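To make the summary concrete, here is a short Python sketch that follows the boxed steps literally; the vector names mirror $\boldsymbol{\varepsilon}_p$, $\boldsymbol{\rho}_p$, $\mathbf{r}_p$, and $\boldsymbol{\gamma}_p$, while the function name and the toy autocorrelation values are illustrative assumptions rather than part of the notes.

```python
import numpy as np

def levinson_durbin(phi, p):
    """Levinson-Durbin recursion, following the summary above step by step.

    phi : autocorrelation values phi_s[0], ..., phi_s[p]
    Returns (eps, k): eps = [a_1^(p), ..., a_p^(p)], k = [k_1, ..., k_p].
    """
    phi = np.asarray(phi, dtype=float)
    eps = np.array([phi[1] / phi[0]])             # order-1 solution (Equation 3)
    k = [eps[0]]                                  # k_1 = a_1^(1)
    for m in range(1, p):                         # build order m+1 from order m
        r_m = phi[1:m + 1]                        # [phi_s[1], ..., phi_s[m]]
        gamma_m = r_m[::-1]                       # r_m upside-down
        rho_m = eps[::-1]                         # eps_m upside-down
        k_next = (phi[m + 1] - gamma_m @ eps) / (phi[0] - r_m @ eps)   # Equation 9
        delta_m = -k_next * rho_m                                      # Equation 8
        eps = np.concatenate([eps, [0.0]]) + np.concatenate([delta_m, [k_next]])  # Equation 4
        k.append(k_next)
    return eps, np.array(k)

# Sanity check against the direct matrix solution, using made-up values:
phi = [1.0, 0.5, 0.3, 0.1]
eps_rec, k = levinson_durbin(phi, p=3)
T = np.array([[phi[abs(i - j)] for j in range(3)] for i in range(3)])
assert np.allclose(eps_rec, np.linalg.solve(T, np.array(phi[1:])))
```

Each order update touches only length-$m$ vectors, so the full recursion costs on the order of $p^2$ operations instead of the $p^3$ of a general solver.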

In parting, note that when these coefficients $k_\ell$ are used to implement an all-pole lattice filter with reflection coefficients $k_\ell$, the lattice filter's response is described by

$$H(z) = \hat{S}(z) = \frac{A}{1 - \sum_{m=1}^{p} a_m z^{-m}}.$$

In other words, the impulse response of an all-pole lattice filter with reflection coefficients $k_\ell$, $\ell = 1, \ldots, p$, as prescribed by the Levinson-Durbin recursion, is the $p$th-order model of our original signal $s[n]$.

(Likewise, the corresponding all-zero lattice filter implements $1/\hat{S}(z)$.)
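As a sketch of what such a lattice looks like in code, the Python function below implements the all-zero (FIR) lattice stages under the sign convention used above ($a_m^{(m)} = k_m$ with $A_p(z) = 1 - \sum_m a_m z^{-m}$). The function name is illustrative, and the gain factor $1/A$ is omitted, so it realizes $1 - \sum_m a_m z^{-m}$ rather than $1/\hat{S}(z)$ exactly.

```python
import numpy as np

def all_zero_lattice(x, k):
    """All-zero (FIR) lattice with reflection coefficients k = [k_1, ..., k_p].

    Each stage updates a forward error f and a backward error g:
        f_m[n] = f_{m-1}[n] - k_m * g_{m-1}[n-1]
        g_m[n] = g_{m-1}[n-1] - k_m * f_{m-1}[n]
    With this convention the transfer function from x to f_p is
    A_p(z) = 1 - sum_m a_m z^{-m}.  (Sketch only.)
    """
    f = np.asarray(x, dtype=float).copy()    # order-0 forward error: the input itself
    g = f.copy()                              # order-0 backward error
    for k_m in k:
        g_delayed = np.concatenate([[0.0], g[:-1]])   # one-sample delay of g
        f, g = f - k_m * g_delayed, g_delayed - k_m * f
    return f
```

Driving this lattice with an impulse, using reflection coefficients produced by the recursion, should reproduce the coefficient sequence $1, -a_1^{(p)}, \ldots, -a_p^{(p)}$, which makes a convenient consistency check; the companion all-pole lattice is obtained by rearranging each stage to solve for the forward error of the previous order.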

Note further that it is straightforward to determine the stability of an all-pole filter when implemented in lattice form.

Specifically, if the coefficients $k_\ell$ all have magnitude less than 1, the all-pole filter is guaranteed to be stable.

(It is left as an exercise to gain intuition for why this is so.) This stability criterion is of particular interest in applications where the all-pole filter is


implemented with finite-precision coefficients, since determining the stability of a comparable direct-form all-pole implementation requires the generally more-expensive process of determining all of the filter's pole locations.
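To illustrate the contrast, here is a minimal sketch (names and structure are illustrative assumptions) of the two stability tests: the lattice-form test only inspects the reflection coefficients, while the direct-form test must root the denominator polynomial.

```python
import numpy as np

def stable_from_reflection(k):
    """Lattice-form test: stable if every reflection coefficient has magnitude < 1."""
    return bool(np.all(np.abs(k) < 1.0))

def stable_from_direct_form(a):
    """Direct-form test for 1 / (1 - sum_m a_m z^{-m}).

    Requires computing all pole locations, i.e. rooting a degree-p polynomial.
    """
    # A(z) = 1 - a_1 z^{-1} - ... - a_p z^{-p}  ->  poles are roots of z^p - a_1 z^{p-1} - ... - a_p
    poles = np.roots(np.concatenate([[1.0], -np.asarray(a, dtype=float)]))
    return bool(np.all(np.abs(poles) < 1.0))
```

When all of the $k_\ell$ have magnitude less than 1, the corresponding direct-form denominator has all of its roots inside the unit circle, so the two tests agree; the first, however, costs only $p$ comparisons and applies directly to the quantized lattice coefficients.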

In addition to the computational benefits gained from the

Levinson-Durbin recursion, its connection with the lattice structure (and the structure's associated stability metric) enables a wide array of applications, including real-time speech coders and synthesizers.

