Introduction to Adaptive Filtering and Its Applications

By
Asst. Prof. Dr. Thamer M. Jamel
Department of Electrical Engineering
University of Technology
Baghdad – Iraq
Introduction
Linear filters:
• The filter output is a linear function of the filter input.
Design methods:
• The classical approach: frequency-selective filters such as low-pass, band-pass, and notch filters, etc.
• Optimal filter design: mostly based on minimizing the mean-square value of the error signal.
Wiener filter
• Based on the work of Wiener in 1942 and Kolmogorov in 1939.
• It relies on a priori statistical information about the signal and noise.
• When such a priori information is not available, which is usually the case, it is not possible to design a Wiener filter in the first place.
Adaptive filter
• In practice, the signal and/or noise characteristics are often nonstationary, and the statistical parameters vary with time.
• An adaptive filter has an adaptation algorithm that is meant to monitor the environment and vary the filter transfer function accordingly.
• Based on the actual signals received, it attempts to find the optimum filter design.
Adaptive filter
• The basic operation involves two processes:
1. a filtering process, which produces an output signal in response to a given input signal;
2. an adaptation process, which aims to adjust the filter parameters (the filter transfer function) to the (possibly time-varying) environment.
Often, the (average) square value of the error signal is used as the optimization criterion; a generic version of this two-process loop is sketched below.
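Below is a minimal sketch of this two-process structure (the function, variable names, and example update rule are illustrative, not from the slides):

```python
import numpy as np

def adaptive_filter(x, d, M, adapt):
    """Generic two-process loop: (1) filter, (2) adapt.
    x: input signal, d: desired signal, M: number of taps,
    adapt: rule mapping (weights, tap inputs, error) -> new weights."""
    w = np.zeros(M)                                  # adjustable parameters
    y, e = np.zeros(len(x)), np.zeros(len(x))
    for n in range(len(x)):
        # tap-input vector [x(n), x(n-1), ..., x(n-M+1)], zeros before n=0
        u = np.array([x[n - k] if n >= k else 0.0 for k in range(M)])
        y[n] = w @ u                                 # 1. filtering process
        e[n] = d[n] - y[n]                           # error signal
        w = adapt(w, u, e[n])                        # 2. adaptation process
    return y, e, w

# Example: plugging in an LMS-style update with step size 0.01
# y, e, w = adaptive_filter(x, d, M=8, adapt=lambda w, u, e: w + 0.01 * e * u)
```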
Adaptive filter
• Because of the complexity of the optimizing algorithms, most adaptive filters are digital filters that perform digital signal processing.
• When processing analog signals, the adaptive filter is preceded by A/D and D/A converters.
Adaptive filter
• The generalization to adaptive IIR filters leads to stability problems.
• It is therefore common to use an FIR digital filter with adjustable coefficients.
Applications of Adaptive Filters: Identification
• Used to provide a linear model of an unknown plant.
• Applications: system identification.
Applications of Adaptive Filters: Inverse Modeling
• Used to provide an inverse model of an unknown plant.
• Applications: equalization (communication channels).
Applications of Adaptive Filters: Prediction
• Used to provide a prediction of the present value of a random signal.
• Applications: linear predictive coding.
Applications of Adaptive Filters: Interference Cancellation
• Used to cancel unknown interference from a primary signal.
• Applications: echo/noise cancellation (hands-free car phones, aircraft headsets, etc.).
Example: Acoustic Echo Cancellation
LMS Algorithm
• The most popular adaptation algorithm is LMS.
• The cost function is defined as the mean-squared error.
• It is based on the method of steepest descent: move towards the minimum on the error surface, with the gradient of the error surface estimated at every iteration.
LMS Adaptive Algorithm
• Introduced by Widrow & Hoff in 1959.
• Simple; no matrix calculations are involved in the adaptation.
• In the family of stochastic gradient algorithms.
• An approximation of the steepest-descent method.
• Based on the MMSE (minimum mean-square error) criterion.
• The adaptive process has two input signals:
1. the filtering process input, producing the output signal;
2. the desired signal (training sequence).
• Adaptive process: recursive adjustment of the filter tap weights.
LMS Algorithm Steps
• Filter output:
$$y(n) = \sum_{k=0}^{M-1} w_k^*(n)\, u(n-k)$$
• Estimation error:
$$e(n) = d(n) - y(n)$$
• Tap-weight adaptation:
$$w_k(n+1) = w_k(n) + \mu\, u(n-k)\, e^*(n)$$
In words: [updated value of tap-weight vector] = [old value of tap-weight vector] + [learning-rate parameter] × [tap-input signal vector] × [error signal].
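As a minimal sketch, these three steps map onto a few lines of Python/NumPy (the function and variable names are illustrative; the conjugates follow the complex-signal form above):

```python
import numpy as np

def lms(u, d, M, mu):
    """One pass of the LMS recursion over input u with desired signal d."""
    w = np.zeros(M, dtype=complex)          # tap-weight vector, M taps
    y = np.zeros(len(u), dtype=complex)     # filter output
    e = np.zeros(len(u), dtype=complex)     # estimation error
    for n in range(len(u)):
        # tap-input vector [u(n), u(n-1), ..., u(n-M+1)], zeros before n=0
        u_vec = np.array([u[n - k] if n >= k else 0 for k in range(M)],
                         dtype=complex)
        y[n] = np.vdot(w, u_vec)            # y(n) = sum_k w_k*(n) u(n-k)
        e[n] = d[n] - y[n]                  # e(n) = d(n) - y(n)
        w = w + mu * u_vec * np.conj(e[n])  # w_k(n+1) = w_k(n) + mu u(n-k) e*(n)
    return y, e, w
```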
Stability of LMS
• The LMS algorithm is convergent in the mean square if and only if the step-size parameter satisfies
$$0 < \mu < \frac{2}{\lambda_{\max}}$$
• Here $\lambda_{\max}$ is the largest eigenvalue of the correlation matrix of the input data.
• Since $\lambda_{\max}$ is rarely known, a more practical test for stability is
$$0 < \mu < \frac{2}{\text{total tap-input power}}$$
where the total tap-input power is the sum of the mean-square values of the tap inputs.
• Larger values of the step size increase the adaptation rate (faster adaptation) but also increase the residual mean-squared error.
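As a quick numerical check of this practical bound (a sketch; the helper name and the 8-tap example are assumptions):

```python
import numpy as np

def mu_upper_bound(u, M):
    """Practical LMS stability bound: 2 / (M * E[|u|^2]),
    i.e. two divided by the total tap-input power."""
    return 2.0 / (M * np.mean(np.abs(u) ** 2))

u = np.random.randn(10_000)       # example input data
print(mu_upper_bound(u, M=8))     # keep mu well below this value
```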
STEEPEST DESCENT EXAMPLE
• Given the following function, we need to obtain the vector that gives the absolute minimum:
$$y(c_1, c_2) = c_1^2 + c_2^2$$
• It is obvious that $c_1 = c_2 = 0$ gives the minimum.
[Figure: the quadratic error function (quadratic bowl) over the $(c_1, c_2)$ plane.]
Now let's find the solution by the steepest-descent method.
STEEPEST DESCENT EXAMPLE
• We start by assuming $(c_1 = 5,\ c_2 = 7)$.
• We select the constant $\mu$. If it is too big, we miss the minimum; if it is too small, it takes a long time to reach the minimum. We select $\mu = 0.1$.
• The gradient vector is:
$$\nabla y = \begin{bmatrix} \partial y/\partial c_1 \\ \partial y/\partial c_2 \end{bmatrix} = \begin{bmatrix} 2c_1 \\ 2c_2 \end{bmatrix}$$
• So our iterative equation is:
$$\begin{bmatrix} c_1 \\ c_2 \end{bmatrix}_{[n+1]} = \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}_{[n]} - \frac{\mu}{2}\,\nabla y = \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}_{[n]} - 0.1 \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}_{[n]} = 0.9 \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}_{[n]}$$
STEEPEST DESCENT EXAMPLE
Iteration 1 (initial guess): $[c_1,\ c_2] = [5,\ 7]$
Iteration 2: $[4.5,\ 6.3]$
Iteration 3: $[4.05,\ 5.67]$
......
Iteration 60: $[0.01,\ 0.013]$
$$\lim_{n \to \infty} \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}_{[n]} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \quad \text{(the minimum)}$$
[Figure: trajectory of $(c_1, c_2)$ on the error surface converging to the minimum.]
As we can see, the vector $[c_1, c_2]$ converges to the value that yields the function minimum, and the speed of this convergence depends on $\mu$.
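The iterations above can be reproduced with a short Python script (a direct transcription of the update equation, not part of the original slides):

```python
import numpy as np

mu = 0.1
c = np.array([5.0, 7.0])              # iteration 1: the initial guess
for n in range(2, 61):
    grad = 2 * c                      # gradient of y(c1, c2) = c1^2 + c2^2
    c = c - (mu / 2) * grad           # update: c <- 0.9 * c
    if n in (2, 3, 60):
        print(f"Iteration {n}: {c}")  # (4.5, 6.3), (4.05, 5.67), (~0.01, ~0.013)
```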
LMS – CONVERGENCE GRAPH
Example for an unknown channel of 2nd order.
[Figure: MSE convergence toward the desired combination of taps.]
This graph illustrates the LMS algorithm. First we start by guessing the tap weights; then we repeatedly move opposite to the gradient vector to calculate the next taps, and so on, until we reach the MMSE, meaning the MSE is 0 or very close to it. (In practice we cannot get an error of exactly 0, because the noise is a random process; we can only decrease the error below a desired minimum.)
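A hedged sketch of such a 2nd-order identification experiment (the channel taps, noise level, and step size below are made-up placeholders):

```python
import numpy as np

np.random.seed(0)
h = np.array([0.7, -0.3])              # hypothetical unknown 2nd-order channel
x = np.random.randn(5000)              # white input signal
d = np.convolve(x, h)[:len(x)]         # channel output (desired signal)
d += 0.01 * np.random.randn(len(x))    # small measurement noise

w = np.zeros(2)                        # initial guess for the tap weights
mu = 0.01
for n in range(1, len(x)):
    u = np.array([x[n], x[n - 1]])     # tap-input vector
    e = d[n] - w @ u                   # instantaneous error
    w += mu * e * u                    # LMS step opposite the gradient estimate
print(w)  # approaches h; the MSE levels off at the noise floor, not exactly 0
```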
SMART ANTENNAS
Adaptive Array Antenna
[Figure: adaptive array (linear combiner) suppressing an interference source.]
Applications are many:
• Digital communications (OFDM, MIMO, CDMA, and RFID)
• Channel equalisation
• Adaptive noise cancellation
• Adaptive echo cancellation
• System identification
• Smart antenna systems
• Blind system equalisation
• And many, many others
Introduction
Wireless communication is the most interesting field of communication these days because it supports mobility (mobile users). However, many applications of wireless communication now require high-speed links (high data rates).
What is ISI?
Inter-symbol interference (ISI) takes place when a given transmitted symbol is distorted by other transmitted symbols.
Cause of ISI
ISI arises from the band-limiting effect of a practical channel, and also from multipath effects (delay spread).
Definition of the Equalizer
The equalizer is a digital filter that provides an approximate inverse of the channel frequency response.
Need for Equalization
Equalization mitigates the effects of ISI and thereby decreases the probability of error that would occur without ISI suppression; however, this reduction of ISI effects has to be balanced against the prevention of noise-power enhancement.
Types of Equalization Techniques
• Linear equalization techniques: simple to implement, but they greatly enhance the noise power because they work by inverting the channel frequency response (a training sketch follows below).
• Non-linear equalization techniques: more complex to implement, but with much less noise enhancement than linear equalizers.
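As an illustrative sketch of the linear case, an LMS-trained linear equalizer might look as follows (the channel taps, filter length, and training setup are assumptions, not from the slides):

```python
import numpy as np

np.random.seed(1)
N = 11                                   # equalizer taps (N-1 delay elements)
delay = 5                                # decision delay
h = np.array([1.0, 0.4, 0.2])            # hypothetical ISI channel
s = np.sign(np.random.randn(4000))       # +/-1 training symbols
r = np.convolve(s, h)[:len(s)]           # received, ISI-distorted signal
r += 0.05 * np.random.randn(len(s))      # channel noise

w = np.zeros(N)
mu = 0.005
for n in range(N, len(s)):
    u = r[n - N + 1:n + 1][::-1]         # equalizer tap-input vector
    e = s[n - delay] - w @ u             # error vs. delayed training symbol
    w += mu * e * u                      # LMS tap update
# After training, sign(w @ u) recovers s(n - delay) with few errors.
```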
Equalization Techniques
Fig. 3: Classification of equalizers.
[Figure: linear equalizer with N taps and (N-1) delay elements.]
Table of various algorithms and their trade-offs:

Algorithm    | Multiplying operations | Complexity | Convergence | Tracking
LMS          | 2N + 1                 | Low        | Slow        | Poor
MMSE         | N^2 to N^3             | Very high  | Fast        | Good
RLS          | 2.5N^2 + 4.5N          | High       | Fast        | Good
Fast Kalman  | 20N + 5                | Fairly low | Fast        | Good
RLS-DFE      | 1.5N^2 + 6.5N          | High       | Fast        | Good
Adaptive Filter Block Diagram
[Figure: the filter input x(n) drives the adaptive filter, producing the filter output y(n); y(n) is subtracted from the desired signal d(n) to form the error output e(n), which is fed back to adjust the filter.]
The LMS Equation
• The least mean squares (LMS) algorithm updates each coefficient on a sample-by-sample basis, based on the error e(n):
$$w_k(n+1) = w_k(n) + \mu\, e(n)\, x_k(n)$$
• This equation minimises the power in the error e(n).
The Least Mean Squares Algorithm
• The value of μ (mu) is critical.
• If μ is too small, the filter reacts slowly.
• If μ is too large, the filter resolution is poor.
• The selected value of μ is therefore a compromise.
LMS Convergence vs. μ
Audio Noise Reduction
• A popular application of acoustic noise reduction is in headsets for pilots. This uses two microphones.
Block Diagram of a Noise Reduction Headset
[Figure: the near microphone picks up d(n) = speech + noise; the far microphone picks up x(n) = noise′ and feeds the adaptive filter, whose output y(n) estimates the noise; the error e(n) = d(n) − y(n) is the speech output.]
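A hedged simulation of this two-microphone arrangement (all signals and the acoustic path below are synthetic placeholders):

```python
import numpy as np

np.random.seed(2)
fs, T = 8000, 2.0
t = np.arange(int(fs * T)) / fs
speech = np.sin(2 * np.pi * 440 * t)            # stand-in for speech
noise = np.random.randn(len(t))                 # noise source
g = np.array([0.5, 0.3, 0.1])                   # acoustic path to near mic
d = speech + np.convolve(noise, g)[:len(t)]     # near mic: speech + noise
x = noise                                       # far mic: noise reference

M, mu = 8, 0.005
w = np.zeros(M)
e = np.zeros(len(t))
for n in range(M, len(t)):
    u = x[n - M + 1:n + 1][::-1]    # reference-noise tap vector
    y = w @ u                       # filter's estimate of the mic noise
    e[n] = d[n] - y                 # error = cleaned speech output
    w += mu * e[n] * u              # LMS update
# e(n) approaches the speech; y(n) approaches the filtered noise.
```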
The Simulink Model
Setting the Step Size (mu)
• The rate of convergence of the LMS algorithm is controlled by the "Step size (mu)".
• This is the critical variable.
Trace of Input to Model
"Input" = signal + noise.
Trace of LMS Filter Output
"Output" starts at zero and grows.
Trace of LMS Filter Error
"Error" contains the noise.
Typical C6713 DSK Setup
[Figure: DSK board connections: USB to PC, headphones, +5 V supply, microphone.]
Acoustic Echo Canceller
New Trends in Adaptive Filtering
• Partial updating of weights.
• Sub-band adaptive filtering.
• Adaptive Kalman filtering.
• Affine projection method.
• Space-time adaptive processing.
• Non-linear adaptive filtering: neural networks, the Volterra series algorithm, genetic & fuzzy approaches.
• Blind adaptive filtering.