ARTIFICIAL NEURAL NET BASED NOISE CANCELLER

P.K. Dash, S. Saha, P.K. Nanda,
Electrical Engineering Department, Regional Engineering College,
Rourkela 769 008

H.P. Khincha,
Electrical Engineering Department,
Indian Institute of Science,
Bangalore 560 012
ABSTRACT

This paper presents a new method for noise cancellation with an Artificial Neural Network (ANN). The network used is a feedforward one with three layers. The backpropagation and statistical Cauchy's learning algorithms are employed for adaptation of the internal parameters of the network. The constrained hyperbolic tangent function is used to activate the neurons and to provide the desired non-linearity. Promising simulation results for noise cancellation reinforce the case for the proposed scheme superseding many existing techniques. To demonstrate its effectiveness, the proposed method is applied to different input conditions with varying SNRs. Even with incomplete signal samples, the net is found to produce outputs bearing a striking resemblance to the desired ones. A performance comparison of the two algorithms is presented in the paper for better appraisal.
INTRODUCTION

The usual method of estimating a signal corrupted by additive noise is to pass it through a filter that tends to suppress the noise while leaving the signal relatively unchanged. The filters for this purpose may be fixed or adaptive; the latter have the ability to adjust their own parameters automatically, and their design requires little or no a priori knowledge of signal or noise characteristics. In circumstances where adaptive noise cancellation is applicable [1], levels of noise rejection are attainable that would be difficult or impossible to achieve by direct filtering. A good number of adaptive algorithms have been reported in the literature over the past few decades to update the internal parameters with a view to obtaining high fidelity of the output signal in many complex environments. However, the major focus of research over the past few years has been to develop robust algorithms for adaptive systems.

The past few years have also witnessed a shift of researchers to the field of the Artificial Neural Network (ANN). The potential features of the ANN have motivated research in the field with much accelerated vigour, with the hope of achieving human-like performance in the fields of speech and image recognition [2,3]. In addition to the above features, neural networks can provide solutions to optimization, handwritten digit recognition, radar target detection, text-to-speech synthesis and computational models of brain functions. Adaptive non-parametric neural networks designed for pattern classification work well for many real-world problems [4]. Neural net models are specified by the net topology, node characteristics, and training or learning rules. Both the design procedures and the training rules for supervised and unsupervised training are the topic of much current research [5,6,7]. The application of feedforward multilayered neural net models to spectral estimation has been reported in a few research papers [8,9]. The application of neural nets to the field of noise cancellation is hardly found in the literature.

In this paper a Neural Net based Noise Canceller is proposed. A three-layered feedforward network is employed for noise cancellation. The backpropagation algorithm and the statistical Cauchy's algorithm with Boltzmann's probability distribution are employed for updating the internal parameters, i.e. the weights of the net. Since both positive and negative time-domain output samples are of equal importance, the tangent hyperbolic function, constrained to operate in the non-linear zone, is used to activate the neurons and to provide the desired non-linearity. The tangent hyperbolic function saturates at unity for small magnitudes of both positive and negative values; hence the maximum value of the activation function is suitably adjusted and the output of a neuron is constrained to remain in the non-linear zone. The predicted output of the net for input samples contaminated with harmonic interference and additive uncorrelated white noise bears a striking resemblance to the desired ones. The fundamental signal component, primarily of power frequency, and its corresponding third and fifth harmonic components are successfully extracted from the corrupted signal at low SNRs. The net is also exposed to many never-seen data other than the trained ones, and its noise cancellation remains commendable. The net possesses the potential to extract the desired components even with 50% suppressed input data, both with predominant d.c. and with appreciable harmonic strength, without signal degradation. Sufficient computer simulation results are presented for a wide variety of cases.
ARTIFICIAL NEURAL NETWORK

Interest in artificial neural networks has grown rapidly over the past few years. This insurgence of interest, fired by both theoretical and application successes, has motivated efforts to make machines that learn and remember in ways bearing a striking resemblance to human mental processes, and has assigned a new, significant meaning to Artificial Intelligence. In the 1950s and 1960s a group of researchers combined biological and psychological insights to produce the first artificial neural network models. Due to limitations in the representational capability of the single neuron and of single layers of neurons, multilayered perceptrons were developed, which invoked the development of the backpropagation training algorithm by Parker [10]. Development of detailed mathematical models began more than four decades ago with the work of McCulloch and Pitts, Hebb, Rosenblatt, Widrow and others. More recent work by Hopfield, Rumelhart and McClelland [11], Sejnowski, Feldman, Grossberg, Widrow [12,13] and others led to a new insurgence in the field. Over the past few years research on neural nets has been pursued with accelerated vigour, given the advent of new net topologies, robust algorithms, analog VLSI implementation and the concept of parallel processing.

These models are specified by the net topology, node characteristics, and training and learning rules. These rules specify an initial set of weights and indicate how weights should be adapted during use to improve performance. Adaptation or learning is a major focus of neural net research. The ability to adapt and continue learning is essential in areas such as speech recognition, where training data is limited and new talkers, new words, new phrases and new environments are continuously encountered. However, the multilayered ANN with the backpropagation algorithm is most commonly used for many practical problems. A typical three-layered network, shown in Fig. 1, is trained with supervision. The three-layered network has an input layer, a hidden layer and an output layer consisting of multiple non-linear neurons associated with a suitable activation function. Each input pattern to the net feeds up through the hidden layer to the output layer. Each neuron forms the weighted sum of its inputs and passes it through the non-linear activation function to produce an output which serves as the input to the next layer, i.e. the output layer. The net is trained to minimize the objective function, which is defined as half of the sum of the squares of the differences between the predicted outputs and the corresponding desired components. Once the network is trained, it has the capability of producing output very close to the desired one for input patterns other than the trained ones, thus establishing its validity for real-time implementation.

[Fig. 1: A typical 3-layered feedforward Artificial Neural Network, with input layer (i), hidden layer (j) and output layer (k).]
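To make the forward propagation concrete, a minimal NumPy sketch follows. It implements exactly the structure just described: a weighted sum at each layer passed through the scaled hyperbolic tangent activation. The layer dimensions are illustrative assumptions (the paper does not state them); the values A = 3.0 and λ = 0.0068 are those quoted later in the Fig. 2 caption.

    import numpy as np

    def forward(x, W1, W2, A=3.0, lam=0.0068):
        # Each neuron forms the weighted sum of its inputs and passes it
        # through the constrained activation f(.) = A*tanh(.), with the
        # argument scaled by the weighting factor lambda to keep the
        # neurons operating in the non-linear zone.
        h = A * np.tanh(lam * (W1 @ x))   # hidden-layer outputs
        y = A * np.tanh(lam * (W2 @ h))   # output-layer outputs
        return h, y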
TRAINING ALGORITHMS

The net trained to demonstrate the noise cancellation capability is shown in Fig. 1. The subscripts i, j and k refer to the input, hidden and output layers respectively, and the subscripts n, m and l correspond to any unit in the input, hidden and output layer. The net is trained with supervision by the required input and target vectors. The two algorithms employed for training are backpropagation and the statistical Cauchy's algorithm, which are presented below.
Backpropagation Algorithm:

For each time step the following steps are executed.

Step-1: Initialisation of weights. Set the weights of both layers to small random values.

Step-2: The input vector X_0, ..., X_{N-1} and the target vector d_0, ..., d_{N-1} are formed and applied to the net.

Step-3: Calculate the actual outputs of both layers:
ŷ = f(λ·X·[W])    (1)

where
ŷ = estimated output,
λ = weighting factor to constrain the activation function in the non-linear zone,
[W] = weight matrix between any two layers, and

f(·) = A·tanh(·)    (2)
Step-4: Calculate the error and the objective function:

e_k = d_k − ŷ_k    (3)

E_p = (1/2) Σ_k (d_k − ŷ_k)²    (4)

where E_p is the objective function.
Step-5: Adapt the weights:

[W]_ij(t+1) = [W]_ij(t) + η·δ_j·X_i    (5)

where [W]_ij(t) is the weight from hidden node i, or from an input, to node j at time t; X_i is either the output of node i or an input; η is a convergence coefficient varied from 0.1 to 1.0; and δ_j is an error term for node j. If node j is an output node, then
δ_j = (A − y_j²/A)·(d_j − y_j)    (6)

where d_j is the desired output of node j and y_j is the actual output. If node j is an internal hidden node, then

δ_j = (A − X_j²/A) Σ_k δ_k·[W]_jk    (7)

where k runs over all nodes to the right of node j.

The steps from 1 to 5 are repeated till the objective function is minimized.
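As an illustration, the five steps translate into the following minimal sketch of one training pass (NumPy). The layer sizes are illustrative assumptions, while A, λ and η are the values quoted in the Fig. 2 caption; this is a sketch of the stated equations, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    N_IN, N_HID, N_OUT = 16, 8, 16     # illustrative layer sizes
    A, LAM, ETA = 3.0, 0.0068, 0.45    # amplitude, weighting factor, convergence coefficient

    # Step 1: initialise both weight layers to small random values
    W1 = rng.uniform(-0.1, 0.1, (N_HID, N_IN))
    W2 = rng.uniform(-0.1, 0.1, (N_OUT, N_HID))

    def train_step(x, d, W1, W2):
        # Steps 2-3: apply the input vector and compute the actual
        # outputs of both layers, Eqs. (1)-(2)
        h = A * np.tanh(LAM * (W1 @ x))
        y = A * np.tanh(LAM * (W2 @ h))
        # Step 4: error and objective function, Eqs. (3)-(4)
        e = d - y
        Ep = 0.5 * np.sum(e ** 2)
        # Error terms: Eq. (6) at the output nodes, Eq. (7) at the
        # hidden nodes; (A - v**2/A) is the scaled-tanh derivative factor
        delta_o = (A - y ** 2 / A) * e
        delta_h = (A - h ** 2 / A) * (W2.T @ delta_o)
        # Step 5: adapt the weights, Eq. (5)
        W2 = W2 + ETA * np.outer(delta_o, h)
        W1 = W1 + ETA * np.outer(delta_h, x)
        return W1, W2, Ep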
Cauchy's Algorithm:

The following steps are adopted for training the network.

Step-1: A variable T representing an initial artificial temperature is chosen. Usually T is initialized to a large value. Another variable t, an artificial time analogous to a step, is also initialised; usually t is set to a small value. Here T = 808.0 and t = 2.0.

Step-2: A set of inputs is applied to the network, and the corresponding output and the objective function are evaluated.

Step-3: The weights are changed by a random value given as

Δx = ρ·[T(t)·tan(P(x))]    (8)

where
ρ = the learning rate coefficient,
Δx = the weight change, and
P(x) = a random number selected from a uniform distribution over the open interval −π/2 to π/2.

Step-4: If the objective function is improved (reduced), retain the weight change.

Step-5: If the weight change results in an increase in the objective function, calculate the probability of accepting that change from Boltzmann's distribution as follows:

P(c) = exp(−c/kT)    (10)

where
P(c) = the probability of a change of c in the objective function,
k = a constant analogous to Boltzmann's constant that must be chosen (here k = … is selected), and
T = the artificial temperature.

The above steps are repeated until the objective function is minimized.
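A minimal sketch of one Cauchy update for a weight vector w follows. The objective() callback stands for the forward pass plus the objective function of Eq. (4); the temperature T is left to the caller since the paper does not state a schedule explicitly, and the value of k here is an assumption.

    import numpy as np

    rng = np.random.default_rng(1)

    def cauchy_step(w, objective, T, rho=0.01, k=1.0):
        # Step 3: random weight change, Eq. (8), with P(x) drawn from a
        # uniform distribution over the open interval (-pi/2, pi/2)
        p = rng.uniform(-np.pi / 2, np.pi / 2, size=w.shape)
        delta_w = rho * T * np.tan(p)
        E_old = objective(w)
        E_new = objective(w + delta_w)
        # Step 4: if the objective function is improved, retain the change
        if E_new < E_old:
            return w + delta_w
        # Step 5: otherwise accept with Boltzmann probability, Eq. (10)
        c = E_new - E_old
        if rng.random() < np.exp(-c / (k * T)):
            return w + delta_w
        return w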
RESULTS & DISCUSSION

In this research the network is trained independently for each case, namely fundamental, 3rd harmonic and 5th harmonic. The signal generated is governed by

y(t) = Σ_k sin(kωt) + V(t)    (11)

where k = 1, 3, 5, 7, ..., n, and V(t) is zero-mean white noise.
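For reference, the corrupted training signal of Eq. (11) can be generated as in the sketch below. The sample count, sampling rate and fundamental frequency are taken from the figure captions (16 samples, f_s = 800 Hz, 50 Hz fundamental); the noise standard deviation is an illustrative assumption chosen to set the desired SNR.

    import numpy as np

    def make_signal(n=16, f=50.0, fs=800.0, harmonics=(1, 3, 5, 7),
                    noise_std=0.5, seed=0):
        rng = np.random.default_rng(seed)
        t = np.arange(n) / fs
        # y(t) = sum_k sin(k*w*t) + V(t), Eq. (11), with w = 2*pi*f
        y = sum(np.sin(k * 2.0 * np.pi * f * t) for k in harmonics)
        return y + rng.normal(0.0, noise_std, n)   # V(t): zero-mean white noise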
The objective functions for the fundamental, 3rd harmonic and 5th harmonic are presented in Figs. 2, 6, 7 & 11. Fig. 6 shows the local minima trapping with the backpropagation algorithm for the third harmonic, which is alleviated using Cauchy's algorithm, although the latter is slower in convergence, as depicted in Fig. 7. The inputs with varying SNRs and the corresponding outputs, bearing a close resemblance to the desired ones, are represented in Figs. 4, 9, 10 & 14. Figs. 3, 8 & 12 are the target waveforms for the fundamental, 3rd harmonic and 5th harmonic respectively. Figs. 5 & 13 exhibit the inadequate input samples and the predicted outputs, which are up to the mark; hence it is claimed that the Neural Net based noise canceller is superior to the existing methods.

CONCLUSION

Exhaustive computer simulation results establish the validity of opting for the proposed scheme over the existing digital filters. The performance in extracting the signal components from inadequate corrupted data excels over the existing techniques. The local minima trapping effect of the objective function is circumvented by switching over to Cauchy's algorithm. The combined backpropagation and Cauchy's algorithm is being studied to overcome local minima trapping and to achieve faster convergence. The net is found to be fault tolerant, and real-time implementation of the proposed scheme seems to be promising. Development and implementation of the generalized filter is in progress by employing both the adaptive Pattern Recognition concept and a robust training algorithm.

ACKNOWLEDGEMENT

The authors acknowledge with thanks the Ministry of Human Resource Development, Govt. of India, for providing necessary funds.

REFERENCES

[1] B. Widrow et al., "Adaptive noise cancelling: principles and applications", Proc. IEEE, Vol. 63, pp. 1692-1716, Dec. 1975.
[2] R.P. Lippmann, "An Introduction to Computing with Neural Nets", IEEE ASSP Magazine, 4(2), pp. 4-22, April 1987.
[3] A. Waibel, H. Sawai and K. Shikano, "Modularity and Scaling in Large Phonemic Neural Networks", IEEE Trans. on ASSP, Vol. 37, No. 12, Dec. 1989.
[4] R.P. Lippmann, "Pattern classification using neural networks", IEEE Communications Magazine, pp. 27-62, Nov. 1989.
[5] M.R. Azimi-Sadjadi, S. Citrin and Sassan Sheedvash, "Supervised Learning Process of Multi-Layer Perceptron Neural Networks using Fast Least Squares", ICASSP, April 3-6, 1990, New Mexico, U.S.A.
[6] P. Burrascano and P. Lucci, "A Learning rule eliminating local minima in multilayer perceptrons", ICASSP, April 3-6, 1990, New Mexico, U.S.A.
[7] S.Y. Kung and K.I. Diamantaras, "A Neural Network Learning Algorithm for adaptive Principal Component Extraction (APEX)", ICASSP, April 3-6, 1990, New Mexico, U.S.A.
[8] Chia-Jiu Wang, Mark A. Wickert and C.H. Wie, "Three layer Neural Network for spectral estimation", ICASSP, April 3-6, 1990, New Mexico, U.S.A.
[9] P.K. Dash, P.K. Nanda and S. Saha, "Spectral estimation using Artificial Neural Network", International Conference on Signals & Systems, Dec. 10-12, Tirupati, India.
[10] Philip D. Wasserman, "Neural Computing: Theory and Practice", Van Nostrand Reinhold, 1989.
[11] D.E. Rumelhart and J.L. McClelland, "Parallel Distributed Processing", Vols. 1 & 2, Cambridge, MA, MIT Press, 1986.
[12] B. Widrow, R.G. Winter and R.A. Baxter, "Layered Neural Nets for pattern recognition", IEEE Trans. on ASSP, Vol. 36, No. 7, July 1988.
[13] M. Stevenson, R. Winter and B. Widrow, "Sensitivity of feedforward Neural Networks to weight errors", IEEE Trans. on Neural Networks, Vol. 1, No. 1, March 1990.
[Figs. 2-14: simulation plots. Captions:]

Fig. 2: Learning curve for fundamental samples using backpropagation algorithm (3 sets of training signals), η = 0.45, λ = 0.0068, A = 3.0, f_s = 800 Hz.
Fig. 3: Target waveform for fundamental (50 Hz).
Fig. 4: Input signal with SNR = 0 dB and the predicted output for fundamental.
Fig. 5: Input signal, SNR = 0 dB, with even samples suppressed (i.e. 2, 4, ..., 16) and the corresponding predicted output for fundamental.
Fig. 6: Learning curve for 3rd harmonic (backpropagation Al.), η = 0.8, λ = 0.009, A = 2.4.
Fig. 7: Learning curve for 3rd harmonic using Cauchy's Al., ρ = 0.01, λ = 0.07, A = 2.0.
Fig. 8: Target for 3rd harmonic.
Fig. 9: Input signal, SNR = -20 dB, and the output (backpropagation Al.).
Fig. 10: Input signal, SNR = -20 dB, and the output (Cauchy's Algorithm).
Fig. 11: Learning curve for 5th harmonic (backpropagation Al.).
Fig. 12: Target for 5th harmonic.
Fig. 13: Input signal, SNR = 0 dB, with even samples suppressed, and the output.
Fig. 14: Input signal, SNR = 0 dB, and the output.