"Event Compression Using Recursive Least
Squares Signal Processing"
Webster Pope Dove
TECHNICAL REPORT 492
July 1980
Massachusetts Institute of Technology
Research Laboratory of Electronics
Cambridge, Massachusetts 02139
This work was supported in part by the Advanced Research Projects
Agency monitored by ONR under Contract N00014-75-C-0951-NR 049-328
and in part by the National Science Foundation under Grants
ENG76-24117 and ECS79-15226.
UNCLASSIFIED
SECURITY CLASSIFICATION OF THIS PAGE

REPORT DOCUMENTATION PAGE -- READ INSTRUCTIONS BEFORE COMPLETING FORM

1. Report Number: 492
4. Title (and Subtitle): Event Compression Using Recursive Least Squares Signal Processing
5. Type of Report and Period Covered: Technical Report
7. Author(s): Webster P. Dove
8. Contract or Grant Number(s): N00014-75-C-0951
9. Performing Organization Name and Address: Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139
10. Program Element, Project, Task Area & Work Unit Numbers: NR 049-328
11. Controlling Office Name and Address: Advanced Research Projects Agency, 1400 Wilson Boulevard, Arlington, Virginia 22217
12. Report Date: July 1980
13. Number of Pages: 151
14. Monitoring Agency Name and Address (if different from Controlling Office): Office of Naval Research, Information Systems Program, Code 437, Arlington, Virginia 22217
15. Security Class. (of this report): Unclassified
16. Distribution Statement (of this Report): Approved for public release; distribution unlimited
19. Key Words: recursive least squares, event compression, linear prediction

DD Form 1473
20. Abstract
This work presents a technique for time compressing the
events in a multiple event signal using a recursive least
squares adaptive linear prediction algorithm. Two event
compressed signals are extracted from the update equations
for the predictor; one based on the prediction error and the
other on the changes in the prediction coefficients as the
data is processed.
Using synthetic data containing three all-pole events,
experiments are performed to illustrate the performance of
the two signals derived from the prediction algorithm. These
experiments examine the effects of initialization, white
gaussian noise, interevent interference, filtering and
decimation on the compressed events contained in the two
signals.
EVENT COMPRESSION USING RECURSIVE LEAST SQUARES
SIGNAL PROCESSING

by

Webster Pope Dove

Submitted to the Department of Electrical Engineering and Computer Science, on 15 July 1980, in partial fulfillment of the requirements for the Degree of Master of Science in Electrical Engineering.
ABSTRACT

This work presents a technique for time compressing the events in a multiple event signal using a recursive least squares adaptive linear prediction algorithm. Two event compressed signals are extracted from the update equations for the predictor; one based on the prediction error and the other on the changes in the prediction coefficients as the data is processed.

Using synthetic data containing three all-pole events, experiments are performed to illustrate the performance of the two signals derived from the prediction algorithm. These experiments examine the effects of initialization, white gaussian noise, interevent interference, filtering and decimation on the compressed events contained in the two signals.

THESIS SUPERVISOR: Alan V. Oppenheim
TITLE: Professor of Electrical Engineering
To Monica
Acknowledgements

I give my thanks to all the people in the Digital Signal Processing Group at MIT. Many fruitful discussions with them at odd hours contributed to this work. Of particular help were Dr. Lloyd Griffiths with his insight into least squares algorithms and Steve Lang for infinite patience in listening to my ramblings. Last and most I thank Prof. Alan Oppenheim for the original problem suggestion, much needed guidance during the investigation and the pressure necessary to produce this document. If I can acquire a tenth of his clarity of thought while I am here, I will consider myself lucky.
CONTENTS

Abstract ........................................... 2
Dedication ......................................... 3
Acknowledgements ................................... 4
1. INTRODUCTION .................................... 6
   1.1 Previous Work ............................... 9
2. LINEAR PREDICTION ............................... 13
   2.1 The Covariance Method ....................... 14
   2.2 Recursive Least Squares ..................... 17
   2.3 Discussion .................................. 38
3. EVENT COMPRESSION WITH RLS ...................... 39
   3.1 The Data Model .............................. 40
   3.2 Event Compression Signals ................... 48
   3.3 Experimental Results ........................ 54
   3.4 Linear Filtering ............................ 112
   3.5 Decimation .................................. 130
   3.6 Summary ..................................... 144
4. CONCLUSIONS ..................................... 146
1. INTRODUCTION

Problems in various fields share the need for processing a signal which contains multiple pulses or events that must be detected. Such signals arise in RADAR and SONAR signal processing, in seismic data analysis, and in speech pitch detection. All of these problems lead to the need for a technique which can compress the events occurring in a signal into shorter events in the processed data. This type of processing can reduce the overlap between successive events, and the increased impulsiveness of each event makes locating the events easier, leading to improved detectability. Such a procedure, which reduces the duration of the events in an input sequence without changing their relative separations, is an event compression algorithm. Examples of this type of processing include matched filtering, linear deconvolution, homomorphic deconvolution and predictive deconvolution.

Let us illustrate the requirements of this type of processing with the problem of rangefinding with RADAR or SONAR. To reduce the peak power of the transmission, the source pulses are dispersed in time. In addition, the reflection functions of the targets may cause the events in the received signal to be spread out even more. Therefore the returns must be compressed if the targets are to be located accurately.
Surface seismograms are created by generating a mechanical disturbance at the surface of the earth, either with an explosion or with a mechanical vibrator, and recording the subsequent seismic vibrations with an array of geophones also located at the surface. As waves propagate into the earth, they are partially reflected at boundaries between layers of differing reflectivities, and ideally the recordings made at the surface can be used to locate the depths of these boundaries. As in the RADAR situation, the duration of the source pulse and the extension of the events by the reflection functions of the boundaries leads to the need for event compression of the data.
One approach to vocal pitch estimation is to measure the time between successive glottal pulses. However, since these pulses are filtered by the vocal tract impulse response before emanating from the lips, the speech waveform does not offer well defined points at each cycle from which to measure the period to the next cycle. It is therefore necessary to compress the pulses of the speech waveform without changing their positions for this scheme to be effective.
Another example, which motivated this thesis, is the problem of sonic well logging. The procedure for generating a sonic well log is to lower a tool, shown schematically in figure 1.1, down an oil well and then raise it at a fixed rate
[Figure 1.1: Well log diagram: the logging tool suspended in the hole, with the surrounding rock wall, the water/mud filling the hole, and the shear wave path indicated.]
back to the top of the well. During its ascent the ultrasonic transmitter located at the bottom of the tool is pulsed (roughly once every foot of well depth) and the pressure waveform at the receiver in the top of the tool is recorded for later analysis.
In a very simplified model of the physics of this problem, there are three paths for the ultrasonic energy to take in getting from the transmitter to the receiver, each of which has a characteristic velocity. The slowest path is that of a pressure wave traveling up the mud with which the well is packed (to prevent collapse); the medium velocity path is that of a shear wave coupled to the rock wall, and the fastest path is for the compression wave traveling up the rock wall. The quantities of interest to geologists are the velocities of the shear wave and compression wave, which could be determined by knowing the times of arrival of these waves. To ascertain the arrival times it is necessary to compress the individual events in the source data since the overlap between them is considerable.
1.1 Previous Work

Approaching the problem of event compression typically entails modeling the physical situation in an appropriate fashion and then designing an algorithm which is expected to solve the problem for data which fits that model. Subsequent investigation of the performance of the algorithm in a realistic environment may then lead to alterations in the model, the algorithm or both.
For example, [Young,1965] analyzes the problem of detecting events in noise in the context of RADAR. His data model requires that events are separated by some minimum distance and that they are known linear combinations of exponentials from some known set (the set being determined by assumptions about the source of the data). In addition, he requires a knowledge of the noise statistics (in particular the signal-to-noise ratio for each event). From these assumptions he derives a likelihood for the beginning of an event at each point in the data (this likelihood waveform is the event compressed signal). Unfortunately, in situations of unknown statistics (sonic well logging in particular), either detailed information about the events is not available or the number of exponentials from which they could be composed is large, making this formulation inappropriate.
A common approach to the problem of compressing seismic data is made by assuming the data was generated by convolving a fixed source wavelet (or something close to it) with an impulsive reflector series. By acquiring an accurate estimate of the source wavelet it would be possible to deconvolve the data and recover the reflector series. Homomorphic techniques have been used to estimate the wavelet [Ulrych,1971] [Tribolet,1977], and [Peacock,1969] used linear prediction to estimate the wavelet and the reflector series.
Similar work has also been done on the speech pitch estimation problem [Markel,1972] by assuming that the speech signal is the convolution of a vocal tract impulse response and an impulsive or nearly impulsive glottal excitation series, and deconvolving to recover the excitation sequence. The nature of these methods is that a single deconvolution operator is applied to the data. As such these techniques will only be effective if the events in the data have similar spectra.
In this thesis we apply a recursive least squares (RLS) adaptive linear prediction algorithm to event compression, using synthetic data from a data model based on an abstraction of the sonic well logging problem. Because the events in this data have independent event spectra, event compression by time invariant filtering would not be effective (the inverse filter for each arrival would have to be different). By using an adaptive algorithm we perform time varying linear event compression on the input data sequence.
To implement event compression we derive two signals from the RLS algorithm; one is the post-adaption prediction error at each point and the other is a measure of the changes in the prediction coefficients as the data is processed. The two signals are compared experimentally using synthetic data corrupted with white gaussian noise. To illustrate the impact of other processing on event compression, some of the experiments include prefiltering the data, decimation of the data and postfiltering the error signal.
The next chapter is an analytical presentation of the issues involved in linear prediction. It starts with a discussion of linear prediction and describes the covariance method equations of linear prediction [Makhoul,1975]. The following section presents a derivation of the Recursive Least Squares algorithm (from the covariance method equations) and the last section of the chapter examines the important problem of proper initialization of the recursion. The third chapter offers an experimental comparison of the two event compression signals mentioned above, and the thesis concludes with a discussion of the results of these experiments and some suggestions for future work on this problem.
2. LINEAR PREDICTION

Linear prediction attempts to answer the following question: Given a sequence of data, what is the best set of p coefficients for predicting the value of each sample of the sequence with a linear combination of previous samples?(1) This formulation is presented in equation (2.1)

    \hat{s}_k = \sum_{n=1}^{p} c_n s_{k-n}                                (2.1)

where {s_k} is the data sequence, {\hat{s}_k} the predictions, and the c_n are the coefficients of the predictor. The criterion for determining which coefficients are best is the total squared prediction error over the error region Q. Specifically

    E = \sum_{k \in Q} (s_k - \hat{s}_k)^2                                (2.2)

and the coefficients are chosen to minimize the total squared error E.

1. In the general case one could use an arbitrarily chosen set of previous samples, but this discussion assumes that they are the p contiguously previous samples to the one to be predicted.
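Equations (2.1) and (2.2) can be rendered directly in code. The following minimal Python sketch (ours, not from the thesis; the function name is our own) evaluates the total squared error of a given coefficient set over an error region Q:

```python
# Direct rendering of eqs. (2.1) and (2.2): predict each sample in Q
# from the p previous samples and accumulate the squared error.

def total_squared_error(sig, c, Q):
    p = len(c)
    E = 0.0
    for k in Q:
        pred = sum(c[n - 1] * sig[k - n] for n in range(1, p + 1))  # eq. (2.1)
        E += (sig[k] - pred) ** 2                                   # eq. (2.2)
    return E
```

For a purely geometric sequence s[n] = 0.9 s[n-1], the single coefficient c = [0.9] drives the error of eq. (2.2) to zero.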
The choice of the error region Q differentiates between the two most common methods of linear prediction. If Q extends from -\infty to +\infty (the data sequence is padded with zeroes or extrapolated in some other way) the resulting set of equations describes the Autocorrelation Method of linear prediction. In this case solving for the c_k involves inverting a Toeplitz matrix of autocorrelations and is usually done by some variation of Levinson's Inversion (see for example [Makhoul,1975]). The other common method in use is the covariance method of linear prediction. As described in the next section, it requires that only the available data sequence be used to generate the predictor. Therefore the limits of Q are set by requiring that all the s_k mentioned in eqs. (2.1) and (2.2) lie in that finite set of available signal points.
2.1 The Covariance Method

The equations defining the Covariance method are:

    \hat{s}_k = \sum_{n=1}^{p} c_n s_{k-n}                                (2.3)

    E = \sum_{k=k_s}^{k_e} (s_k - \hat{s}_k)^2 = minimum                  (2.4)

Here, p is the number of prediction coefficients and k_s and k_e are the endpoints of the prediction region Q. By defining the following structures

        [ s_{k_s-1}  s_{k_s-2}  ...  s_{k_s-p} ]
    S = [     .          .               .     ]                          (2.5)
        [ s_{k_e-1}  s_{k_e-2}  ...  s_{k_e-p} ]

    s = [ s_{k_s}, ..., s_{k_e} ]^T                                       (2.6)

    c = [ c_1, ..., c_p ]^T                                               (2.7)

the covariance method can be formulated as the following projection problem:

    S c \approx s                                                         (2.8)

where the desired solution is that coefficient vector c which minimizes the distance between Sc and s. If the matrix S^T S is invertible then the solution is unique:

    c = (S^T S)^{-1} S^T s                                                (2.9)

If there is no unique solution (S^T S is singular) then some restriction must be placed on c to make the answer unique (e.g. the order p of the predictor could be lowered, reducing the number of unknowns). In that case the solution is any c which solves the normal equations

    S^T S c = S^T s                                                       (2.10)
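The covariance method above can be rendered directly in code. The following is a sketch (not code from the thesis; the helper names are our own), assuming Python: it builds S and s over the region k_s..k_e and solves the normal equations (2.10) by Gaussian elimination.

```python
# Sketch of the covariance method, eqs. (2.5)-(2.10): form S and s over
# k = ks..ke, then solve the normal equations S^T S c = S^T s.

def gaussian_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for j in range(col, n + 1):
                M[r][j] -= f * M[col][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def covariance_predictor(sig, p, ks, ke):
    """Coefficients c minimizing sum over k=ks..ke of (s_k - sum_n c_n s_{k-n})^2."""
    rows = [[sig[k - n] for n in range(1, p + 1)] for k in range(ks, ke + 1)]  # S, eq. (2.5)
    target = [sig[k] for k in range(ks, ke + 1)]                               # s, eq. (2.6)
    StS = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    Sts = [sum(r[i] * t for r, t in zip(rows, target)) for i in range(p)]
    return gaussian_solve(StS, Sts)    # normal equations, eq. (2.10)
```

On data that exactly obeys a p-term recursion, the recovered c reproduces the recursion coefficients.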
The covariance method generates a single predictor with the minimum possible total squared error over the finite prediction region Q. The RLS method is a means for calculating a sequence of covariance method predictors over a sequence of successively longer prediction regions {Q_i}. It offers a computational savings compared to calculating the covariance method predictor directly for each Q_i. The derivation of this algorithm is presented in the next section.
2.2 Recursive Least Squares

The RLS algorithm finds a sequence of optimal predictors {c[k_0]} for an input signal by minimizing the total squared error from a fixed starting sample s_{k_s} up to s_{k_0}. For a signal {s_k} and a p coefficient predictor c, the error sequence produced by the k_0th predictor is given by

    e_k[k_0] = s_k - \sum_{n=1}^{p} s_{k-n} c_n[k_0]                      (2.11)

and the quantity to be minimized is

    E[k_0] = \sum_{k=k_s}^{k_0} e_k^2[k_0]                                (2.12)
The RLS calculation does not calculate each desired predictor by inverting a matrix (see eq. 2.10). Instead, an iterative calculation is performed which uses the previous predictor c[k_0 - 1] together with a state matrix to calculate the new predictor c[k_0] and the new state matrix. The advantage of this method for calculating successive predictors over using the covariance method directly on each prediction region is that this iterative calculation requires only O(p^2) operations per new predictor, whereas the covariance method would need O(p^3) operations per predictor for each matrix inversion.
The derivation of the iteration can be shown in matrix form as follows. Let the signal be formed into a vector and a matrix as before,

    s[k_0] = [ s_{k_s}, ..., s_{k_0} ]^T

             [ s_{k_s-1}  ...  s_{k_s-p} ]
    S[k_0] = [     .              .      ]                                (2.13)
             [ s_{k_0-1}  ...  s_{k_0-p} ]
Then the error vector is

    e[k_0] = s[k_0] - S[k_0] c[k_0] = [ e_{k_s}[k_0], ..., e_{k_0}[k_0] ]^T      (2.14)

with

    c[k_0] = [ c_1[k_0], ..., c_p[k_0] ]^T                                (2.15)

In this notation, the total error is

    E[k_0] = e^T[k_0] e[k_0]                                              (2.16)

and the optimal choice for c[k_0] is

    c[k_0] = (S^T[k_0] S[k_0])^{-1} S^T[k_0] s[k_0]                       (2.17)

The derivation of the RLS iteration from eq. (2.17) is performed by substituting k_0 + 1 for k_0 and using the information available at the k_0th point to evaluate c[k_0 + 1] in an efficient manner. The new terms in the equation after the substitution are

    s[k_0 + 1] = [  s[k_0]   ]
                 [ s_{k_0+1} ]                                            (2.18)

and

    S[k_0 + 1] = [          S[k_0]           ]
                 [ s_{k_0}  ...  s_{k_0-p+1} ]                            (2.19)

and

    c[k_0 + 1] = (S^T[k_0+1] S[k_0+1])^{-1} S^T[k_0+1] s[k_0+1]           (2.20)

By introducing

    r = [ s_{k_0}, ..., s_{k_0-p+1} ]^T                                   (2.21)

one can substitute for terms in eq. (2.20) using eqs. (2.18) and (2.19) to get

    c[k_0 + 1] = (S^T[k_0] S[k_0] + r r^T)^{-1} (S^T[k_0] s[k_0] + r s_{k_0+1})      (2.22)
At this point introduce the following matrix identity: For any symmetric invertible matrix V and any vector r with the same size as a column of V,

    (V^{-1} + r r^T)^{-1} = V - (V r r^T V) / (1 + r^T V r)               (2.23)

Now let

    V = V[k_0] = (S^T[k_0] S[k_0])^{-1}                                   (2.24)

and

    c' = c[k_0 + 1]                                                       (2.25a)

    V' = V[k_0 + 1] = (S^T[k_0] S[k_0] + r r^T)^{-1}                      (2.25b)

    s' = s_{k_0+1}                                                        (2.25c)

By leaving out the indices from eq. (2.22) one can write

    c' = (V - (V r r^T V) / (1 + r^T V r)) (S^T s + r s')                 (2.26)

Rearranging terms and combining with eq. (2.17) leads to an expression which describes how to update the predictor.

    c' = c + (V r / (1 + r^T V r)) (s' - r^T c)                           (2.27)

The other RLS equation is derived by combining eqs. (2.23), (2.24) and (2.25b) and it shows how to update the state matrix V.

    V' = V - (V r r^T V) / (1 + r^T V r)                                  (2.28)
The full RLS iteration proceeds as follows: Starting with V and c from the previous iteration and the vector of previous signal points

    r = [ s_{k_0}, ..., s_{k_0-p+1} ]^T                                   (2.29)

use the new signal point to calculate the new predictor

    c' = c + (V r / (1 + r^T V r)) (s' - r^T c)                           (2.30)

Finally, calculate the new state matrix

    V' = V - (V r r^T V) / (1 + r^T V r)                                  (2.31)
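The iteration above can be sketched in code. This is an illustrative pure-Python rendering (function and variable names are ours, not the thesis's):

```python
# Sketch of the full RLS iteration, eqs. (2.29)-(2.31), in pure Python.

def rls_update(c, V, r, s_new):
    """One RLS step: given predictor c, state matrix V, the vector of
    previous signal points r = [s_k0, ..., s_{k0-p+1}] (eq. 2.29) and
    the new signal point s', return the updated (c', V')."""
    p = len(c)
    Vr = [sum(V[i][j] * r[j] for j in range(p)) for i in range(p)]      # V r
    denom = 1.0 + sum(r[i] * Vr[i] for i in range(p))                   # 1 + r^T V r
    err = s_new - sum(r[i] * c[i] for i in range(p))                    # s' - r^T c
    c_new = [c[i] + Vr[i] * err / denom for i in range(p)]              # eq. (2.30)
    V_new = [[V[i][j] - Vr[i] * Vr[j] / denom for j in range(p)]        # eq. (2.31)
             for i in range(p)]
    return c_new, V_new

# tiny demo: a first order predictor converging onto the pole of the
# geometric signal s[n] = 0.9^n
s = [0.9 ** n for n in range(10)]
c, V = [0.0], [[1e10]]
for k in range(len(s) - 1):
    c, V = rls_update(c, V, [s[k]], s[k + 1])
```

Here V is carried explicitly as the inverse of S^T S; starting it with a large diagonal value anticipates the initialization discussed in section 2.2.1.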
2.2.1 Initialization of RLS

The RLS algorithm offers a means to efficiently extend the error region of a predictor one point at a time. Thus, given an initial set of prediction coefficients c_0 and an initial state matrix V_0, one can recursively calculate the predictors for all possible extents of the error region up to the end of the data. Initializing the iteration would normally involve using the covariance method to calculate both an initial predictor and an initial state matrix for a small data interval and then executing the RLS iteration for the remaining data. However, calculating the initial covariance predictor involves a matrix inversion, which can be computationally costly, and since those two tasks are fundamentally different from the RLS iteration they would also require a large amount of program code. It was desirable to find an alternative means of initialization which did not involve an inversion calculation. Examining a first order case suggests a solution.
For a first order (single coefficient) predictor, the main variables have the following form:

    c[k_0] = c_1[k_0]                                                     (2.32a)

    S[k_0] = [ s_{k_s-1}, ..., s_{k_0-1} ]^T                              (2.32b)

    s[k_0] = [ s_{k_s}, ..., s_{k_0} ]^T                                  (2.32c)

and the predictor which solves the covariance method equation (eq. 2.17) is given by

    c[k_0] = ( \sum_{k=k_s-1}^{k_0-1} s_k s_{k+1} ) / ( \sum_{k=k_s-1}^{k_0-1} s_k^2 )      (2.33)

Because the variables c, V and r are now one dimensional, the equations for the RLS iteration from (2.30) and (2.31) are

    c' = c + (v r / (1 + r^2 v)) (s' - r c)                               (2.34a)

or

    c' = (c + v r s') / (1 + v r^2)                                       (2.34b)

and

    v' = v - (v^2 r^2) / (1 + r^2 v)                                      (2.35a)

or

    v' = v / (1 + v r^2)                                                  (2.35b)

where

    r = s_{k_0}                                                           (2.36)

and

    v^{-1} = S^T[k_0] S[k_0] = \sum_{k=k_s-1}^{k_0-1} s_k^2               (2.37)
Suppose one started the iteration of equations (2.34) and (2.35) at the first point in the error region (i.e. k_0 + 1 = k_s). By assigning the initial values v = v_init, c = c_init we have

    v[k_0] = v_init / (1 + v_init \sum_{k=k_s-1}^{k_0-1} s_k^2)           (2.38)

Clearly, for the result of this iteration to conform to the requirement of eq. (2.37), v_init = \infty. Given that result, the successive values of c are

    c[k_0] = (c_init + v_init \sum_{k=k_s-1}^{k_0-1} s_k s_{k+1}) / (1 + v_init \sum_{k=k_s-1}^{k_0-1} s_k^2)      (2.39)

Therefore, given v_init = \infty, the subsequent values of c solve eq. (2.33) (irrespective of the initial value of c), making the predictors identical to the equivalent covariance method predictors. This behavior is shown experimentally in figure 2.1. The data comes from exciting a 1-pole linear system with an impulse at n = 10. The system function and time response were
    H(z) = 1 / (1 - .98 z^{-1})

    s[n] = u[n-10] (.98)^{n-10}

The covariance method in the absence of noise should predict this signal exactly when given more than one point of its impulse response in the chosen error region. As can be seen from the figure, RLS also predicts the response for every point after the first. This method of initialization is an effective means for matching the RLS predictors to those generated by the covariance method.
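This initialization can be checked numerically. Below is a sketch (ours, not from the thesis; rls_step is our scalar rendering of eqs. (2.34b) and (2.35b)) reproducing the behavior of figure 2.1: with v_init very large, approximating v_init = infinity, the post-update prediction error vanishes for every point after the first nonzero sample of s[n] = u[n-10](.98)^{n-10}.

```python
# Numerical check of the first-order initialization: with v_init huge,
# the RLS predictor locks onto s[n] = (.98)^(n-10) u[n-10] and the
# post-update prediction error is essentially zero for every point
# after the first nonzero one.

def rls_step(c, v, r, s_new):
    """Scalar RLS update, eqs. (2.34b) and (2.35b)."""
    c_new = (c + v * r * s_new) / (1.0 + v * r * r)
    v_new = v / (1.0 + v * r * r)
    return c_new, v_new

s = [0.0] * 10 + [0.98 ** n for n in range(30)]   # impulse response starting at n = 10
c, v = 0.0, 1e10                                  # v_init large, c_init arbitrary
errors = []
for k in range(1, len(s)):
    c, v = rls_step(c, v, s[k - 1], s[k])
    errors.append(s[k] - c * s[k - 1])            # post-update prediction error
```

The only surviving error is the unit pulse at the onset n = 10, which no predictor can remove because all previous samples are zero.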
2.2.2 Higher Order Initialization

When a multicoefficient predictor is used, the mathematical approach used in the previous section to find V_init and c_init is not fruitful since the vector and matrix products do not commute and cancel in the fashion of eqs. (2.34) and (2.35). One reason for the difficulty of choosing the initial values of c and V in this circumstance is that the covariance predictor for the error region which we aim at is not uniquely specified until the error region is at least p points in length; and even then only if the data sequence requires at
[Figure 2.1: RLS error and coefficient 1 for the first-order initialization example.]
least a p pole model to describe it. Consider the covariance matrix definition

    R = S^T S                                                             (2.40)

The iteration for R is

    R[k+1] = R[k] + r[k] r^T[k]        k > k_s - 1                        (2.41)

    R[k_s - 1] = 0                                                        (2.42)

therefore,

    R[k_0] = \sum_{k=k_s-1}^{k_0-1} r[k] r^T[k]        k_0 > k_s          (2.43)

Now consider the iteration for V with V_init = \alpha I, so that V_init^{-1} = (1/\alpha) I is an approximation for R. If \hat{R} = V^{-1} then given equation (2.25b)

    \hat{R}[k_s] = (1/\alpha) I + r[k_s - 1] r^T[k_s - 1]                 (2.44)

and in general

    \hat{R}[k_0] = (1/\alpha) I + \sum_{k=k_s-1}^{k_0-1} r[k] r^T[k]      (2.45)

Therefore setting \alpha = \infty will make \hat{R} = R and as in the first order case the RLS predictor will correspond to the covariance method predictor.
In the first order case the initial c was irrelevant. Ultimately, the same is true in the multidimensional case as well since c solves

    R c = S^T s                                                           (2.46)

and once R is invertible, the value for c is fully determined. However, for the first few points of the iteration, R is singular and the trajectory followed by c depends on the data values and the value of c_init.
As an example, figures 2.2 and 2.3 show a 4-pole impulse response beginning at sample 20 which was processed by a 4-coefficient predictor with the initialization

    V_init = 10^10 I        c_init = 0

The predictor in figure 2.2 is started at k_s = 10; the one in figure 2.3 is started at k = 30 (thus these predictors use different initial data values). As can be seen, both reach the same value after 4 steps into the non-zero data, but their trajectories differ. To illustrate the dependence of the trajectory on c_init, figure 2.4 presents a 4-coefficient predictor started at k_s = 10 with

    V_init = 10^10 I        c_init = [ 1, 0, 0, 0 ]^T

Again the same predictor value is reached after 4 steps into the data, but the trajectory differs from the previous cases.
[Figure 2.2: Input data, RLS error, and coefficients 1-4 for the 4-coefficient predictor started at k_s = 10.]
[Figure 2.3: RLS error and coefficients 1-4 for the 4-coefficient predictor started at k = 30.]
[Figure 2.4: Input data, RLS error, and coefficients 1-4 for the 4-coefficient predictor started with non-zero c_init.]
2.2.3 Numerical Considerations

The initialization derived in the last section offers potential for numerical problems, since the associated \hat{R} of eq. (2.44) above is clearly singular for the first few iterations. For the iteration to succeed, the value of 1/\alpha must be so much smaller than any non-zero value it is compared against that our initial choice for V does not change the value of any of the coefficients as the data is processed. Since it is undesirable to require a threshold, in this work we chose a value of about 10^10 for \alpha. Subsequent iterations can then require comparisons of values exceeding 10^20, so these equations have great potential for numerical error. Fortunately, by using double precision arithmetic we were able to avoid this problem: comparisons of the direct covariance method calculation with the RLS algorithm initialized at V_init = 10^10 I showed coefficient differences well under one part in 10 for data set lengths up to 1024. That accuracy was deemed sufficient for this work, but for a more critical application it would be wise to determine specifically how the roundoff error of the computer affects the initialization of the algorithm.
2.3 Discussion

This chapter has presented the basic mathematics behind the RLS algorithm including some simple examples of its behavior. For more information on the topic of linear prediction the reader is invited to examine [Makhoul,1975] and the references he cites. More information about RLS can be found in [Eykoff,1974], and more recent ladder forms of recursive linear prediction are illustrated in [Satorious,1979].
3. EVENT COMPRESSION WITH RLS

The linear prediction method described in the previous chapter generates a series of predictors from an input data sequence, and it can be used to compress events. This chapter examines how. We begin by proposing a model for multiple event signals which is a very simple abstraction of the sonic well log situation described in the introduction. We then describe how the RLS iteration is initialized, and how two event compressed signals can be extracted from the RLS predictors. Finally, experiments are performed to illustrate the performance of these signals.
The concept behind using an adaptive prediction algorithm like RLS for event compression is the following. Consider the series of predictors created by the RLS iteration as a single time-varying predictor which minimizes the total prediction error over an expanding error region. When processing an input containing an event, this predictor will make errors at the beginning of the event, and the coefficients of the predictor will change as the algorithm adapts to predict the event. Hopefully, after the first few points of the event have passed, the error burst will die away and the predictor will stop changing. If that is the case, then when the algorithm processes a sequence which contains distinct events, one expects the beginning of each event to be accompanied by a burst of error energy, since the first few points of each event will not be predictable. Both the prediction error and the coefficient changes would then respond to the events in a way that could be used to perform event compression of the input data.
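This hoped-for behavior can be sketched numerically (ours, not an experiment from the thesis): running a 2-coefficient RLS predictor, with the update of eqs. (2.30)-(2.31), over a synthetic signal containing two exponential events yields error bursts at the event onsets and near-zero error while the predictor tracks an event.

```python
# Minimal illustration of event compression with RLS: the post-update
# prediction error bursts at each event onset and is small elsewhere.

def rls_update(c, V, r, s_new):
    """RLS step of eqs. (2.30)-(2.31) with explicit state matrix V."""
    p = len(c)
    Vr = [sum(V[i][j] * r[j] for j in range(p)) for i in range(p)]
    denom = 1.0 + sum(r[i] * Vr[i] for i in range(p))
    err = s_new - sum(r[i] * c[i] for i in range(p))
    c = [c[i] + Vr[i] * err / denom for i in range(p)]
    V = [[V[i][j] - Vr[i] * Vr[j] / denom for j in range(p)] for i in range(p)]
    return c, V

N = 80
s = [0.0] * N
for n in range(10, N):                 # event 1: pole 0.9, onset n = 10
    s[n] += 0.9 ** (n - 10)
for n in range(40, N):                 # event 2: pole 0.5, onset n = 40
    s[n] += 0.5 ** (n - 40)

c, V = [0.0, 0.0], [[1e10, 0.0], [0.0, 1e10]]
e_a = [0.0] * N
for n in range(2, N):
    r = [s[n - 1], s[n - 2]]
    c, V = rls_update(c, V, r, s[n])
    e_a[n] = s[n] - sum(r[i] * c[i] for i in range(2))   # post-update error
```

The bursts in e_a at n = 10 and n = 40 are the compressed events; between and after them the predictor has adapted and the error is small.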
3.1 The Data Model

This thesis was motivated by the problem of sonic well logging which was presented in the introduction. The signals in a real well log are very complex due to the geometry of the well. Rather than attempting to accurately model the well log data (a complicated task in itself), we chose to fabricate synthetic data which had the appearance of a well log and exhibited what we felt was an important feature of well logs; namely that the signal contain multiple bursts with independent spectra. The data model used in this study is shown in figure 3.1. A single impulse is fed to the delays D1-D3, generating three delayed pulses which drive three discrete-time all-pole systems H1-H3. Their outputs are added to produce the signal s[k]. We chose a three pulse model because the problem of sonic well logging can be considered a three pulse problem.
To make it possible to compare the various experiments presented later, a small set of representative signals was chosen from the data model given above to perform the experiments. Most of the figures use either a signal labeled Burstl or some
[Figure 3.1: The three-pulse data model: a single impulse feeds the delays D1-D3, whose outputs drive the all-pole filters H1-H3; the filter outputs are summed to form s[k].]
combination of its component pulses Burstla, Burstlb or Burstlc.
The parameters used to generate Burstl were:

Burstla:
    D_1 = 50 points
    H_1(z) = 1 / (1 - 3.73z^{-1} + 5.4z^{-2} - 3.58z^{-3} + .92z^{-4})

Burstlb:
    D_2 = 150 points
    H_2(z) = 1 / (1 - 3.89z^{-1} + 5.74z^{-2} - 3.81z^{-3} + .96z^{-4})

Burstlc:
    D_3 = 250 points
    H_3(z) = G_3 / (1 - 1.92z^{-1} + .98z^{-2})

where the gains were chosen to give the component arrivals peak powers of 1, .5, and .2 respectively. Figures 3.2 through 3.5 show the time response and log magnitude spectra for these signals.(1)
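The data model and the Burstl parameters can be sketched as follows. This is our illustration, not code from the thesis; in particular the gains are placeholders set to 1 here, whereas the thesis chooses them to fix the peak powers at 1, .5 and .2.

```python
# Sketch of the three-pulse data model of figure 3.1: an impulse is
# delayed by D1-D3 and fed through the all-pole filters H1-H3, and the
# outputs are summed. Gains G1-G3 are set to 1 as placeholders.

def allpole(a, x):
    """y[n] = x[n] - a[1] y[n-1] - ... - a[p] y[n-p], with a[0] = 1."""
    y = []
    for n in range(len(x)):
        acc = x[n]
        for k in range(1, len(a)):
            if n - k >= 0:
                acc -= a[k] * y[n - k]
        y.append(acc)
    return y

N = 400
bursts = [
    (50,  [1.0, -3.73, 5.40, -3.58, 0.92]),   # Burstla: D1, H1 denominator
    (150, [1.0, -3.89, 5.74, -3.81, 0.96]),   # Burstlb: D2, H2 denominator
    (250, [1.0, -1.92, 0.98]),                # Burstlc: D3, H3 denominator
]
s = [0.0] * N
for delay, a in bursts:
    x = [1.0 if n == delay else 0.0 for n in range(N)]   # delayed impulse
    for n, y in enumerate(allpole(a, x)):
        s[n] += y                                        # gain placeholder: 1
```

Each burst begins exactly at its delay: the signal is zero before sample 50 and the first burst's onset sample is the impulse itself.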
The first two bursts have a gradual build up in amplitude, as indicated by their superimposed pairs of complex poles, whereas the third burst is due to a single pair of complex poles giving it a much sharper onset. These choices were made to give the data the appearance of a sonic well log. In particular, the slow growth of the second burst is a common feature of well logs and makes finding it either by event compression or by eye a difficult task.

1. The graphs in these figures and in most of those following have been linearly interpolated.
[Figure 3.2: Burstla time response and log10 magnitude spectrum.]

[Figure 3.3: Burstlb time response and log10 magnitude spectrum.]

[Figure 3.4: Burstlc time response and log10 magnitude spectrum.]

[Figure 3.5: Burstl time response and log10 magnitude spectrum.]
3.2  Event Compression Signals
There are three signals available from the RLS iteration which we examined for potential use in event compression. These are the predictor coefficient vector c, the prediction error at the new point before the update e_b, and the prediction error at the new point after the update e_a. In terms of the equations for the update given in the last chapter (eqs. 2.30 and 2.31) these error sequences are defined as

    e_b[k0+1] = s[k0+1] - r^T c                    (3.1)

and

    e_a[k0+1] = s[k0+1] - r^T c'                   (3.2)

where c is the predictor before the update at point k0+1 and c' is the predictor after it.
Of the two error signals we chose only to work with the post update prediction error e_a because it contained more compressed events, as illustrated in figures 3.6 and 3.7. The data sequence for these figures was a single pole burst and, as expected, both error signals have pulses at the first point of the burst (the first point of the burst can not be predicted with RLS since the previous points are all zero). However, though the second point of the burst is predictable from the first, only e_a is zero for that point, since e_b is calculated before updating the predictor.
Figure 3.6  Input data, prediction errors, and coefficient 1
Figure 3.7  Input data, prediction errors, and coefficient 1
This effect leads to longer error bursts in e_b at events in the data and led to our choice to use e_a for our work. All further references to the RLS error or the prediction error in this thesis are to e_a unless otherwise noted.
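The e_b/e_a distinction is easy to reproduce. The sketch below uses a standard growing-memory RLS recursion (the matrix-inversion-lemma form); it is not necessarily identical term for term to the update of eqs. 2.30 and 2.31, and the initialization constant `delta` is an assumption.

```python
import numpy as np

def rls_errors(s, p, delta=1e-4):
    """One-step RLS linear prediction of s with p coefficients.
    Returns (eb, ea): the prediction error at each new point before
    and after the coefficient update (cf. eqs. 3.1 and 3.2)."""
    P = np.eye(p) / delta              # inverse covariance estimate
    c = np.zeros(p)                    # predictor coefficients
    eb, ea = [], []
    for k in range(p, len(s)):
        r = s[k - 1::-1][:p]           # the p most recent past samples
        e = s[k] - r @ c               # error before the update (e_b)
        g = P @ r / (1.0 + r @ P @ r)  # gain vector
        c = c + g * e                  # coefficient update
        P = P - np.outer(g, r @ P)
        eb.append(e)
        ea.append(s[k] - r @ c)        # error after the update (e_a)
    return np.array(eb), np.array(ea)

# A single (real) pole burst: the first new point is unpredictable in
# both signals, but e_a is already near zero at the next point while
# e_b shrinks only as the predictor settles.
s = 0.9 ** np.arange(30)
eb, ea = rls_errors(s, 1)
```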
One final signal, the error sequence generated by applying the covariance method to the entire signal, should be compared to the RLS error, particularly when applying event compression to data of this type. The covariance method error sequence comes from applying a single predictor to the entire signal, whereas each point in the RLS error sequence comes from a different predictor. Because the RLS predictor need only minimize the error energy of the points to the left of the predicted point, and not over the entire sequence (as is the case with the covariance method), the RLS error sequence will almost always have lower total energy than the error sequence of the covariance method for the same predictor order. As a matter of observation, we have found that this reduction in energy takes the form of shorter error bursts at the events in the data; though as yet we have not proven that this must be the case.
In addition to the prediction error signal, we examined the changes in the predictor coefficients as a means of generating an event compressed signal. This idea stemmed from observing how rapidly the prediction coefficients settled
after a new event occurred.
Figure 3.8 demonstrates the behavior of a 12 coefficient RLS predictor on the signal Burst1 defined in the last section (some of the coefficients are omitted due to the lack of space). The first event (burst) at point 50 causes a single non-zero error point and a rapid change in the predictor. The second event (point 150) causes very little activity in the error or the coefficients, presumably because of the similarity between the first and second bursts (both are all-pole). But the third event (point 250) causes a rapid change in the coefficients, as is expected since the signal is no longer all-pole. The fact that the coefficients settle in a short time compared to the duration of the bursts led us to formulate a signal based on them which could be used for event compression.
A coefficient change signal was generated by low pass filtering each c_i[k] with a single pole low pass filter to produce the vector c̄[k], and measuring the distance ||c[k] - c̄[k]||. Small random variations about a fixed value are reflected in the coefficient change signal as noise, with each sample having a height equal to the radial distance from the average predictor c̄[k] to the instantaneous predictor c[k]. However, when c[k] jumps to a new value due to an event
Figure 3.8  Burst1, RLS error (12 coef's), and coefficients 1-3
in the data, the coefficient change signal will contain an exponentially decaying pulse whose initial height is equal to the distance between the old and new values of c[k] and whose decay time is set by the low pass filters used to generate c̄. In effect the motion of the predictor coefficients is being high-passed to create the coefficient change signal. One drawback of this scheme is that slow changes in the predictor will be reduced in amplitude due to the high-passed nature of the coefficient change signal.
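A minimal sketch of this construction, assuming a one pole smoother with coefficient `alpha` (the report specifies the filter only by its 50% decay time, so this particular parameterization is an assumption):

```python
import numpy as np

def coefficient_change(C, alpha):
    """C is an (n, p) array whose rows are the predictor vectors c[k].
    Each coefficient is low pass filtered by a single pole filter to
    form cbar[k]; the output is the distance ||c[k] - cbar[k]||."""
    n, p = C.shape
    cbar = np.zeros(p)
    out = np.zeros(n)
    for k in range(n):
        cbar = alpha * cbar + (1.0 - alpha) * C[k]  # single pole low pass
        out[k] = np.linalg.norm(C[k] - cbar)
    return out

# A step in the coefficients (an "event") produces an exponentially
# decaying pulse whose decay rate is set by alpha.
C = np.zeros((40, 2))
C[20:] = 1.0
change = coefficient_change(C, alpha=0.5 ** (1.0 / 4.0))  # 4 point 50% decay
```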
The RLS error and the coefficient change signal were used to perform the event compression in all remaining figures. Figure 3.9 shows how they behave on the signal Burst1. In this case the 50% decay time of the filters used to generate the change signal was 4 points. We found empirically a decay time of p/3 points (where p is the order of the predictor) to be an effective choice for this parameter. Much shorter decay times led to multiple peaks in the change signal at each event and longer times reduced the resolvability of closely spaced events.
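The filter coefficient implied by a given 50% decay time is easy to compute. Assuming a single pole low pass of the form y[k] = a·y[k-1] + (1-a)·x[k] (the report gives only the decay time, not this exact form):

```python
def alpha_for_half_decay(n50):
    """Coefficient of a single pole low pass y[k] = a*y[k-1] + (1-a)*x[k]
    whose impulse response falls by 50% every n50 samples."""
    return 0.5 ** (1.0 / n50)

p = 12                                # predictor order used in the figures
a = alpha_for_half_decay(p / 3.0)     # the empirical p/3 rule: 4 points
```

For p = 12 this gives a ≈ 0.841, so the impulse response halves every 4 samples.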
3.3  EXPERIMENTAL RESULTS

The last section presented the behavior of the RLS error and coefficient change signals on noiseless data containing all-pole events. In practice noiseless data is rarely available and it is quite possible that preprocessing
Figure 3.9  Burst1, RLS error, and RLS coefficient change
(e.g. filtering) could have added zeroes to the events in the data if they were not already present. The following experiments are intended to give the reader some insight as to what to expect from these event compression signals when the input data is not ideal.
3.3.1  Additive Noise
The example of event location given in the last section used noiseless data. Figure 3.10 shows what happens to that example when white gaussian noise is added to the sequence Burst1 (the standard deviation of this noise is σ = .001, giving the first burst a S/N of 60 db). Two important features appear in this figure: the pulse in the coefficient change signal where the algorithm is started (point 0), and the substantial difference between the coefficient response to the second burst (point 150) and the response to the first (point 50). Both the starting transient (in the coefficient change signal) and the large first event response in the coefficients are implied by the structure of the RLS update equations.
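The quoted S/N follows directly from the peak power of 1 assigned to the first burst in section 3.1:

```python
import math

sigma = 0.001            # standard deviation of the added white noise
peak_power = 1.0         # peak power of the first burst (section 3.1)
snr_db = 10.0 * math.log10(peak_power / sigma ** 2)   # 60 db
```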
Recall that the algorithm is trying to adjust c to get the least square error E in the equation

    E = ||e||^2 = ||s - Sc||^2                     (3.3)
Figure 3.10  Burst1 σ=.001: RLS error (12 coef's) and RLS coefficient change
Each new point adds a row to the matrix S and new points to the vectors s and e, giving

    | e  |   | s  |   | S   |
    | e' | = | s' | - | r^T | c'                   (3.4)

and the new predictor c' minimizes the new error E' = ||e||^2 + e'^2. By separating the components of E' into the contribution from previous points Ep and the contribution from the current point Ec, and by introducing the vector d to represent the change in the predictor coefficients (i.e. d = c' - c), one has the following relations:

    Ep = ||e||^2 + ||Sd||^2 = E + d^T R d          (3.5)

    Ec = |s' - r^T c'|^2                           (3.6)

The term d^T R d represents the "cost" of changing the predictor in terms of poorer prediction of previous points. At each new point the algorithm trades off that cost with the benefits of improved prediction of the current point (reduction in Ec).
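The identity Ep = E + d^T R d in eq. (3.5) holds because the least squares residual e is orthogonal to the columns of S, so the cross term vanishes. A quick numerical check with arbitrary made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((20, 3))            # rows of past samples
s = rng.standard_normal(20)                 # past signal values
c, *_ = np.linalg.lstsq(S, s, rcond=None)   # least squares predictor
e = s - S @ c                               # residual, orthogonal to S's columns
E = float(e @ e)

d = 0.1 * rng.standard_normal(3)            # an arbitrary predictor change
R = S.T @ S                                 # the covariance matrix R
Ep_direct = float(np.sum((s - S @ (c + d)) ** 2))
Ep_formula = E + float(d @ R @ d)           # eq. (3.5)
```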
During the starting transient of the RLS iteration the eigenvalues of R are low and consequently the cost of changing the predictor (e.g. the values of d^T R d) will be small; the predictor will be very responsive to whatever change the data might permit. This is the cause of both the starting transient in the coefficient change signal and the sensitivity of the coefficients to the first event.
To illustrate the impact that additive noise has on the event compressed signals, a series of 9 figures (3.11a through 3.13c) was prepared using the signal Burst1 offset to the right 300 points. Each group of three figures has additive noise at a different level (i.e. σ = .001, .01 and .1) and within each group the RLS algorithm was started at three different points (point 0, point 200 and point 300).
The starting position of the iteration has a small but noticeable effect on the compressed events. For the RLS error, the sooner the arrival occurs after the starting point of the iteration, the smaller the event will be. Exactly the opposite is true for the coefficient change signal. Equation (3.5) indicates that the longer the interval between the start of the iteration and a given arrival, the more linear equations the predictor has to fit and the less it can afford to adjust itself to the new points. Since the data values generating these additional equations are noise, the equations are
independent and each one puts added constraint on the predictor c. (That would not be true if the data were all zero or came from a noiseless all pole arrival.) In effect, the added points desensitize the predictor, leading to a more gradual predictor change and, consequently, less prediction error. Fortunately, this effect does not appear to change the character of the compressed events. Therefore, the exact starting point of the iteration is not crucial as long as the starting transient itself does not obscure the first arrival.
In certain situations the existence of the starting transient may be a problem due to the lack of "eventless" data at the start of some signals. In that case, some means of initializing the RLS iteration to reduce or eliminate the starting transient would have to be found.
Figure 3.11a  Burst1 σ=.001, start at point 0: RLS error (12 coef's) and RLS coefficient change
Figure 3.11b  Burst1 σ=.001, start at point 200
Figure 3.11c  Burst1 σ=.001, start at point 300
Figure 3.12a  Burst1 σ=.01, start at point 0
Figure 3.12b  Burst1 σ=.01, start at point 200
Figure 3.12c  Burst1 σ=.01, start at point 300
Figure 3.13a  Burst1 σ=.1, start at point 0
Figure 3.13b  Burst1 σ=.1, start at point 200
Figure 3.13c  Burst1 σ=.1, start at point 300
To eliminate the starting transient from all the following examples, the predictor was started 500 points to the left of the location of the events, and consequently the starting transients do not appear in the visible data. This has no effect other than slightly changing the sizes of the events in the compressed signals.
The noise itself has little effect on the coefficient change signal at a level of -60db (figs. 3.11), but at -40db (figs. 3.12) the size of the third event in the change signal is severely reduced, and at -20db (figs. 3.13) only the first event is visible (note the change in scale factor over those three examples). As was noted previously, increasing the noise
level in the region before an event reduces the sensitivity of
the
coefficients
to
that
event
leading
to
a
smaller
coefficient change.
The RLS error degrades in a different fashion; the most prominent feature being the apparently magnified noise in the RLS error signal. If one views the problem from a filtering standpoint, the equation

    s[k] - Σ c_i s[k-i] = e[k]                     (3.7)

must be solved to minimize ||e||^2 or, equivalently, to make e orthogonal to the columns of S:

    S^T e = 0                                      (3.8)

In the frequency domain this corresponds to whitening or flattening the spectrum. Examination of the spectrum of the signal Burst1 (in section 3.1) indicates that the flattening will consist largely of raising the high frequency portion of the spectrum. This high boost leads to the noise in the RLS error signal.
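The high frequency boost can be seen from the prediction error filter itself. Taking a hypothetical two coefficient predictor fitted to low frequency data (poles assumed at z = 0.9 and z = 0.8; these numbers are illustrative, not from the report), the error filter A(z) = 1 - c1·z^-1 - c2·z^-2 has far more gain at the half sampling frequency than at DC:

```python
import numpy as np

# Hypothetical predictor whose poles sit at z = 0.9 and z = 0.8,
# i.e. a low frequency (low-pass) signal model.
c = np.array([1.7, -0.72])
a = np.concatenate(([1.0], -c))          # error filter taps [1, -1.7, 0.72]

w = np.array([0.0, np.pi])               # DC and the half sampling frequency
H = np.exp(-1j * np.outer(w, np.arange(a.size))) @ a   # frequency response
gain_dc, gain_nyquist = np.abs(H)
```

Any broadband noise in the data is therefore strongly amplified at high frequencies relative to the (whitened) signal.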
The other apparent effect on the RLS error signal of increasing the noise is the increased size of the compressed events.
Here again, because increasing the noise reduces the sensitivity of the predictor (thereby lessening the extent to which it adapts to new points), the error that the predictor
this magnification
of
the
events
in
the
RLS error fails to
keep pace with the noise as the noise is increased. Therefore,
this effect does not appear to be a useful means for enhancing
the events in the RLS error.
The preceding figures in this section were created using a pseudo random noise generator which produced exactly the same noise pattern on every graph. Since there may be some question about whether the exact noise pattern has a substantial impact on the appearance of the compressed signals, the last three figures (figs. 3.14a-c) show a fixed noise pattern with the three events of the Burst1 signal shifted to different positions. These figures illustrate that the precise noise pattern has little impact on the characteristics of the events in the location signals.
Figure 3.14a  Burst1 σ=.01, fixed noise pattern: RLS error and RLS coefficient change
Figure 3.14b  Same noise pattern, events shifted
Figure 3.14c  Same noise pattern, events shifted
3.3.1.1  Effects of Predictor Length
Figures 3.15a through 3.17c show four, eight and twelve pole predictors acting on the signal Burst1 at various noise levels. There are two characteristics of event location with RLS visible in this series of figures: first, increasing the noise lengthens the time for the predictor to settle; second, increasing the predictor length (up to a point) decreases the predictor settling time.
The covariance matrix R (see eq. 3.5) scales in proportion to the noise level in the signal. Thus the energy cost of modifying the predictor at a new point, d^T R d, increases with the noise level. On the other hand the reduction in error energy from better prediction of the event at the current point (Ec) is independent of the noise level. Therefore, the predictor adapts less to the events in the data as the noise increases, and consequently the compressed events in the RLS error and the coefficient change signals have longer duration as the noise level increases.
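The scaling of the cost term with noise can be checked directly: in a noise-only stretch of data, R grows with the noise variance, so a fixed coefficient change d costs more error energy at higher noise levels. The change vector d below is arbitrary, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
d = np.array([0.1, -0.05, 0.02])           # an arbitrary coefficient change
costs = []
for sigma in (0.001, 0.01, 0.1):           # three increasing noise levels
    x = sigma * rng.standard_normal(5000)  # noise-only region of the data
    S = np.lib.stride_tricks.sliding_window_view(x, 3)
    R = S.T @ S                            # covariance matrix of eq. (3.5)
    costs.append(float(d @ R @ d))         # cost of the same change d
```

Each tenfold increase in σ raises the cost d^T R d by roughly a factor of 100, since R scales with the noise power.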
The reason that the predictor settles faster when 8 or 12 coefficients are used rather than 4 is probably because the events in the data contain four poles. Once the second event occurs, a 4 pole model is inadequate. Note that there is very little change in the predictor settling time between the 8 and
12 coefficient predictors when the noise level is the same.1
1. In all the experiments which were run, increasing the
predictor length beyond 12 coefficients did not improve the
location events.
However, these signals contained at most
three events and the events themselves contained only four
poles (at most) each. Situations involving larger numbers of
events or very complex events may benefit from
larger
predictor lengths.
Figure 3.15a  Burst1 σ=.001: RLS error (4 coef's) and RLS coefficient change
Figure 3.15b  Burst1 σ=.01 (4 coef's)
Figure 3.15c  Burst1 σ=.1 (4 coef's)
Figure 3.16a  Burst1 σ=.001 (8 coef's)
Figure 3.16b  Burst1 σ=.01 (8 coef's)
Figure 3.16c  Burst1 σ=.1 (8 coef's)
Figure 3.17a  Burst1 σ=.001 (12 coef's)
Figure 3.17b  Burst1 σ=.01 (12 coef's)
Figure 3.17c  Burst1 σ=.1 (12 coef's)
3.3.2  Interevent Interference

As discussed in the last section, the data preceding an event determines in part how the predictor will react to it. Given a two event situation, the more similar the data of the first event is to the second, the less the prediction error will be at the second event, so the less the predictor will change to adapt to it. Figures 3.18 and 3.19 demonstrate this fact with data consisting of Burst1a as the first event and either Burst1a or Burst1b as the second.1 Figure 3.19 (the one with differing events) clearly shows a larger coefficient change and a longer prediction error disturbance at the second event, reflecting the larger difference between the two arrivals.

1. To make the details of the second event more apparent some of these graphs have been magnified, causing some of the compressed events to go off scale.
Figure 3.18  Burst1a followed by Burst1a, σ=.001: RLS error (12 coef's) and RLS coefficient change
Figure 3.19  Burst1a followed by Burst1b, σ=.001: RLS error (12 coef's) and RLS coefficient change
The next twelve figures (nos. 3.20a through 3.22d) demonstrate the effect that event spacing has on interevent interference. Within each group of four figures (3.20, 3.21 or 3.22) a predictor of a fixed length (4, 8 or 12 coefficients) was used to compress the events in data containing two instances of the Burst1a pulse at four different event spacings. These figures illustrate how the compressed signals behave versus event spacing and predictor length.
Examining figure 3.20d reveals that the error sequence for the four coefficient predictor appears different at all spacings up to 150 points. These differences include changes in the size and shape of the compressed events and a small disturbance which extends at least 150 points beyond the first event. The duration of this disturbance appears to determine the zone over which the position of the second compressed event will influence its shape or size.
The 8 and 12 coefficient predictors generate a much shorter error disturbance, and the compressed events they generate are similar in size and shape for all spacings above 25 points. This led us to believe that for a given event model, as indicated by the error disturbance, the longer the time taken for the error to die away, the further the next event must be to remain unaffected.
This postulation was tested by the following experiment: signals were generated with Burst1c starting at point 50, Burst1b at point 200, and Burst1a at various positions between 100 and 350 points; then event compression was performed. The results are presented in figures 3.23a-h. As can be seen from the figures, the compressed events for the last two bursts only interfere if the predictor does not have time to settle between them. The question of what determines this settling time is a possible topic for future investigation.

1. Burst1c was used to desensitize the predictor and it produces a large event simply because it is first. To make the details of the remaining events clearer, the RLS error and coefficient change graphs were magnified, causing the first compressed event to go off scale.
Figure 3.20a  Burst1a @ 50 & 75, σ=.001: RLS error (4 coef's) and RLS coefficient change
Figure 3.20b  Burst1a @ 50 & 100 (4 coef's)
Figure 3.20c  Burst1a @ 50 & 150 (4 coef's)
Figure 3.20d  Burst1a @ 50 & 200 (4 coef's)
Figure 3.21a  Burst1a @ 50 & 75 (8 coef's)
Figure 3.21b  Burst1a @ 50 & 100 (8 coef's)
Figure 3.21c  Burst1a @ 50 & 150 (8 coef's)
Figure 3.21d  Burst1a @ 50 & 200 (8 coef's)
Figure 3.22a  Burst1a @ 50 & 75 (12 coef's)
Figure 3.22b  Burst1a @ 50 & 100 (12 coef's)
Figure 3.22c  Burst1a @ 50 & 150 (12 coef's)
Figure 3.22d  Burst1a @ 50 & 200 (12 coef's)
Figure 3.23a  Burst1a @ 100, σ=.001: RLS error (12 coef's) and RLS coefficient change
Figure 3.23b  Burst1a @ 150
Figure 3.23c  Burst1a @ 175
Figure 3.23d  Burst1a @ 200
Figure 3.23e  Burst1a @ 225
Figure 3.23f  Burst1a @ 250
Figure 3.23g  Burst1a @ 300
Figure 3.23h  Burst1a @ 350
3.4 Linear Filtering
It should be evident from the preceding sections that noise degrades the quality of event compression performed using these signals. To try enhancing these event compression signals, experiments were performed with linear filtering (on noisy data) as a pre- and post-process to RLS. Three questions were examined:

Does prefiltering the data to reduce noise improve the quality of the observed compressed events?

Is all-pole filtering preferable to FIR filtering for this application?

Will postfiltering the RLS error make the events in it more visible?

We decided to prefilter the data because we thought that reducing the high frequency noise in the data might allow the predictor to adapt more quickly to the events and thereby produce sharper events in the RLS error and faster settling in the coefficient change signal. The spectra of the Burstl signal with three different noise levels are shown in figures 3.24a-3.24c. From these figures it is evident that the noise dominates the frequency band from .1f_sample to .5f_sample. We thought that reducing the noise in this band would reduce the error energy cost dᵀRd (see eq. 3.5) and thereby permit more predictor responsiveness to the events in the data.
Two filters were designed for the purpose of reducing the noise power in the frequency band extending from .1f_sample to .5f_sample. Figure 3.25 shows the time response and spectrum of a 50 point FIR lowpass filter designed with the Parks-McClellan algorithm [McClellan et al., 1973]. Figure 3.26 illustrates the corresponding responses for a purely recursive (all-pole) filter whose 6 point denominator was generated by a minimum-p criterion design program [Deczky, 1972].

Figures 3.27a-c demonstrate the effect of FIR prefiltering on the compressed event signals,¹ and figures 3.28a-c show the effects of IIR (purely recursive) filtering. The FIR filtering convolves the input sequence with the filter given in figure 3.25. The convolution adds 49 zeroes to each burst, making the 50 point long bursts difficult to model via an all-pole technique such as RLS. This causes extended events in the compressed signals due to inadequate prediction. (Surprisingly, this effect is not apparent in the RLS error and coefficient change signals of the low noise examples; the fact that this effect is absent there is currently not understood.)

1. These figures have been adjusted to position the events in the same places as those in the IIR example. That is, they have been left shifted 25 points.
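For concreteness, the two designs can be approximated with modern tools. The following is an illustrative stand-in, not the report's actual design runs: scipy.signal.remez implements the Parks-McClellan exchange used for the 50 point FIR filter, and a 5 pole Butterworth lowpass substitutes for the minimum-p all-pole design; the band edges and the test signal are our assumptions.

```python
import numpy as np
from scipy import signal

# 50 point FIR lowpass with cutoff near .1 f_sample via Parks-McClellan
# (remez); the band edges here are illustrative choices, not the report's.
fir_taps = signal.remez(50, [0.0, 0.08, 0.12, 0.5], [1.0, 0.0])

# A 5 pole recursive lowpass at .1 f_sample.  The report used a minimum-p
# design [Deczky, 1972]; a Butterworth filter is a stand-in.  Wn is
# relative to Nyquist, so .1 f_sample -> 0.2.  Note len(a) == 6, matching
# the "6 point" denominator in the text.
b, a = signal.butter(5, 0.2)

# Apply both to a noisy low-frequency test signal.
rng = np.random.default_rng(0)
x = np.sin(0.1 * np.pi * np.arange(400)) + 0.1 * rng.standard_normal(400)
x_fir = np.convolve(x, fir_taps, mode="full")  # appends 49 extra samples
x_iir = signal.lfilter(b, a, x)                # same length, no added zeroes
```

Note how full convolution with the FIR filter lengthens the sequence by numtaps - 1 = 49 samples, the same mechanism that pads each burst with zeroes in the report's FIR examples, while the recursive filter leaves the sequence length unchanged.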
The IIR filtering process does not introduce a large number of zeroes to each burst; instead it adds 5 poles. Thus, the resulting data is more easily modelled by the RLS algorithm (since it is an all-pole modelling technique) than in the FIR case, and therefore the event location signals are better behaved.
Unfortunately, comparison of these figures with figure 3.17 indicates that prefiltering with either type of filter does not improve the quality of the compressed events; after filtering, both types of filtering cause an undesirable increase in the noise of the RLS error signal and increased activity in the coefficient change signal. The reason for this may be that the noise of the filtered signal contains more correlation and can be more effectively predicted, so the predictor coefficients change more in trying to predict it. In any case, this method of enhancement seems to be ineffective.
To try removing the noise in the RLS error signal (and thereby enhance the event locations it obscures), the FIR filter of figure 3.25 was applied to the RLS error signal. The results are shown in figure 3.29 for various noise levels. In comparison with the unfiltered RLS error signal presented in figure 3.17, there is some reduction of the noise level, but the events themselves are smeared. Possibly a matched filter could be designed to compress the events in the RLS error, but at this time the characteristics of those events are not known well enough to design such a filter.
[Figure 3.24a: Burst1, sigma=.001; log10 magnitude spectrum.]
[Figure 3.24b: Burst1, sigma=.01; log10 magnitude spectrum.]
[Figure 3.24c: Burst1, sigma=.1; log10 magnitude spectrum.]
[Figure 3.25: FIR LPF, .1 fs, 50 points; time response and log10 magnitude spectrum.]
[Figure 3.26: IIR LPF, .1 fs, 5 poles; time response.]
[Figures 3.27a-c: Burst1 through FIR lowpass, sigma=.001, .01, .1; RLS error (12 coef's) and RLS coefficient change.]
[Figures 3.28a-c: Burst1 through 5 pole lowpass, sigma=.001, .01, .1; RLS error (12 coef's) and RLS coefficient change.]
[Figures 3.29a-c: Burst1, sigma=.001, .01, .1; RLS error through FIR lowpass and RLS coefficient change.]
3.5 Decimation
The band-limited nature of the burstl signal suggests another possible approach to the enhancement of the event compressed signals. The regions of the spectrum which have low energy (i.e. from .1f_sample to .5f_sample) are amplified, compared to the high energy regions, in the process of linear prediction (due to the whitening mentioned in the last section). Decimating the data would reduce the relative size of the low energy region of the spectrum, possibly reducing the impact that the noise in that region had on the compressed signals and allowing the predictor to expend more "effort" predicting the events and less predicting the noise.
Figures 3.30a-c and 3.31a-c show 2/1 decimation of the Burstl signal with and without all-pole prefiltering¹ to reduce aliasing. Figures 3.32a-c and 3.33a-c show the same examples but with 4/1 decimation.
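The decimation experiment can be sketched as follows. This is an illustrative stand-in: the butter call substitutes a Butterworth lowpass for the report's minimum-p 5 pole filter, and the function name and test signal are ours.

```python
import numpy as np
from scipy import signal

def decimate_burst(x, m, prefilter=True):
    """Keep every m-th sample, optionally after a 5 pole recursive
    anti-aliasing lowpass at the new Nyquist frequency."""
    if prefilter:
        # Wn is relative to the original Nyquist; 1/m puts the cutoff
        # at the Nyquist frequency of the decimated sequence.
        b, a = signal.butter(5, 1.0 / m)
        x = signal.lfilter(b, a, x)
    return x[::m]

x = np.sin(0.05 * np.pi * np.arange(200))
y2 = decimate_burst(x, 2)                     # filtered 2/1 decimation
y4 = decimate_burst(x, 4)                     # filtered 4/1 decimation
raw2 = decimate_burst(x, 2, prefilter=False)  # unfiltered 2/1 decimation
```

The all-pole prefilter is applied before downsampling so that the energy above the new Nyquist frequency is attenuated rather than folded back onto the events.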
Comparison of these figures with the undecimated examples (figs. 3.17 and 3.28) shows that with increasing decimation ratios the coefficients change more at each event, but neither the unfiltered nor the filtered decimation methods offers substantial improvement in the quality of the data. In fact, since the settling time of the predictor seems to be independent of the decimation ratio, and since decimation lowers the interevent spacing, interevent interference is more likely if the data has been decimated.

1. All-pole filtering was chosen over FIR filtering because it smears the events less.
[Figures 3.30a-c: Burst1, sigma=.001, .01, .1, decim 2/1; RLS error (12 coef's) and RLS coefficient change.]
[Figures 3.31a-c: Burst1, sigma=.001, .01, .1, 5 pole LPF, decim 2/1; RLS error (12 coef's) and RLS coefficient change.]
[Figures 3.32a-c: Burst1, sigma=.001, .01, .1, decim 4/1; RLS error (12 coef's) and RLS coefficient change.]
[Figures 3.33a-c: Burst1, sigma=.001, .01, .1, 5 pole LPF, decim 4/1; RLS error (12 coef's) and RLS coefficient change.]
3.6 Summary
The preceding examples indicate that both of these compressed signals (the RLS error and the coefficient change signal) provide a means for locating the positions of events in the input data. If the S/N ratio of the input exceeds 40db, the compressed events are visible despite severe overlap of the input pulses, and the positions of those events are apparent even when the events are not well separated. In addition, the duration of the compressed events is on the order of the predictor length and appears to be fairly independent of the existence or nature of previous events.
The following are some of the important results of these experiments.

The starting location of the RLS iteration appears to only affect the sizes of the compressed events. However, there is a large initial pulse in the coefficient change signal due to the sensitivity of the algorithm to the first few data points. This means that the data must have several predictor lengths of noise preceding the first event if the coefficient change signal is to be used for event compression.

Additive noise in the data does not have a substantial effect on the pulse shape of the compressed events. However, it does cause noise in the RLS error and coefficient change signals, which seems to be in proportion to the noise level in the data. Unfortunately, there is a large variation in the size of compressed events with this technique; making it difficult to establish a S/N criterion for event compression.
Compressed events do not influence each other
substantially if the predictor has time to
settle between them, even though the bursts
which cause them may be severely overlapped.
Figures 3.23a-h showed this by changing the
relative position of the second and
third
bursts in an input sequence.
The resulting
compressed events had constant shape so long
as they did not overlap.
Therefore, the
faster
the
predictor
settles,
the
closer
events can be to one another and still be
resolved.
This also means that events containing large numbers of zeroes (as in the FIR filtering examples) will not compress well with this technique unless they are widely separated.
Note that a burst containing a
large
number
of
zeroes
does
generate
a
compressed event, but the event has longer
duration than it would if the zeroes were not
present.
While filtering the data may be necessary to
reduce aliasing, we found that it did not
improve the quality of the event compressed
signals.
If filtering must be performed,
all-pole filtering should be used since the
zeroes
introduced by FIR filtering tend to
lengthen the compressed events.
We found that decimation does increase the
responsiveness of the predictor to each event,
but the relative event to noise ratios in the
location signals do not improve. In addition,
the predictor settling time becomes a larger
fraction
of
the
event
spacing,
thereby
increasing
the
likelihood
of
interevent
interference.
Consequently, decimation should
be avoided; and in fact over-sampled data is
likely to provide higher quality compressed
events.
4. CONCLUSIONS
In this thesis we have developed an event compression technique based on the RLS algorithm, which can effectively compress events of differing spectra, provided they fit the model assumed by the RLS method (i.e. all-pole events). We examined the RLS prediction error as an event compressed signal and in addition developed the coefficient change signal for event compression.
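The machinery just summarized can be condensed into a short sketch. The following is our own minimal reimplementation of a growing-window RLS predictor; the variable names, and the use of the norm of the coefficient increment as the change measure, are assumptions for illustration, not the thesis's exact definitions.

```python
import numpy as np

def rls_event_compress(x, p=12, delta=100.0):
    """Order-p growing-window RLS linear predictor.  Returns the
    prediction error and a coefficient change signal."""
    w = np.zeros(p)            # predictor coefficients
    P = delta * np.eye(p)      # inverse of the data correlation estimate
    err = np.zeros(len(x))
    change = np.zeros(len(x))
    for n in range(p, len(x)):
        u = x[n - 1::-1][:p]                # p most recent past samples
        k = P @ u / (1.0 + u @ P @ u)       # gain vector
        err[n] = x[n] - w @ u               # a priori prediction error
        dw = k * err[n]                     # coefficient increment
        w = w + dw
        P = P - np.outer(k, u @ P)          # rank-one inverse update
        change[n] = np.linalg.norm(dw)      # one simple change measure
    return err, change

# On a predictable (all-pole-like) signal, the error collapses once the
# predictor settles; it is the onset transient that marks an event.
x = np.sin(0.3 * np.arange(300))
err, change = rls_event_compress(x)
```

The error is large at the start, while the predictor is still adapting, and then falls essentially to zero; in the event compression setting each new burst restarts that transient.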
Our experiments indicate that variations in the event ordering, event positions, or the starting position of the algorithm have only minor effects on the compressed events, but that this technique is effective only if the S/N ratio is high (e.g. > 40db). Therefore, this technique should be useful in situations where the events are close to all-pole and the noise level is not high.
This thesis was an initial investigation into the feasibility of using the RLS algorithm for locating events with differing spectra. When we began this work we wanted to discover if the technique had any value and, if it did, where further work should be done. Given that need, the best course seemed to be to use the RLS algorithm on a large number of synthesized examples, where we knew the number and positions of the events in the data. These examples indicate that this is indeed a viable means for performing event compression.
However, there are many questions touched upon in the course of this thesis which need more detailed investigation.
One of the most important results of this thesis is that the events in the data cannot be spaced closer than the predictor settling time, if they are to be resolved from each other. The issue of what determines the predictor settling time remains unsolved. Our experiments with noise indicate that the settling time increases with noise level, and figures 3.23a-h indicate that it is not strongly dependent on the position of previous events. Also, the settling time decreases with predictor length (up to a point), and the settling time for a given event is strongly dependent on the "character" of that event. More investigation of the dependence of the settling time on the characteristics of the event would be useful, insofar as it provides a means for decreasing the settling time of the predictor, thereby reducing the necessary separation between events.
Another possible avenue of investigation is that of alternatives to the RLS algorithm as presented in this thesis. Some variations of this adaptive algorithm exist in which the total squared error energy E is minimized over a moving region Q of the data, rather than an expanding region as was done here. Still other methods involve exponentially weighting the past. These modifications would allow the algorithm to be used over large amounts of data without the predictor becoming totally insensitive to the new points (as the error region grows the predictor tends to become less sensitive).
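The exponential weighting variation amounts to a small change in the standard RLS recursion. A sketch under the usual forgetting-factor formulation follows; the symbol lam, its value, and the test signal are our choices for illustration.

```python
import numpy as np

def ewrls_step(w, P, u, d, lam=0.98):
    """One exponentially weighted RLS update.  lam < 1 discounts old
    errors geometrically, so the predictor never becomes insensitive
    to new data as the record grows."""
    k = P @ u / (lam + u @ P @ u)
    e = d - w @ u                        # a priori error
    w = w + k * e
    P = (P - np.outer(k, u @ P)) / lam   # dividing by lam inflates P,
                                         # keeping the gain from dying out
    return w, P, e

# With lam = 1 this reduces to the growing-window update.
p = 4
w, P = np.zeros(p), 100.0 * np.eye(p)
x = np.sin(0.3 * np.arange(200))
errs = []
for n in range(p, len(x)):
    w, P, e = ewrls_step(w, P, x[n - 1::-1][:p], x[n])
    errs.append(e)
```

The division of P by lam at each step is what prevents the gain vector from shrinking toward zero as the data record lengthens.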
In some applications that capability might be essential; more importantly, reducing the amount of previous data predicted might allow the algorithm to more readily adapt to new events, thereby reducing the settling time. Our own feeling is that a means for dynamically changing the region of error minimization would be more effective than simply expanding or moving the region at each iteration, since that would permit the region of error minimization to be reduced at a new event to shorten the settling time, and then lengthened between events to reduce noise.
We found the coefficient change signal to be useful for event compression, but our change signal used equal weights for all the coefficients and a fixed decay time for the filters. Is there an optimum choice for the weights of the coefficients in the change signal? Could Kalman filtering be used to track the coefficients more effectively? Perhaps there is a better alternative than simply high passing the coefficient signal. For example, an adaptive decay time might make the compressed coefficient change events for the second burst in the Burstl signal more visible.
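The two free choices just raised, per-coefficient weights and filter decay time, can be made explicit in a small sketch. The function and its defaults are our illustration, not the thesis's implementation.

```python
import numpy as np

def coefficient_change(dW, weights=None, decay=0.9):
    """Combine per-iteration coefficient increments dW (shape N x p)
    into a change signal: a weighted sum of increment magnitudes,
    smoothed by a first-order filter with the given decay."""
    N, p = dW.shape
    if weights is None:
        weights = np.ones(p)    # equal weights, as in the thesis
    out = np.zeros(N)
    s = 0.0
    for n in range(N):
        s = decay * s + weights @ np.abs(dW[n])
        out[n] = s
    return out

# A single large coefficient jump produces a decaying pulse.
dW = np.zeros((10, 3))
dW[2] = 1.0
pulse = coefficient_change(dW)
```

Making weights or decay adaptive, for example shortening the decay when the increments are large, is one concrete way to pose the open questions above.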
Finally, there is no substitute for experiments on real data. The original motivation for this work was to find a means for compressing sonic well log data. Unfortunately, subsequent study of the well logging problem revealed that the structure of the signals recorded in that situation does not fit the model that was assumed for this work. Experimentally we found that this technique did not compress those signals, but given their complicated structure, that was not surprising. Therefore, application of this method of event compression to data which more closely corresponds to the assumed model is needed. We hope to perform experiments using acoustic cardiac data in the near future.
References
Deczky, A.G. (1972). "Synthesis of Recursive Digital Filters Using the Minimum-p Error Criterion", IEEE Trans. on Audio and Electroacoustics, vol. AU-20, no. 4, pp. 257-263

Eykhoff, P. (1974). SYSTEM IDENTIFICATION: Parameter and State Estimation, John Wiley and Sons, New York, p. 235

Makhoul, J. (1975). "Linear Prediction: A Tutorial Review", Proc. IEEE, vol. 63, no. 4, pp. 561-580

Markel, J. (1972). "The SIFT Algorithm for Fundamental Frequency Estimation", IEEE Trans. on Audio and Electroacoustics, vol. AU-20, no. 5, pp. 367-377

McClellan, J., Parks, T., Rabiner, L. (1973). "A Computer Program for Designing Optimum FIR Linear Phase Digital Filters", IEEE Trans. on Audio and Electroacoustics, vol. AU-21, no. 6, pp. 506-526

Satorius, E.H. (1979). "On the Application of Recursive Least Squares Methods to Adaptive Processing", Intl Workshop on Appl of Adaptive Control, Yale University

Tribolet, J. (1977). "Seismic Applications of Homomorphic Signal Processing", PhD Thesis, M.I.T.

Ulrych, T. (1971). "Applications of Homomorphic Deconvolution to Seismology", Geophysics, vol. 36, no. 4, pp. 650-660

Young, T.Y. (1965). "Epoch Detection - A Method for Resolving Overlapping Signals", Bell Sys. Tech. Journal, vol. 44, pp. 401-426