01 Oct 2013
• Geodetic cross-correlation assignment
  (should we postpone deadline to Tue 10/15?)
• Given a discrete function h sampled at intervals T, the Fourier transform is defined on [–f_N, f_N] (and otherwise periodic). Frequencies ≥ f_N = 1/(2T) are aliased to lower f!
• Generally, continuous ↔ aperiodic; discrete ↔ periodic. In practice, we usually apply the transform to discrete, finite data…
• Properties of correlation functions and spectra for the discrete transform are otherwise similar to those for continuous functions, save that they are discrete/aperiodic in the time domain and continuous/periodic in the frequency domain…
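A quick numerical check of the aliasing claim above (a NumPy sketch; the sampling interval and test frequency are arbitrary choices, not from the slides):

```python
import numpy as np

# Sample a sinusoid above the Nyquist frequency and show that it
# appears at a lower (aliased) frequency. All values illustrative.
T = 0.01                  # sampling interval (s)
fN = 1.0 / (2.0 * T)      # Nyquist frequency f_N = 1/(2T) = 50 Hz
f_true = 70.0             # above f_N -> will alias
n = np.arange(256)
x = np.cos(2 * np.pi * f_true * n * T)

freqs = np.fft.rfftfreq(n.size, d=T)
peak = freqs[np.argmax(np.abs(np.fft.rfft(x)))]
# The 70 Hz signal shows up near |f_true - 1/T| = 30 Hz, below f_N.
print(peak)
```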
© A.R. Lowry 2013
Spectral estimation is generally performed by one of two approaches:
• Nonparametric (or “classical”) methods use the Fourier transform and (perhaps) one or more tapers applied to the data (examples include periodogram, Blackman–Tukey, and multitaper power spectra)
• Parametric methods assume some model to describe the behavior or statistics of the data, and parameterize that model (e.g., maximum-entropy or autoregressive power spectra).
In this course we will focus primarily on nonparametric spectral estimation.
For more on the topic see Modern Spectral Estimation (Kay) or Kay & Marple (1981) (posted link).
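As a concrete instance of the classical approach, here is a minimal sketch using SciPy's `periodogram` (taper, then FFT); the signal, sampling rate, and taper choice are illustrative, not from the slides:

```python
import numpy as np
from scipy import signal

# Classical (nonparametric) estimate: taper the data, FFT, square.
rng = np.random.default_rng(0)
fs = 100.0
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 12.5 * t) + rng.standard_normal(t.size)

# scipy.signal.periodogram applies the taper and FFT in one call.
f, Pxx = signal.periodogram(x, fs=fs, window="hann")
print(f[np.argmax(Pxx)])    # peak near the true 12.5 Hz line
```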
Recall: The Fourier transform of a random process $\tilde f(t)$ is:

$$\tilde F(\omega) = \int \tilde f(t)\, e^{-i\omega t}\, dt$$

The expected value of the FT is:

$$E\{\tilde F(\omega)\} = E\left\{\int \tilde f(t)\, e^{-i\omega t}\, dt\right\} = \int E\{\tilde f(t)\}\, e^{-i\omega t}\, dt$$

which, assuming $f$ is wide-sense stationary with mean $\mu_f$, is

$$E\{\tilde F(\omega)\} = \mu_f \int e^{-i\omega t}\, dt = 2\pi\, \mu_f\, \delta(\omega)$$

(i.e., this has meaning only for $\omega = 0$).
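A numerical check of this "mean concentrates at zero frequency" result (a NumPy sketch; the mean value, record length, and realization count are arbitrary):

```python
import numpy as np

# Average the FFTs of many realizations of a WSS process with mean mu:
# only the DC term survives; all other frequencies average toward zero.
rng = np.random.default_rng(1)
mu = 2.0
N, M = 512, 2000                       # samples per realization, realizations
X = mu + rng.standard_normal((M, N))
mean_ft = np.abs(np.fft.rfft(X, axis=1).mean(axis=0))

# DC term ~ mu*N; every other bin is small by comparison.
print(mean_ft[0], mean_ft[1:].max())
```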
The autocorrelation of the Fourier transform is:

$$R_{FF}(u,v) = E\{\tilde F(u)\, \tilde F^*(v)\} = E\left\{\iint \tilde f(t_1)\, \tilde f^*(t_2)\, e^{-iut_1} e^{ivt_2}\, dt_1\, dt_2\right\} = \iint R_{ff}(t_1, t_2)\, e^{-iut_1} e^{ivt_2}\, dt_1\, dt_2$$

(can’t assume $F$ is WSS just because $f$ is!)
Given $f$ wide-sense stationary, $R_{ff}(t_1, t_2) = R_{ff}(t_1 - t_2)$, so:

$$R_{FF}(u,v) = \iint R_{ff}(t_1 - t_2)\, e^{-iut_1} e^{ivt_2}\, dt_1\, dt_2$$

Substituting $\tau = t_1 - t_2$ and integrating over $\tau$ first:

$$R_{FF}(u,v) = \int \left[\int R_{ff}(\tau)\, e^{-iu\tau}\, d\tau\right] e^{-i(u-v)t_2}\, dt_2 = S_{ff}(u) \int e^{-i(u-v)t_2}\, dt_2 = 2\pi\, S_{ff}(u)\, \delta(u - v)$$

So, $F(\omega)$ is generally nonstationary because it depends on frequency ($u$), not just on the difference $u - v$.
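The delta-function structure of $R_{FF}$ can be checked numerically: Fourier coefficients of a WSS process are (asymptotically) uncorrelated across distinct frequencies. A NumPy sketch, with arbitrary sizes and frequency bins:

```python
import numpy as np

# Check R_FF(u,v) ~ S_ff(u) * delta(u - v) for unit-variance white noise
# (S_ff = 1): the same-frequency correlation is ~1, cross-frequency ~0.
rng = np.random.default_rng(2)
N, M = 256, 4000
F = np.fft.fft(rng.standard_normal((M, N)), axis=1)

u, v = 10, 20                                           # two distinct bins
same = np.mean(F[:, u] * np.conj(F[:, u])).real / N     # ~ S_ff = 1
diff = abs(np.mean(F[:, u] * np.conj(F[:, v]))) / N     # ~ 0

print(same, diff)
```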
We’ve already noted that
• the autopower spectrum is the Fourier Transform of the autocorrelation function, &
• the autocorrelation function can be estimated (given ergodicity) from the signal via:
$$\tilde R_T(\tau) = \frac{1}{2T} \int_{-T}^{T} \tilde f(t)\, \tilde f(t + \tau)\, dt$$
Given an infinite record length,

$$E\{\tilde R_T(\tau)\} \to R(\tau) \quad\text{and}\quad V\{\tilde R_T(\tau)\} \to 0 \quad\text{as } T \to \infty$$

(provided $R(\tau) \to 0$ as $\tau \to \infty$).
However, given a finite record length, we have limited overlap of $x(t + \tau)$ with $x(t)$. This does not bias the estimate of $R$, but fewer realizations means increasing variance as $\tau$ increases…
For example: $\tilde R_T(l)$ for $l = 2$ has $N = 33$ realizations here, while $\tilde R_T(l)$ for $l = 30$ has just $N = 5$ realizations.
For discrete data, this calculation looks like:

$$r_{xx}(l) = \frac{1}{N - l} \sum_{n=1}^{N-l} \tilde x_n\, \tilde x_{n+l}$$

and is performed for all $l = 0, 1, 2, \ldots, L$. (But the estimate will be very poor if $L/N < 0.2$, and ideally one would use $L = N$.)
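The lag-domain estimate above can be sketched directly (a NumPy implementation of the formula; the test signal is arbitrary white noise):

```python
import numpy as np

def autocorr(x, L):
    """Estimate r_xx(l) for l = 0..L with the 1/(N - l) normalization
    from the text. Variance grows as l approaches N (fewer overlapping
    samples per lag)."""
    x = np.asarray(x, dtype=float)
    N = x.size
    return np.array([np.dot(x[:N - l], x[l:]) / (N - l)
                     for l in range(L + 1)])

rng = np.random.default_rng(3)
x = rng.standard_normal(1000)
r = autocorr(x, 30)
print(r[0])      # lag 0 ~ variance of x, i.e. close to 1
```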
The Blackman–Tukey (or “indirect”) method for power spectral estimation uses this approach to estimating $R$, but downweights the longer lags, estimating the spectrum as:

$$S_{xx}(\omega) = \sum_{l=-N}^{N} \frac{N - |l|}{N}\, r_{xx}(l)\, e^{-i\omega l}$$

(using the FFT). This has two problems:
(1) Variance in the estimate $\hat r_{xx}$ is large as $l \to N$ (& that variance will influence ALL frequencies after the FFT!)
(2) The limited length of $r_{xx}$ implies convolution with some windowing function!
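A minimal sketch of the indirect estimator (NumPy; the triangle lag window here runs over l = 0..L rather than 0..N, and the test data are arbitrary, so treat it as illustrative rather than the slides' exact recipe):

```python
import numpy as np

def blackman_tukey(x, L):
    """Indirect PSD estimate: autocorrelation to lag L, triangle
    weighting of the lags, then FFT of the symmetric lag sequence."""
    x = np.asarray(x, dtype=float)
    N = x.size
    r = np.array([np.dot(x[:N - l], x[l:]) / N for l in range(L + 1)])
    w = 1.0 - np.arange(L + 1) / (L + 1)      # triangle downweights long lags
    rw = r * w
    # Symmetric sequence for lags -L..L, in FFT (wrap-around) order.
    full = np.concatenate([rw, rw[1:][::-1]])
    return np.abs(np.fft.fft(full))

rng = np.random.default_rng(4)
S = blackman_tukey(rng.standard_normal(4096), 64)
print(S.mean())      # ~ 1 for unit-variance white noise
```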
Classical (nonparametric) methods for spectral estimation thus suffer from two main problems:
(1) The estimate of statistical properties has relatively few realizations for large lags of the autocorrelation function in the time domain, and correspondingly for long wavelengths (= low frequencies) in the spectral domain. There’s not much that one can do about this unless you have the luxury of running the experiment multiple times (i.e., multiple realizations), in which case you can stack.
(2) The power spectrum is also convolved with the transform of whichever windowing/weighting functions were applied in the time domain.
Recall that cross-correlation and stacking are what’s used to estimate normal-moveout velocity in industry reflection data…
For a reflection from a given layer, NMO correction stacks CMP gathers using different assumed velocities, and looks for the stacking velocity that maximizes the stacked energy.
This is a form of cross-correlation analysis, so it can be sped up a bit using FFTs…
One can similarly stack cross-correlations of vertical and horizontal components of tremor signals… and more accurately locate sources using S – P arrival times.
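The FFT speed-up mentioned above rests on the correlation theorem: multiply one transform by the conjugate of the other, then invert. A NumPy sketch with a synthetic 7-sample shift (signal and shift are made up for illustration):

```python
import numpy as np

def xcorr_fft(a, b):
    """Cross-correlation via FFT; zero-padding to a power of two
    at least len(a)+len(b)-1 avoids circular wrap-around."""
    n = len(a) + len(b) - 1
    nfft = 1 << (n - 1).bit_length()          # next power of two
    A = np.fft.rfft(a, nfft)
    B = np.fft.rfft(b, nfft)
    return np.fft.irfft(A * np.conj(B), nfft)[:n]

rng = np.random.default_rng(5)
a = rng.standard_normal(128)
b = np.roll(a, -7)                # b is a advanced by 7 samples
lag = int(np.argmax(xcorr_fft(a, b)))
print(lag)                        # recovers the 7-sample shift
```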
La Rocca et al. (2009) Science 323 620-623
This bias problem with classical (nonparametric) methods for spectral estimation is referred to as spectral leakage (due to leakage of power into the spectral lobes of the windowing function). Recall our transforms:

$$R_{xx}(\tau) \leftrightarrow S_{xx}(\omega)$$

Truncating the data to length $2T$ amounts to multiplying by a boxcar window, whose transform is:

$$W_1(\omega) = \frac{1}{2T} \cdot \frac{2 \sin(\omega T)}{\omega} = \frac{\sin(\omega T)}{\omega T} \quad \text{(sinc fn!)}$$

In the case of the Blackman–Tukey spectral estimate, the windowing function applied to the autocorrelation was a triangle function, for which the transform is a squared sinc function:

$$W_2(\omega) = \frac{1}{4T^2} \left[\frac{2 \sin(\omega T)}{\omega}\right]^2 = \frac{\sin^2(\omega T)}{\omega^2 T^2} \quad \text{(sinc}^2\text{ fn!)}$$

(Well… Okay… That kinda makes sense!)
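The leakage consequence of these two transforms can be quantified by their first-sidelobe levels (a NumPy sketch; the dimensionless grid is an arbitrary choice):

```python
import numpy as np

# First sidelobe of the sinc (boxcar) transform, in dB; squaring the
# sinc (triangle window) doubles the dB suppression.
w = np.linspace(1e-6, 20.0, 200000)            # omega*T, dimensionless
sinc = np.abs(np.sin(w) / w)
mask = (w > np.pi) & (w < 2 * np.pi)           # first sidelobe interval
first_sidelobe_sinc = 20 * np.log10(sinc[mask].max())
first_sidelobe_sinc2 = 2 * first_sidelobe_sinc

print(first_sidelobe_sinc)     # ~ -13.3 dB (so sinc^2 gives ~ -26.5 dB)
```

This is why the triangle (Blackman–Tukey) window leaks less than a raw boxcar, and why still better windows are worth designing.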
So an obvious place to exert effort in spectral estimation is in designing windows (tapers) that reduce or minimize the bias, e.g.:

Hanning window:
$$w(x) = \cos^2\!\left(\frac{\pi x}{2a}\right) = \frac{1}{2}\left[1 + \cos\!\left(\frac{\pi x}{a}\right)\right]$$

Hamming window:
$$w(x) = 0.54 + 0.46 \cos\!\left(\frac{\pi x}{a}\right)$$
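The two tapers above are one line each to evaluate (a NumPy sketch; the half-width a and grid are arbitrary):

```python
import numpy as np

# Evaluate the Hanning and Hamming tapers on x in [-a, a].
a = 1.0
x = np.linspace(-a, a, 101)
hann = 0.5 * (1.0 + np.cos(np.pi * x / a))      # = cos^2(pi*x/(2a))
hamming = 0.54 + 0.46 * np.cos(np.pi * x / a)

# Hanning tapers to exactly 0 at the edges; Hamming only to 0.08,
# trading edge discontinuity for lower near-in sidelobes.
print(hann[0], hann[50], hamming[0])
```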
The multitaper method, or MTM (Thomson), partially overcomes both the variance problem and the bias problem by (1) generating multiple power spectra from the same signal using orthogonal, minimum-bias (Slepian) tapers, and
(2) averaging these independently tapered spectra to form the final spectral estimate. The Slepian tapers (also called discrete prolate spheroidal sequences) look like this:
Note that each successively higher-order taper has one additional zero-crossing. In practice, spectra are most commonly generated by averaging over the first three, five, or seven tapers…
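SciPy provides the Slepian tapers directly; this sketch (with an arbitrary length and time-bandwidth product NW = 4) verifies the zero-crossing property stated above:

```python
import numpy as np
from scipy.signal.windows import dpss

# First 5 Slepian (DPSS) tapers for a 256-sample record, NW = 4.
tapers = dpss(256, NW=4, Kmax=5)        # shape (5, 256), orthonormal rows

# Count sign changes: taper k has exactly k zero-crossings.
crossings = [int(np.sum(np.diff(np.sign(t)) != 0)) for t in tapers]
print(crossings)
```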
An example comparison of a nine-taper MTM (red) with a periodogram (black) power spectral estimate:
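A minimal multitaper-versus-periodogram sketch in the same spirit as that comparison (NumPy/SciPy; white-noise input, five tapers, and NW = 4 are illustrative choices, not the figure's actual parameters):

```python
import numpy as np
from scipy.signal.windows import dpss

# MTM: average the eigenspectra from K orthogonal Slepian tapers,
# then compare scatter against the raw periodogram.
rng = np.random.default_rng(6)
N, K = 1024, 5
x = rng.standard_normal(N)                  # unit-variance white noise

tapers = dpss(N, NW=4, Kmax=K)              # each row has unit energy
eigspec = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
mtm = eigspec.mean(axis=0)                  # multitaper estimate

pgram = np.abs(np.fft.rfft(x)) ** 2 / N     # raw periodogram

# Averaging K near-independent eigenspectra cuts the variance
# roughly K-fold, so the MTM curve is visibly smoother.
print(pgram.std() / mtm.std())
```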