IMAGING MOVING TARGETS IN A MULTIPATH ENVIRONMENT WITH MULTIPLE SENSORS

By
Analee Miranda
A Thesis Submitted to the Graduate
Faculty of Rensselaer Polytechnic Institute
in Partial Fulfillment of the
Requirements for the Degree of
DOCTOR OF PHILOSOPHY
Major Subject: Mathematics
Approved by the
Examining Committee:
Margaret Cheney, Thesis Adviser
David Isaacson, Member
William Siegmann, Member
Fengyan Li, Member
Matthew Ferrara, Member
Rensselaer Polytechnic Institute
Troy, New York
November 2010 (For Graduation December 2010)
CONTENTS

LIST OF FIGURES
ACKNOWLEDGMENT
ABSTRACT
1. Introduction
2. Imaging in the presence of a single perfectly reflecting layer
   2.1 Mathematical Data Model - Single Perfectly Reflecting Layer
      2.1.1 Incident Field
      2.1.2 Scattered Field
      2.1.3 Slowly Moving Scattering Object
      2.1.4 Narrowband Waveform
   2.2 Image Reconstruction
      2.2.1 Image Formation
   2.3 Image Analysis
3. Imaging in the presence of a dispersive reflector
   3.1 Mathematical Data Model - Dispersive Reflecting Layers
      3.1.1 Incident Field
      3.1.2 Scattered Field
         3.1.2.1 Case 1: Data Model for a Stationary Scattering Object
         3.1.2.2 Case 2: Data Model for a Moving Scattering Object
   3.2 Image Reconstruction - Dispersive Reflecting Layers
      3.2.1 Image Formation
      3.2.2 Image Analysis
         3.2.2.1 Image Analysis: Case 1, Stationary Target
         3.2.2.2 Image Analysis: Case 2, Moving Target
4. Simulations
   4.1 Waveforms
      4.1.1 High Range Resolution Waveform
      4.1.2 High Doppler Resolution Waveform
   4.2 Perfect Reflector
      4.2.1 Numerical Experiment 1 - Stationary Targets
         4.2.1.1 Algorithm - Data
         4.2.1.2 Algorithm - Image Formation
         4.2.1.3 Case 1 - Known Paths
         4.2.1.4 Case 2 - Unknown Paths
      4.2.2 Numerical Experiment 2 - Moving Objects
         4.2.2.1 Algorithm - Data
         4.2.2.2 Algorithm - Image Formation
         4.2.2.3 Case 1 - Known Paths
         4.2.2.4 Case 2 - Unknown Paths
   4.3 Dispersive Reflector
      4.3.1 Numerical Experiment 1 - Stationary Targets
         4.3.1.1 Algorithm - Data
         4.3.1.2 Algorithm - Image Formation
         4.3.1.3 Dispersive Reflector Experiment
5. Conclusions and Future Work
   5.1 Conclusions
   5.2 Future Work
LITERATURE CITED
A. Determining sgn $D_\zeta^2 \Phi_{dsa}(\zeta^0)$
B. Determining the virtual sensor locations
LIST OF FIGURES

4.1 SFCW Waveform Plot
4.2 SFCW Ambiguity Surface
4.3 CW Waveform Plot
4.4 CW Ambiguity Surface
4.5 Stationary Object Image Formed using Known Paths
4.6 Stationary Object Surface Formed using Known Paths
4.7 Stationary Object Image formed using $P_{sfcw}^{*11}$
4.8 Stationary Object Surface formed using $P_{sfcw}^{*11}$
4.9 Stationary Object Image formed using $P_{sfcw}^{*12}$
4.10 Stationary Object Surface formed using $P_{sfcw}^{*12}$
4.11 Stationary Object Image formed using $P_{sfcw}^{*21}$
4.12 Stationary Object Surface formed using $P_{sfcw}^{*21}$
4.13 Stationary Object Image formed using $P_{sfcw}^{*22}$
4.14 Stationary Object Surface formed using $P_{sfcw}^{*22}$
4.15 Stationary Object Image formed using $\sum_{lm} P_{sfcw}^{*lm}$
4.16 Stationary Object Surface formed using $\sum_{lm} P_{sfcw}^{*lm}$
4.17 Moving Object Image using Known Paths and SFCW Waveform
4.18 Moving Object Surface using Known Paths and SFCW Waveform
4.19 Moving Object Image using Known Paths and CW Waveform
4.20 Moving Object Surface using Known Paths and CW Waveform
4.21 Moving Object Image formed using $P_{sfcw}^{*11}$
4.22 Moving Object Surface formed using $P_{sfcw}^{*11}$
4.23 Moving Object Image formed using $P_{cw}^{*11}$
4.24 Moving Object Surface formed using $P_{cw}^{*11}$
4.25 Moving Object Image formed using $P_{sfcw}^{*12}$
4.26 Moving Object Surface formed using $P_{sfcw}^{*12}$
4.27 Moving Object Image formed using $P_{cw}^{*12}$
4.28 Moving Object Surface formed using $P_{cw}^{*12}$
4.29 Moving Object Image formed using $P_{sfcw}^{*21}$
4.30 Moving Object Surface formed using $P_{sfcw}^{*21}$
4.31 Moving Object Image formed using $P_{cw}^{*21}$
4.32 Moving Object Surface formed using $P_{cw}^{*21}$
4.33 Moving Object Image formed using $P_{sfcw}^{*22}$
4.34 Moving Object Surface formed using $P_{sfcw}^{*22}$
4.35 Moving Object Image formed using $P_{cw}^{*22}$
4.36 Moving Object Surface formed using $P_{cw}^{*22}$
4.37 Moving Object Image formed using $\sum_{lm} P_{sfcw}^{*lm}$
4.38 Moving Object Surface formed using $\sum_{lm} P_{sfcw}^{*lm}$
4.39 Moving Object Image formed using $\sum_{lm} P_{cw}^{*lm}$
4.40 Moving Object Surface formed using $\sum_{lm} P_{cw}^{*lm}$
4.41 Dispersive Reflector Geometry
4.42 Stationary Object Image Formed using $P_{sfcw,dis}^{*22}$
4.43 Stationary Object Surface Formed using $P_{sfcw,dis}^{*22}$
ACKNOWLEDGMENT
This work was supported by the Air Force Research Laboratory under RF Integrated Systems In-House Research Contract Number JON 76221104. This work was
supported in part by a grant of computer time from the DOD High Performance
Computing Modernization Program at ARL DSRC and AFRL.
I am very grateful to my advisor, Margaret Cheney, for her guidance, support, encouragement, and mentorship throughout my time at RPI. I would also like to thank
Katie Voccola, Heather Palmeri, Tegan Webster, and Tom Zugibe for being supportive radar group members and friends and for many useful discussions. I owe a large
amount of gratitude to Matthew Ferrara for introducing me to the opportunities in
the Air Force Research Lab that secured my funding and for being a great resource
this past year. I am also grateful to Tracy Johnston, Keith Loree, Joe Tenbarge,
Brian Kent, David Jerome, Jackie Toussaint-Barker, Nivia Colon-Diaz, Robert Ewing, Braham Himed, and the countless other members of the AFRL RYR division who
provided me with guidance, engineering viewpoints, and encouragement throughout
my research. I would like to acknowledge David Isaacson, William Siegmann, and
Fengyan Li and all the professors at RPI who gave me encouragement and useful
information. Finally, I would like to thank my family for their support throughout
my graduate career.
ABSTRACT
In this dissertation we develop a method for designing a wave-based imaging system
that utilizes multiple sensors effectively in the presence of multipath wave propagation.
We consider the cases where the individual transmit/receive sensors are separated
by large distances. The scene to be imaged has been illuminated by direct path
and multipath wave propagation. The scattering objects of interest are moving.
We develop a model for the received data that is based upon the distorted wave
Born approximation [21]. To model the multipath wave propagation, we introduce
a reflection surface to our problem.
We derive the data models for two cases: one where the reflecting surface is perfectly
reflecting and one where the reflection medium depends on frequency and take-off
angle. We then develop a number of inversion formulas based on various versions
of a filtered adjoint operator of the forward model. The varying inversion formulas account for the waves that have arrived via different paths. We then find the
appropriate point-spread function for each case.
We perform numerical experiments by numerically simulating the forward data and
reconstructing an image from that data. We test several sensor configurations of the
imaging system.
In the first set of experiments we image a stationary object, we model our reflective
surface to be perfectly reflecting, we use 1 receiver and up to 9 transmitters, and
we simulate the forward data using a stepped frequency continuous waveform, which
provides us with high range resolution. It is known from the theory in [55], [12],
and [30], that multipath data contains copies of the scattering object in the wrong
location. If the data can be separated according to path, then these ambiguities
can be easily removed. If the data cannot be separated according to path prior to
imaging, then we show that adding more sensors will minimize the ambiguities if the
best path is used for backprojection.
In the next set of experiments, we image a moving object, we model our reflecting
surface to be perfectly reflecting, we use 1 receiver and up to 9 transmitters, and
we use two waveforms to simulate the forward data. The first waveform is a stepped-frequency continuous wave, as in the stationary target case. The second waveform is
a long pulse, which is known to have high Doppler resolution. We again add more sensors and backproject over various paths in order to find a best backprojection path
and sensor configuration that will reduce ambiguities. Here we have an image of
a moving object and thus we expect to find ambiguities in position and velocity.
We display the image by creating a four-dimensional plot made up of two-dimensional
slices of the image grouped together in a grid of varying velocities.
We find that we can estimate a correct velocity using the high Doppler resolution
waveform. We also find that using more sensors in a specific configuration minimizes
ambiguities in position for the high range resolution waveforms. It is interesting
to note that velocity may also be detected using a high range resolution waveform; however, the resolution in velocity is poor when compared with the high Doppler
resolution waveform images.
In the final set of experiments, we image a stationary object, we model our reflective
surface to be dispersive, we use 1 receiver and up to 9 transmitters, and we simulate
the forward data using a stepped frequency continuous waveform. We notice four
distinct paths instead of three. We find results similar to those found in the multipath case for a stationary target with a perfect reflection. We see that if we add
more sensors, the image fidelity is improved. When the data cannot be separated by path prior to imaging, the best adjoint is the sum over all paths.
The numerical experiments show how the theoretical results provide tools for forming
images from multipath data effectively and efficiently and for analyzing these images.
CHAPTER 1
Introduction
A variety of imaging technologies operate by transmitting waves through some medium.
The waves scatter off objects of interest, and the scattered waves are received at receiving sensors. From the measured scattered waves, one tries to determine the
location and properties of the scattering objects. Examples of such technologies include radar, sonar, seismic imaging, and medical imaging. All of these modalities can
be modeled, with varying degrees of accuracy, by the scalar wave equation. One complication to wave-based imaging is that waves can propagate along multiple paths.
Typically, the multiple paths give rise to artifacts and uncertainty in the position
and properties of the scattering objects. Consequently, finding a way to distinguish
between artifacts and true targets in a reconstructed image is an important goal [31]
[30].
Seismic imaging systems typically need to develop algorithms to suppress multiple reflections by applying methods such as Surface-Related Multiple Elimination (SRME) [69] [23]. SRME eliminates artifacts from images by estimating multiply scattered wave contributions and removing these contributions from the data. Another method for finding and removing multiple-scattering artifacts involves inverse
scattering series development [76]. Some methods use multiply scattered waves as
a tool to illuminate objects that cannot be seen by singly scattered waves, such as
near vertical subsurface features [48].
Medical imaging systems, such as ultrasound, use sound waves that are reflected by
tissues. Data from these systems are processed and may yield poor quality images
due to multipath artifacts [37]. Methods that correct for these artifacts involve homomorphic signal processing to correct for multipath multiplicative noise. In this
method, high-pass filtering is used to suppress low frequency components, which are
assumed to be multiplicative noise, in the log-intensity domain. Another method
that reduces multiplicative noise is median filtering in the projection space, which
is a nonlinear digital filtering technique that examines the image pixel by pixel and
ensures that the pixels are represented by the median values of a particular neighborhood in the image. Examining a larger neighborhood reduces noise but degrades
image quality [18]. In [67], the reflectivity function of a stationary scattering object
in a multipath environment is reconstructed using a method based on a minimum-norm least-square-error approach. This approach is extended to include high-clutter environments.
Sonar systems operating in sidescan mode can be affected by multiple reflections from the
sea floor [7]. The artifacts that are created by the multiple reflections degrade the image quality and require mitigation techniques. One of these techniques involves using
blind deconvolution and classification of the recovered multipath signal to remove
ambiguities [77] [11]. Another method, for a multiple-sensor interferometric synthetic aperture sonar imaging system, suppresses multipath artifacts by employing
a Bayesian method [36]. Other statistical methods for improving image resolution
from multiple views include Autocorrelation Estimation (AE) and Maximum Likelihood Estimation (MLE) that predict where multipath ambiguities will occur [81]
[52]. Another signal processing approach uses multiple sensors to jointly estimate
time delay and angle of arrival information which is then used to detect the scattering
object of interest [75], [63].
Radar systems that operate in multipath environments include Synthetic Aperture
Radar (SAR), Over-the-Horizon Radar (OTHR), and satellite-mounted radar. The
images formed from these systems contain multipath artifacts that degrade image
quality and resolution [31] [81]. In the SAR case, multipath environments include
urban regions and heavily wooded areas. In [55], multipath SAR modalities are
shown to enhance image resolution via multiple views of the scattering object. In
OTHR, the ionosphere acts like a perfectly reflecting mirror to extend the range
of scattering object detection. This is a special case that neglects the direct path
because of the Earth’s curvature. However, the non-line-of-sight path data yields poor
image resolution and special methods have been developed such as Signal Inversion
for Target Extraction and Registration (SIFTER) that uses multiple views and
time progression, propagation channels, and antenna patterns to perform coordinate
registration of scattering objects. This method also recovers velocity information for
moving targets [29].
While accomplishing the task of removing ambiguities and improving image quality,
many of the methods discussed earlier were intended for stationary objects. The
SIFTER method detects velocity information by utilizing a known reflector position and only one path of the multipath data; it is not known whether this method would achieve the same result with full multipath data or whether image resolution may
be enhanced using a dispersive reflector model. It has been shown by the work
referenced in this chapter that adding more sensors or views of the object improves
image fidelity in a multipath environment. However, it is not known if ambiguities
exist in velocity and position when imaging a moving object in a full multipath
environment. An approach has been developed in [13] that images moving objects
using multiple sensors in the free-space case.
In this dissertation, we build upon the aforementioned free-space theory that combines information from multiple sensors to develop a method for forming an image
of moving scattering objects in a multipath environment. The main goal of this
dissertation is to provide a theoretical tool to design, analyze, and determine the
effectiveness of combining data from a set of coherent or incoherent imaging systems
operating in a multipath environment and forming an image of moving objects. We
explore two types of reflectors that produce multipath data, a perfectly reflecting
layer and a dispersive reflecting layer. We develop image formation techniques for
both reflector types.
We begin by creating a data model that incorporates the parameters necessary to
model the set of imaging systems and the type of multipath environment. The parameters include the number of transmitters and receivers, the time each transmitter
is initiated, the type of waveform(s) used, the scene of interest which includes stationary or moving objects, and the type of reflector we are using, dispersive or perfectly
reflecting. This data model is an important tool for the design and implementation
of a specific imaging system.
An example is a system of seismic sensors transmitting stepped-frequency continuous
wave waveforms, with each transmitting sensor transmitting the same waveform at
different times. This is an example of an incoherent set of imaging systems. The
multipath environment can be modeled using a model for a dispersive reflector. The
scene of interest would be stationary seismic faults. The coherency of the imaging
system is an important factor in the design of the imaging system hardware. By
modeling the type of data we expect to receive in the incoherent case, and subsequently finding a way to process this type of data, system engineers can save time
and funding.
Another imaging system that could be modeled is an over-the-horizon radar system, with some sensors transmitting a high range resolution waveform while others transmit a high Doppler resolution waveform. Each transmitting sensor could be
initiated at the same time. The scene of interest for this system consists of moving
objects on the ground. The reflector type may be dispersive. In this case, we
may investigate how to combine and process data from different waveforms and for
moving objects. A radar system that operates in this modality is not currently feasible; however, we may investigate this system through modeling and simulation
with the theoretical tools we develop in this dissertation.
After deriving the general data model, we continue by developing an image reconstruction method that uses information of interest from the data model to form an
image of the distribution of scattering objects. The properties of interest for the reconstruction are position and velocity. By using the data model in conjunction with the image model, the toolbox for the design and implementation of an imaging system is
complete. Upon implementation of the actual system, we may use the same imaging
algorithm to process the data measured by that system.
After the theoretical models are developed, we perform numerical experiments for
simple cases to verify whether known results are effectively predicted by our theoretical
model. We conclude by analyzing the effectiveness of the imaging model to predict
parameters of interest.
CHAPTER 2
Imaging in the presence of a single perfectly reflecting layer
In this chapter, we develop a method for imaging moving objects in a multipath
environment consisting of a single perfectly reflecting surface. The approach we take
is to model the forward problem (data model) via a solution to the wave equation.
We then form an image by applying an adjoint to the data model.
2.1 Mathematical Data Model - Single Perfectly Reflecting Layer
It is the goal of this section to derive a model for the data we expect to collect at
a receiving sensor. This receiving sensor will receive information from a wave that
scatters off a set of moving targets and arrives at the receiving sensor via multiple
paths. Many of the modalities for which this work is applicable may be modeled by
the wave equation.
Radar. For example, radar is governed by Maxwell’s equations:
\[
\nabla \cdot E = \frac{\rho}{\epsilon_0} \tag{2.1}
\]
\[
\nabla \cdot B = 0 \tag{2.2}
\]
\[
\nabla \times E = -\frac{\partial B}{\partial t} \tag{2.3}
\]
\[
\nabla \times B = \mu_0 J + \mu_0 \frac{\partial D}{\partial t} \tag{2.4}
\]
where E is the electric field, B is the magnetic field, J is the current density, D is the electric displacement, ε₀ is the permittivity of free space, μ₀ is the permeability of free space, and ρ is the total charge density. We take the curl of (2.3) to arrive at
\[
\nabla \times \nabla \times E = -\nabla \times \frac{\partial B}{\partial t} \tag{2.5}
\]
We interchange the order of operations on the right side of (2.5) to arrive at
\[
\nabla \times \nabla \times E = -\frac{\partial}{\partial t}\left(\nabla \times B\right) \tag{2.6}
\]
and substitute (2.4) into the right side of (2.6) to arrive at
\[
\nabla \times \nabla \times E = -\frac{\partial}{\partial t}\left(\mu_0 J + \mu_0 \frac{\partial D}{\partial t}\right) \tag{2.7}
\]
In a vacuum and in charge-free space, where ρ = 0 and J = 0, the fields E and D are related by the following constitutive relation of free space:
\[
D = \epsilon_0 E \tag{2.8}
\]
We substitute J = 0 and (2.8) into (2.7):
\[
\nabla \times \nabla \times E = -\mu_0 \epsilon_0 \frac{\partial^2 E}{\partial t^2} \tag{2.9}
\]
We apply the identity
\[
A \times (B \times C) = B(A \cdot C) - C(A \cdot B) \tag{2.10}
\]
to the left side of (2.9) to arrive at
\[
\nabla(\nabla \cdot E) - (\nabla \cdot \nabla)E = -\mu_0 \epsilon_0 \frac{\partial^2 E}{\partial t^2}. \tag{2.11}
\]
Since ρ = 0, we arrive at the wave equation
\[
\nabla^2 E - \frac{1}{c^2}\frac{\partial^2 E}{\partial t^2} = 0 \tag{2.12}
\]
where $c^{-1} = \sqrt{\mu_0 \epsilon_0}$. Thus, each component of E satisfies (2.12). To model a source,
we can use
\[
\nabla^2 E - \frac{1}{c^2}\frac{\partial^2 E}{\partial t^2} = S \tag{2.13}
\]
where S is some source function.
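As a quick numerical illustration of the wave equation above, the source-free form of (2.12) can be discretized in one dimension and a pulse propagated at speed c. This sketch is not part of the thesis; the grid sizes and pulse shape are arbitrary illustrative choices.

```python
import numpy as np

# Minimal 1-D finite-difference solver for u_tt = c^2 u_xx, a toy
# illustration of the wave equation (2.12). Parameters are made up.
c = 1.0
nx, nt = 400, 250
dx = 1.0
dt = dx / c          # Courant number of 1: the 1-D scheme is then exact
x = np.arange(nx) * dx

# Initial condition: a Gaussian pulse at rest, centered at x0.
x0 = 50.0
u_prev = np.exp(-((x - x0) / 5.0) ** 2)
u = u_prev.copy()    # zero initial velocity

for _ in range(nt):
    u_next = np.zeros_like(u)   # fixed (zero) ends
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + (c * dt / dx) ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next

# The pulse splits into two halves moving at speeds +c and -c; the
# right-going peak should now sit near x0 + c * nt * dt = 300.
peak = x[np.argmax(u[nx // 2:]) + nx // 2]
print(peak)
```

With a Courant number of 1 the one-dimensional scheme reproduces the d'Alembert solution, so the peak lands on the predicted grid point.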
Sonar. For technologies where waves pass through a medium, such as sonar or medical imaging, the acoustic wave equation in fluids and gases is appropriate. We begin by considering Newton’s second law as applied to a fluid,
\[
-\nabla P = \rho_0 \frac{\partial v}{\partial t}, \tag{2.14}
\]
where P is the pressure field. Next, we consider the continuity equation
\[
\nabla \cdot v = -\frac{1}{\rho_0}\frac{\partial \rho}{\partial t}, \tag{2.15}
\]
where v is the particle velocity and ρ is the density. Finally, we consider the equation of state
\[
\frac{\partial \rho}{\partial t} = \frac{1}{c_s^2}\frac{\partial P}{\partial t} \tag{2.16}
\]
where $c_s$ is the speed of sound. We substitute (2.16) into (2.15):
\[
\nabla \cdot v = -\frac{1}{\rho_0 c_s^2}\frac{\partial P}{\partial t} \tag{2.17}
\]
and differentiate (2.17) with respect to t, changing the order of operations on the left-hand side:
\[
\nabla \cdot \frac{\partial v}{\partial t} = -\frac{1}{\rho_0 c_s^2}\frac{\partial^2 P}{\partial t^2}. \tag{2.18}
\]
Next we substitute (2.18) into the divergence of (2.14) to arrive at
\[
\nabla^2 P = \frac{1}{c_s^2}\frac{\partial^2 P}{\partial t^2}. \tag{2.19}
\]
Again, we give an example of a source-free wave equation. We may add a source to this equation as in the wave equation for the electric field.
Free Space Green’s Function for the Wave Equation. By adding a source to the wave equation, we may use a free-space Green’s function g to solve for the field we are interested in. For example, a free-space Green’s function g with a source at x = 0 satisfies
\[
Lg(t, x) = \delta(t)\,\delta(x), \quad (t, x) \in [0, \infty) \times \mathbb{R}^3 \tag{2.20}
\]
where $L = \nabla^2 - c^{-2}\partial_t^2$, and g is given by
\[
g(t, x) = \frac{\delta\!\left(t - \frac{|x|}{c}\right)}{4\pi|x|}. \tag{2.21}
\]
This Green’s function is a useful tool for the free-space wave propagation case.
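Operationally, (2.21) says that for a band-limited source waveform s, the field radiated from the origin is $s(t - |x|/c)/(4\pi|x|)$. The sketch below, an illustrative check with made-up numbers rather than anything from the thesis, verifies the $1/(4\pi r)$ amplitude decay and the $r/c$ time delay at two ranges.

```python
import numpy as np

# Field of a point source at the origin predicted by the free-space
# Green's function (2.21): psi(t, x) = s(t - |x|/c) / (4*pi*|x|).
# The pulse s and the ranges r1, r2 are arbitrary illustrative choices.
c = 3e8
s = lambda t: np.exp(-((t - 1e-6) / 1e-7) ** 2)  # Gaussian pulse peaking at 1 us

t = np.linspace(0.0, 1e-5, 20001)
r1, r2 = 300.0, 900.0
psi1 = s(t - r1 / c) / (4 * np.pi * r1)
psi2 = s(t - r2 / c) / (4 * np.pi * r2)

# Amplitude falls off as 1/r, so the peak ratio should be r2/r1 = 3 ...
ratio = psi1.max() / psi2.max()
print(ratio)
# ... and the arrival is delayed by (r2 - r1)/c = 2 microseconds.
delay = t[np.argmax(psi2)] - t[np.argmax(psi1)]
print(delay)
```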
Free Space Green’s Function for the Helmholtz Equation. We introduce the Helmholtz equation with a delta source,
\[
\hat{L}\hat{g}(\omega, \tilde{x}) = \delta(\tilde{x}) \tag{2.22}
\]
where
\[
\hat{L} = \nabla^2 + k^2, \quad k = \frac{\omega}{c}, \tag{2.23}
\]
and require that $\hat{g}$ satisfy the outgoing Sommerfeld radiation condition
\[
\lim_{|\tilde{x}| \to \infty} |\tilde{x}|\left(\frac{\partial \hat{g}}{\partial |\tilde{x}|} - ik\hat{g}\right) = 0, \quad k > 0. \tag{2.24}
\]
A Green’s function $\hat{g}$ that satisfies (2.22) with (2.24) is
\[
\hat{g}(\omega, \tilde{x}) = \frac{e^{ik|\tilde{x}|}}{4\pi|\tilde{x}|}. \tag{2.25}
\]

2.1.1 Incident Field
We consider imaging systems where the wave propagates through a uniform medium with constant refractive index. Consider a single-frequency waveform whose frequency f is related to its period by
\[
T = \frac{1}{f}. \tag{2.26}
\]
Definition 1. The refractive index, n, of a medium M is
\[
n = v_c / v_p \tag{2.27}
\]
where $v_c$ is the speed of the wave in a reference medium and $v_p$ is the phase velocity. The phase velocity may be described as
\[
v_p = \frac{\lambda}{T} \tag{2.28}
\]
where λ is the fixed-frequency wavelength and T is the period.
For simplicity, we choose a refractive index of 1 for our wave propagation medium. For example, the speed of the wave can be the speed of light in vacuum, c. The wave propagation medium can then be mathematically described as lying in the half-space Ω and can be physically thought of as air. Specifically, we have
\[
\Omega : \mathbb{R}^3_+ = \{(x_1, x_2, x_3) \mid x_1, x_2 \in \mathbb{R},\ x_3 < z_0\}. \tag{2.29}
\]
We want the outgoing wave to propagate through the medium in Ω and reflect off a single homogeneous reflective planar surface. The boundary of Ω, ∂Ω, is the plane $x_3 = z_0$ and can be the location of the reflective medium:
\[
\partial\Omega : \mathbb{R}^2 \times \{z_0\}. \tag{2.30}
\]
We define Ω* as the space that is the reflection of Ω across the boundary ∂Ω:
\[
\Omega^* : \mathbb{R}^3_- = \mathbb{R}^3 \setminus (\Omega \cup \partial\Omega). \tag{2.31}
\]
Definition 2. The incident wave-field, $\psi^{in}$, due to a source S satisfies
\[
L\psi^{in}(t, y) = S(t, y), \quad \psi^{in} \in \Omega \times [t_y, \infty) \tag{2.32}
\]
with the boundary condition
\[
\psi^{in}(t, y)\big|_{\partial\Omega} = 0, \tag{2.33}
\]
where L is the linear partial differential operator
\[
L = \nabla^2 - c^{-2}\partial_t^2, \tag{2.34}
\]
where c is constant, and we consider sources of the form
\[
S(t, y) = \delta(y - Y)\, s(t - t_y). \tag{2.35}
\]
The incident wave-field is derived by using the method of images. The wave source is located at position y ∈ Ω at time $t_y$. In the method of images we need a virtual source located at position y′ ∈ Ω*. If we define y = (y₁, y₂, y₃), then we define the virtual source at y′ = (y₁, y₂, 2z₀ − y₃). When the wave that is initiated at the virtual source location y′ reaches an object located at x̃, it has traveled the same distance as if the wave had originated at y and been reflected at ∂Ω. A solution that satisfies (2.32) with (2.33) can be constructed with the help of
\[
g(t, \tilde{x}, Y) = g_Y(t, \tilde{x}, Y) - g_{Y'}(t, \tilde{x}, Y') \tag{2.36}
\]
where $g_Y$ is the free-space Green’s function with source at Y and $g_{Y'}$ is the free-space Green’s function with the virtual source at Y′. Notice that g vanishes at the boundary.
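A quick numerical sanity check of the image construction, with hypothetical coordinates not taken from the thesis: any point x̃ on the mirror plane $x_3 = z_0$ is equidistant from a source Y and its image Y′, so the two terms of the composite Green's function cancel there.

```python
import numpy as np

# Method-of-images geometry check for a reflecting plane at x3 = z0.
# All coordinates below are made-up illustrative values.
z0 = 10.0
Y = np.array([1.0, -2.0, 3.0])                 # real source (x3 < z0)
Y_img = np.array([Y[0], Y[1], 2 * z0 - Y[2]])  # virtual (image) source

# A point on the boundary plane x3 = z0 is equidistant from Y and Y_img,
# so the two delta terms in the composite Green's function cancel there.
xt = np.array([7.0, 4.0, z0])
d_real = np.linalg.norm(xt - Y)
d_virt = np.linalg.norm(xt - Y_img)
print(d_real, d_virt)   # the two distances agree

# Off the boundary, inside the propagation half-space, they differ,
# so the composite Green's function is nonzero there.
xt_in = np.array([7.0, 4.0, 5.0])
print(np.linalg.norm(xt_in - Y), np.linalg.norm(xt_in - Y_img))
```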
Thus, from (2.20) and (2.21), we can construct a composite Green’s function g(t, x̃, Y) that satisfies
\[
Lg(t, \tilde{x}, Y) = \delta(\tilde{x} - Y)\,\delta(t), \tag{2.37}
\]
\[
g\big|_{\partial\Omega} = 0, \tag{2.38}
\]
and is described by
\[
g(t, \tilde{x}, Y) = \frac{\delta\!\left(t - \frac{|\tilde{x} - Y|}{c}\right)}{4\pi|\tilde{x} - Y|} - \frac{\delta\!\left(t - \frac{|\tilde{x} - Y'|}{c}\right)}{4\pi|\tilde{x} - Y'|}. \tag{2.39}
\]
This means that the solution to (2.32) with (2.33) is
\[
\psi^{in}(t, \tilde{x}, y) = -\int g(t - T, \tilde{x}, Y)\,\delta(Y - y)\, s(T - t_y)\, dY\, dT
= \frac{s\!\left(t - t_y - \frac{|\tilde{x} - y'|}{c}\right)}{4\pi|\tilde{x} - y'|} - \frac{s\!\left(t - t_y - \frac{|\tilde{x} - y|}{c}\right)}{4\pi|\tilde{x} - y|}. \tag{2.40}
\]
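The two terms of the incident field above are a direct arrival and a mirror arrival. The sketch below, using an illustrative geometry that is not from the thesis, evaluates both delays and amplitudes at a field point and confirms that the direct path arrives first, since |x̃ − y| < |x̃ − y′| for points in Ω.

```python
import numpy as np

# Two-path incident field: direct term from the source y and multipath
# term from the virtual source y' (reflection of y across x3 = z0).
# Coordinates and wave speed are made-up illustrative values.
c = 1500.0          # e.g. a nominal sound speed in m/s
z0 = 0.0            # reflecting plane x3 = 0; propagation region x3 < 0
y = np.array([0.0, 0.0, -40.0])                # source
y_virt = np.array([y[0], y[1], 2 * z0 - y[2]]) # virtual source at (0, 0, 40)
xt = np.array([30.0, 0.0, -40.0])              # field point

t_direct = np.linalg.norm(xt - y) / c
t_mirror = np.linalg.norm(xt - y_virt) / c
a_direct = 1.0 / (4 * np.pi * np.linalg.norm(xt - y))
a_mirror = 1.0 / (4 * np.pi * np.linalg.norm(xt - y_virt))

print(t_direct < t_mirror)   # True: the direct arrival comes first
print(a_direct > a_mirror)   # True: the longer bounce path is weaker
```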
2.1.2 Scattered Field
The receiving sensor collects scattered wave-field information. To arrive at the scattered wave solution, we no longer neglect the contribution of the scattering object. We accomplish this by defining the scattered wave-field $\psi^{sc}$ through
\[
\psi = \psi^{in} + \psi^{sc}, \tag{2.41}
\]
where the total field ψ satisfies
\[
\mathcal{L}(t, \tilde{x})\,\psi(t, \tilde{x}) = S(t - t_y, \tilde{x}, y), \quad \psi \in \Omega \times [t_y, \infty) \tag{2.42}
\]
with boundary condition
\[
\psi(t, \tilde{x})\big|_{\partial\Omega} = 0, \tag{2.43}
\]
where
\[
\mathcal{L}(t, \tilde{x}) = L + \mathcal{V}(t, \tilde{x}). \tag{2.44}
\]
In (2.44), $\mathcal{V}$ is a perturbation that describes the scattering of our wave-field by a distribution of targets. We write
\[
\mathcal{V} = V(t, \tilde{x})\,\partial_t^2. \tag{2.45}
\]
In order to define V(t, x̃), consider an object, q ∈ Ω, that moves with velocity v. This object can be described by an index-of-refraction distribution
\[
q_v(t, \tilde{x}) = c^{-2}\left[n(\tilde{x} - vt) - 1\right]. \tag{2.46}
\]
If there are many objects within a space $\Omega_{\delta x} \subset \Omega$, each moving with velocity $v \in \Omega_v$, where $\Omega_v$ is the space of all possible velocities in $\Omega_{\delta x}$, then we can describe the distribution of multiple moving objects as
\[
V(t, \tilde{x}) = \int_{\Omega_{\delta x} \times \Omega_v} c^{-2}\, n(\tilde{x} + \delta x - vt)\, d(\delta x)\, dv. \tag{2.47}
\]
In order to solve for the scattered field $\psi^{sc}$, we subtract (2.32) from (2.42) and use (2.41) to arrive at a partial differential equation that describes the scattered field:
\[
L\psi^{sc}(t, \tilde{x}) = \mathcal{V}(t, \tilde{x})\,\psi(t, \tilde{x}), \quad (t, \tilde{x}) \in [t_y, \infty) \times \Omega, \quad \psi^{sc}\big|_{\partial\Omega} = 0. \tag{2.48}
\]
We use the Green’s function solution for the operator L and get
\[
\psi^{sc}(t, z) = \int g(t - t', z, \tilde{x})\,\mathcal{V}(t', \tilde{x})\,\psi(t', \tilde{x}, y)\, dt'\, d\tilde{x}. \tag{2.49}
\]
We use (2.41) to expand equation (2.49) into scattered and incident field components:
\[
\psi^{sc}(t, z) = \int g\mathcal{V}\psi^{in} + \int g\mathcal{V}\psi^{sc} = \bar{g}\mathcal{V}\psi^{in} + \bar{g}\mathcal{V}\psi^{sc} \tag{2.50}
\]
where the operator $\bar{g}$ is defined as $\bar{g}f(t, x) = \int g(t - t', x - y) f(t', y)\, dt'\, dy$. If $\|\bar{g}\mathcal{V}\| < 1$, then we can solve (2.50) via a Neumann series:
\[
\psi^{sc} = (I - \bar{g}\mathcal{V})^{-1}\bar{g}\mathcal{V}\psi^{in} = (\bar{g}\mathcal{V})\psi^{in} + (\bar{g}\mathcal{V})^2\psi^{in} + (\bar{g}\mathcal{V})^3\psi^{in} + \cdots \tag{2.51}
\]
The Born or single-scattering approximation drops all the terms on the right side of
(2.51) that are nonlinear in V.
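The Neumann series and its Born truncation can be checked on a small discretized analogue, where the operators become matrices. The sketch below is a toy linear-algebra model with random matrices, not the thesis's actual operators: it compares the exact solution of the self-consistent equation with its single-scattering truncation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# Toy discretization: G stands in for the Green's operator g-bar and
# V for the scattering perturbation; both are arbitrary small matrices.
G = rng.standard_normal((n, n)) / n
V = np.diag(rng.uniform(0.0, 1.0, n))
psi_in = rng.standard_normal(n)

A = G @ V
assert np.linalg.norm(A, 2) < 1     # contraction: the Neumann series converges

# Exact scattered field: psi_sc = (I - GV)^{-1} GV psi_in ...
psi_exact = np.linalg.solve(np.eye(n) - A, A @ psi_in)
# ... versus the Born (single-scattering) approximation: the first term only.
psi_born = A @ psi_in

rel_err = np.linalg.norm(psi_exact - psi_born) / np.linalg.norm(psi_exact)
print(rel_err)   # small: the neglected terms are O(||GV||^2)

# Keeping one more term of the series shrinks the error further.
psi_two = A @ psi_in + A @ (A @ psi_in)
print(np.linalg.norm(psi_exact - psi_two) / np.linalg.norm(psi_exact))
```

The design point is that each extra series term multiplies the residual by roughly the operator norm of GV, which is why the single-scattering truncation is accurate for weak scatterers.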
If we introduce the solution (2.40) into equation (2.51) and use the Born approximation $\psi_B^{sc} = \bar{g}\mathcal{V}\psi^{in}$, we get
\[
\begin{aligned}
\psi_B^{sc}(t, y, z) &= \int \left[\frac{\delta\!\left(t - t' - \frac{|\tilde{z} - \tilde{x}|}{c}\right)}{4\pi|\tilde{z} - \tilde{x}|} - \frac{\delta\!\left(t - t' - \frac{|\tilde{z}' - \tilde{x}|}{c}\right)}{4\pi|\tilde{z}' - \tilde{x}|}\right] \mathcal{V}(t', \tilde{x})\,\psi^{in}(t', \tilde{x}, y)\, dt'\, d\tilde{x} \\
&= \int \frac{\delta\!\left(t - t' - \frac{|\tilde{z} - \tilde{x}|}{c}\right)}{4\pi|\tilde{z} - \tilde{x}|} \int q_v(\tilde{x} - vt')\, dv\; \frac{\ddot{s}_y\!\left(t' - t_y - \frac{|\tilde{x} - y|}{c}\right)}{4\pi|\tilde{x} - y|}\, dt'\, d\tilde{x} \\
&\quad - \int \frac{\delta\!\left(t - t' - \frac{|\tilde{z} - \tilde{x}|}{c}\right)}{4\pi|\tilde{z} - \tilde{x}|} \int q_v(\tilde{x} - vt')\, dv\; \frac{\ddot{s}_y\!\left(t' - t_y - \frac{|\tilde{x} - y'|}{c}\right)}{4\pi|\tilde{x} - y'|}\, dt'\, d\tilde{x} \\
&\quad - \int \frac{\delta\!\left(t - t' - \frac{|\tilde{z}' - \tilde{x}|}{c}\right)}{4\pi|\tilde{z}' - \tilde{x}|} \int q_v(\tilde{x} - vt')\, dv\; \frac{\ddot{s}_y\!\left(t' - t_y - \frac{|\tilde{x} - y|}{c}\right)}{4\pi|\tilde{x} - y|}\, dt'\, d\tilde{x} \\
&\quad + \int \frac{\delta\!\left(t - t' - \frac{|\tilde{z}' - \tilde{x}|}{c}\right)}{4\pi|\tilde{z}' - \tilde{x}|} \int q_v(\tilde{x} - vt')\, dv\; \frac{\ddot{s}_y\!\left(t' - t_y - \frac{|\tilde{x} - y'|}{c}\right)}{4\pi|\tilde{x} - y'|}\, dt'\, d\tilde{x}
\end{aligned}
\tag{2.52}
\]
Equation (2.52) yields a model for our scattered wave-field. The physical interpretation of (2.52) is that a transmitter transmits s(t − t_y) from y; the field propagates according to the scalar wave equation, bounces from a perfectly reflecting surface, and then scatters off a distribution of moving objects V(t, x̃). The field then bounces again from the reflecting surface and propagates back to the receiver.
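The real/virtual sensor bookkeeping can be made concrete with a small sketch. Assuming, purely for illustration, that the perfectly reflecting layer is the plane x₃ = 0, the virtual sensor is the mirror image of the real one, and the length of the bounce path transmitter → reflector → scatterer equals the straight-line distance from the virtual transmitter:

```python
import numpy as np

# Method-of-images bookkeeping for a perfectly reflecting plane at x3 = 0
# (an assumed geometry for illustration): the reflected ray from y to x is the
# straight ray from the mirror image y' to x, so each sensor acquires a twin.
def mirror(s):
    """Mirror a sensor position across the plane x3 = 0."""
    s = np.asarray(s, dtype=float)
    return s * np.array([1.0, 1.0, -1.0])

y = np.array([0.0, 0.0, 2.0])   # transmitter
z = np.array([5.0, 0.0, 3.0])   # receiver
x = np.array([2.0, 1.0, 4.0])   # scatterer
y_virt, z_virt = mirror(y), mirror(z)

# Travel distance of the bounce path y -> plane -> x equals the straight
# distance from the virtual transmitter y' to x.
d_direct = np.linalg.norm(x - y)
d_bounce = np.linalg.norm(x - y_virt)
```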
Here y, y′ represent the real and virtual transmitter locations, and z, z′ represent the real and virtual receiver positions. We now refer to ψ_B^sc as the data d(t, y, z). We interchange the order of integration and perform the change of variables x = x̃ − vt′:
d_v(y, z, t) =
  ∫ [δ(t − t′ − |x + vt′ − z|/c) / (4π|x + vt′ − z|)] ∫ [s̈_y(t′ − t_y − |x + vt′ − y|/c) / (4π|x + vt′ − y|)] q_v(x) dt′ dx dv
 − ∫ [δ(t − t′ − |x + vt′ − z|/c) / (4π|x + vt′ − z|)] ∫ [s̈_y(t′ − t_y − |x + vt′ − y′|/c) / (4π|x + vt′ − y′|)] q_v(x) dt′ dx dv
 − ∫ [δ(t − t′ − |x + vt′ − z′|/c) / (4π|x + vt′ − z′|)] ∫ [s̈_y(t′ − t_y − |x + vt′ − y|/c) / (4π|x + vt′ − y|)] q_v(x) dt′ dx dv
 + ∫ [δ(t − t′ − |x + vt′ − z′|/c) / (4π|x + vt′ − z′|)] ∫ [s̈_y(t′ − t_y − |x + vt′ − y′|/c) / (4π|x + vt′ − y′|)] q_v(x) dt′ dx dv.   (2.53)
Next, we carry out the t′ integration in (2.53). First, we need to recall two important properties of delta functions with more complicated arguments.

1. The delta function satisfies the following scaling and symmetry properties,

   ∫_R δ(αx)φ(x) dx = ∫_{αR} δ(u)φ(u/α) du/|α| = φ(0)/|α|,   (2.54)

   and thus,

   δ(αx) = δ(x)/|α|.   (2.55)

2. The delta function satisfies the following properties when composed with a function,

   ∫_R δ(g(x)) f(g(x)) |g′(x)| dx = ∫_{g(R)} δ(u) f(u) du,   (2.56)

   where
   i. g is continuously differentiable,
   ii. g′ is nowhere zero.

3. (2.56) implies that δ(g(x)) = Σ_i δ(x − x_i)/|g′(x_i)|, where x_i are the roots of g(x).
Next, we perform the following steps in order to carry out the t′ integration in (2.53).

1. First we establish the following notation,

   R_xz(t) = x + vt − z,   R_xz = |R_xz(t)|,   R̂_xz(t) = R_xz(t)/R_xz.   (2.57)

2. We introduce the notation

   f_xz(t′) = t − t′ − |x + vt′ − z|/c = t − t′ − R_xz(t′)/c   (2.58)

   for the arguments of δ in (2.53).

3. We notice that the individual integrals in (2.53) have the form of (2.56), where g is replaced by f_xz, etc.

4. Let t̄_xz(t) be a zero of f_xz. This implies

   t̄_xz(t) = t − |x + v t̄_xz(t) − z|/c.   (2.59)

5. Next, we evaluate the appropriate Jacobian

   f′_xz(t′) = −1 − R̂_xz(t′)·v/c.   (2.60)

6. Then we use the notation

   μ_xz,v(t) = |f′_xz(t̄_xz(t))|.   (2.61)

7. We carry out the t′ integration by using the aforementioned delta function property (3), δ(f(x)) = Σ_i δ(x − x_i)/|f′(x_i)|.
This leads to the general data set for a single source y,

d(z, y, t) =
  ∫ s̈_y[t̄_xz(t) − t_y − R_xy(t̄_xz(t))/c] q_v(x) / [(4π)² R_xz(t̄_xz(t)) R_xy(t̄_xz(t)) μ_xz,v(t)] dx dv
 − ∫ s̈_y[t̄_xz(t) − t_y − R_xy′(t̄_xz(t))/c] q_v(x) / [(4π)² R_xz(t̄_xz(t)) R_xy′(t̄_xz(t)) μ_xz,v(t)] dx dv
 − ∫ s̈_y[t̄_xz′(t) − t_y − R_xy(t̄_xz′(t))/c] q_v(x) / [(4π)² R_xz′(t̄_xz′(t)) R_xy(t̄_xz′(t)) μ_xz′,v(t)] dx dv
 + ∫ s̈_y[t̄_xz′(t) − t_y − R_xy′(t̄_xz′(t))/c] q_v(x) / [(4π)² R_xz′(t̄_xz′(t)) R_xy′(t̄_xz′(t)) μ_xz′,v(t)] dx dv
 = ψ^sc(t, y, z) − ψ^sc(t, y′, z) − ψ^sc(t, y, z′) + ψ^sc(t, y′, z′).   (2.62)
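The retarded time t̄_xz(t) defined implicitly by (2.59) can be computed by fixed-point iteration, which converges because the map has contraction factor of order |v|/c. A minimal sketch with assumed illustrative numbers:

```python
import numpy as np

# Fixed-point solve of the retarded-time condition (2.59),
# tbar = t - |x + v*tbar - z|/c, for a slow mover; the map is a
# contraction because |v|/c << 1, so simple iteration converges.
c = 3e8
x = np.array([1000.0, 0.0, 0.0])    # scatterer position at t' = 0 (assumed)
z = np.array([0.0, 0.0, 0.0])       # receiver (assumed)
v = np.array([30.0, 0.0, 0.0])      # target velocity, |v|/c ~ 1e-7
t = 2e-5                            # observation time

tbar = t                            # initial guess
for _ in range(50):
    tbar = t - np.linalg.norm(x + v * tbar - z) / c

# Residual of f_xz(tbar) = t - tbar - R_xz(tbar)/c should vanish
residual = t - tbar - np.linalg.norm(x + v * tbar - z) / c
# Jacobian factor mu = |f'_xz(tbar)| = |1 + Rhat.v/c| stays close to 1
Rvec = x + v * tbar - z
mu = abs(1.0 + Rvec @ v / (np.linalg.norm(Rvec) * c))
```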
2.1.3
Slowly Moving Scattering Object
The set of imaging systems we are considering deals with objects that move significantly more slowly than the speed of light. We can thus make the following approximations. Assume

|v|t,  ω_max|v|²t²/c  ≪  |x − z|, |x − z′|, |x − y|, |x − y′|,

where ω_max is the effective maximum angular frequency of the signal s_y.
This means that a first-order Taylor approximation can be made such that

R_xz(t) = |x − z + vt| = R_xz(0) + R̂_xz(0)·vt + O(|v|²t²/R_xz),   (2.63)

where R_xz(0) = x − z, R_xz(0) = |R_xz(0)|, and R̂_xz(0) = R_xz(0)/R_xz(0).
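The quality of the first-order expansion (2.63) can be checked directly; the geometry below is assumed for illustration, and the error is bounded by the stated O(|v|²t²/R_xz) term:

```python
import numpy as np

# First-order range expansion (2.63): R_xz(t) = |x - z + v t| compared with
# R_xz(0) + Rhat_xz(0).v t for a slow mover (illustrative numbers).
x = np.array([5000.0, 2000.0, 0.0])
z = np.array([0.0, 0.0, 100.0])
v = np.array([20.0, -5.0, 0.0])
t = 0.1

R0_vec = x - z
R0 = np.linalg.norm(R0_vec)
Rhat0 = R0_vec / R0

R_exact = np.linalg.norm(x - z + v * t)
R_taylor = R0 + Rhat0 @ v * t

abs_err = abs(R_exact - R_taylor)        # should be O(|v t|^2 / R0)
bound = np.linalg.norm(v * t) ** 2 / R0
```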
We use this in (2.59) to obtain

t̄_xz(t) = t − |x + v t̄_xz(t) − z|/c ∼ t − R_xz(0)/c − R̂_xz(0)·v t̄_xz(t)/c.   (2.64)

This implies that t̄_xz(t) ≅ (t − R_xz(0)/c)/(1 + R̂_xz(0)·v/c). Thus the argument of s̈_y in (2.62) may be rewritten as

t̄_xz(t) − t_y − R_xy(t̄_xz(t))/c ≈ t̄_xz(t) − t_y − R_xy(0)/c − R̂_xy(0)·v t̄_xz(t)/c
 = [(1 − R̂_xy(0)·v/c)/(1 + R̂_xz(0)·v/c)](t − R_xz(0)/c) − R_xy(0)/c − t_y,   (2.65)

where the time dilation factor

α_vyz = (1 − R̂_xy(0)·v/c)/(1 + R̂_xz(0)·v/c)   (2.66)
is the Doppler scale factor that is closely related to the Doppler shift. We can further approximate this factor as

α_vyz ≅ 1 − (R̂_xy(0) + R̂_xz(0))·v/c.   (2.67)
We immediately see that the Doppler scale factor α_vyz depends on the bisector R̂_xy(0) + R̂_xz(0). If we let

β_vyz = −(R̂_xy(0) + R̂_xz(0))·v/c,   (2.68)

then

α_vyz ≅ 1 + β_vyz.   (2.69)
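The exact factor (2.66) and the bisector approximation (2.69) agree to O(|v|²/c²), as a quick numerical sketch (with an assumed bistatic geometry) confirms:

```python
import numpy as np

# Doppler scale factor (2.66) and its bistatic-bisector approximation
# (2.67)-(2.69), evaluated for an illustrative geometry.
c = 3e8
x = np.array([0.0, 0.0, 0.0])         # target
y = np.array([-4000.0, 3000.0, 0.0])  # transmitter
z = np.array([5000.0, 1000.0, 0.0])   # receiver
v = np.array([150.0, -50.0, 0.0])     # target velocity

Rhat_xy = (x - y) / np.linalg.norm(x - y)   # R_xy(0) = x - y, normalized
Rhat_xz = (x - z) / np.linalg.norm(x - z)   # R_xz(0) = x - z, normalized

alpha_exact = (1 - Rhat_xy @ v / c) / (1 + Rhat_xz @ v / c)   # (2.66)
beta = -(Rhat_xy + Rhat_xz) @ v / c                           # (2.68)
alpha_approx = 1 + beta                                       # (2.69)
```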
Thus the data set for the slow-mover case becomes

d_s(z, y, t) =
  ∫ s̈_y[α_vyz t − R_xz(0)/c − R_xy(0)/c − t_y] q_v(x) / [(4π)² R_xz(0) R_xy(0) μ_xz,v] dx dv
 − ∫ s̈_y[α_vy′z t − R_xz(0)/c − R_xy′(0)/c − t_y] q_v(x) / [(4π)² R_xz(0) R_xy′(0) μ_xz,v] dx dv
 − ∫ s̈_y[α_vyz′ t − R_xz′(0)/c − R_xy(0)/c − t_y] q_v(x) / [(4π)² R_xz′(0) R_xy(0) μ_xz′,v] dx dv
 + ∫ s̈_y[α_vy′z′ t − R_xz′(0)/c − R_xy′(0)/c − t_y] q_v(x) / [(4π)² R_xz′(0) R_xy′(0) μ_xz′,v] dx dv.   (2.70)
We now use the subscripts xjk or vjk as placeholders for the dependence on the object, transmitter, or receiver location, where a zero is inserted in the respective placeholder if there is no dependence on a transmitter and/or receiver. Rewriting the slow-mover approximation (2.70) for the data with this notation gives

d_s(z, t) = ∫ Σ_{j,k=1}^{2} (−1)^{j+k} s̈[α_vjk t − R_x0k(0)/c − R_xj0(0)/c − t_y] q_v(x) / [(4π)² R_x0k(0) R_xj0(0) μ_x0k,ṽ] dx dv
 = ∫ Σ_{j,k=1}^{2} Υ_s^jk q_v(x) dx dv,   (2.71)

where

Υ_s^jk = (−1)^{j+k} s̈[α_vjk t − R_x0k(0)/c − R_xj0(0)/c − t_y] / [(4π)² R_x0k(0) R_xj0(0) μ_x0k,ṽ].

The following configuration is used to distinguish between the real/reflected transmitter-receiver pair dependence:

11 → yz,   12 → y′z,   21 → yz′,   22 → y′z′.   (2.72)

Thus we write R_x,z as R_x01, R_x,y′ as R_x20, and α_v,y,z′ as α_v21, to name a few examples.
2.1.4
Narrowband Waveform
Most wave-based imaging systems use a narrowband waveform, which can be written as s(t) = a(t)e^{−iω_0j0 t}, where a(t) is slowly varying and ω_0j0 is the carrier frequency for transmitter y; the zeros indicate that there is no object or receiver dependence. The narrowband data for a slow-moving object is then

d_s(z, y, t) = Σ_{j,k=1}^{2} (−1)^{j+k} (ω_0j0/4π)² ∫ [e^{−iω_0j0[α_vjk t − R_x0k(0)/c − R_xj0(0)/c − t_y]} / (R_x0k(0) R_xj0(0) μ_x0k,ṽ)]
 × a_jk(t − t_y − (R_x0k(0) − R_xj0(0))/c) q_v(x) dx dv,   (2.73)
where α_vjk = 1 in the slowly-varying amplitudes a_jk for j, k = 1, 2; here j, k = 1 denotes the amplitude for the actual sensors and j, k = 2 denotes the virtual sensors. Collecting the time-independent exponential phase terms into the function φ_xvjk = (ω_0j0/c)[R_xj0(0) + α_vjk R_x0k(0) + c t_y], we can rewrite (2.73), after dropping the subscript n, as

d_s(z, y, t) = Σ_{j,k=1}^{2} (−1)^{j+k} (ω_0j0/4π)² ∫ [e^{iφ_xvjk} e^{−iω_0j0 α_vjk t} / (R_xj0(0) R_x0k(0) μ_x0k,v)]
 × a_jk(t − t_y − (R_x0k(0) − R_xj0(0))/c) q_v(x) dx dv
 = Σ_{j,k=1}^{2} (−1)^{j+k} d_s^jk(t, z).   (2.74)
2.2
Image Reconstruction
We begin the image reconstruction process by describing how we form an image.
Then we describe how we analyze the reconstructed image. We test our analysis by
performing numerical experiments. We conclude with a summary of our results.
2.2.1
Image Formation
The approach we take to form an image from our simulated data involves applying a filtered adjoint to the data. We use this approach because the filtered adjoint maps functions of frequency and sensor location to functions of position [9].
Definition 3 An adjoint operator F† is an operator such that

⟨f, Fg⟩_{ω,y,y′,z,z′} = ⟨F†f, g⟩_x,   (2.75)

where ⟨·,·⟩_x denotes the inner product

⟨f, g⟩_x = ∫ f(x) g*(x) dx,   (2.76)

and * denotes the complex conjugate.
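In a discrete setting the adjoint of a matrix operator is its conjugate transpose, and Definition 3 reduces to an identity that can be verified directly; this finite-dimensional sketch is illustrative only:

```python
import numpy as np

# Discrete sketch of Definition 3: for a matrix operator F, the adjoint is
# the conjugate transpose, and <f, F g> = <F^dagger f, g> for all f, g.
rng = np.random.default_rng(1)
m, n = 6, 4
F = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
f = rng.standard_normal(m) + 1j * rng.standard_normal(m)
g = rng.standard_normal(n) + 1j * rng.standard_normal(n)

inner = lambda a, b: np.sum(a * np.conj(b))   # <a, b> = sum a b*

F_adj = F.conj().T                            # F^dagger
lhs = inner(f, F @ g)                         # <f, F g> in data space
rhs = inner(F_adj @ f, g)                     # <F^dagger f, g> in object space
gap = abs(lhs - rhs)
```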
We use the adjoint to form an image I(p, u) of the objects with velocity u, at time t = 0, located at position p, by using the data d(t, z, y).

Consider, for example, the case where the transmitters and receivers can illuminate only one path, say the direct path; then the data is simply d_s^11. In this case, the relevant operator P^11 acts on q_v(x) to give us the data

d_s^11(t, z, y) = P^11 q_v(x) = ∫ Υ_s^11(x, v, t, z, y) q_v(x) dx dv,   (2.77)

where Υ_s^11(x, v, t, z, y) is the slow-moving narrowband version of Υ^11 noted in (2.71),

Υ_s^11(x, v, t, z, y) = (ω_y/4π)² [e^{iφ_xyz} e^{−iω_y α_vyz t} / (R_xz(0) R_xy(0) μ_xz,v)] a_11(t − t_y − (R_xz(0) − R_xy(0))/c).   (2.78)

In this case, the relevant operator P^11 is given by the integral operator
P^11 f = ∫ Υ_s^11(x, v, t, z, y) f dx dv.   (2.79)

In order to form an image, we apply the adjoint operator

P^11∗ f = Σ_{z,y} ∫ Υ_s^11∗(p, u, t, z, y) f dt   (2.80)

to d_s^11(t, z, y):

I(p, u) = Σ_{z,y} ∫ Υ_s^11∗(p, u, t, z, y) d_s^11(t, z, y) dt
 = ∫ Σ_{z,y} ∫ Υ_s^11∗(p, u, t, z, y) Υ_s^11(x, v, t, z, y) dt q_v(x) dx dv
 = ∫ K_s^11(p − x, u − v, z, y) q_v(x) dx dv,   (2.81)

where the Cauchy-Schwarz inequality implies that the integral kernel

K_s^11(p − x, u − v, z, y) = Σ_{z,y} ∫ Υ_s^11∗(p, u, t, z, y) Υ_s^11(x, v, t, z, y) dt   (2.82)

is maximized when (p, u) = (x, v).
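The statement that (2.82) peaks at the true (p, u) is the familiar matched-filter property. The toy one-dimensional template below is an assumption for illustration (it is not the Υ_s^11 of (2.78)); scanning candidate velocities shows the correlation magnitude is maximized at the true one:

```python
import numpy as np

# Matched-filter sketch behind (2.82): the correlation of a unit-energy
# template with data generated at the true (x, v) is maximal at (p, u) = (x, v).
# Toy narrowband model (assumed, for illustration):
# template(p, u) ~ exp(i*(omega0/c)*2*p) * exp(-i*omega0*(1 + 2*u/c)*t).
c, omega0 = 3e8, 2 * np.pi * 1e9
t = np.linspace(0.0, 0.1, 4001)

def template(p, u):
    s = np.exp(1j * (omega0 / c) * 2 * p) * np.exp(-1j * omega0 * (1 + 2 * u / c) * t)
    return s / np.linalg.norm(s)

x_true, v_true = 1200.0, 15.0
data = template(x_true, v_true)

# Scan candidate velocities; the kernel magnitude peaks at the true one.
u_grid = np.linspace(0.0, 30.0, 61)
K = np.array([abs(np.vdot(template(x_true, u), data)) for u in u_grid])
u_hat = u_grid[np.argmax(K)]
```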
We can form separate images for each path and coherently (or noncoherently) add the resulting images:

I(p, u) = Σ_{i,j} P^ij∗ d^ij.   (2.83)
If it is not possible to distinguish paths, then the data is of the form d_s = Σ_{l,m} d_s^lm, and we apply the adjoint operator P_s^∗ = Σ_{i,j} P_s^ij∗ to form an image:

I(p, u) = P_s^∗ d_s(y, z, t) = ∫ Σ_{z,y} ∫ Υ_s^∗(p, u, t, z, y) Υ_s(x, v, t, z, y) dt q_v(x) dx dv,   (2.84)

where the subscript s denotes the slow-moving case. The image that is formed approximately gives us our original reflectivity function q_v(x), but it also gives us copies of our reflectivity function in the wrong locations.
2.3
Image Analysis
In order to analyze the image, we need to develop a way to investigate the relationship between the image and the true object. To do this, we use (2.74) in (2.84). The result is

I(p, u) = Σ_{j,k=1}^{2} (−1)^{j+k} ∫ K_s^jk(p, u; x, v) q_v(x) d³x d³v,   (2.85)

where the sum of all K_s^jk terms is called the point spread function (PSF). The PSF is a way to describe the performance of an imaging system. In our case, the PSF depends on the real or virtual transmitter/receiver positions:

K_s^jk = −∫ ω_0j0² a_jk*(t − t_y − (R_p0k(0) + R_pj0(0))/c) a_jk(t − t_y − (R_x0k(0) + R_xj0(0))/c)
 × J e^{−iφ_pujk} e^{iω_0j0 α_ujk t} e^{iφ_xvjk} e^{−iω_0j0 α_vjk t} / [R_p0k(0) R_pj0(0) μ_p0k,u R_x0k(0) R_xj0(0) μ_x0k,v] dt.   (2.86)
Equation (2.86) can be further simplified by noting

φ_xvjk − φ_pujk = (ω_0j0/c)[R_xj0(0) − α_vjk R_x0k(0) + c t_y − (R_pj0(0) − α_ujk R_p0k(0) + c t_y)]
 = (ω_0j0/c)[R_xj0(0) − R_pj0(0) − α_vjk R_x0k(0) + α_ujk R_p0k(0)]
 = (ω_0j0/c)[R_xj0(0) − R_pj0(0) − (1 + β_vjk) R_x0k(0) + (1 + β_ujk) R_p0k(0)]
 = (ω_0j0/c)[R_xj0(0) − R_pj0(0) − R_x0k(0) + R_p0k(0) − β_vjk R_x0k(0) + β_ujk R_p0k(0)]
 = (ω_0j0/c)[cΔτ_xpjk − β_vjk R_x0k(0) + β_ujk R_p0k(0)],   (2.87)

where

Δτ_xpjk = [R_xj0(0) − R_pj0(0) − R_x0k(0) + R_p0k(0)]/c.   (2.88)
With this notation, we can write (2.86) as

K_s^jk(p, u; x, v) = ω_0j0² A_jk(ω_0j0[β_ujk − β_vjk], Δτ_xpjk) e^{iω_0j0[β_ujk − β_vjk][(R_xj0(0) − R_pj0(0))/c − t_y]} e^{−iω_0j0 β_ujk R_xj0(0)/c}
 × J / [R_p0k(0) R_pj0(0) μ_p0k,u R_x0k(0) R_xj0(0) μ_x0k,v],   (2.89)

where

A(ω̃, τ) = e^{−iω_0 τ} ∫ a*(t − τ) a(t) e^{iω̃t} dt   (2.90)

is the narrowband version of the ambiguity function. Through analysis of the ambiguity function, we can quantify the accuracy of target range and velocity estimation.
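Equation (2.90) can be evaluated directly for a concrete waveform. The Gaussian envelope below is an assumed example; since the prefactor e^{−iω₀τ} has unit modulus, it does not affect |A|:

```python
import numpy as np

# Narrowband ambiguity function |A(omega_tilde, tau)| of (2.90), computed
# for an assumed Gaussian-envelope pulse: the surface peaks at zero delay
# and zero Doppler mismatch and decays away from the origin.
T = 1.0
t = np.linspace(-8.0, 8.0, 4001)
dt = t[1] - t[0]
a = np.exp(-t**2 / (2 * T**2))               # slowly varying envelope

def ambiguity(w_tilde, tau):
    a_shift = np.exp(-(t - tau) ** 2 / (2 * T**2))
    return dt * np.sum(np.conj(a_shift) * a * np.exp(1j * w_tilde * t))

A00 = abs(ambiguity(0.0, 0.0))               # = integral of |a|^2 = T*sqrt(pi)
A_delay = abs(ambiguity(0.0, 1.5))           # mismatched delay
A_doppler = abs(ambiguity(2.0, 0.0))         # mismatched Doppler
```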
CHAPTER 3
Imaging in the presence of a dispersive reflector
3.1
Mathematical Data Model - Dispersive Reflecting Layers
In this section we derive the data model for the case when the reflector is dispersive and depends on the take-off direction. This is an approximate model for a refracting medium. We apply the method of images; in this case, our reflected sensors depend on frequency and angle of arrival. We then use this Green's function to derive our data model in a manner similar to that of Chapter 2.
3.1.1
Incident Field
In this subsection we find a model for the incident field ψdin for the dispersive reflector
case. Consider a Fourier transform in space,
Definition 4 Let F_ξ̃ denote the Fourier transform operator such that

F_ξ̃(f(ω, x̃)) = F(ω, ξ̃) = (1/(2π)³) ∫ f(ω, x̃) e^{−ix̃·ξ̃} dx̃.   (3.1)

Applying Definition 4 to (2.22) yields

(|ξ̃|² − k²) Ĝ(ω, ξ̃) = 1/(2π)³;   (3.2)

this means that Ĝ is

Ĝ(ω, ξ̃) = 1/[(2π)³(|ξ̃|² − k²)].   (3.3)

Applying Fourier inversion yields a singular integral, which can be regularized by

ĝ(ω, x̃) = lim_{λ→0⁺} (1/(2π)³) ∫ e^{iξ̃·x̃}/(|ξ̃|² − k² − iλ) dξ̃.   (3.4)

The time-domain counterpart of (3.4) is

g(t, x̃) = (1/2π) ∫ ĝ(ω, x̃) e^{−iωt} dω = lim_{λ→0⁺} (1/(2π)⁴) ∫ e^{i(ξ̃·x̃−ωt)}/(|ξ̃|² − k² − iλ) dξ̃ dω.   (3.5)
Notice that if we carry out the integration of (3.5) in three dimensions, we arrive at (2.25). Since the medium we consider does not vary in the x̃₁ or x̃₂ variables of x̃ = (x̃₁, x̃₂, x̃₃), we can rearrange (3.5) in the following way:

g(t, x̃⊥, x̃₃) = lim_{λ→0⁺} (1/(2π)⁴) ∫ e^{ix̃₃ξ̃₃} e^{i(ξ⊥·x̃⊥ − ωt)} / [ξ̃₃² − (k² − |ξ⊥|² − iλ)] dξ̃₃ dξ⊥ dω,   (3.6)

where ξ⊥ = (ξ₁, ξ₂) and x̃⊥ = (x̃₁, x̃₂).
The one-dimensional integration in ξ̃₃ may be carried out using the residue theorem and contour integration. We notice that the singularities are located at ±s₊ = √(k² − |ξ⊥|² − iλ):

g(t, x̃⊥, x̃₃) = lim_{λ→0⁺} (1/(2π)⁴) ∫ e^{ix̃₃ξ̃₃} e^{i(ξ⊥·x̃⊥ − ωt)} / (ξ̃₃² − s₊²) dξ̃₃ dξ⊥ dω
 = lim_{λ→0⁺} (1/(2π)⁴) ∫ (1/(2s₊)) [e^{ix̃₃ξ̃₃}/(ξ̃₃ − s₊) − e^{ix̃₃ξ̃₃}/(ξ̃₃ + s₊)] e^{i(ξ⊥·x̃⊥ − ωt)} dξ̃₃ dξ⊥ dω.   (3.7)

By the residue theorem we arrive at

g(t, x̃⊥, x̃₃) = (1/(2π)⁴) ∫ lim_{λ→0⁺} 2πi (e^{ix̃₃s₊}/(2s₊)) e^{i(ξ⊥·x̃⊥ − ωt)} dξ⊥ dω.   (3.8)
Thus the Green's function may be represented as

g(t, x̃) = (i/(2(2π)³)) ∫ e^{i(ξ·x̃ − ωt)}/√(k² − |ξ⊥|²) dξ⊥ dω = ∫ G(ω, x̃) e^{−iωt} dω,   (3.9)

where ξ = (ξ₁, ξ₂, √(k² − |ξ⊥|²)) and

G(ω, x̃) = (i/(2(2π)³)) ∫ e^{iξ·x̃}/√(k² − |ξ⊥|²) dξ⊥.   (3.10)

This is the angular spectrum representation [32]. Please note that G is not related to Ĝ of equation (3.3).
Now that we have a model for the free-space Green's function, we may use the method of images to find the appropriate Green's function for the case when the reflection location depends on frequency and take-off direction:

g_d(t, x̃, y) = (i/(2(2π)³)) ∫ [e^{iξ·(x̃−y)}/√(k² − |ξ⊥|²) − e^{iξ·(x̃−y′(ω,ξ))}/√(k² − |ξ⊥|²)] dξ⊥ e^{−iωt} dω
 = ∫ [G(ω, x̃, y) − G̃(ω, x̃, y′(ω, ξ))] e^{−iωt} dω
 = ∫ G_d(ω, x̃, y) e^{−iωt} dω.   (3.11)

Here y′(ω, ξ): R × S² → R³, G is given by (3.10), G̃(ω, x̃, y′(ω, ξ)) = (i/(2(2π)³)) ∫ e^{iξ·(x̃−y′(ω,ξ))}/√(k² − |ξ⊥|²) dξ⊥, and the two terms are grouped together and described by the notation G_d(ω, x̃, y) = G(ω, x̃, y) − G̃(ω, x̃, y′(ω, ξ)). Our incident field may now be described by

ψ_d^in(t, x̃, y) = −∫ g_d(t − t̃, x̃, Y) s_y(t̃, x̃, Y) dY dt̃.   (3.12)

If we take s_y(t̃, x̃, Y) = δ(y − Y)s(t̃ − t_y) = δ(y − Y)(1/2π) ∫ e^{−iωt̃} S_y(ω) dω, then (3.12) becomes
ψ_d^in(t, x̃, y) = −∫ g_d(t − t̃, x̃, Y) s_y(t̃, x̃, Y) dY dt̃ = −(1/2π) ∫ e^{−iωt} G_d(ω, x̃, Y) δ(y − Y) S_y(ω) dω dY
 = −(1/2π) ∫ e^{−iωt} G_d(ω, x̃, y) S_y(ω) dω.   (3.13)

3.1.2
Scattered Field
In this section, we find the model for the data received by a receiving sensor. We use the Born-approximated version of (2.51), with g replaced by g_d, to model our data for the dispersive-reflector case:

ψ_d^sc(t, z, y) ≈ −∫ g_d(t − t′, z, x̃) V(t′, x̃) ψ_d^in(t′, x̃, y) dt′
 = −∫ g_d(t − t′, z, x̃) q_v(x̃ − vt′) ψ̈_d^in(t′, x̃, y) dx̃ dv dt′
 = −∫ e^{−iω(t−t′)} G_d(ω, z, x̃) q_v(x̃ − vt′) ω′² e^{−iω′(t′−t̃)} G_d(ω′, x̃, y) s_y(t̃ − t_y) dx̃ dv dt′ dt̃ dω dω′
 = ∫ (ω′²/(4(2π)⁶)) [e^{iξ·(z−x̃)} − e^{iξ·(z′(ω,ξ)−x̃)}] [e^{iξ′·(x̃−y)} − e^{iξ′·(x̃−y′(ω′,ξ′))}] / [√(k² − |ξ⊥|²) √(k′² − |ξ′⊥|²)]
  × e^{−iω(t−t′) − iω′(t′−t̃)} q_v(x̃ − vt′) s_y(t̃ − t_y) dx̃ dv dt′ dt̃ dω dω′ dξ⊥ dξ′⊥,   (3.14)

where z′(ω, ξ): R × S² → R³. We notice that

ξ = (ξ⊥, √(k² − |ξ⊥|²)) = k(ξ⊥/k, √(1 − |ξ⊥/k|²)).   (3.15)
As a result, we make the following change of variables:

ζ = ξ⊥/k,   dξ⊥ = k dζ.   (3.16)

Substituting (3.16) yields

ξ = k(ζ, √(1 − |ζ|²)) = k êζ,   (3.17)

where êζ represents the unit vector êζ = (ζ, √(1 − |ζ|²)). A substitution of (3.16) and (3.17) into (3.14) yields

ψ_d^sc(t, z, y) = ∫ [k k′ ω′² q_v(x̃ − vt′) s_y(t̃ − t_y)] / [4(2π)⁸ √(1 − |ζ|²) √(1 − |ζ′|²)]
 × e^{−iω(t−t′) − iω′(t′−t̃)} [e^{ikêζ·(z−x̃)} − e^{ikêζ·(z′(ω,kêζ)−x̃)}] [e^{ik′êζ′·(x̃−y)} − e^{ik′êζ′·(x̃−y′(ω′,k′êζ′))}] dv dx̃ dt̃ dt′ dω dω′ dζ dζ′.   (3.18)
We make the change of variables x = x̃ − vt′ and apply it to (3.18). This yields

ψ_d^sc(t, z, y) = ∫ [k k′ ω′² q_v(x) s_y(t̃ − t_y)] / [4(2π)⁸ √(1 − |ζ|²) √(1 − |ζ′|²)]
 × e^{−iω(t−t′) − iω′(t′−t̃)} [e^{ikêζ·(z−x−vt′)} − e^{ikêζ·(z′(ω,kêζ)−x−vt′)}] [e^{ik′êζ′·(x+vt′−y)} − e^{ik′êζ′·(x+vt′−y′(ω′,k′êζ′))}] dv dx dt̃ dt′ dω dω′ dζ dζ′
 = ∫ [k k′ ω′² q_v(x) s_y(t̃ − t_y)] / [4(2π)⁸ √(1 − |ζ|²) √(1 − |ζ′|²)]
 × e^{−i(ωt − ω′t̃)} e^{−it′(ω′ − ω + ω êζ·v/c − ω′ êζ′·v/c)} [e^{ikêζ·(z−x)} − e^{ikêζ·(z′(ω,kêζ)−x)}] [e^{ik′êζ′·(x−y)} − e^{ik′êζ′·(x−y′(ω′,k′êζ′))}] dv dx dt̃ dt′ dω dω′ dζ dζ′.   (3.19)
We make the change of variables ω = ω″/(êζ·v/c − 1):

ψ_d^sc(t, z, y) = ∫ [k k′ ω′² q_v(x) s_y(t̃ − t_y)] / [4(2π)⁸ √(1 − |ζ|²) √(1 − |ζ′|²) (êζ·v/c − 1)]
 × e^{−i[ω″t/(êζ·v/c − 1) − ω′t̃]} e^{−it′(ω′ + ω″ − ω′ êζ′·v/c)}
 × [e^{i(ω″/(êζ·v/c − 1)) êζ·(z−x)/c} − e^{i(ω″/(êζ·v/c − 1)) êζ·(z′(ω″/(êζ·v/c − 1), kêζ)−x)/c}]
 × [e^{ik′êζ′·(x−y)} − e^{ik′êζ′·(x−y′(ω′,k′êζ′))}] dv dx dt̃ dt′ dω″ dω′ dζ dζ′.   (3.20)

We carry out the t′ integration by noting

∫ e^{−it′(ω′ + ω″ − ω′ êζ′·v/c)} dt′ = δ(ω′ + ω″ − ω′ êζ′·v/c).   (3.21)
Substituting the right side of (3.21) into (3.20) yields

ψ_d^sc(t, z, y) = ∫ [k k′ ω′² q_v(x) s_y(t̃ − t_y)] / [4(2π)⁸ √(1 − |ζ|²) √(1 − |ζ′|²) (êζ·v/c − 1)] δ(ω′ + ω″ − ω′ êζ′·v/c)
 × e^{−i[ω″t/(êζ·v/c − 1) − ω′t̃]} [e^{i(ω″/(êζ·v/c − 1)) êζ·(z−x)/c} − e^{i(ω″/(êζ·v/c − 1)) êζ·(z′(ω″/(êζ·v/c − 1), kêζ)−x)/c}]
 × [e^{ik′êζ′·(x−y)} − e^{ik′êζ′·(x−y′(ω′,k′êζ′))}] dv dx dt̃ dω″ dω′ dζ dζ′.   (3.22)
We apply the sifting property of the delta function to carry out the integration in ω″:

ψ_d^sc(t, z, y) = ∫ [k′² ω′² α_ζ,ζ′,v q_v(x) s_y(t̃ − t_y)] / [4(2π)⁸ √(1 − |ζ|²) √(1 − |ζ′|²)] e^{i(α_ζ,ζ′,v ω′ t̃ − α_ζ,ζ′,v ω′ t)}
 × [e^{iα_ζ,ζ′,v k′ êζ·(z−x)} − e^{iα_ζ,ζ′,v k′ êζ·(z′(α_ζ,ζ′,v ω′, α_ζ,ζ′,v k′ êζ)−x)}] [e^{ik′êζ′·(x−y)} − e^{ik′êζ′·(x−y′(ω′,k′êζ′))}] dv dx dt̃ dω′ dζ dζ′,   (3.23)

where α_ζ,ζ′,v = (êζ′·v/c − 1)/(êζ·v/c − 1).

We introduce the notation

A_d^m = [k′⁴ α_ζ,ζ′,v q_v(x)] / [4(2π)⁸ √(1 − |ζ|²) √(1 − |ζ′|²)] ∫ s_y(t̃ − t_y) e^{iα_ζ,ζ′,v ω′ t̃} dt̃.   (3.24)
We note that, using the change of variables t′ = t̃ − t_y, we can simplify

(1/2π) ∫ s_y(t̃ − t_y) e^{iα_ζ,ζ′,v ω′ t̃} dt̃ = (1/2π) ∫ s_y(t′) e^{iα_ζ,ζ′,v ω′ (t′ + t_y)} dt′ = e^{iα_ζ,ζ′,v ω′ t_y} S_y(α_ζ,ζ′,v ω′).   (3.25)

Applying (3.25) to (3.24), we get

A_d^m = [k′⁴ α_ζ,ζ′,v q_v(x) S_y(α_ζ,ζ′,v ω′)] / [4(2π)⁸ √(1 − |ζ|²) √(1 − |ζ′|²)] e^{iα_ζ,ζ′,v ω′ t_y}.   (3.26)

Introducing (3.26) and the notation dR = dv dx dω′ dζ dζ′ into (3.23) yields
ψ_d^sc(t, z, y) = ∫ A_d^m e^{i(α_ζ,ζ′,v k′ êζ·(z−x) + k′êζ′·(x−y) − α_ζ,ζ′,v ω′ t)} dR
 − ∫ A_d^m e^{i(α_ζ,ζ′,v k′ êζ·(z−x) + k′êζ′·(x−y′(ω′,k′êζ′)) − α_ζ,ζ′,v ω′ t)} dR
 − ∫ A_d^m e^{i(α_ζ,ζ′,v k′ êζ·(z′(α_ζ,ζ′,v ω′, α_ζ,ζ′,v k′ êζ)−x) + k′êζ′·(x−y) − α_ζ,ζ′,v ω′ t)} dR
 + ∫ A_d^m e^{i(α_ζ,ζ′,v k′ êζ·(z′(α_ζ,ζ′,v ω′, α_ζ,ζ′,v k′ êζ)−x) + k′êζ′·(x−y′(ω′,k′êζ′)) − α_ζ,ζ′,v ω′ t)} dR
 = ∫ A_d^m [e^{iΦ′_d(11)(x,v,ζ,ζ′,t̃,ω′)} − e^{iΦ′_d(12)(x,v,ζ,ζ′,t̃,ω′)} − e^{iΦ′_d(21)(x,v,ζ,ζ′,t̃,ω′)} + e^{iΦ′_d(22)(x,v,ζ,ζ′,t̃,ω′)}] dR
 = d_d^(11)(t, z, y) − d_d^(12)(t, z, y) − d_d^(21)(t, z, y) + d_d^(22)(t, z, y),   (3.27)

where Φ′_d(ij)(x, v, ζ, ζ′, t̃, ω′) = α_ζ,ζ′,v k′ êζ·(z^i − x) + k′ êζ′·(x − y^j) − α_ζ,ζ′,v ω′ t. We use i, j = 1 for an actual sensor and i, j = 2 for a virtual sensor.
We now carry out further analysis in two special cases.
3.1.2.1
Case 1: Data Model for a Stationary Scattering Object
In this case, we investigate a stationary scattering object where the path depends on frequency and angle of arrival. We therefore let v = 0 and no longer integrate over v. Applying these changes to (3.27) yields
ψ_dsa^sc(t, z, y) = ∫ [k′² ω′² q_v(x) S_y(ω′)] / [4(2π)⁸ √(1 − |ζ|²) √(1 − |ζ′|²)] e^{−iω′(t − t_y)}
 × [e^{ik′êζ·(z−x)} − e^{ik′êζ·(z′(ω′,êζ)−x)}] [e^{ik′êζ′·(x−y)} − e^{ik′êζ′·(x−y′(ω′,êζ′))}] dx dω′ dζ dζ′
 = ∫ [k′² ω′² q_v(x) S_y(ω′)] / [4(2π)⁸ √(1 − |ζ|²) √(1 − |ζ′|²)] e^{−iω′(t − t_y − êζ·(z−x)/c − êζ′·(x−y)/c)} dx dω′ dζ dζ′
 − ∫ [k′² ω′² q_v(x) S_y(ω′)] / [4(2π)⁸ √(1 − |ζ|²) √(1 − |ζ′|²)] e^{−iω′(t − t_y − êζ·(z−x)/c − êζ′·(x−y′(ω′,êζ′))/c)} dx dω′ dζ dζ′
 − ∫ [k′² ω′² q_v(x) S_y(ω′)] / [4(2π)⁸ √(1 − |ζ|²) √(1 − |ζ′|²)] e^{−iω′(t − t_y − êζ·(z′(ω′,êζ)−x)/c − êζ′·(x−y)/c)} dx dω′ dζ dζ′
 + ∫ [k′² ω′² q_v(x) S_y(ω′)] / [4(2π)⁸ √(1 − |ζ|²) √(1 − |ζ′|²)] e^{−iω′(t − t_y − êζ·(z′(ω′,êζ)−x)/c − êζ′·(x−y′(ω′,êζ′))/c)} dx dω′ dζ dζ′
 = d_dsa^(11)(t, z, y) − d_dsa^(12)(t, z, y) − d_dsa^(21)(t, z, y) + d_dsa^(22)(t, z, y)
 = d_dsa(t, z, y).   (3.28)
At this point it becomes computationally advantageous to apply a two-dimensional stationary phase reduction of d_dsa in the ζ and ζ′ variables. We refer to [6].

Theorem 1 If f is a smooth function of compact support on Rⁿ, and Φ ∈ C^∞ has only non-degenerate critical points, then as ω → ∞,

∫ e^{iωΦ(ζ)} f(ζ) dζ ≈ Σ_{ζ⁰: DΦ(ζ⁰)=0} (2π/ω)^{n/2} f(ζ⁰) e^{iωΦ(ζ⁰)} e^{i(π/4) sgn D²Φ(ζ⁰)} / √|det D²Φ(ζ⁰)|.   (3.29)
We focus our attention on the more interesting case of d_dsa^(22):

d_dsa^(22)(t, z, y) = ∫ [k′² ω′² q_v(x) S_y(ω′)] / [4(2π)⁸ √(1 − |ζ|²) √(1 − |ζ′|²)] e^{−iω′(t − t_y − êζ·(z′(ω′,êζ)−x)/c − êζ′·(x−y′(ω′,êζ′))/c)} dx dω′ dζ dζ′
 = ∫ [k′² ω′² q_v(x) S_y(ω′)] / [4(2π)⁸ √(1 − |ζ|²) √(1 − |ζ′|²)] e^{−iω′(t − t_y − Φ^dsa(ζ) − Φ^dsa(ζ′))} dx dω′ dζ dζ′
 = ∫ (k′² ω′²/(4(2π)⁸)) q_v(x) S_y(ω′) e^{−iω′(t − t_y)} I₁^dsa(ω′) I₂^dsa(ω′) dx dω′,   (3.30)
where

Φ^dsa(ζ) = êζ·(z′(ω′, êζ) − x)/c,
Φ^dsa(ζ′) = êζ′·(x − y′(ω′, êζ′))/c,   (3.31)

I₁^dsa(ω′) = ∫ e^{iω′Φ^dsa(ζ)}/√(1 − |ζ|²) dζ, and I₂^dsa(ω′) = ∫ e^{iω′Φ^dsa(ζ′)}/√(1 − |ζ′|²) dζ′. We perform the stationary phase reduction in the ζ variable of I₁^dsa and note that I₂^dsa is similar. We recall that the phase Φ^dsa(ζ) can be rewritten as
Φ^dsa(ζ) = (ζ/c)·(z′⊥(ω′, êζ) − x⊥) + (√(1 − ζ₁² − ζ₂²)/c)(z₃′(ω′, êζ) − x₃).   (3.32)
We now identify the critical set ∂_ζi Φ^dsa = 0 for i = 1, 2:

∂_ζi Φ^dsa = (z_i′(ω′, êζ) − x_i)/c − [ζ_i/(c√(1 − ζ₁² − ζ₂²))](z₃′(ω′, êζ) − x₃) + (êζ/c)·∂_ζi z′(ω′, êζ) = 0.   (3.33)

Solving for the critical set gives ζ⁰_z,x, for which there is a leading-order contribution. For some combinations of z and x, a solution might not exist.
If we let a₁ = (ζ₁/c)∂_ζ₁ z₁′(ω′, êζ), a₂ = (ζ₂/c)∂_ζ₁ z₂′(ω′, êζ), a₃ = (√(1 − ζ₁² − ζ₂²)/c)∂_ζ₁ z₃′(ω′, êζ), and b₁ = (ζ₁/c)∂_ζ₂ z₁′(ω′, êζ), b₂ = (ζ₂/c)∂_ζ₂ z₂′(ω′, êζ), b₃ = (√(1 − ζ₁² − ζ₂²)/c)∂_ζ₂ z₃′(ω′, êζ), then equation (3.33) can be written as

[z₁′(ω′, êζ) + a₁ + b₁ − x₁]/(cζ₁) = [z₂′(ω′, êζ) + a₂ + b₂ − x₂]/(cζ₂) = [z₃′(ω′, êζ) + a₃ + b₃ − x₃]/(c√(1 − ζ₁² − ζ₂²)).   (3.34)
Equation (3.34) describes the path from the extended virtual transmitter z′_dsa(ω′, êζ) = (z₁′(ω′, êζ) + a₁ + b₁, z₂′(ω′, êζ) + a₂ + b₂, z₃′(ω′, êζ) + a₃ + b₃) to the target x. This set describes a line with direction êζ = (ζ₁, ζ₂, √(1 − ζ₁² − ζ₂²)). Using the parameter t, the vector form of this equation is x = z′_dsa(ω′, êζ) − êζ t. Since êζ is a unit vector and we consider only positive values of its components, we use the argument in [14] to determine that the unit vector êζ evaluated at the critical point becomes êζ⁰ = [z′_dsa(ω′, êζ⁰) − x]/|z′_dsa(ω′, êζ⁰) − x|. This means that the phase evaluated at the critical point will be Φ^dsa(ζ⁰) = êζ⁰·(z′_dsa(ω′, êζ⁰) − x)/c = |z′_dsa(ω′, êζ⁰) − x|/c.
c
To complete the stationary
phase reduction, we need to compute the determinant of the Hessian Dζ2 Φdsa .
whose entries are,
2 dsa ∂ζ1 ζ1 Φdsa ∂ζ1 ζ2 Φdsa
Dζ Φ = ∂ζ2 ζ1 Φdsa ∂ζ2 ζ2 Φdsa
.
(3.35)
∂_ζiζi Φ^dsa = (êζ/c)·∂_ζiζi z′(ω′, êζ) + (2/c)∂_ζi z_i′(ω′, êζ) − [2ζ_i/(c√(1 − ζ₁² − ζ₂²))]∂_ζi z₃′(ω′, êζ) + (ζ_j² − 1)Δ^dsa_zx/[c(1 − ζ₁² − ζ₂²)^{3/2}],

∂_ζiζj Φ^dsa = (êζ/c)·∂_ζiζj z′(ω′, êζ) + ∂_ζi z_j′(ω′, êζ)/c + ∂_ζj z_i′(ω′, êζ)/c
 − ζ_i ∂_ζj z₃′(ω′, êζ)/(c√(1 − ζ₁² − ζ₂²)) − ζ_j ∂_ζi z₃′(ω′, êζ)/(c√(1 − ζ₁² − ζ₂²)) + ζ_i ζ_j Δ^dsa_zx/[c(1 − ζ₁² − ζ₂²)^{3/2}],   (3.36)

where Δ^dsa_zx = z₃′(ω′, êζ) − x₃ and i, j = 1, 2. The determinant is

|D²_ζ Φ^dsa| = ∂_ζ₁ζ₁ Φ^dsa ∂_ζ₂ζ₂ Φ^dsa − ∂_ζ₁ζ₂ Φ^dsa ∂_ζ₂ζ₁ Φ^dsa.   (3.37)
The eigenvalues of D²_ζ Φ^dsa are

2λ± = (∂_ζ₁ζ₁ Φ^dsa + ∂_ζ₂ζ₂ Φ^dsa) ± √[(∂_ζ₁ζ₁ Φ^dsa + ∂_ζ₂ζ₂ Φ^dsa)² − 4(∂_ζ₁ζ₁ Φ^dsa ∂_ζ₂ζ₂ Φ^dsa − ∂_ζ₁ζ₂ Φ^dsa ∂_ζ₂ζ₁ Φ^dsa)].   (3.38)
We note, as is argued in the appendix, that sgn D²_ζ Φ^dsa(ζ⁰) is determined by the functions z′(ω′, êζ) and y′(ω′, êζ). We therefore assign the value S_s to sgn D²_ζ Φ^dsa(ζ⁰). Applying Theorem 1 to I₁^dsa(ω′) and I₂^dsa(ω′) reduces (3.30) to

d_dsa^(22)(t, z, y) = ∫ [ω′² q_v(x) S_y(ω′)] / [4(2π)⁶ F_dsa(ζ⁰) F_dsa(ζ′⁰)] e^{−iω′(t − t_y − |z′_dsa(ω′,êζ⁰)−x|/c − |x−y′_dsa(ω′,êζ′⁰)|/c) + iS_s π/4} dx dω′
 = ∫ [ω′² q_v(x) S_y(ω′)] / [4(2π)⁶ F_dsa(ζ⁰) F_dsa(ζ′⁰)] e^{−iφ′^dsa(x,ω′) + iS_s π/4} dx dω′,   (3.39)

where F_dsa(ζ) = √(ζ₁² + ζ₂²) √|det D²Φ^dsa(ζ)|, the x integration is over all x such that ζ_z,x and ζ_y,x satisfy the critical conditions, and where

φ′^dsa(x, ω′) = ω′(t − t_y − |z′_dsa(ω′, êζ⁰) − x|/c − |x − y′_dsa(ω′, êζ′⁰)|/c).   (3.40)

Equation (3.40) is zero when the time delay t − t_y is equal to the travel time of a ray with take-off direction êζ′⁰ bouncing from the reflector corresponding to frequency ω′.
3.1.2.2
Case 2: Data Model for a Moving Scattering Object

In this section, we consider the case where we have moving scattering objects and sensors dependent on frequency and angle of arrival. We examine the fourth component of (3.23) in this case:
d_dma^(22)(t, z, y) = ∫ [k′⁴ α_ζ,ζ′,v q_v(x) S_y(α_ζ,ζ′,v ω′)] / [4(2π)⁸ √(1 − |ζ|²) √(1 − |ζ′|²)] e^{−iα_ζ,ζ′,v ω′ (t − t_y)}
 × e^{i[α_ζ,ζ′,v k′ êζ·(z′(α_ζ,ζ′,v ω′, α_ζ,ζ′,v k′ êζ) − x) + k′ êζ′·(x − y′(ω′, k′ êζ′))]} dR,   (3.41)

where α_ζ,ζ′,v = (1 − êζ·v)/(1 − êζ′·v) and dR = dv dx dω′ dζ dζ′. We use the notation Φ̃^dma(ζ, ζ′) to describe the phase:
Φ̃^dma(ζ, ζ′) = α_ζ,ζ′,v (êζ/c)·(z′(α_ζ,ζ′,v ω′, α_ζ,ζ′,v k′ êζ) − x) + (êζ′/c)·(x − y′(ω′, k′ êζ′)) − α_ζ,ζ′,v ω′(t − t_y).   (3.42)
We note that the partial derivatives of the Doppler scale factor α_ζ,ζ′,v with respect to ζ_i and ζ′_i are

∂_ζi α_ζ,ζ′,v = (ζ_i v₃ − v_i √(1 − ζ₁² − ζ₂²)) / [√(1 − ζ₁² − ζ₂²)(1 − ζ′₁v₁ − ζ′₂v₂ − √(1 − ζ′₁² − ζ′₂²) v₃)],

∂_ζ′i α_ζ,ζ′,v = (1 − ζ₁v₁ − ζ₂v₂ − √(1 − ζ₁² − ζ₂²) v₃)(v_i √(1 − ζ′₁² − ζ′₂²) − ζ′_i v₃) / [√(1 − ζ′₁² − ζ′₂²)(1 − ζ′₁v₁ − ζ′₂v₂ − √(1 − ζ′₁² − ζ′₂²) v₃)²].   (3.43)
Using this notation, we arrive at the critical sets

∂_ζi Φ̃^dma = [(∂_ζi α_ζ,ζ′,v) êζ/c]·(z′(α_ζ,ζ′,v ω′, α_ζ,ζ′,v k′ êζ) − x) + (α_ζ,ζ′,v/c)(z_i′(α_ζ,ζ′,v ω′, α_ζ,ζ′,v k′ êζ) − x_i)
 − [α_ζ,ζ′,v ζ_i/(c√(1 − ζ₁² − ζ₂²))](z₃′(α_ζ,ζ′,v ω′, α_ζ,ζ′,v k′ êζ) − x₃) + (α_ζ,ζ′,v êζ/c)·∂_ζi z′(α_ζ,ζ′,v ω′, α_ζ,ζ′,v k′ êζ)
 − (∂_ζi α_ζ,ζ′,v) ω′(t − t_y) = 0,

∂_ζ′i Φ̃^dma = [(∂_ζ′i α_ζ,ζ′,v) êζ/c]·(z′(α_ζ,ζ′,v ω′, α_ζ,ζ′,v k′ êζ) − x) + [x_i − y_i′(ω′, k′ êζ′)]/c
 − [ζ′_i/(c√(1 − ζ′₁² − ζ′₂²))](x₃ − y₃′(ω′, k′ êζ′)) − (êζ′/c)·∂_ζ′i y′(ω′, k′ êζ′)
 − (∂_ζ′i α_ζ,ζ′,v) ω′(t − t_y) = 0.   (3.44)
It is easily seen that this critical set is very difficult to interpret. We note that for many applications of interest, the rate of change of α_ζ,ζ′,v is negligible; henceforth, we neglect this term. As a result, the correlation between ζ and ζ′ in I₁^dma = ∫ e^{ik′ α_ζ,ζ′,v êζ·(z′(α_ζ,ζ′,v ω′, α_ζ,ζ′,v k′ êζ) − x)} dζ dζ′ and I₂^dma = ∫ e^{ik′ êζ′·(x − y′(ω′, k′ êζ′))} dζ dζ′ is no longer present. The uncorrelated phases are

Φ^dma(ζ) = α_ζ,ζ′,v (êζ/c)·(z′(α_ζ,ζ′,v ω′, α_ζ,ζ′,v k′ êζ) − x),
Φ^dma(ζ′) = (êζ′/c)·(x − y′(ω′, k′ êζ′)).   (3.45)
We focus our attention on I₁^dma(ω′), and again note that I₂^dma(ω′) is similar. This means that the critical set is

∂_ζi Φ^dma = (α_ζ,ζ′,v/c)(z_i′(α_ζ,ζ′,v ω′, α_ζ,ζ′,v k′ êζ) − x_i) − [α_ζ,ζ′,v ζ_i/(c√(1 − ζ₁² − ζ₂²))](z₃′(α_ζ,ζ′,v ω′, α_ζ,ζ′,v k′ êζ) − x₃)
 + (α_ζ,ζ′,v êζ/c)·∂_ζi z′(α_ζ,ζ′,v ω′, α_ζ,ζ′,v k′ êζ) = 0.   (3.46)
If we let a₁ = (ζ₁/c)∂_ζ₁ z₁′(α_ζ,ζ′,v ω′, α_ζ,ζ′,v k′ êζ), a₂ = (ζ₂/c)∂_ζ₁ z₂′(α_ζ,ζ′,v ω′, α_ζ,ζ′,v k′ êζ), a₃ = (√(1 − ζ₁² − ζ₂²)/c)∂_ζ₁ z₃′(α_ζ,ζ′,v ω′, α_ζ,ζ′,v k′ êζ), and b₁ = (ζ₁/c)∂_ζ₂ z₁′(α_ζ,ζ′,v ω′, α_ζ,ζ′,v k′ êζ), b₂ = (ζ₂/c)∂_ζ₂ z₂′(α_ζ,ζ′,v ω′, α_ζ,ζ′,v k′ êζ), b₃ = (√(1 − ζ₁² − ζ₂²)/c)∂_ζ₂ z₃′(α_ζ,ζ′,v ω′, α_ζ,ζ′,v k′ êζ), then equation (3.46) can be written as

α_ζ,ζ′,v[z₁′(α_ζ,ζ′,v ω′, α_ζ,ζ′,v k′ êζ) + a₁ + b₁ − x₁]/(cζ₁)
 = α_ζ,ζ′,v[z₂′(α_ζ,ζ′,v ω′, α_ζ,ζ′,v k′ êζ) + a₂ + b₂ − x₂]/(cζ₂)
 = α_ζ,ζ′,v[z₃′(α_ζ,ζ′,v ω′, α_ζ,ζ′,v k′ êζ) + a₃ + b₃ − x₃]/(c√(1 − ζ₁² − ζ₂²)).   (3.47)
This result is similar to (3.34), except that now the virtual sensor locations depend on the target velocity. Equation (3.47) describes the path from the extended virtual transmitter, whose location is now shifted by the contributions a₁, b₁, a₂, b₂, a₃, and b₃, namely z′_dma(α_ζ,ζ′,v ω′, α_ζ,ζ′,v k′ êζ) = (z₁′(α_ζ,ζ′,v ω′, α_ζ,ζ′,v k′ êζ) + a₁ + b₁, z₂′(α_ζ,ζ′,v ω′, α_ζ,ζ′,v k′ êζ) + a₂ + b₂, z₃′(α_ζ,ζ′,v ω′, α_ζ,ζ′,v k′ êζ) + a₃ + b₃), to the target x, scaled by a Doppler scale factor. This set describes a line with direction êζ = (ζ₁, ζ₂, √(1 − ζ₁² − ζ₂²)). Using the parameter t, the vector form of this equation is α_ζ,ζ′,v x = α_ζ,ζ′,v z′_dma(α_ζ,ζ′,v ω′, α_ζ,ζ′,v k′ êζ) − êζ t. Since êζ is a unit vector and we consider only positive values of its third component, we again use the argument in [14] to determine that the unit vector êζ evaluated at the critical point becomes êζ⁰ = [z′_dma(α_ζ⁰,ζ′⁰,v ω′, α_ζ⁰,ζ′⁰,v k′ êζ⁰) − x]/|z′_dma(α_ζ⁰,ζ′⁰,v ω′, α_ζ⁰,ζ′⁰,v k′ êζ⁰) − x|. This means that the phase evaluated at the critical point ζ⁰ will be Φ^dma(ζ⁰) = α_ζ⁰,ζ′⁰,v êζ⁰·(z′_dma(α_ζ⁰,ζ′⁰,v ω′, α_ζ⁰,ζ′⁰,v k′ êζ⁰) − x)/c = α_ζ⁰,ζ′⁰,v |z′_dma(α_ζ⁰,ζ′⁰,v ω′, α_ζ⁰,ζ′⁰,v k′ êζ⁰) − x|/c. To complete the stationary phase reduction, we need to compute the determinant of the Hessian D²_ζ Φ^dma, which requires information about y′_dma(ω′, êζ′⁰) and z′_dma(α_ζ⁰,ζ′⁰,v ω′, α_ζ⁰,ζ′⁰,v k′ êζ⁰). We denote sgn D²_ζ Φ^dma(ζ⁰) by S_m. Applying Theorem 1 to I₁^dma(ω′) and I₂^dma(ω′) reduces (3.41) to
d_dma^(22)(t, z, y) = −∫ [ω′² q_v(x) S_y(α_ζ,ζ′,v ω′)] / [4(2π)⁶ F_dma(ζ⁰) F_dma(ζ′⁰)]
 × e^{−iω′(t − t_y − α_ζ⁰,ζ′⁰,v|z′_dma(α_ζ⁰,ζ′⁰,v ω′, α_ζ⁰,ζ′⁰,v k′ êζ⁰) − x|/c − |x − y′_dma(ω′, êζ′⁰)|/c) + iS_m π/4} dx dω′
 = −∫ [ω′² q_v(x) S_y(α_ζ,ζ′,v ω′)] / [4(2π)⁶ F_dma(ζ⁰) F_dma(ζ′⁰)] e^{−iφ′^dma(x,ω′) + iS_m π/4} dx dω′,   (3.48)
where

φ′^dma(x, ω′) = ω′(t − t_y − α_ζ⁰,ζ′⁰,v|z′_dma(α_ζ⁰,ζ′⁰,v ω′, α_ζ⁰,ζ′⁰,v k′ êζ⁰) − x|/c − |x − y′_dma(ω′, êζ′⁰)|/c)   (3.49)

and F_dma(ζ) = √(ζ₁² + ζ₂²) √|det D²Φ^dma(ζ)|.

3.2
ζ12 + ζ22 |detD2 Φdma (ζ)|.
Image Reconstruction - Dispersive Reflecting Layers
We begin the image reconstruction process by describing how we form an image.
then we describe how we analyze the reconstructed image. We test our analysis by
performing numerical experiments. We conclude by summarizing our results.
46
3.2.1
Image Formation
We again form an image from our simulated data by applying a filtered adjoint to our data model. We first show how to form an image I(p, u) of the objects with velocity u, located at position p, for the general model and subsequently for each special case. We consider the more interesting path, d_d^(22) of (3.27).

1. From equation (3.27), we have Φ′_d(22):

Φ′_d(22)(p, u, ξ, ξ′, t, ω) = α_ξ,ξ′,u k êξ·(z′(α_ξ,ξ′,u ω, α_ξ,ξ′,u k êξ) − p) + k êξ′·(p − y′(ω, k êξ′)) − α_ξ,ξ′,u ω t.   (3.50)
The general-model image is

I_d^(22)(p, u) = ∫ e^{iΦ′_d(22)(p,u,ξ,ξ′,ω)} √(ξ₁² + ξ₂²) √(ξ′₁² + ξ′₂²) S_y*(ω) d_d^(22)(t, z, y) dω dξ dξ′ dt.   (3.51)
2. The image for the stationary-object case will be

I_dsa^(22)(p, u) = ∫ e^{iφ′^dsa(p,ω)} F_dsa(ξ⁰) F_dsa(ξ′⁰) S_y*(ω) d_dsa^(22)(t, z, y) dω dt,   (3.52)

where φ′^dsa(p, ω) is defined in (3.40).
3. The image for the moving-object case will be

I_dma^(22)(p, u) = ∫ e^{iφ′^dma(p,u,ω)} F_dma(ξ⁰) F_dma(ξ′⁰) S_y*(ω) d_dma^(22)(t, z, y) dω dt,   (3.53)

where φ′^dma(p, u, ω) is defined in (3.49).

The image that is formed gives us an approximation to our original reflectivity function q_v(p), but it also gives us copies of our reflectivity function in the wrong locations.
3.2.2
Image Analysis
In order to analyze the relationship between the image and the true object, we
(22)
analyze the point-spread function, Kd
. We now describe the point-spread function
Kd for the general case
\[
I_d^{(22)}(p,u) = \int e^{i\Phi'_d(p,u,\xi,\xi',\omega)}\, \sqrt{\xi_1'^2+\xi_2'^2}\, \sqrt{\xi_1^2+\xi_2^2}\; S_y^*(\omega)\,
\frac{k'^4\, \alpha_{\zeta,\zeta',v}\, q_v(x)\, S_y(\alpha_{\zeta,\zeta',v}\omega')\, e^{-i\Phi'_d(x,v,\zeta,\zeta',\omega')}}{4(2\pi)^8 \sqrt{\zeta_1^2+\zeta_2^2}\, \sqrt{\zeta_1'^2+\zeta_2'^2}}\, dR\, dR'
\]
\[
= \int \frac{k'^4\, \alpha_{\zeta,\zeta',v}\, q_v(x)\, \sqrt{\xi_1'^2+\xi_2'^2}\, \sqrt{\xi_1^2+\xi_2^2}\; S_y^*(\omega)\, S_y(\alpha_{\zeta,\zeta',v}\omega')}{4(2\pi)^8 \sqrt{\zeta_1^2+\zeta_2^2}\, \sqrt{\zeta_1'^2+\zeta_2'^2}}\, e^{i\Phi'_d(p,u,\xi,\xi',\omega) - i\Phi'_d(x,v,\zeta,\zeta',\omega')}\, J\, dR\, dR'
\]
\[
= \int K_d^{(22)}(p,u;x,v)\, q_v(x)\, dx\, dv, \tag{3.54}
\]
where \(dR' = d\omega\, d\xi\, d\xi'\, dt\). For simplicity we drop the (22) superscript in the phase of \(K_d^{(22)}(p,u;x,v)\):
\[
K_d^{(22)}(p,u;x,v) = \int \frac{k'^4\, \alpha_{\zeta,\zeta',v}\, \sqrt{\xi_1'^2+\xi_2'^2}\, \sqrt{\xi_1^2+\xi_2^2}\; S_y^*(\omega)\, S_y(\alpha_{\zeta,\zeta',v}\omega')}{4(2\pi)^8 \sqrt{\zeta_1^2+\zeta_2^2}\, \sqrt{\zeta_1'^2+\zeta_2'^2}}\, e^{i(\Phi'_d(p,u,\xi,\xi',\omega) - \Phi'_d(x,v,\zeta,\zeta',\omega'))}\, d\omega'\, d\zeta\, d\zeta'\, d\xi\, d\xi'\, d\omega\, dt \tag{3.55}
\]
where \(\Phi'_d(p,u,\xi,\xi',\omega) - \Phi'_d(x,v,\zeta,\zeta',\omega')\) is
\[
\begin{aligned}
\Phi'_d(p,u,\xi,\xi',\omega) - \Phi'_d(x,v,\zeta,\zeta',\omega') &= \alpha_{\xi,\xi',u}\,\omega\,(t - t_y) - \alpha_{\xi,\xi',u}\, k\,\hat e_\xi \cdot \big(z'(\alpha_{\xi,\xi',u}\omega,\, \alpha_{\xi,\xi',u} k\,\hat e_\xi) - p\big) \\
&\quad - k\,\hat e_{\xi'} \cdot \big(p - y'(\omega, k\,\hat e_{\xi'})\big) - \alpha_{\zeta,\zeta',v}\,\omega' t \\
&\quad + \alpha_{\zeta,\zeta',v}\, k'\,\hat e_\zeta \cdot \big(z'(\alpha_{\zeta,\zeta',v}\omega',\, \alpha_{\zeta,\zeta',v} k'\,\hat e_\zeta) - x\big) \\
&\quad + k'\,\hat e_{\zeta'} \cdot \big(x - y'(\omega', k'\,\hat e_{\zeta'})\big)
\end{aligned} \tag{3.56}
\]

The general imaging formula is too computationally intensive for us to use; therefore, we will not investigate it further.
3.2.2.1 Image Analysis: Case 1, Stationary Target

We now describe the point-spread function \(K_{dsa}\) for case 1,
\[
\begin{aligned}
I_{dsa}^{(22)}(p,u) &= \int e^{i\Phi'_{dsa}(p,\omega)}\, F_{dsa}(\xi')\, F_{dsa}(\xi'')\, S_y^*(\omega)\,
\frac{\omega'^2\, q_v(x)\, S_y(\omega')}{4(2\pi)^6\, F_{dsa}(\zeta')\, F_{dsa}(\zeta'')}\, e^{-i\Phi'_{dsa}(x,\omega')}\, dx\, d\omega\, d\omega'\, dt \\
&= \int K_{dsa}^{(22)}(p,u;x,v)\, q_v(x)\, dx,
\end{aligned} \tag{3.57}
\]
where \(F_{dsa}(\zeta) = \sqrt{\zeta_1^2+\zeta_2^2}\,\sqrt{|\det D^2\Phi_{dsa}(\zeta)|}\) and \(K_{dsa}^{(22)}(p,u;x,v)\) is
\[
K_{dsa}^{(22)}(p,u;x,v) = \int \frac{\omega'^2\, F_{dsa}(\xi')\, F_{dsa}(\xi'')\, S_y^*(\omega)\, S_y(\omega')}{4(2\pi)^6\, F_{dsa}(\zeta')\, F_{dsa}(\zeta'')}\, e^{i(\Phi'_{dsa}(p,\omega) - \Phi'_{dsa}(x,\omega'))}\, d\omega\, d\omega'\, d\tilde t\, d\tau\, dt \tag{3.58}
\]
and where \(\Phi'_{dsa}(p,\omega) - \Phi'_{dsa}(x,\omega')\) is
\[
\begin{aligned}
\Phi'_{dsa}(p,\omega) - \Phi'_{dsa}(x,\omega') &= \omega\left( t - t_y + \frac{|z'_{dsa}(\omega,\hat e_{\xi'}) - p|}{c} + \frac{|p - y'_{dsa}(\omega,\hat e_{\xi''})|}{c} \right) \\
&\quad - \omega'\left( t - t_y + \frac{|z'_{dsa}(\omega',\hat e_{\zeta'}) - x|}{c} + \frac{|x - y'_{dsa}(\omega',\hat e_{\zeta''})|}{c} \right)
\end{aligned} \tag{3.59}
\]
3.2.2.2 Image Analysis: Case 2, Moving Target

We now describe the point-spread function \(K_{dma}\) for case 2,
\[
\begin{aligned}
I_{dma}^{(22)}(p,u) &= \int e^{i\Phi'_{dma}(p,u,\omega)}\, F_{dma}(\xi')\, F_{dma}(\xi'')\, S_y^*(\omega)\,
\frac{\omega'^2\, q_v(x)\, S_y(\alpha_{\zeta,\zeta',v}\omega')}{4(2\pi)^6\, F_{dma}(\zeta')\, F_{dma}(\zeta'')}\, e^{-i\Phi'_{dma}(x,v,\omega')}\, dx\, dv\, d\omega\, d\omega'\, dt \\
&= \int K_{dma}^{(22)}(p,u;x,v)\, q_v(x)\, dx\, dv,
\end{aligned} \tag{3.60}
\]
where \(F_{dma}(\zeta) = \sqrt{\zeta_1^2+\zeta_2^2}\,\sqrt{|\det D^2\Phi^{(22)}_{dma}(\zeta)|}\) and \(K_{dma}^{(22)}(p,u;x,v)\) is
\[
K_{dma}^{(22)}(p,u;x,v) = \int \frac{\omega'^2\, F_{dma}(\xi')\, F_{dma}(\xi'')\, S_y^*(\omega)\, S_y(\alpha_{\zeta,\zeta',v}\omega')}{4(2\pi)^6\, F_{dma}(\zeta')\, F_{dma}(\zeta'')}\, e^{i(\Phi'_{dma}(p,u,\omega) - \Phi'_{dma}(x,v,\omega'))}\, d\omega\, d\omega'\, dt \tag{3.61}
\]
and where \(\Phi'_{dma}(p,u,\omega) - \Phi'_{dma}(x,v,\omega')\) is
\[
\begin{aligned}
\Phi'_{dma}(p,u,\omega) - \Phi'_{dma}(x,v,\omega') &= \omega\left( t - t_y + \alpha_{\xi,\xi',u}\,\frac{|z'_{dma}(\alpha_{\xi,\xi',u}\omega,\, \alpha_{\xi,\xi',u} k\,\hat e_{\xi'}) - p|}{c} + \frac{|p - y'_{dma}(\omega, k\,\hat e_{\xi''})|}{c} \right) \\
&\quad - \omega'\left( t - t_y + \alpha_{\zeta,\zeta',v}\,\frac{|z'_{dma}(\alpha_{\zeta,\zeta',v}\omega',\, \alpha_{\zeta,\zeta',v} k'\,\hat e_{\zeta'}) - x|}{c} + \frac{|x - y'_{dma}(\omega', k'\,\hat e_{\zeta''})|}{c} \right)
\end{aligned} \tag{3.62}
\]
CHAPTER 4
Simulations
In this chapter, we numerically investigate the theory discussed in the previous chapters. In particular, we want to be able to form an image that minimizes ambiguities in position and velocity. We develop algorithms, based on the theory described in this dissertation, that provide tools for analyzing the different variables that affect image fidelity, such as the number of sensors, the waveform, the reflector, and the operator that is applied to form the image. In the following set of numerical simulations, we investigate the case of a target in a plane, so that the (x, v) phase space is four-dimensional.
4.1 Waveforms
In this chapter, we will be investigating two distinct realistic waveforms. We use a high range-resolution waveform and a high Doppler-resolution waveform to simulate our data models. In this section we explore the ambiguity function and determine the range and Doppler resolutions for each waveform.
4.1.1 High Range Resolution Waveform
We use a stepped-frequency continuous wave (SFCW) to model our high range resolution waveform. An SFCW waveform can be thought of as a discrete version of the Frequency Modulated Continuous Wave (FMCW) waveform. It is created by transmitting \(N_f\) single-frequency sub-pulses. The frequency of sub-pulse \(n\) is \(f_n = f_0 + n\frac{B}{N_f}\), where we denote the bandwidth by \(B = f_{max} - f_0\). By setting \(BT_d = R^2\) we prevent aliasing [47]. We specify the start frequency as \(f_0\) and the stop frequency as \(f_{max}\), and the duration of each sub-pulse is \(T_n = T_s/N_f\), where \(T_s\) is the duration of the pulse. The SFCW waveform may be described mathematically as
\[
p(t) = \sum_{n=0}^{N_f-1} e^{2\pi i f_n t}\, u\,(t - nT_d), \tag{4.1}
\]
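The sub-pulse construction in (4.1) can be sketched numerically. In this Python/NumPy sketch the start frequency, bandwidth, number of sub-pulses, sub-pulse duration, and sample rate are illustrative assumptions, not the parameters used in the experiments:

```python
import numpy as np

def sfcw_waveform(f0, fmax, nf, td, fs):
    """Sketch of an SFCW pulse train built from nf single-frequency sub-pulses.

    Sub-pulse n has frequency fn = f0 + n*B/nf, where B = fmax - f0, and
    occupies the window [n*td, (n+1)*td) (the rectangle function u of (4.1)).
    """
    b = fmax - f0                              # bandwidth B
    t = np.arange(0, nf * td, 1.0 / fs)        # sample times
    p = np.zeros_like(t, dtype=complex)
    for n in range(nf):
        fn = f0 + n * b / nf                   # frequency of sub-pulse n
        window = (t >= n * td) & (t < (n + 1) * td)
        p[window] = np.exp(2j * np.pi * fn * t[window])
    return t, p

# Illustrative parameters only: 8 sub-pulses of 1 ms each.
t, p = sfcw_waveform(f0=1.0e3, fmax=2.0e3, nf=8, td=1.0e-3, fs=1.0e5)
```

Each sample has unit modulus, since exactly one sub-pulse is active at any instant.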
Figure 4.1: SFCW Waveform Plot (time-domain and frequency-domain panels)
where u is a rectangle function, and where the ambiguity for a single frequency \(f_i\) may be calculated as in [58], [79]:
\[
\begin{aligned}
A(f_i, T) &= \sum_{m=0}^{N_f-1}\sum_{n=0}^{N_f-1} \int u(\tau - nT_d)\, u^*(\tau - mT_d - T)\, e^{2\pi i f_i \tau}\, e^{-2\pi i f_n (n-m-T)\tau}\, d\tau \\
&= \sum_{m=0}^{N_f-1}\sum_{n=0}^{N_f-1} \int u(\tau - nT_d)\, u^*(\tau - mT_d - T)\, e^{2\pi i (f_i - f_n(n-m-T))\tau}\, d\tau
\end{aligned} \tag{4.2}
\]
We make the change of variables \(t'' = \tau - nT_d\) and arrive at
\[
\begin{aligned}
A(f_i, T) &= \sum_{m=0}^{N_f-1}\sum_{n=0}^{N_f-1} \int u(t'')\, u^*(t'' - T + (n-m)T_d)\, e^{2\pi i (t'' + nT_d)(f_i - (n-m-T)f_n)}\, dt'' \\
&= \sum_{m=0}^{N_f-1}\sum_{n=0}^{N_f-1} e^{2\pi i nT_d (f_i - (n-m-T)f_n)} \int u(t'')\, u^*(t'' - T + (n-m)T_d)\, e^{2\pi i t''(f_i - (n-m-T)f_n)}\, dt''
\end{aligned} \tag{4.3}
\]
Since we are assuming that each sub-pulse \(f_n\) is uniform, we make the substitution \(p = n - m\) and arrive at
\[
A(f_i, T) = \sum_{m=0}^{N_f-1}\sum_{n=0}^{N_f-1} A_u(T - pT_d,\; f_i - pf_n - Tf_n)\, e^{2\pi i (f_i - pf_n - Tf_n) nT_d}, \tag{4.4}
\]
where \(A_u\) is the ambiguity function for one pulse in a uniform pulse train. Using reduction steps found in [58], the ambiguity function reduces to
\[
A(f_i, T) = \sum_{p=-(N-1)}^{N_f-1} e^{\pi i (f_i - pf_n - Tf_n)(N-1+p)T_d}\, A_u(T - pT_d,\; f_i - pf_n - Tf_n)\, \frac{\sin(\pi(f_i - pf_n - Tf_n)(N - |p|)T_d)}{\sin(\pi(f_i - pf_n - Tf_n)T_d)} \tag{4.5}
\]
The range resolution may be estimated by
\[
|A(0, T)| = \left| \sum_{p=-(N-1)}^{N_f-1} e^{-\pi i (pf_n + Tf_n)(N-1+p)T_d}\, A_u(pT_d - T,\; pf_n + Tf_n)\, \frac{\sin(\pi(pf_n + Tf_n)(N - |p|)T_d)}{\sin(\pi(pf_n + Tf_n)T_d)} \right| \tag{4.6}
\]
whereas the Doppler resolution may be estimated by
\[
|A(f_i, 0)| = \left| \sum_{p=-(N-1)}^{N_f-1} e^{\pi i (f_i - pf_n)(N-1+p)T_d}\, A_u(-pT_d,\; f_i - pf_n)\, \frac{\sin(\pi(f_i - pf_n)(N - |p|)T_d)}{\sin(\pi(f_i - pf_n)T_d)} \right| \tag{4.7}
\]
The ambiguity surface for (4.1) is shown in Figure 4.2.
Figure 4.2: SFCW Ambiguity Surface
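An ambiguity surface like the one in Figure 4.2 can also be approximated by brute force from the definition of the ambiguity function (correlating the waveform against delayed, Doppler-shifted copies of itself), without the reduction steps above. A rough sketch, in which the sample rate and the delay/Doppler grids are arbitrary assumptions:

```python
import numpy as np

def ambiguity_surface(p, fs, delays, dopplers):
    """Brute-force narrowband ambiguity surface |A(f, T)| of a sampled
    waveform p: inner products of p with delayed, Doppler-shifted copies."""
    n = len(p)
    t = np.arange(n) / fs
    surf = np.zeros((len(dopplers), len(delays)))
    for i, f in enumerate(dopplers):
        shifted = p * np.exp(2j * np.pi * f * t)       # Doppler-shifted copy
        for j, d in enumerate(delays):
            k = int(round(d * fs))                     # delay in samples
            if 0 <= k < n:
                surf[i, j] = abs(np.vdot(p[k:], shifted[:n - k])) / fs
    return surf
```

The zero-delay, zero-Doppler cell recovers the waveform energy, and the surface falls off along both axes as in Figure 4.2.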
4.1.2 High Doppler Resolution Waveform

We use a long-duration continuous wave pulse of length \(T\) to model our high Doppler resolution waveform:
\[
s(t) = \frac{1}{\sqrt{T}}\, \cos(2\pi f_0 t)\, \mathrm{rect}(t/T) \tag{4.8}
\]
Figure 4.3: CW Waveform Plot (time-domain and frequency-domain panels)
As in [9], the ambiguity function for this waveform is
\[
A(f_i, T_i) =
\begin{cases}
\left(1 - \dfrac{|T_i|}{T}\right)\mathrm{sinc}\!\left[\pi f_i (T - |T_i|)\right], & |T_i| \le T \\[4pt]
0, & \text{otherwise}
\end{cases} \tag{4.9}
\]
Figure 4.4: CW Ambiguity Surface
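The closed form (4.9) is simple enough to evaluate directly. A minimal sketch, in which the pulse length and test points are illustrative; note that the thesis's sinc is sin(x)/x, while NumPy's `np.sinc(x)` is sin(πx)/(πx), so the argument is divided by π:

```python
import numpy as np

def cw_ambiguity(fi, ti, big_t):
    """Closed-form CW ambiguity function (4.9) for a pulse of length big_t.

    np.sinc(x) = sin(pi*x)/(pi*x), so passing fi*(T - |ti|) evaluates the
    thesis's sinc[pi*fi*(T - |ti|)].
    """
    if abs(ti) > big_t:
        return 0.0
    x = fi * (big_t - abs(ti))
    return (1.0 - abs(ti) / big_t) * np.sinc(x)

# Zero-delay cut reproduces the Doppler resolution estimate (4.11):
# |A(fi, 0)| = |sinc(pi*fi*T)|, with its first null at fi = 1/T.
```

The first Doppler null at f_i = 1/T is why a long pulse gives fine Doppler resolution.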
The range resolution may be estimated by
\[
|A(0, T_i)| =
\begin{cases}
1 - \dfrac{|T_i|}{T}, & |T_i| \le T \\[4pt]
0, & \text{otherwise}
\end{cases} \tag{4.10}
\]
whereas the Doppler resolution may be estimated by
\[
|A(f_i, 0)| = \left|\mathrm{sinc}[\pi f_i T]\right| \tag{4.11}
\]

4.2 Perfect Reflector
4.2.1 Numerical Experiment 1 - Stationary Targets

In this experiment, we use the following scenario: a homogeneous reflection medium, a single point-like stationary target, and an SFCW waveform.
4.2.1.1 Algorithm - Data

Let \(z_{rk} \in \mathbb{R}^3\) denote a receiver in the set \(\{z_{rk} : r = 1,\ldots,N_r,\; k = 1,2\}\) and \(y_{sj} \in \mathbb{R}^3\) denote a transmitter in the set \(\{y_{sj} : s = 1,\ldots,N_s,\; j = 1,2\}\). \(N_r\) and \(N_s\) denote the total number of receivers and transmitters used; \(j,k = 1\) signifies that the sensor is actual, and \(j,k = 2\) signifies that the sensor is virtual. Let \(x\) denote the target and \(\Delta f\) denote the frequency step. Then the data (2.74) can be simulated using the algorithm
    E_jk = zeros(N_f, N_r, N_s), where j, k = 1, 2
    f = f_0 : Δf : f_max
    for r = 1 : N_r
        for s = 1 : N_s
            E_jk^temp(:, r, s) = e^{(2πif/c)(|z_rj − x| + |x − y_sk|)} e^{2πif t_y} / (|z_rj − x| |x − y_sk|)
            E_jk(:, r, s) = E_jk(:, r, s) + E_jk^temp(:, r, s)
        end
    end
    d = fft(E_11 − E_12 − E_21 + E_22)
    d_jk = fft(E_jk)                                        (4.12)

4.2.1.2 Algorithm - Image Formation
This algorithm describes how an image is formed for the various numerical experiments in this section. Let B denote the bandwidth of the SFCW waveform. We again use the notation \(z_{rk}\) and \(y_{sj}\) for the receivers and transmitters. The point \(p_{xy} = [p_x, p_y, 0]\) is a grid point of the image. The image can be formed using the algorithm

    I_jk = zeros(N_f, N_f, N_r, N_s), where j, k = 1, 2
    t_jk = zeros(N_f, N_f, N_r, N_s), where j, k = 1, 2
    for r = 1 : N_r
        for s = 1 : N_s
            for x = 1 : N_f
                for y = 1 : N_f
                    t_jk(x, y, r, s) = round( (B/c)(|z_rj − p_xy| + |p_xy − y_sk|) )
                    Case: Known Paths
                        I_jk^temp(x, y, r, s) = d_jk(t_jk(x, y, r, s), s, r)
                    Case: Unknown Paths
                        I_lm^temp(x, y, r, s) = d(t_jk(x, y, r, s), s, r), where l, m = 1, 2
                end
            end
            I_jk = I_jk + I_jk^temp(x, y, r, s)
            I_lm = I_lm + I_lm^temp(x, y, r, s)
        end
    end
    plot( Σ_{j,k} I_jk(:, :, j, k) )                        (4.13)
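A compressed Python analogue of algorithms (4.12) and (4.13) for a single transmitter-receiver pair may help make the indexing concrete. The geometry, frequency grid, and image grid below are made-up illustration values; the virtual-sensor bookkeeping, the t_y phase, and the four-path combination are omitted, and the modular wrap of the range index is an implementation assumption:

```python
import numpy as np

c = 3.0e8
freqs = np.arange(1.0e9, 1.1e9, 1.0e6)       # f0 : df : fmax (illustrative)
B = freqs[-1] - freqs[0]                      # bandwidth
z = np.array([0.0, 100.0, 10.0])              # receiver position (illustrative)
y = np.array([0.0, -100.0, 10.0])             # transmitter position (illustrative)
x_true = np.array([50.0, 0.0, 0.0])           # point target

# --- data: frequency-domain response, then range profile via FFT (cf. (4.12))
rng = np.linalg.norm(z - x_true) + np.linalg.norm(x_true - y)
E = np.exp(2j * np.pi * freqs * rng / c) / (
    np.linalg.norm(z - x_true) * np.linalg.norm(x_true - y))
d = np.fft.fft(E)

# --- image: backproject the range profile onto a 1-D grid (cf. (4.13))
px = np.linspace(0.0, 100.0, 101)
image = np.zeros(len(px))
for i, p in enumerate(px):
    pxy = np.array([p, 0.0, 0.0])
    bistatic = np.linalg.norm(z - pxy) + np.linalg.norm(pxy - y)
    idx = int(round(B * bistatic / c)) % len(d)   # round(B*R/c), wrapped
    image[i] = abs(d[idx])

print(px[np.argmax(image)])   # peak lies near the true x-coordinate of 50 m
```

The backprojection picks, for each grid point, the range-profile sample whose round-trip delay matches that point's bistatic range, so the image peaks near the true target.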
4.2.1.3 Case 1 - Known Paths

In this experiment, we explore the case where the data can be separated by paths.
Figure 4.5: Stationary Object Image Formed using Known Paths
Figure 4.6: Stationary Object Surface Formed using Known Paths
In Figures (4.5, 4.6), each path is matched by the appropriate adjoint, \(I = \sum_{ij} (-1)^{i+j} P^{*ij}_{sfcw} d^{ij} q(x)\), where the subscript sfcw represents the waveform. We see that we can uniquely recover the singularities of q(x) as we add more sensors. We notice that as we increase the number of sensors, the resolution improves, since the target magnitude, identified by the color red, has less thickness than the backprojection ellipses. We also notice that there is a relationship between the configuration of the sensor positions and the artifacts; however, if enough sensors are added, the artifacts are minimized. We note that in this case we have no cross-terms, only diagonal terms.
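The alternating-sign combination \(I = \sum_{ij} (-1)^{i+j} P^{*ij} d^{ij}\) used throughout this section can be sketched directly; here the four per-path images are stand-in arrays (in practice each one comes from backprojecting \(d^{ij}\) with its matched adjoint):

```python
import numpy as np

def combine_path_images(images):
    """Combine per-path images images[i][j] (i, j in {0, 1}, indexing the
    actual/virtual sensor pairing) with alternating signs:
    I = sum_ij (-1)^(i+j) * I_ij, matching d = E11 - E12 - E21 + E22."""
    total = np.zeros_like(images[0][0])
    for i in (0, 1):
        for j in (0, 1):
            total = total + ((-1) ** (i + j)) * images[i][j]
    return total

# If the two cross-path images vanish, the diagonal contributions add up;
# if all four images were identical, the signs would cancel them entirely.
```

The sign pattern (+, −, −, +) is exactly the one used to form the combined data d in algorithm (4.12).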
4.2.1.4 Case 2 - Unknown Paths

In this experiment, we explore the case where the data cannot be separated by paths.
Figure 4.7: Stationary Object Image formed using \(P^{*11}_{sfcw}\)
Figure 4.8: Stationary Object Surface formed using \(P^{*11}_{sfcw}\)
Figures (4.7, 4.8) are formed by applying the adjoint \(P^{*11}_{sfcw}\) to the data, \(I = \sum_{ij} (-1)^{i+j} P^{*11}_{sfcw} d^{ij} q(x)\). We display the images made as we increase the number of sensors. We expect to see artifacts in these images since we have both diagonal and non-diagonal terms. We notice that the magnitude of the object gets stronger as we add more sensors, but the magnitude of the artifacts also increases. We see little change in resolution for this adjoint-data pair.
Figure 4.9: Stationary Object Image formed using \(P^{*12}_{sfcw}\)
Figure 4.10: Stationary Object Surface formed using \(P^{*12}_{sfcw}\)
Figures (4.9, 4.10) are formed by applying the adjoint \(P^{*12}_{sfcw}\) to the data, \(I = \sum_{ij} (-1)^{i+j} P^{*12}_{sfcw} d^{ij} q(x)\). We display the images made as we increase the number of sensors. We see few artifacts in this case; however, there are sensor configurations that minimize the object magnitude. We note that the artifacts that do arise for this adjoint-data pair have less magnitude than those found in Figure (4.7).
Figure 4.11: Stationary Object Image formed using \(P^{*21}_{sfcw}\)
Figure 4.12: Stationary Object Surface formed using \(P^{*21}_{sfcw}\)
Figures (4.11, 4.12) are formed by applying the adjoint \(P^{*21}_{sfcw}\) to the data, \(I = \sum_{ij} (-1)^{i+j} P^{*21}_{sfcw} d^{ij} q(x)\). We note that this is similar to Figures (4.9, 4.10) but not equal, because the target is not located at the center of the scene, so there is a slight difference between the two cases. We display the images made as we increase the number of sensors. We see that the artifacts are minimized as we add more sensors. We also note that the artifacts for this adjoint have less image fidelity than Figure (4.7).
Figure 4.13: Stationary Object Image formed using \(P^{*22}_{sfcw}\)
Figure 4.14: Stationary Object Surface formed using \(P^{*22}_{sfcw}\)
Figures (4.13, 4.14) are formed by applying the adjoint \(P^{*22}_{sfcw}\) to the data, \(I = \sum_{ij} (-1)^{i+j} P^{*22}_{sfcw} d^{ij} q(x)\). We display the images made as we increase the number of sensors. We see in this case that we get an aliasing effect due to the diagonal term \(P^{*22}_{sfcw} d^{11}\). The reason for this is as follows. The travel-time indices computed in algorithm (4.17) for each image point \(p_i\),
\[
t_{p_i yz} = \frac{B(R_{p_i z} + R_{p_i y})}{c}, \quad
t_{p_i y'z} = \frac{B(R_{p_i z} + R_{p_i y'})}{c}, \quad
t_{p_i yz'} = \frac{B(R_{p_i z'} + R_{p_i y})}{c}, \quad
t_{p_i y'z'} = \frac{B(R_{p_i z'} + R_{p_i y'})}{c},
\]
are large, but the differences between adjacent grid points, such as
\[
\Delta t_{pyz} = t_{p_{i+1} yz} - t_{p_i yz} = \frac{B(R_{p_{i+1} z} - R_{p_i z} + R_{p_{i+1} y} - R_{p_i y})}{c},
\]
are small, so only a small part of the data in \(d^{11}\) is represented. This mismatch gives us a low-resolution ring in the direct path.
Figure 4.15: Stationary Object Image formed using \(\sum_{lm} P^{*lm}_{sfcw}\)
Figure 4.16: Stationary Object Surface formed using \(\sum_{lm} P^{*lm}_{sfcw}\)
Figures (4.15, 4.16) are formed by applying the adjoint \(\sum_{lm} P^{*lm}_{sfcw}\) to the data, \(I = \sum_{ijlm} (-1)^{i+j} P^{*lm}_{sfcw} d^{ij} q(x)\). We display the images made as we increase the number of sensors. We see in this case that we get the sum of all of the previous images. We also see a reduction in artifacts but little improvement in resolution.
4.2.2 Numerical Experiment 2 - Moving Objects

In this experiment, we use a perfect reflector, a single point-like moving target, and an SFCW waveform or a CW waveform.
4.2.2.1 Algorithm - Data

We use the same notation as in the algorithm for the stationary case. In this case, we add the term \(\alpha_{v,z,y} \in \mathbb{R}\) to denote the Doppler scale factor in (2.67).
    α_vrs = zeros(1, N_r, N_s)
    E_jk = zeros(N_f, N_r, N_s), where j, k = 1, 2
    f = f_0 : Δf : f_max                                    % sfcw
    f = (f_0 + (f_max − f_0)/2) ones(N_f)                   % cw
    for r = 1 : N_r
        for s = 1 : N_s
            α_vrs(:, r, s) = 1 − ( (z_rj − x)/|z_rj − x| − (y_sj − x)/|y_sj − x| ) · v/c
            E_jk^temp(:, r, s) = e^{(2πif/c)(α_vrs |z_rj − x| + |x − y_sk|)} e^{2πif t_y} / (|z_rj − x| |x − y_sk|)
            E_jk(:, r, s) = E_jk(:, r, s) + E_jk^temp(:, r, s)
        end
    end
    d = fft(E_11 − E_12 − E_21 + E_22)
    d_jk = fft(E_jk)                                        (4.14)

4.2.2.2 Algorithm - Image Formation
This algorithm describes how an image is formed for the various numerical experiments in this section. Let B denote the bandwidth of the SFCW waveform. We again use the notation \(z_{rk}\) and \(y_{sj}\) for the receivers and transmitters. The point \(p_{xy} = [p_x, p_y, 0]\) is a grid point of the image, and \(u_{x'y'}\) is a velocity point of the velocity grid in the image. The image can be formed using the algorithm

    α_vrs = zeros(p_x, p_y, v_x, v_y, N_r, N_s)
    I_jk = zeros(N_f, N_f, N_r, N_s), where j, k = 1, 2
    t_jk = zeros(N_f, N_f, N_r, N_s), where j, k = 1, 2
    for r = 1 : N_r
        for s = 1 : N_s
            for x = 1 : N_f
                for y = 1 : N_f
                    for x' = 1 : N_f
                        for y' = 1 : N_f
                            α_vrs = 1 − ( (z_rj − x)/|z_rj − x| − (y_sj − x)/|y_sj − x| ) · v/c
                            t_jk(x, y, r, s) = round( (B/c)(|z_rj − p_xy| + |p_xy − y_sk|) )
                            Case: Known Paths
                                I_jk^temp(x, y, r, s) = d_jk(t_jk(x, y, r, s), s, r)
                            Case: Unknown Paths
                                I_lm^temp(x, y, r, s) = d(t_jk(x, y, r, s), s, r), where l, m = 1, 2
                        end
                    end
                end
            end
            I_jk = I_jk + I_jk^temp(x, y, r, s)
            I_lm = I_lm + I_lm^temp(x, y, r, s)
        end
    end
    plot( Σ_{j,k} I_jk(:, :, j, k) )                        (4.15)
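The Doppler scale factor line of algorithms (4.14) and (4.15) can be computed directly from the geometry. A sketch, in which the positions and velocity are illustrative values:

```python
import numpy as np

def doppler_scale(z, y, x, v, c=3.0e8):
    """Doppler scale factor alpha = 1 - ((z-x)/|z-x| - (y-x)/|y-x|) . v/c
    for receiver z, transmitter y, target position x, and target velocity v
    (cf. the alpha_vrs line of algorithm (4.14))."""
    look_rx = (z - x) / np.linalg.norm(z - x)   # unit vector, target -> receiver
    look_tx = (y - x) / np.linalg.norm(y - x)   # unit vector, target -> transmitter
    return 1.0 - np.dot(look_rx - look_tx, v) / c

z = np.array([0.0, 1000.0, 0.0])   # receiver (illustrative)
y = np.array([0.0, -1000.0, 0.0])  # transmitter (illustrative)
x = np.array([0.0, 0.0, 0.0])      # target
v = np.array([0.0, 30.0, 0.0])     # 30 m/s along the receiver direction
print(doppler_scale(z, y, x, v))
```

A stationary target gives exactly α = 1, so the moving-object data model reduces to the stationary one.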
4.2.2.3 Case 1 - Known Paths

The images that follow represent a four-dimensional image, where the (p, q) entry on the grid of images represents a two-dimensional slice of the image I(:, :, p, q) at the velocity [p, q, 0]. In this case, we investigate two waveforms: SFCW and long-pulse CW.
Figure 4.17: Moving Object Image using Known Paths and SFCW Waveform
Figure 4.18: Moving Object Surface using Known Paths and SFCW Waveform
In Figures (4.17, 4.18) we form the image \(I = \sum_{ij} (-1)^{i+j} P^{*ij}_{sfcw} d^{ij} q(x)\) by matching the appropriate adjoint to the appropriate data model. We see that the ambiguities in velocity persist for all of the velocity pairs. The true velocity is at (3, 3), but we are not able to distinguish between the true velocity and the ambiguous velocities for this waveform.
Figure 4.19: Moving Object Image using Known Paths and CW Waveform
Figure 4.20: Moving Object Surface using Known Paths and CW Waveform
Figures (4.19, 4.20) were formed by applying the appropriate adjoint \(P^{*ij}_{cw}\) to the data model \(d^{ij}\), so that the image is \(I = \sum_{ij} (-1)^{i+j} P^{*ij}_{cw} d^{ij} q(x)\), where the subscript cw denotes the continuous-wave waveform. We see the target in the right position at the true velocity (3, 3). We also see the target at the wrong position in (3, 4) and (4, 4), and target fading at the other velocity pairs. This signifies that a CW pulse waveform is able to detect velocity up to a certain error. Artifacts do remain in the images, as this is a low range-resolution waveform.
4.2.2.4 Case 2 - Unknown Paths

Figure 4.21: Moving Object Image formed using \(P^{*11}_{sfcw}\)

Figure 4.22: Moving Object Surface formed using \(P^{*11}_{sfcw}\)
Figures (4.21, 4.22) show the image formed by applying the \(P^{*11}_{sfcw}\) adjoint to the data to yield the image \(I = \sum_{ij} (-1)^{i+j} P^{*11}_{sfcw} d^{ij} q(x)\). We find that we cannot detect the correct velocity for this waveform using this adjoint-data pair.
Figure 4.23: Moving Object Image formed using \(P^{*11}_{cw}\)
Figure 4.24: Moving Object Surface formed using \(P^{*11}_{cw}\)
Figures (4.23, 4.24) were formed by using \(I = \sum_{ij} (-1)^{i+j} P^{*11}_{cw} d^{ij} q(x)\), where the subscript cw denotes the continuous-wave waveform. We see the target in the correct position at the correct velocity (3, 3). We see fading and mis-positioning at the incorrect velocities. This adjoint-waveform pair does not detect the correct velocity in a convincing way.
Figure 4.25: Moving Object Image formed using \(P^{*12}_{sfcw}\)
Figure 4.26: Moving Object Surface formed using \(P^{*12}_{sfcw}\)
Figures (4.25, 4.26) were formed by using \(I = \sum_{ij} (-1)^{i+j} P^{*12}_{sfcw} d^{ij} q(x)\). We see in this case that we get mis-positioning of the target and some target fading at the wrong velocity. This is not a good waveform-adjoint pair for velocity detection.
Figure 4.27: Moving Object Image formed using \(P^{*12}_{cw}\)
Figure 4.28: Moving Object Surface formed using \(P^{*12}_{cw}\)
Figures (4.27, 4.28) were formed using \(I = \sum_{ij} (-1)^{i+j} P^{*12}_{cw} d^{ij} q(x)\). We see in this case that we get mis-positioning for all of the targets and some target fading at the wrong velocity. As a result, we find that this is not a good waveform-adjoint pair for velocity detection.
Figure 4.29: Moving Object Image formed using \(P^{*21}_{sfcw}\)
Figure 4.30: Moving Object Surface formed using \(P^{*21}_{sfcw}\)
Figures (4.29, 4.30) were formed using \(I = \sum_{ij} (-1)^{i+j} P^{*21}_{sfcw} d^{ij} q(x)\). We see target fading at the correct velocity. This waveform-adjoint pair is not optimal for velocity detection.
Figure 4.31: Moving Object Image formed using \(P^{*21}_{cw}\)
Figure 4.32: Moving Object Surface formed using \(P^{*21}_{cw}\)
Figures (4.31, 4.32) were formed using \(I = \sum_{ij} (-1)^{i+j} P^{*21}_{cw} d^{ij} q(x)\). We see in this case that we get mis-positioning for all of the targets and some target fading at the wrong velocity. This is not a good waveform-adjoint pair for velocity detection.
Figure 4.33: Moving Object Image formed using \(P^{*22}_{sfcw}\)
Figure 4.34: Moving Object Surface formed using \(P^{*22}_{sfcw}\)
Figures (4.33, 4.34) were formed using \(I = \sum_{ij} (-1)^{i+j} P^{*22}_{sfcw} d^{ij} q(x)\). We see in this case that we get mis-positioning, more ambiguities than in all of the other plots for all of the targets, and no target fading. This is also not a good waveform-adjoint pair for velocity detection.
Figure 4.35: Moving Object Image formed using \(P^{*22}_{cw}\)
Figure 4.36: Moving Object Surface formed using \(P^{*22}_{cw}\)
Figures (4.35, 4.36) were formed using \(I = \sum_{ij} (-1)^{i+j} P^{*22}_{cw} d^{ij} q(x)\). We see in this case that we get mis-positioning and no target fading. This is also not a good waveform-adjoint pair for velocity detection.
Figure 4.37: Moving Object Image formed using \(\sum_{lm} P^{*lm}_{sfcw}\)
Figure 4.38: Moving Object Surface formed using \(\sum_{lm} P^{*lm}_{sfcw}\)
Figures (4.37, 4.38) were formed using \(I = \sum_{ijlm} (-1)^{i+j} P^{*lm}_{sfcw} d^{ij} q(x)\). We see an optimal adjoint-waveform pair for velocity estimation: we see target fading at the wrong velocities and very little mis-positioning. There are many ambiguities in position; however, these may be resolved by adding more sensors around the scattering scene.
Figure 4.39: Moving Object Image formed using \(\sum_{lm} P^{*lm}_{cw}\)
Figure 4.40: Moving Object Surface formed using \(\sum_{lm} P^{*lm}_{cw}\)
Figures (4.39, 4.40) were formed using \(I = \sum_{ijlm} (-1)^{i+j} P^{*lm}_{cw} d^{ij} q(x)\). We see multiple ambiguities in position and very little target fading. This is not a good adjoint-waveform pair for velocity or position estimation.
4.3 Dispersive Reflector

4.3.1 Numerical Experiment 1 - Stationary Targets

In this experiment, we use the following parameters: a dispersive reflection medium, a single point-like stationary target, and an SFCW waveform.
4.3.1.1 Algorithm - Data

Let \(z_{rk} \in \mathbb{R}^3\) denote a receiver in the set \(\{z_{rk} : r = 1,\ldots,N_r,\; k = 1,2\}\) and \(y_{sj} \in \mathbb{R}^3\) denote a transmitter in the set \(\{y_{sj} : s = 1,\ldots,N_s,\; j = 1,2\}\). \(N_r\) and \(N_s\) denote the total number of receivers and transmitters used. In this experiment we only examine \(d^{(22)}\) and \(I_d^{(22)}\). This interesting case investigates a high-frequency and a low-frequency wave, and results in four possible paths. The geometry is described in Figure 4.41.

Figure 4.41: Dispersive Reflector Geometry (high wave, low wave, transmitter, receiver, stationary object)

We denote the high wave transmitters by the subscript j = 2 and the low wave transmitters by j = 1. The high wave receivers are denoted by the subscript k = 2 and the low wave receivers by k = 1. This signifies that we have distinct virtual positions determined by the take-off angle, frequency, and dispersion relation in the medium. The sensor positions are determined by (3.34). Then the data coordinates in (3.48), with \(x\) denoting the target and \(\Delta f\) denoting the frequency step, can be simulated using the algorithm
    E_jk = zeros(N_f, N_r, N_s), where j, k = 1, 2
    f = f_0 : Δf : f_max
    for r = 1 : N_r
        for s = 1 : N_s
            E_jk^temp(:, r, s) = e^{(2πif/c)(|z_rj − x| + |x − y_sk|)} e^{2πif t_y} / (|z_rj − x| |x − y_sk|)
            E_jk(:, r, s) = E_jk(:, r, s) + E_jk^temp(:, r, s)
        end
    end
    d = fft(E_11 − E_12 − E_21 + E_22)
    d_jk = fft(E_jk)                                        (4.16)

4.3.1.2 Algorithm - Image Formation
This algorithm describes how an image is formed for the various numerical experiments in this section. Let B denote the bandwidth of the SFCW waveform. We again use the notation \(z_{rk}\) and \(y_{sj}\) for the receivers and transmitters. The point \(p_{xy} = [p_x, p_y, 0]\) is a grid point of the image. The image can be formed using the algorithm

    I_jk = zeros(N_f, N_f, N_r, N_s), where j, k = 1, 2
    t_jk = zeros(N_f, N_f, N_r, N_s), where j, k = 1, 2
    for r = 1 : N_r
        for s = 1 : N_s
            for x = 1 : N_f
                for y = 1 : N_f
                    t_jk(x, y, r, s) = round( (B/c)(|z_rj − p_xy| + |p_xy − y_sk|) )
                    Case: Known Paths
                        I_jk^temp(x, y, r, s) = d_jk(t_jk(x, y, r, s), s, r)
                    Case: Unknown Paths
                        I_lm^temp(x, y, r, s) = d(t_jk(x, y, r, s), s, r), where l, m = 1, 2
                end
            end
            I_jk = I_jk + I_jk^temp(x, y, r, s)
            I_lm = I_lm + I_lm^temp(x, y, r, s)
        end
    end
    plot( Σ_{j,k} I_jk(:, :, j, k) )                        (4.17)
4.3.1.3 Dispersive Reflector Experiment

In this experiment, we explore the case where we have two possible reflection points.
Figure 4.42: Stationary Object Image Formed using \(P^{*22}_{sfcw,dis}\)
Figure 4.43: Stationary Object Surface Formed using \(P^{*22}_{sfcw,dis}\)
In Figures (4.42, 4.43), the data \(d^{(22)}_{sfcw,dis}\) is matched by the adjoint \(P^{*22}_{sfcw,dis}\), where the subscripts sfcw and dis indicate the waveform and the use of a dispersive reflector. We see that we can uniquely recover the singularities of q(x) as we add more transmitters. We also see that the resolution of the image degrades in this case.
CHAPTER 5
Conclusions and Future Work
5.1 Conclusions
In this dissertation, we develop a method for forming an image of moving objects via a modified backprojection algorithm. The modification involves using the scattering data of multiple moving objects received via multiple sensors operating in multipath environments and transmitting diverse waveforms. The resultant image is six-dimensional. We are able to successfully recover properties such as position and velocity information from the image. We are also able to determine the three-dimensional range resolution for the image. The approach we take to arrive at the imaging model involves developing a physics-based data model for the information we expect to receive at the receiving sensor and subsequently applying an adjoint to the data model in order to form the image.

This approach yields many tools that may be used in applications. For example, since we account for physics-based considerations in our data model, we are able to simulate the data we expect to receive given specific parameters. The parameters we incorporate into our theoretical data model include a model for a perfectly reflecting or dispersive reflector, a scene of interest that includes multiple moving objects, the waveform type, and the activation time of each transmitter. Simulations are useful tools in the design and implementation of imaging systems such as radar or sonar systems. This approach also yields a way to develop new and innovative imaging algorithms that are designed to specifically account for the properties of interest and to mitigate, a priori, undesired results such as image artifacts.
Ultimately, through numerical simulation, we were able to successfully show how
waveforms, adjoints, and multiple sensors affect the resulting image. For example, we
showed that using an SFCW waveform yields images of stationary objects regardless
of whether or not the data can be separated according to path and that adding
more sensors reduces artifacts for both reflector types. In examining the resultant
images, we find that there is an optimal number of sensors that may be used in order
to minimize artifacts for both the separable and non-separable data. This optimal
number of sensors may be determined numerically or analytically, and is dependent
on the scene parameters. We also found that we can more accurately detect the
velocity of a moving object using a CW versus an SFCW waveform for the case
where we can separate the data according to path. We found that if we cannot
separate the data according to path we are still able to determine velocity but the
velocity resolution degrades.
This conclusion validates the assumption that the modified backprojection algorithm
developed in this dissertation is successful in recovering position and velocity information from multipath moving object scattering data. We also find that using the
data simulation tool provides insight into system design specifications such as how
many sensors are needed and what type of waveforms should be used to produce the
best image.
5.2 Future Work
It has been demonstrated in the experiments that resolution in the phase-space image
is not well understood. For example, we have shown that using a CW waveform
detects velocity information with better resolution than an SFCW waveform in the
separable data case. However, in the non-separable data case, we find that we can
recover velocity information up to a certain resolution. This demonstrates the need
for further resolution analysis for the moving object case. In this dissertation, we
also neglect the effect of target motion on the asymptotic analysis for the dispersive
reflector case. Although this case is not important for most applications, we believe
that there may be an interesting result when the rate of change of the doppler scale
factor is re-introduced to the model. Finally, we have shown numerical results demonstrating that we are able to recover the reflectivity function for a distribution of moving objects up to a smooth error. We would like to use microlocal analysis to establish this result analytically.
LITERATURE CITED
[1] Adams, R.A. (1975). Sobolev Spaces. Academic Press, Inc.
[2] Adams, R.A., Fournier, J.F. (2003). Sobolev Spaces. Elsevier Science Ltd.
[3] Ahlfors, L.V. (1979). Complex Analysis. McGraw-Hill, Inc.
[4] Antimirov, M.Y., Kolyskin, A.A, Vaillancourt, R. (1993). Applied Integral
Transforms. American Mathematical Society.
[5] Beltrami, E.J., Wohlers, M.R. (1966). Distributions and the Boundary Value
Problems of Analytic Functions. Academic Press, Inc.
[6] Bleistein, N., Handelsman, R.A. (1987) Asymptotic Expansions of Integrals.
Dover Publications, NY.
[7] Blondel, P. (2009) The handbook of sidescan sonar. Springer.
[8] Balakrishnan, A.V. (1981). Applied Functional Analysis. Springer-Verlag New
York Inc.
[9] Borden, B., Cheney, M. (2009). Fundamentals of Radar Imaging. Society for
Industrial and Applied Mathematics.
[10] Brown, J.W., Churchill, R.V. (2004). Complex Variables and Applications.
McGraw Hill, Inc.
[11] Cabrelli, C. A. (1984) Minimum entropy deconvolution and simplicity: A
noniterative algorithm Geophysics v. 50 pp. 394-413.
[12] Cheney,M., Bonneau, R.J. (2004). Imaging that Exploits Multipath Scattering
from Point Scatterers. Inverse Problems. 20. pp. 1691-1711.
[13] Cheney, M., Borden, B.B. (2008).Imaging Moving Targets from Scattered
Waves Inverse Problems. 22.
[14] Cheney, M., Nolan, C.J. (2004) Synthetic-aperture imaging through a
dispersive layer. Inverse Problems 20 pp. 507-532.
[15] Cheney, W. (2001). Analysis for Applied Mathematics. Springer-Verlag New
York, Inc.
[16] Collin, R.E. (1991). Field Theory of Guided Waves, Second Edition. The
Institute of Electrical and Electronic Engineers, Inc.
[17] Collin, R.E. (1985). Antennas and Radiowave Propagation. McGraw-Hill Book
Company.
[18] Crawford, C.R., Kak, A.C. (1982) Multipath artifact correction in ultrasonic
transmission tomography. Ultrasonic Imaging 4 pp.234-266.
[19] Davies, K. (1990) Ionospheric Radio. Peter Peregrinus, Ltd. London
[20] Dettman, J.W. (1965). Applied Complex Variables. Dover Publications, Inc.
[21] Devaney, A.J., Oristagilo, M.L. (1983) Inversion Procedure for Inverse
Scattering within the Distorted-Wave Born Approximation. Physical Review
Letters v. 51 n. 4
[22] Dimovski, I.H. (1990). Convolutional Calculus. Kluwer Academic Publishers.
[23] Dragoset, B. (1999). A Practical Approach to Surface Multiple Attenuation.
The Leading Edge. v. 18 n. 1 pp. 104-108.
[24] Duistermaat, J.J. (1996). Fourier Integral Operators Birkhauser, Boston.
[25] Edmunds, D.E., Evans, W.D. (1987). Spectral Theory and Differential
Operators. Oxford Clarendon Press.
[26] Egorov, Y.V., Shubin, M.A. (1993). Partial Differential Equations IV.
Springer-Verlag.
[27] Evans, L.C. (1998). Partial Differential Equations .American Mathematical
Society.
[28] Ferreira, P.P., Antonio, M., Santos, C., and Luis, L. (2009). Characteristics of
the Free Surface Multiple Attenuation Using Wave Field Extrapolation. AAPG
International Conference and Exhibition.
[29] Fridman, S. V., Nickisch, L.J. (2004) Signal Inversion for target extraction and
registration. Radio Science v. 39.
[30] Gaburro, R., Nolan, C. (2008). Microlocal Analysis of Synthetic Aperture
Radar Imaging in the Presence of a Vertical Wall. Journal of Physics:
Conference Series. 124.
[31] Garren, D.A., Goldstein, J.S., Obuchon, D.R., Grene, R.R., North, J.A.
(2004). SAR Image Formation Algorithm with Multipath Reflectivity
Estimation. Proceedings of the 2004 IEEE Radar Conference. pp. 323-328.
[32] Goodman, J.W. (1996) Introduction to Fourier Optics. McGraw-Hill.
[33] Greenleaf, A., Seeger, A. (1999). On Oscillatory Integral Operators with
Folding Canonical Relations. Studia Mathematica. 132(2).
[34] Greensite, F. (1995). Remote Reconstruction of Confined Wavefront
Propagation. Inverse Problems. 1.
[35] Finch, D., Lan, I., Uhlmann, G. (2003). Inside Out: Inverse Problems and
Applications. Mathematical Sciences Research Institute pp. 193-296.
[36] Hayes, M.P. (2004). Multipath Reduction with a Three Element
Interferometric Synthetic Aperture Sonar Proceedings of the Seventh European
Conference on Underwater Acoustics.
[37] Hendee, W.R., Ritenour, E.R. (2002). Medical Imaging Physics Wiley-Liss,
Inc., New York.
[38] Hildebrand F.B. (1965). Methods of Applied Mathematics. Dover Publications
Inc. NY.
[39] Hormander, L. (1985). The Analysis of Linear Partial Differential Operators
I-IV. Springer-Verlag Berlin Heidelberg, New York, Tokyo.
[40] Ilyinsky, A.S., Slepyan, G.Y., Slepyan, A.Y. (1993). IEEE Electromagnetic
Waves Series 36: Propagation, Scattering, and Dissipation of Electromagnetic
Waves. Peter Peregrinus Ltd., 123-196.
[41] Ishimaru, A. (1978). Wave Propagation and Scattering in Random Media:
Single Scattering and Transport Theory, Volume 1. Academic Press Inc.
[42] Ishimaru, A. (1978). Wave Propagation and Scattering in Random Media:
Multiple Scattering, Turbulence, Rough Surfaces, and Remote Sensing,
Volume 2. Academic Press Inc.
[43] Kanwal, R.P. (1998). Generalized Functions: Theory and Technique. Birkhauser.
[44] Klein, M.V. (1970). Optics. John Wiley and Sons, New York.
[45] Krantz, S.G. (1992). Several Complex Variables. American Mathematical
Society Chelsea Publishing.
[46] Leis, R. (1993). Initial Boundary Value Problems in Mathematical Physics.
John Wiley and Sons Ltd and B.G. Teubner, Stuttgart.
[47] Luminati, J.E., Hale, T.B., Temple, M.A., Havrilla, M.J., and Oxley, M.E.
(2004). Doppler aliasing artifact filtering in SAR imagery using randomised
stepped-frequency waveforms. Electronics Letters v. 40 i. 22 pp. 1445-1448.
[48] Malcolm, A.E., Ursin, B., de Hoop, M.V. (2008). Seismic imaging and
illumination with internal multiples. Geophysical Journal International.
[49] Melrose, R. (2004). Introduction to Microlocal Analysis. WEBSITE.
[50] Meyer-Arendt, J.R. (1971). Introduction to Classical and Modern Optics.
Jurgen R. Meyer-Arendt.
[51] Mittra, R. (1975). Topics in Applied Physics and Asymptotic Techniques in
Electromagnetics. Springer-Verlag New York, Heidelberg, Berlin.
[52] Moghaddam, P.P., Amindavar, H., Kirlin, R.L. (2003). A new time-delay
estimation in multipath. IEEE Transactions on Signal Processing. v. 51 n. 5.
[53] Nathanson, F.E., Reilly, J.P., Cohen, M.N. (1991). Radar Design Principles:
Signal Processing and the Environment. McGraw-Hill Inc.
[54] Nitzberg, R. (1999). Radar Signal Processing and Adaptive Systems. Artech
House, Inc.
[55] Nolan, C.J., Cheney, M., Dowling, T., and Gaburro, R. (2006). Enhanced
angular resolution from multiply scattered waves. Inverse Problems. 22 pp.
1817-1834.
[56] Nolan, C.J., Cheney, M. (2002). Microlocal Analysis of Synthetic Aperture
Radar Imaging. The Journal of Fourier Analysis and Applications.
[57] Panofsky, W.K., Phillips, M. (1962). Classical Electricity and Magnetism.
Addison-Wesley Publishing Company, Inc., pp. 49-53.
[58] Rihaczek, A.W. (1969) Principles of high resolution radar. McGraw-Hill, NY.
[59] Seybold, J.S. (2005). Introduction to RF Propagation. Wiley and Sons, Inc.
[60] Sogge, C.D. (1993). Fourier Integrals in Classical Analysis. Cambridge
University Press.
[61] Stakgold, I. (1998). Green’s Functions and Boundary Value Problems. John
Wiley and Sons, Inc. New York.
[62] Strichartz R. (1994). A guide to Distribution Theory and Fourier Transforms.
CRC Press, Inc.
[63] Tague, J.A., Pike, C.M., and Sullivan, E.J. (2005). Active Sonar Detection in
Multipath: A New Bispectral Analysis Approach. Circuits, Systems, and
Signal Processing. v. 13 n. 4. pp. 455-466.
[64] Taylor, J.L. (2002). Several Complex Variables with Connections to Algebraic
Geometry and Lie Groups. American Mathematical Society.
[65] Treves, F. (1980). Introduction to Pseudodifferential and Fourier Integral
Operators vol 1 and 2. Plenum Press, New York.
[66] Unz, H. (1956). Linear Arrays with Arbitrarily Distributed Elements.
University of California, Berkeley, Elect. Res. Lab Report, 168.
[67] Varslot, T., Yazici, B., Cheney, M. (2008). Wide-band pulse-echo imaging with
distributed apertures in multi-path environments. Inverse Problems 24.
[68] Varslot, T., Morales, J.H., Cheney, M. (2010). Synthetic-aperture radar
imaging through dispersive media. Inverse Problems 26.
[69] Verschuur, D. J., Berkhout, A.J., and Wapenaar, C.P.A. (1992) Adaptive
surface-related multiple elimination. Geophysics v. 57 n. 9 pp. 1166-1177.
[70] Viro, O.Y., Ivanov, N.Y., Netsvetaev, N.Y., Kharlamov, V.M. (2008).
Elementary Topology. American Mathematical Society.
[71] Vosolov, V.M. (1973). Multiple Integrals, Field Theory and Series. Mir
Publishers.
[72] Vvedensky, D. (1993). Partial Differential Equations with Mathematica.
Addison Wesley Publishing Company Inc.
[73] Wait, J. (1981). Lectures on Wave Propagation Theory. Pergamon Press, Inc.
[74] Wangsness, R.K. (1986). Electromagnetic Fields. John Wiley and Sons.
[75] Wax, M., Leshem, A. (1997) Joint Estimation of Time Delays and Directions
of Arrival of Multiple Reflections of a Known Signal. IEEE Transactions on
Signal Processing. v. 45 n. 10.
[76] Weglein, A.B., Gasparotto, F.A., Carvalho, P.M., and Stolt, R.H. (1997) An
inverse-scattering series method for attenuating multiples in seismic reflection
data. Geophysics v. 62 i. 5.
[77] Wiggins, R.A. (1978). Minimum entropy deconvolution. Geoexploration. pp.
21-35.
[78] Williams, C.S., Becklund, O.A.(1972). Optics: A short course for Engineers
and Scientists. John Wiley and Sons, Inc.
[79] Willis, N.J., Griffiths, H. (2007). Advances in Bistatic Radar. SciTech Publishing.
[80] Wong, R. (1989). Asymptotic Approximations of Integrals. Academic Press,
Inc.
[81] Yuanwei, J., Moura, J.M.F., O'Donoughue, N., Mulford, M.T., Samuel, A.A.
(2007). Time Reversal Synthetic Aperture Radar Imaging in Multipath.
Signals, Systems, and Computers. pp. 1812-1816.
[82] Zahn, M. (1979). Electromagnetic Field Theory: A Problem Solving Approach.
John Wiley and Sons, Inc. 567-662.
[83] Zauderer, E. (1989). Partial Differential Equations of Applied Mathematics.
John Wiley and Sons, Inc.
[84] Ziomek, L.J. (1995). Fundamentals of Acoustic Field Theory and Space-Time
Signal Processing. CRC Press, Inc. Boca Raton.
APPENDIX A
Determining $\operatorname{sgn} D_\zeta^2 \Phi_{dsa}(\zeta^0)$
We note that the entries of $D_\zeta^2 \Phi_{dsa}(\zeta^0)$ are
\[
\begin{aligned}
\partial_{\zeta_1\zeta_1}\Phi_{dsa} ={}& \frac{\hat{e}_\zeta}{c}\cdot\partial_{\zeta_1\zeta_1}z'(\omega^0,\hat{e}_\zeta)
+ \frac{2\,\partial_{\zeta_1}z_1'(\omega^0,\hat{e}_\zeta)}{c}
- \frac{2\zeta_1\,\partial_{\zeta_1}z_3'(\omega^0,\hat{e}_\zeta)}{c\sqrt{1-\zeta_1^2-\zeta_2^2}}
+ \frac{(\zeta_2^2-1)\,d_{zx}^{sa}}{c\,(1-\zeta_1^2-\zeta_2^2)^{3/2}} \\
\partial_{\zeta_2\zeta_2}\Phi_{dsa} ={}& \frac{\hat{e}_\zeta}{c}\cdot\partial_{\zeta_2\zeta_2}z'(\omega^0,\hat{e}_\zeta)
+ \frac{2\,\partial_{\zeta_2}z_2'(\omega^0,\hat{e}_\zeta)}{c}
- \frac{2\zeta_2\,\partial_{\zeta_2}z_3'(\omega^0,\hat{e}_\zeta)}{c\sqrt{1-\zeta_1^2-\zeta_2^2}}
+ \frac{(\zeta_1^2-1)\,d_{zx}^{sa}}{c\,(1-\zeta_1^2-\zeta_2^2)^{3/2}} \\
\partial_{\zeta_1\zeta_2}\Phi_{dsa} ={}& \frac{\hat{e}_\zeta}{c}\cdot\partial_{\zeta_1\zeta_2}z'(\omega^0,\hat{e}_\zeta)
+ \frac{\partial_{\zeta_1}z_2'(\omega^0,\hat{e}_\zeta)}{c}
+ \frac{\partial_{\zeta_2}z_1'(\omega^0,\hat{e}_\zeta)}{c} \\
&- \frac{\zeta_1\,\partial_{\zeta_2}z_3'(\omega^0,\hat{e}_\zeta)}{c\sqrt{1-\zeta_1^2-\zeta_2^2}}
- \frac{\zeta_2\,\partial_{\zeta_1}z_3'(\omega^0,\hat{e}_\zeta)}{c\sqrt{1-\zeta_1^2-\zeta_2^2}}
+ \frac{\zeta_1\zeta_2\,d_{zx}^{sa}}{c\,(1-\zeta_1^2-\zeta_2^2)^{3/2}} \\
\partial_{\zeta_2\zeta_1}\Phi_{dsa} ={}& \frac{\hat{e}_\zeta}{c}\cdot\partial_{\zeta_2\zeta_1}z'(\omega^0,\hat{e}_\zeta)
+ \frac{\partial_{\zeta_2}z_1'(\omega^0,\hat{e}_\zeta)}{c}
+ \frac{\partial_{\zeta_1}z_2'(\omega^0,\hat{e}_\zeta)}{c} \\
&- \frac{\zeta_2\,\partial_{\zeta_1}z_3'(\omega^0,\hat{e}_\zeta)}{c\sqrt{1-\zeta_1^2-\zeta_2^2}}
- \frac{\zeta_1\,\partial_{\zeta_2}z_3'(\omega^0,\hat{e}_\zeta)}{c\sqrt{1-\zeta_1^2-\zeta_2^2}}
+ \frac{\zeta_2\zeta_1\,d_{zx}^{sa}}{c\,(1-\zeta_1^2-\zeta_2^2)^{3/2}}
\end{aligned}
\tag{A.1}
\]
Noting that $0 \le \zeta_i < 1$, that $d_{zx}^{sa} > 0$, and assuming that $z'(\omega^0,\hat{e}_\zeta)$ is an
increasing or decreasing function, and therefore has a positive or negative constant
value at the critical point $\zeta^0$, the signature $\operatorname{sgn} D_\zeta^2 \Phi_{dsa}(\zeta^0)$ will be a value $S_s$
depending on the functions $z'(\omega^0,\hat{e}_\zeta)$ and $y'(\omega^0,\hat{e}_\zeta)$. The same argument may be
applied to arrive at $S_d = \operatorname{sgn} D_\zeta^2 \Phi_{dma}(\zeta^0)$.
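The signature argument above is easy to check numerically: for a symmetric $2\times 2$ Hessian, the signature is the number of positive eigenvalues minus the number of negative ones. The sketch below is illustrative only; the numerical matrix entries are hypothetical placeholders, not values of the derivative terms in (A.1).

```python
import numpy as np

def hessian_signature(H, tol=1e-12):
    """Signature of a symmetric matrix:
    (# positive eigenvalues) - (# negative eigenvalues)."""
    evals = np.linalg.eigvalsh(H)  # exact for symmetric input
    return int(np.sum(evals > tol) - np.sum(evals < -tol))

# Hypothetical Hessian standing in for D^2 Phi_dsa evaluated at zeta^0.
H = np.array([[2.0, 0.3],
              [0.3, 1.5]])
Ss = hessian_signature(H)  # both eigenvalues positive -> signature +2
```

For a nondegenerate critical point the signature is $\pm 2$ or $0$, which is exactly the constant $S_s$ (resp. $S_d$) entering the stationary-phase evaluation.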
APPENDIX B
Determining the virtual sensor locations
We determine the virtual sensor locations for a known dispersive medium. In this
case, we consider a dispersive, anisotropic, and slowly varying medium such as the
ionosphere [19]. To determine the virtual sensor locations, a standard method is to
consider a plane wave E propagating in a horizontally stratified layer, in which the
waves refract at specific frequencies and thus at specific layers. The ionosphere is a
form of matter called plasma: it consists of ionized electrons that form stratified
layers. If a plane wave is incident on a layer of the ionosphere at the right frequency,
the ionosphere acts like a reflector, returning the wave back to earth. For a
flat-layered ionosphere in which electron collisions are minimal, a ray path tangent
to the wave packet that forms when a plane wave interacts with the ionosphere may
be described in a simple way by using the take-off angle $\zeta_0$ and the frequency of
reflection $\omega_r$:
\[
y(\omega_r, \zeta_0) = \sin\zeta_0 \int_{\omega_0}^{\omega_r} \frac{d\omega}{q}
\tag{B.1}
\]
where $q^2 = \mu^2 - \sin^2\zeta_0$ and $\mu = c/\nu$ is the real refractive index, in which $\nu$ is the
group speed of the wave packet in the medium. Although a very simple model, this
function may be used to parametrize the sensor positions.
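As a rough illustration of how (B.1) could parametrize virtual sensor positions, the sketch below evaluates the integral with the trapezoid rule. The profile `mu(omega)`, the function name `virtual_sensor_range`, and all numeric values are hypothetical assumptions for demonstration; a realistic profile would come from ionospheric electron-density data such as in [19].

```python
import numpy as np

def virtual_sensor_range(omega_r, zeta_0, mu, omega_0, n=2000):
    """Evaluate (B.1): y = sin(zeta_0) * integral_{omega_0}^{omega_r} domega / q,
    with q = sqrt(mu(omega)^2 - sin(zeta_0)^2), via the trapezoid rule."""
    omegas = np.linspace(omega_0, omega_r, n)
    q = np.sqrt(mu(omegas)**2 - np.sin(zeta_0)**2)
    f = 1.0 / q
    dw = omegas[1] - omegas[0]
    return np.sin(zeta_0) * dw * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

# Hypothetical slowly varying refractive-index profile (illustrative only).
mu = lambda omega: 0.9 + 0.05 * (omega / 1e7)

# Ground range for a take-off angle of 30 degrees, reflection frequency 1.2e7 rad/s.
y = virtual_sensor_range(omega_r=1.2e7, zeta_0=np.pi / 6, mu=mu, omega_0=1.0e7)
```

One value of $(\omega_r, \zeta_0)$ per sensor then yields one virtual sensor position, which is the sense in which (B.1) parametrizes the array.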