Imaging Moving Targets from Scattered Waves

This version includes a Jacobian that is missing in the version published in
Inverse Problems 24 (2008) 035005
Margaret Cheney† and Brett Borden‡
† Department of Mathematical Sciences, Rensselaer Polytechnic Institute, Troy, NY 12180
USA
‡ Physics Department, Naval Postgraduate School, Monterey, CA 93943 USA
Abstract. We develop a linearized imaging theory that combines the spatial, temporal, and
spectral aspects of scattered waves. We consider the case of fixed sensors and a general
distribution of objects, each undergoing linear motion; thus the theory deals with imaging
distributions in phase space. We derive a model for the data that is appropriate for any
waveform, and show how it specializes to familiar results in the cases when: a) the targets
are moving slowly, b) the targets are far from the antennas, and c) narrowband waveforms are
used. From these models, we develop a phase-space imaging formula that can be interpreted
in terms of filtered backprojection or matched filtering. For this imaging approach, we derive
the corresponding point-spread function. We show that special cases of the theory reduce to:
a) Range-Doppler imaging, b) Inverse Synthetic Aperture Radar (ISAR), c) Synthetic Aperture
Radar (SAR), d) Doppler SAR, e) Diffraction Tomography, and f) Tomography of Moving
Targets. We also show that the theory gives a new SAR imaging algorithm for waveforms with
arbitrary ridge-like ambiguity functions.
PACS numbers: 41.20.Jb, 42.30.Wb
1. Introduction
Many imaging techniques are based on measurements of scattered waves and, typically,
overall image resolution is proportional to the ratio of system aperture to signal wavelength.
When the physical (real) aperture is small in comparison with wavelength, an extended
“effective aperture” can sometimes be formed analytically: examples include ultrasound,
radar, sonar, and seismic prospecting. For the most part, these synthetic-aperture methods
rely on time-delay measurements of impulsive echo wavefronts and assume that the region to
be imaged is stationary.
For an imaging region containing moving objects, on the other hand, it is well-known that
measurements of the Doppler shift can be used to obtain information about velocities. This
principle is the foundation of Doppler ultrasound, police radar systems, and some Moving
Target Indicator (MTI) radar systems. The classical “radar ambiguity” theory [22, 5, 11]
for monostatic (backscattering) radar shows that the accuracy to which one can measure the
Doppler shift and time delay depends on the transmitted waveform. Ambiguity theory appears
as the bottom plane in figure 1. There is similar theory for the case when a single transmitter
and receiver are at different locations [22, 20, 21]. Some recent work [3] develops a way to
combine range and Doppler information measured at multiple receivers in order to maximize
the probability of detection.
In recent years a number of imaging techniques that can handle
moving objects have been proposed. Space-Time Adaptive Processing (STAP) [10] uses a
moving multiple-element array together with a real-aperture imaging technique to produce
Ground Moving Target Indicator (GMTI) image information. Velocity Synthetic Aperture
Radar (VSAR) [8] also uses a moving multiple-element array to form a SAR image from
each element; then from a comparison of image phases, the system deduces target motion.
The paper [15] uses a separate transmitter and receiver together with sequential pulses to
estimate target motion from multiple looks. Another technique [16] uses backscattering
data from a single sensor to identify rigid-body target motion, which is then used to form
a three-dimensional image. Other work [12] extracts target velocity information by exploiting
the smearing that appears in SAR images when the along-track target velocity is incorrectly
identified. All of these techniques make use of the so-called “start-stop approximation,” in
which a target in motion is assumed to be momentarily stationary while it is being interrogated
by a radar pulse. In other words, Doppler measurements are not actually made by these
systems. Thus these methods appear on the back plane of figure 1.
Imaging techniques that rely on Doppler data are generally real-aperture systems, and
provide spatial information with very low resolution. A synthetic-aperture approach to
imaging from a moving sensor and fixed target was developed in [2]. This Doppler imaging
method appears on the left plane of figure 1. An approach that explicitly considers the
Doppler shift is Tomography of Moving Targets (TMT) [9], which uses multiple transmitters
and receivers (together with matched-filter processing) to both form an image and find the
velocity.
Below we develop a theory that shows how to unify these different approaches. In the
process, we will demonstrate how we can fuse the spatial, temporal, and spectral aspects of
scattered-field-based imaging. The present discussion deals with the case of fixed distributed
sensors, although our results include monostatic configurations (in which a single transmitter
and receiver are co-located) as a special case. Methods for imaging stationary scenes in
the case when the transmitter and receiver are not co-located have been particularly
well-developed in the geophysics literature [1] and have also been explored in the radar literature
[19, 21].
Figure 1. A diagram of various theories that combine spatial, spectral, and temporal aspects
of scattered waves.
In section 2, we develop a physics-based mathematical model that incorporates not only
the waveform and wave propagation effects due to moving scatterers, but also the effects of
spatial diversity of transmitters and receivers. In section 3, we show how a matched-filtering
technique produces images that depend on target velocity, and we relate the resulting point-spread function to the classical wideband and narrowband radar ambiguity functions. We also
work out the special cases of far-field imaging and narrowband imaging of slowly moving
targets. Finally, in section 4, we show that for special cases, the theory developed in section
3 reduces to familiar results. We also show that the theory provides a new imaging algorithm
for certain waveforms.
2. Model for Data
We model wave propagation and scattering by the scalar wave equation for the wavefield
ψ(t, x) due to a source waveform s̆(t, x):
    [∇² − c^{-2}(t, x) ∂_t²] ψ(t, x) = s̆(t, x) .   (1)
For example, a single isotropic source located at y transmitting the waveform s(t) starting at
time t = −Ty could be modeled as s̆(t, x) = δ(x − y)sy (t + Ty ), where the subscript y
reminds us that different transmitters could transmit different waveforms. For simplicity, in
this paper we consider only localized isotropic sources; the work can easily be extended to
more realistic antenna models [13].
A single scatterer moving at velocity v corresponds to an index-of-refraction distribution
n²(x − vt):

    c^{-2}(t, x) = c_0^{-2} [1 + n²(x − vt)] ,   (2)

where c_0 denotes the speed in the stationary background medium (here assumed constant).
For radar, c_0 is the speed of light in vacuum. We write q_v(x − vt) = c_0^{-2} n²(x − vt). To model
multiple moving scatterers, we let q_v(x − vt) d³x d³v be the corresponding quantity for the
scatterers in the volume d³x d³v centered at (x, v). Thus q_v is the distribution, at time t = 0,
of scatterers moving with velocity v. Consequently, the scatterers in the spatial volume d³x
(at x) give rise to

    c^{-2}(t, x) = c_0^{-2} + ∫ q_v(x − vt) d³v .   (3)
We note that the physical interpretation of qv involves a choice of a time origin. A choice
that is particularly appropriate, in view of our assumption about linear target velocities, is
a time during which the wave is interacting with targets of interest. This implies that the
activation of the antenna at y takes place at a negative time which we denote by −Ty ; we
write s̆(t, x) = s_y(t + T_y) δ(x − y). The wave equation corresponding to (3) is then

    [∇² − c_0^{-2} ∂_t² − ∫ q_v(x − vt) d³v ∂_t²] ψ(t, x) = s_y(t + T_y) δ(x − y) .   (4)
2.1. The Transmitted Field
We consider the transmitted field ψ^in in the absence of any scatterers; ψ^in satisfies the version
of equation (1) in which c is simply the constant background speed c_0, namely

    [∇² − c_0^{-2} ∂_t²] ψ^in(t, x, y) = s_y(t + T_y) δ(x − y) .   (5)
Henceforth we drop the subscript on c. We use the Green's function g, which satisfies

    [∇² − c_0^{-2} ∂_t²] g(t, x) = −δ(t) δ(x)

and is given by

    g(t, |x|) = δ(t − |x|/c) / (4π|x|) = (1/(8π²|x|)) ∫ e^{−iω(t−|x|/c)} dω ,   (6)

where we have used 2πδ(t) = ∫ exp(−iωt) dω. We use (6) to solve (5), obtaining

    ψ^in(t, x, y) = −∫ g(t − t′, |x − x′|) s_y(t′ + T_y) δ(x′ − y) dt′ d³x′
                  = −s_y(t + T_y − |x − y|/c) / (4π|x − y|) .   (7)
2.2. The Scattered Field
We write ψ = ψ^in + ψ̃^sc, which converts (1) into

    [∇² − c_0^{-2} ∂_t²] ψ̃^sc = ∫ q_v(x − vt) d³v ∂_t² ψ .   (8)
The map q ↦ ψ̃^sc can be linearized by replacing the full field ψ on the right side of (8) by
ψ^in. This substitution results in the Born approximation ψ^sc for the scattered field ψ̃^sc at the
location z:
    ψ^sc(t, z) = −∫ g(t − t′, |z − x|) ∫ q_v(x − vt′) d³v ∂_{t′}² ψ^in(t′, x) dt′ d³x
               = ∫ [δ(t − t′ − |z − x|/c)/(4π|z − x|)] ∫ q_v(x − vt′) d³v ∂_{t′}² [s_y(t′ + T_y − |x − y|/c)/(4π|x − y|)] d³x dt′ .   (9)
In equation (9), we make the change of variables x ↦ x′ = x − vt′ (i.e., change to the frame
of reference in which the scatterer q_v is fixed) to obtain

    ψ^sc(t, z) = ∫ [δ(t − t′ − |x′ + vt′ − z|/c)/(4π|x′ + vt′ − z|)]
                 × q_v(x′) [s̈_y(t′ + T_y − |x′ + vt′ − y|/c)/(4π|x′ + vt′ − y|)] d³x′ d³v dt′ .   (10)
The physical interpretation of (10) is as follows: The wave that emanates from y at time −T_y
encounters a target at time t′. This target, during the time interval [0, t′], has moved from x to
x + vt′. The wave scatters with strength q_v(x), and then propagates from position x + vt′ to
z, arriving at time t.
Henceforth we will drop the primes on x. We introduce the notation

    R_{x,z}(t) = x + vt − z ,   R = |R| ,   and   R̂ = R/R .   (11)
To evaluate the t′ integral of (10), we denote by t′ = t_{x,v}(t) the implicit solution of
t − t′ − R_{x,z}(t′)/c = 0, i.e.,

    0 = t − t_{x,v}(t) − R_{x,z}(t_{x,v}(t))/c .   (12)

The time t_{x,v}(t) is the time when the wave reaching the receiver at z at time t interacted with
the target that, at time t = 0, was located at x. This target, at time t_{x,v}(t), is actually located
at x + v t_{x,v}(t). The evaluation of the integral is done in two steps: we first make the change
of variables t′ ↦ t″ = t − t′ − R_{x,z}(t′)/c, which involves the Jacobian

    |∂t″/∂t′| = 1 + R̂_{x,z}(t′) · v/c .
We write µx,v (t) = 1 + R̂x,z (tx,v (t)) · v/c. With this notation, equation (10) becomes
    ψ^sc(t, z) = ∫ [s̈_y(t_{x,v}(t) + T_y − R_{x,y}(t_{x,v}(t))/c) / ((4π)² R_{x,z}(t_{x,v}(t)) R_{x,y}(t_{x,v}(t)) μ_{x,v}(t))] q_v(x) d³x d³v
               = ∫ [s̈_y(t + T_y − R_{x,z}(t_{x,v}(t))/c − R_{x,y}(t_{x,v}(t))/c) / ((4π)² R_{x,z}(t_{x,v}(t)) R_{x,y}(t_{x,v}(t)) μ_{x,v}(t))] q_v(x) d³x d³v ,   (13)
where, in the second line, we have used (12) to write the argument of s̈ in a more symmetrical
form.
Expression (13) is appropriate under very general circumstances. In particular, this
formula applies to wide-beam imaging where the commonly-used far-field approximations
do not apply. Equation (13) also applies to rapidly-moving targets, where the start-stop
approximation is not valid, and to very distant targets. Finally, this result can be used for
arbitrary transmitted waveforms.
2.2.1. The Slow-Mover Case. Here we assume that |v|t and k|v|²t² are much less than
|x − z| and |x − y|, where k = ω_max/c, with ω_max the (effective) maximum angular frequency
of the signal s_y. In this case, we have

    R_{x,z}(t) = |z − (x + vt)| = R_{x,z}(0) + R̂_{x,z}(0) · vt + O(|vt|²/R_{x,z}) ,   (14)

where R_{x,z}(0) = x − z, R_{x,z}(0) = |R_{x,z}(0)|, and R̂_{x,z}(0) = R_{x,z}(0)/R_{x,z}(0). With (14),
equation (12) becomes

    t_{x,v} ≈ t − R_{x,z}(0)/c − R̂_{x,z}(0) · v t_{x,v}/c ,   (15)

which can be solved explicitly to obtain

    t_{x,v}(t) ≈ (t − R_{x,z}(0)/c) / (1 + R̂_{x,z}(0) · v/c) .   (16)
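As a numerical sanity check (not part of the paper; the geometry and numbers below are invented for illustration), the implicit retarded-time equation (12) can be solved by fixed-point iteration and compared with the slow-mover closed form (16):

```python
import numpy as np

c = 3e8  # propagation speed (here the vacuum speed of light, m/s)

def retarded_time(x, v, z, t, n_iter=50):
    """Solve t' = t - |x + v t' - z|/c (equation (12)) by fixed-point iteration."""
    tp = t
    for _ in range(n_iter):
        tp = t - np.linalg.norm(x + v * tp - z) / c
    return tp

# illustrative geometry: target 10 km from the receiver, moving at 300 m/s
x = np.array([10e3, 0.0, 0.0])   # target position at t = 0
z = np.array([0.0, 0.0, 0.0])    # receiver position
v = np.array([300.0, 0.0, 0.0])  # target velocity
t = 1e-4                         # observation time (s)

t_exact = retarded_time(x, v, z, t)

# slow-mover approximation (16): t_{x,v} ≈ (t - R(0)/c) / (1 + R̂(0)·v/c)
R0 = x - z
t_approx = (t - np.linalg.norm(R0) / c) / (1.0 + (R0 / np.linalg.norm(R0)) @ v / c)
```

For this slow, distant target the two values agree to well below a nanosecond, which is the content of the slow-mover assumption.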
The argument of s̈ in equation (13) is then

    t_{x,v}(t) + T_y − R_{x,y}(t_{x,v}(t))/c ≈ t_{x,v}(t)(1 − R̂_{x,y}(0) · v/c) + T_y − R_{x,y}(0)/c
        = [(1 − R̂_{x,y}(0) · v/c)/(1 + R̂_{x,z}(0) · v/c)] (t − R_{x,z}(0)/c) − R_{x,y}(0)/c + T_y .   (17)

The time dilation

    α̃_{x,v} = (1 − R̂_{x,y}(0) · v/c) / (1 + R̂_{x,z}(0) · v/c)   (18)

is the Doppler scale factor, which, as we see below, is closely related to the Doppler shift.
With (17) and the notation (18), equation (13) becomes

    ψ_s^sc(t, z, y) = ∫ [s̈_y(α̃_{x,v}(t − R_{x,z}(0)/c) − R_{x,y}(0)/c + T_y) / ((4π)² R_{x,z}(0) R_{x,y}(0) μ_{x,v}(0))] q_v(x) d³x d³v .   (19)
2.2.2. The Slow-Mover, Narrow-Band Case. Many radar systems use a narrowband
waveform, defined by

    s_y(t) = s̃_y(t) e^{−iω_y t} ,   (20)

where s̃_y(t) is slowly varying, as a function of t, in comparison with exp(−iω_y t), and ω_y
is the carrier frequency for the transmitter at position y.
For the waveform (20), equation (19) becomes

    ψ_N^sc(t, z, y) = −∫ [ω_y² / ((4π)² R_{x,z}(0) R_{x,y}(0))] e^{−iω_y[α̃_{x,v}(t − R_{x,z}(0)/c) − R_{x,y}(0)/c + T_y]}
        × s̃_y(α̃_{x,v}(t − R_{x,z}(0)/c) − R_{x,y}(0)/c + T_y) q_v(x) d³x d³v
      ≈ −∫ [ω_y² e^{iφ_{x,v}} e^{−iω_y α̃_{x,v} t} / ((4π)² R_{x,z}(0) R_{x,y}(0))] s̃_y(t + T_y − (R_{x,z}(0) + R_{x,y}(0))/c) q_v(x) d³x d³v .   (21)

Here we have used the facts that s̃ and the denominator are slowly varying to neglect v/c in
those factors. Thus we replace α̃_{x,v} by 1 in the argument of s̃, and we have collected the
time-independent terms in the exponent into the quantity

    φ_{x,v} = k_y [R_{x,y}(0) − cT_y + α̃_{x,v} R_{x,z}(0)] ,   (22)

where k_y = ω_y/c.
If we write the Doppler scale factor (18) as α̃_{x,v} ≈ 1 + β_{x,v}, where

    β_{x,v} = −(R̂_{x,y}(0) + R̂_{x,z}(0)) · v/c ,   (23)

then the quantity ω_y β_{x,v} is the Doppler shift. We note that the Doppler shift depends on the
bistatic bisector vector R̂_{x,y}(0) + R̂_{x,z}(0).

With equation (23), we recognize (21) as the familiar superposition of time-delayed and
Doppler-shifted copies of the transmitted waveform.
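As a small illustration (a sketch with made-up numbers, not part of the paper's development), the Doppler scale factor (18) and the quantity β of (23) can be evaluated directly; in the monostatic limit z = y the resulting shift reduces to the classical two-way value 2 f v_r / c:

```python
import numpy as np

c = 3e8  # propagation speed (m/s)

def doppler_scale_and_beta(x, y, z, v):
    """Doppler scale factor (18) and the quantity β of (23) for target state (x, v)."""
    Rxy_hat = (x - y) / np.linalg.norm(x - y)  # unit vector from transmitter toward target
    Rxz_hat = (x - z) / np.linalg.norm(x - z)  # unit vector from receiver toward target
    alpha = (1.0 - Rxy_hat @ v / c) / (1.0 + Rxz_hat @ v / c)
    beta = -(Rxy_hat + Rxz_hat) @ v / c        # α ≈ 1 + β when |v| << c
    return alpha, beta

# monostatic example: co-located transmitter and receiver, target closing at 100 m/s
y = np.array([0.0, 0.0, 0.0])
z = np.array([0.0, 0.0, 0.0])
x = np.array([5e3, 0.0, 0.0])
v = np.array([-100.0, 0.0, 0.0])

alpha, beta = doppler_scale_and_beta(x, y, z, v)
f_shift = 10e9 * beta  # Doppler shift in Hz for a hypothetical 10 GHz carrier
```

A closing target gives β > 0, i.e., the familiar upward Doppler shift.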
2.2.3. The Small-Scene Case. A common situation is the one in which the transmitter-to-target
and target-to-receiver distances are large in comparison with the scene dimensions (see
figure 2). We take the origin of coordinates to be at the scene center. Then, in the so-called
"far-field" or "small-scene" approximation, |x + vt′| and k|x + vt′|² are assumed to be much
less than either |z| or |y|, where k again corresponds to the (effective) maximum wavenumber
for the signal s_y, and we have

    R_{x,z}(t) = |z − (x + vt)| = |z| − ẑ · (x + vt) + O(|x + vt|²/|z|) .   (24)
Figure 2. The geometry in the far-field case. This shows a case in which the scene is not too
large.
In this approximation, equation (12) reduces to

    t_{x,v} ≈ t − |z|/c + ẑ · (x + v t_{x,v})/c ,   (25)

which can be solved explicitly to yield

    t_{x,v}(t) ≈ (t − |z|/c + ẑ · x/c) / (1 − ẑ · v/c) .   (26)
The argument of s̈ in equation (13) is then

    t_{x,v}(t) + T_y − R_{x,y}(t_{x,v}(t))/c ≈ t_{x,v}(t)(1 + ŷ · v/c) + T_y − |y|/c + ŷ · x/c
        = [(1 + ŷ · v/c)/(1 − ẑ · v/c)] (t − |z|/c + ẑ · x/c) − |y|/c + ŷ · x/c + T_y .   (27)

We see that when |v|/c ≪ 1, the Doppler scale factor becomes

    α_v = (1 + ŷ · v/c)/(1 − ẑ · v/c) ≈ 1 + (ŷ + ẑ) · v/c .   (28)

We note that the signs here appear different from those in equation (18) because R_{x,z}(0) =
x − z points in a direction approximately opposite to z = z − 0. The scattered field is

    ψ_f^sc(t, z, y) = ∫ [s̈_y(α_v(t − |z|/c + ẑ · x/c) − |y|/c + ŷ · x/c + T_y) / ((4π)² |z||y| (1 − ẑ · v/c))] q_v(x) d³x d³v .   (29)
2.2.4. The Small-Scene, Narrowband Case. With a narrowband waveform (20) and the
small-scene approximation (24), equation (29) becomes

    ψ_fN^sc(t, z, y) = −∫ [ω_y²/((4π)² |z||y|)] s̃_y(α_v(t − |z|/c + ẑ · x/c) − |y|/c + ŷ · x/c + T_y)
        × exp(−iω_y[α_v(t − |z|/c + ẑ · x/c) − |y|/c + ŷ · x/c + T_y]) q_v(x) d³x d³v
      ≈ −∫ [ω_y²/((4π)² |z||y|)] exp(iφ^f_{x,v}) exp(−iω_y α_v t) s̃_y(t + T_y − (|z| − ẑ · x + |y| − ŷ · x)/c) q_v(x) d³x d³v ,   (30)

where φ^f_{x,v} = −ω_y [α_v(−|z| + ẑ · x) − |y| + ŷ · x]/c − ω_y T_y.
If the transmitter is activated at time −T_y = −|y|/c, so that the transmitted wave arrives
at the scene center at time t = 0, and if we evaluate ψ_fN^sc at t′ = t − |z|/c, we have

    ψ_fN^sc(t′, z, y) = [−ω_y²/((4π)² |z||y|)] ∫ e^{iφ̃_{x,v}} e^{−iω_y α_v t′} s̃_y(t′ + (ẑ + ŷ) · x/c) q_v(x) d³x d³v ,   (31)

where

    φ̃_{x,v} = −ω_y [α_v ẑ + ŷ] · x/c .   (32)

Again if we write α_v = 1 + β_v, where β_v = (ŷ + ẑ) · v/c, we see that equation (31) is a
superposition of delayed and Doppler-shifted versions of the transmitted signal.
Much previous work has dealt with the case of far-field, single-frequency imaging of
a stationary scene. For a stationary scene (v = 0, which implies α_v = 1), and a single
frequency ω_y = ω_0 (which corresponds to s̃ = 1), (31) reduces to the well-known expression

    ψ^∞_{ω_0}(t, z, y) ≈ [e^{−iω_0 t}/((4π)² |z||y|)] F_∞(ŷ, ẑ) ,   (33)

where the (Born-approximated) far-field pattern or scattering amplitude is

    F_∞(ŷ, ẑ) = ∫ e^{−ik_0(ẑ + ŷ) · x} q(x) d³x ,   (34)

where k_0 = ω_0/c. We note that our incident-wave direction convention (from target to
transmitter) is the opposite of that used in the classical scattering-theory literature (which
uses transmitter to target).
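The far-field pattern (34) is simply a Fourier transform of the reflectivity evaluated at the bistatic wave vector k_0(ẑ + ŷ). For a collection of point scatterers it becomes a finite sum, as in this sketch (the scatterer positions, strengths, and look directions are invented for illustration):

```python
import numpy as np

def far_field_pattern(yhat, zhat, k0, positions, reflectivities):
    """Discrete Born far-field pattern (34): F = sum_j q_j exp(-i k0 (zhat + yhat) . x_j)."""
    phases = -1j * k0 * (positions @ (yhat + zhat))
    return np.sum(reflectivities * np.exp(phases))

k0 = 2 * np.pi / 0.03             # wavenumber for a 3 cm wavelength
yhat = np.array([1.0, 0.0, 0.0])  # scene-to-transmitter direction
zhat = np.array([0.0, 1.0, 0.0])  # scene-to-receiver direction

positions = np.array([[0.0, 0.0, 0.0],     # scatterer at the scene center
                      [0.10, 0.25, 0.0]])  # second scatterer, offset by tens of cm
reflectivities = np.array([1.0, 0.5])

F = far_field_pattern(yhat, zhat, k0, positions, reflectivities)
```

A scatterer at the origin contributes its reflectivity with zero phase, so |F| can never exceed the sum of the reflectivity magnitudes.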
3. Imaging via a Filtered Adjoint
We now address the question of extracting information from the scattered waveform described
by equation (13).
Our image formation algorithm depends on the data we have available. In general, we
form a (coherent) image as a filtered adjoint, but the variables we integrate over depend on
whether we have multiple transmitters, multiple receivers, and/or multiple frequencies. The
number of variables we integrate over, in turn, determines how many image dimensions we
can hope to reconstruct. If our data depends on two variables, for example, then we can
hope to reconstruct a two-dimensional scene; in this case the scene is commonly assumed to
lie on a known surface. If, in addition, we want to reconstruct two-dimensional velocities,
then we are seeking a four-dimensional image and thus we will require data depending on
at least four variables. More generally, determining a distribution of positions in space and
the corresponding three-dimensional velocities means that we seek to form an image in a
six-dimensional phase space.
3.1. Imaging Formula in the General Case
We form an image I(p, u) of the objects with velocity u that, at time t = 0, were located at
position p. Thus I(p, u) is constructed to be an approximation to qu (p). The strategy for the
imaging process is to write the data in terms of a Fourier integral operator F : q_v ↦ ψ^sc, and
then find an approximate inverse for F.
In the general case, we write the data (13) as

    ψ^sc(t, z, y) = (1/(2^5 π³)) ∫ [(iω′)² e^{−iω′[t_{x,v}(t) + T_y − R_{x,y}(t_{x,v}(t))/c]} S_y(ω′) / (R_{x,z}(t_{x,v}(t)) R_{x,y}(t_{x,v}(t)) μ_{x,v}(t))] dω′ q_v(x) d³x d³v ,   (35)

where we have written s_y(t) in terms of its inverse Fourier transform:

    s_y(t) = (1/2π) ∫ e^{−iω′t} S_y(ω′) dω′ .   (36)

We note that by the Paley-Wiener theorem, S_y is analytic, since it is the inverse Fourier
transform of the finite-duration waveform s_y.
The corresponding imaging operation is a filtered version of the formal adjoint of the
"forward" operator F. Thus we form the phase-space image I(p, u) as

    I(p, u) = ∫ e^{iω[t_{p,u}(t) + T_y − R_{p,y}(t_{p,u}(t))/c]} Q(ω, t, p, u, z, y) dω ψ^sc(t, z, y) dt dⁿz dᵐy ,   (37)

where n = 1, 2, or 3, depending on the receiver configuration, and m = 1, 2, or 3, depending
on the transmitter configuration. This image-reconstruction operator can be interpreted in
terms of filtered backpropagation, where Q is the filter.
The specific choice of filter is dictated by various considerations [1, 23]; here we make
choices that connect the resulting formulas with familiar theories. In particular, we take the
filter Q to be

    Q = [(4π)²/(iω)²] R_{p,z}(t_{p,u}(t)) R_{p,y}(t_{p,u}(t)) μ_{p,u}(t) α_{p,u}(t) J(ω, p, u, z, y) S_y*(ω) ,   (38)

a choice which leads (below) to the matched filter. The matched filter is well known [22] to be
the optimal filter for enhancing the signal-to-noise ratio in the presence of white Gaussian noise. The
quantity J depends on the geometry and is chosen (see below) to compensate for a certain
Jacobian that appears; α_{p,u} is the Doppler scale factor for the point (p, u) and is defined by

    α_{p,u}(t) = (1 − R̂_{p,y}(t_{p,u}(t)) · u/c) / (1 + R̂_{p,z}(t_{p,u}(t)) · u/c) .   (39)
In allowing (37) and (38) to depend on the transmitter y, we are implicitly assuming
that we can identify the part of the signal that is due to the transmitter at y. In the case of
multiple transmitters, this identification can be accomplished, for example, by having different
transmitters operate at different frequencies or, possibly, by quasi-orthogonal pulse-coding
schemes. We are also assuming a coherent system, i.e., that a common time clock is available
to all sensors.
For a limited number of discrete receivers (or transmitters), the integral over z (or y) in
(37) reduces to a sum.
Interpretation of Imaging as Matched Filtering. In the case when the geometrical factor
J̃ = J/ω² is independent of ω, we can use (36) to write the imaging operation of equation
(37) as

    I(p, u) = −(4π)² ∫ s_y*[t_{p,u}(t) + T_y − R_{p,y}(t_{p,u}(t))/c] R_{p,z}(t_{p,u}(t)) R_{p,y}(t_{p,u}(t))
        × J̃(p, u, z, y) μ_{p,u}(t) α_{p,u}(t) ψ^sc(t, z, y) dt dⁿz dᵐy
      = −(4π)² ∫ s_y*[t + T_y − R_{p,z}(t_{p,u}(t))/c − R_{p,y}(t_{p,u}(t))/c] R_{p,z}(t_{p,u}(t)) R_{p,y}(t_{p,u}(t))
        × J̃(p, u, z, y) μ_{p,u}(t) α_{p,u}(t) ψ^sc(t, z, y) dt dⁿz dᵐy ,   (40)

which has the form of a weighted matched filter. To obtain the second line of (40), we have
used (12) to write the argument of s in a more symmetric form.
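The matched-filter form (40) is, in essence, delay-and-sum processing. The following toy sketch (one transmitter, a few receivers, a stationary point scatterer, all amplitude and Jacobian factors set to 1, units normalized so that c = 1 — none of these choices come from the paper) shows the mechanism: correlating each trace against the waveform delayed by the candidate bistatic travel time peaks when the candidate point p coincides with the true scatterer:

```python
import numpy as np

c = 1.0                      # normalized propagation speed
dt = 0.01
t = np.arange(0.0, 40.0, dt)

def pulse(tau):
    """Transmitted waveform: a short Gaussian pulse centered at tau = 2."""
    return np.exp(-((tau - 2.0) ** 2) / 0.1)

y = np.array([0.0, 0.0])                                 # transmitter position
receivers = [np.array([10.0, 0.0]), np.array([0.0, 10.0]), np.array([10.0, 10.0])]
x_true = np.array([4.0, 3.0])                            # true scatterer position

def bistatic_delay(p, z):
    """Transmitter-to-p-to-receiver travel time."""
    return (np.linalg.norm(p - y) + np.linalg.norm(p - z)) / c

# simulated data: a delayed copy of the pulse at each receiver (amplitudes ignored)
data = [pulse(t - bistatic_delay(x_true, z)) for z in receivers]

def image(p):
    """Matched-filter image value at candidate point p: correlate each trace
    with the pulse delayed by the candidate bistatic travel time, then sum."""
    return sum(np.sum(pulse(t - bistatic_delay(p, z)) * d) * dt
               for z, d in zip(receivers, data))
```

Evaluating `image` on a grid of candidate points p would produce a focused spot at x_true, the discrete analogue of the point-spread function analysis that follows.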
3.1.1. The Point-Spread Function for the General Case. We substitute equation (35) into
equation (37), obtaining

    I(p, u) = ∫ K(p, u; x, v) q_v(x) d³x d³v ,   (41)

where K, the point-spread function (PSF), is

    K(p, u; x, v) = (1/2π) ∫ e^{iω[t_{p,u}(t) + T_y − R_{p,y}(t_{p,u}(t))/c]} R_{p,z}(t_{p,u}(t)) R_{p,y}(t_{p,u}(t)) S_y*(ω) μ_{p,u} α_{p,u} [J(ω, p, u, z, y)/(iω)²] dω
        × ∫ [(iω′)² e^{−iω′[t_{x,v}(t) + T_y − R_{x,y}(t_{x,v}(t))/c]} / (R_{x,z}(t_{x,v}(t)) R_{x,y}(t_{x,v}(t)) μ_{x,v}(t))] S_y(ω′) dω′ dt dⁿz dᵐy .   (42)
Since S_y is smooth, we can carry out a stationary-phase reduction of (42) in the variables
t and ω′. If we denote by φ the phase of (42), then the critical points are given by

    0 = ∂φ/∂t = ω [∂t_{p,u}/∂t − (1/c) ∂R_{p,y}(t_{p,u}(t))/∂t] − ω′ [∂t_{x,v}/∂t − (1/c) ∂R_{x,y}(t_{x,v}(t))/∂t]   (43)

    0 = ∂φ/∂ω′ = t_{x,v}(t) + T_y − R_{x,y}(t_{x,v}(t))/c = t + T_y − R_{x,z}(t_{x,v}(t))/c − R_{x,y}(t_{x,v}(t))/c ,   (44)

where in (44) we have used equation (12). These two conditions determine the critical values
of ω′ and t which give rise to the leading-order contributions to equation (42). We denote by
t*_{x,v} the value of t determined by (44).
We rewrite equation (43) by differentiating (12) implicitly:

    0 = 1 − ∂t_{x,v}/∂t − R̂_{x,z}(t_{x,v}(t)) · (v/c) ∂t_{x,v}/∂t ,   (45)

which can be solved for ∂t_{x,v}/∂t to obtain

    ∂t_{x,v}/∂t = 1 / (1 + R̂_{x,z}(t_{x,v}(t)) · v/c) .   (46)

With (46), we see that the terms in brackets in (43) are of the form

    ∂t_{x,v}/∂t − (1/c) ∂R_{x,y}(t_{x,v}(t))/∂t = (1 − R̂_{x,y}(t_{x,v}(t)) · v/c) / (1 + R̂_{x,z}(t_{x,v}(t)) · v/c) = α_{x,v}(t) .   (47)

Thus we find that equation (43) implies that

    ω′ = [α_{p,u}(t*_{x,v}) / α_{x,v}(t*_{x,v})] ω   (48)

and that, moreover, the determinant of the Hessian of the phase is

    −α²_{x,v} .   (49)

The result of the stationary-phase reduction of equation (42) is

    K(p, u; x, v) ∼ ∫ e^{iω[t_{p,u}(t*_{x,v}) + T_y − R_{p,y}(t_{p,u}(t*_{x,v}))/c]} [R_{p,z}(t_{p,u}(t*)) R_{p,y}(t_{p,u}(t*)) / (R_{x,z}(t_{x,v}(t*)) R_{x,y}(t_{x,v}(t*)))] S_y*(ω)
        × S_y(ω α_{p,u}(t*_{x,v})/α_{x,v}(t*_{x,v})) J(ω, p, u, z, y) [μ_{p,u} α³_{p,u}(t*_{x,v}) / (μ_{x,v} α³_{x,v}(t*_{x,v}))] dω dⁿz dᵐy .   (50)
The quantity in square brackets in the phase of equation (50) can also be written

    t_{p,u}(t*_{x,v}) + T_y − R_{p,y}(t_{p,u}(t*_{x,v}))/c = t*_{x,v} + T_y − R_{p,z}(t_{p,u}(t*_{x,v}))/c − R_{p,y}(t_{p,u}(t*_{x,v}))/c
        = [R_{x,z}(t_{x,v}(t*_{x,v})) + R_{x,y}(t_{x,v}(t*_{x,v})) − R_{p,z}(t_{p,u}(t*_{x,v})) − R_{p,y}(t_{p,u}(t*_{x,v}))]/c ,   (51)

where in the first line we have used the definition (12) of t_{p,u} and in the second line the definition
(44) of t*_{x,v}.
We write equation (50) as

    K(p, u; x, v) ∼ ∫ e^{−iω∆τ_{y,z}(x,p,u,v)} S_y*(ω) S_y(ω α_{p,u}/α_{x,v})
        × R(x, y, z, p, u, v) J(ω, p, u, z, y) [μ_{p,u} α³_{p,u} / (μ_{x,v} α³_{x,v})] dω dⁿz dᵐy ,   (52)

where

    c∆τ_{y,z}(x, p, u, v) = R_{p,z}(t_{p,u}(t*_{x,v})) + R_{p,y}(t_{p,u}(t*_{x,v})) − R_{x,z}(t_{x,v}(t*_{x,v})) − R_{x,y}(t_{x,v}(t*_{x,v}))   (53)

and

    R(x, y, z, p, u, v) = [R_{p,z}(t_{p,u}(t*)) R_{p,y}(t_{p,u}(t*))] / [R_{x,z}(t_{x,v}(t*)) R_{x,y}(t_{x,v}(t*))] .   (54)
3.1.2. Relation between the Point-Spread Function and the Ambiguity Function in the General Case.
In equation (52) we use the definition of the inverse Fourier transform

    S_y(ω) = ∫ e^{iωt} s_y(t) dt ,   (55)

which, in the case when J = J̃ω² with J̃ independent of ω, converts (52) to

    K(p, u; x, v) ∼ ∫∫ (−iω α_{p,u}/α_{x,v}) e^{−iω∆τ_{y,z}} S_y*(ω) (iω) e^{i(α_{p,u}/α_{x,v})ωt} s_y(t) dt
        × R(x, y, z, p, u, v) J̃(p, u, z, y) [μ_{p,u} α_{p,u} / (μ_{x,v} α_{x,v})] dω dⁿz dᵐy
      = ∫ [∫ ṡ_y*((α_{p,u}/α_{x,v}) t − ∆τ_{y,z}(x, p, u, v)) ṡ_y(t) dt]
        × R(x, y, z, p, u, v) J̃(p, u, z, y) [μ_{p,u} α_{p,u} / (μ_{x,v} α_{x,v})] dⁿz dᵐy .   (56)

Equation (56) can be written as

    K(p, u; x, v) ∼ ∫ A_y(α_{p,u}/α_{x,v}, ∆τ_{y,z}(x, p, u, v)) R(x, y, z, p, u, v) [μ_{p,u} α_{p,u} / (μ_{x,v} α_{x,v})] J̃ dⁿz dᵐy ,   (57)

where A_y is the wideband radar ambiguity function [17] for the waveform ṡ_y, defined by

    A_y(σ, τ) = ∫ ṡ_y*(σt − τ) ṡ_y(t) dt .   (58)

Equation (57) is a general form appropriate to any target environment and any finite-duration
interrogating waveform.

For a narrowband waveform (20), the ambiguity function is

    A_y(σ, τ) = ω_y² ∫ s̃_y*(σt − τ) s̃_y(t) e^{iω̃t} e^{−iω_y τ} dt ,   (59)

where ω̃ = ω_y(σ − 1). Since s̃_y(t) is slowly varying, A_y(σ, τ) ≈ ω_y² A_y(ω̃, τ), where

    A_y(ω̃, τ) = e^{−iω_y τ} ∫ s̃_y*(t − τ) s̃_y(t) e^{iω̃t} dt .   (60)
In this narrowband case, the point-spread function of equation (57) can similarly be
approximated by

    K_NB(p, u; x, v) ∼ ∫ ω_y² A_y(ω_y(α_{p,u}/α_{x,v} − 1), ∆τ_{y,z}) R(x, y, z, p, u, v) [μ_{p,u} α_{p,u} / (μ_{x,v} α_{x,v})] J̃ dⁿz dᵐy ,   (61)

where ∆τ_{y,z} is given by (53).
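The wideband ambiguity function (58) is easy to evaluate numerically. The sketch below uses a Gaussian pulse s(t) = exp(−t²) (an arbitrary choice for illustration), for which ṡ is known in closed form; the surface |A(σ, τ)| peaks at the matched scale and delay σ = 1, τ = 0:

```python
import numpy as np

dt = 1e-4
t = np.arange(-5.0, 5.0, dt)  # grid comfortably containing the pulse support

def sdot(tau):
    """Derivative ṡ of the Gaussian pulse s(t) = exp(-t²)."""
    return -2.0 * tau * np.exp(-tau ** 2)

def wideband_ambiguity(sigma, tau):
    """A(σ, τ) = ∫ ṡ*(σt − τ) ṡ(t) dt (equation (58)), by direct quadrature."""
    return np.sum(np.conj(sdot(sigma * t - tau)) * sdot(t)) * dt

peak = wideband_ambiguity(1.0, 0.0)  # matched scale and delay
off = wideband_ambiguity(1.0, 3.0)   # badly mismatched delay
```

For this pulse the peak value is ∫ ṡ² dt = √(π/2), and the response falls off rapidly away from the matched point.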
3.2. Imaging in the Slow-Mover Case
In the slow-mover case, we write equation (19) as

    ψ_s^sc(t, z, y) = −(1/(2^5 π³)) ∫ [ω′² e^{−iω′[α̃_{x,v}(t − R_{x,z}(0)/c) − R_{x,y}(0)/c + T_y]} / (R_{x,z}(0) R_{x,y}(0))] S_y(ω′) dω′ q_v(x) d³x d³v ,   (62)

where, outside the phase, we have neglected v/c and thus replaced μ_{x,v}(0) by 1 in the
denominator.
The imaging formula in the slow-mover case is

    I(p, u) = ∫ e^{iω[α̃_{p,u}(t − R_{p,z}(0)/c) − R_{p,y}(0)/c + T_y]} Q(ω, p, u, y, z) ψ_s^sc(t, z, y) dω dt dⁿz dᵐy .   (63)

The corresponding point-spread function is

    K^s(p, u; x, v) = (1/2π) ∫ e^{iω[α̃_{p,u}(t − R_{p,z}(0)/c) − R_{p,y}(0)/c + T_y]} e^{−iω′[α̃_{x,v}(t − R_{x,z}(0)/c) − R_{x,y}(0)/c + T_y]}
        × S_y*(ω) S_y(ω′) R J μ_{p,u} (ω′²/ω²) dω dω′ dt dⁿz dᵐy .   (64)

In (64), we apply the method of stationary phase in the t and ω′ integrals. The critical set is

    0 = α̃_{p,u} ω − α̃_{x,v} ω′ ,
    0 = α̃_{x,v}(t − R_{x,z}(0)/c) − R_{x,y}(0)/c + T_y ,   (65)

and the Hessian of the phase is −α̃²_{x,v}. The leading-order contribution to (64) is thus

    K^s(p, u; x, v) ∼ ∫ exp(ik[α̃_{p,u}((R_{x,y}(0) − cT_y)/α̃_{x,v} + R_{x,z}(0) − R_{p,z}(0)) − R_{p,y}(0) + cT_y])
        × S_y*(ω) S_y(ω α̃_{p,u}/α̃_{x,v}) R J [α̃³_{p,u}/α̃³_{x,v}] dω dⁿz dᵐy ,   (66)

where k = ω/c, or, in terms of the ambiguity function,

    K^s(p, u; x, v) ∼ ∫ A(α̃_{p,u}/α̃_{x,v}, ∆τ^s_{y,z}(x, p, u, v)) R(x, y, z, p, u, v) J̃ [α̃_{p,u}/α̃_{x,v}] dⁿz dᵐy   (67)
under the assumption that J̃ = J/ω² is independent of ω. We note that the delay (second)
argument of the ambiguity function can be written as

    ∆τ^s_{y,z} = α̃_{p,u}(R_{x,z}(0) − R_{p,z}(0))/c + (α̃_{p,u}/α̃_{x,v})(R_{x,y}(0)/c − T_y) − (R_{p,y}(0)/c − T_y)
        = α̃_{p,u}(R_{x,z}(0) − R_{p,z}(0))/c + (α̃_{p,u}/α̃_{x,v} − 1) R_{x,y}(0)/c + (R_{x,y}(0) − R_{p,y}(0))/c − (α̃_{p,u}/α̃_{x,v} − 1) T_y
        ≈ α̃_{p,u}[(R_{x,z}(0) − R_{p,z}(0))/c + (R_{x,y}(0) − R_{p,y}(0))/c] + (α̃_{p,u}/α̃_{x,v} − 1)(R_{x,y}(0)/c − T_y)
        ≈ [(x − p)/c] · α̃_{p,u}(R̂_{x,z}(0) + R̂_{x,y}(0)) − [(u − v)/c] · (R̂_{x,y}(0) + R̂_{x,z}(0)) (R_{x,y}(0)/c − T_y) ,   (68)

where in the second line we have added and subtracted R_{x,y}(0)/c. If the target scene is not
too large, the transmitters can be activated at times −T_y chosen so that R_{x,y}(0)/c − T_y is
negligible.
3.3. Imaging in the Slow-Mover, Narrow-Band Case.
Under the slow-mover, narrow-band assumptions, the data (21) can be written

    ψ_N^sc(t, z, y) ≈ −∫ [ω_y² e^{iφ_{x,v}} e^{−iω_y α̃_{x,v} t} / ((4π)² R_{x,z}(0) R_{x,y}(0))] s̃_y(t + T_y − (R_{x,z}(0) + R_{x,y}(0))/c) q_v(x) d³x d³v .   (69)
The corresponding imaging operation [from equation (40)] is

    I_cl(p, u) = −(4π)² ∫ e^{−iφ_{p,u}} e^{iω_y α̃_{p,u} t} R_{p,z}(0) R_{p,y}(0)
        × s̃_y*(t + T_y − (R_{p,z}(0) + R_{p,y}(0))/c) ψ_N^sc(t, z, y) J̃ dt dⁿz dᵐy .   (70)

The operation (70) amounts to geometry-corrected, phase-corrected matched filtering with a
time-delayed, Doppler-shifted version of the transmitted waveform, where the Doppler shift
is defined in terms of (23).
Under the assumption that J̃ is independent of ω, the point-spread function is

    K_cl(p, u; x, v) = ∫ ω_y² s̃_y*(t + T_y − (R_{p,z}(0) + R_{p,y}(0))/c) e^{iω_y[β_{p,u} − β_{x,v}]t}
        × s̃_y(t + T_y − (R_{x,z}(0) + R_{x,y}(0))/c) e^{i[φ_{x,v} − φ_{p,u}]}
        × [R_{p,z}(0) R_{p,y}(0) / (R_{x,z}(0) R_{x,y}(0))] J̃ dt dⁿz dᵐy .   (71)

In the second exponential of equation (71) we use (22) to write

    φ_{x,v} − φ_{p,u} = k_y[R_{x,y}(0) − cT_y + (1 + β_{x,v})R_{x,z}(0)] − k_y[R_{p,y}(0) − cT_y + (1 + β_{p,u})R_{p,z}(0)]
        = k_y[−c∆τ^cl_{y,z} + β_{x,v} R_{x,z}(0) − β_{p,u} R_{p,z}(0)] ,   (72)

where

    c∆τ^cl_{y,z} = [R_{p,z}(0) + R_{p,y}(0)] − [R_{x,z}(0) + R_{x,y}(0)]   (73)
is the difference in bistatic ranges for the points x and p. If in (71) we make the change of
variables t′ = t + T_y − (R_{x,z}(0) + R_{x,y}(0))/c and use (60), then we obtain

    K_cl(p, u; x, v) = ∫ ω_y² A(ω_y[β_{p,u} − β_{x,v}], ∆τ^cl_{y,z})
        × exp(iω_y[β_{p,u} − β_{x,v}][R_{x,y}(0)/c − T_y]) exp(−ik_y β_{p,u}[R_{x,z}(0) − R_{p,z}(0)])
        × [R_{p,z}(0) R_{p,y}(0) / (R_{x,z}(0) R_{x,y}(0))] J̃ dⁿz dᵐy .   (74)

We note that the narrowband ambiguity function of (74) depends on the difference of bistatic
ranges (73) and on the difference between the velocity projections onto the bistatic bisectors:

    β_{p,u} − β_{x,v} = (R̂_{x,y}(0) + R̂_{x,z}(0)) · v/c − (R̂_{p,y}(0) + R̂_{p,z}(0)) · u/c .

For the case of a single transmitter and single receiver (a case for which J̃ = 1) and
a point target [which enables us to choose T_y so that the exponential in the second line
of (74) vanishes], and leaving aside the magnitude and phase corrections, (74) reduces to
a coordinate-independent version of the ambiguity function of [18].
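The narrowband ambiguity function (60) can likewise be computed directly. The sketch below uses a linear FM (chirp) envelope; the chirp rate, duration, and carrier are invented values for illustration. The chirp's ambiguity surface is concentrated on a ridge ν ≈ −2γτ, which is what makes such waveforms useful for delay-Doppler processing:

```python
import numpy as np

T = 1.0        # pulse duration (s)
dt = 1e-5
t = np.arange(0.0, T, dt)
gamma = 200.0  # chirp rate (rad/s²); illustrative value
omega_y = 2 * np.pi * 1e6  # hypothetical carrier (rad/s)

def envelope(tau):
    """Slowly varying envelope s̃(t): a unit-amplitude linear FM chirp on [0, T)."""
    return np.where((tau >= 0) & (tau < T), np.exp(1j * gamma * tau ** 2), 0.0)

def narrowband_ambiguity(nu, tau):
    """A(ν, τ) = e^{-iω_y τ} ∫ s̃*(t − τ) s̃(t) e^{iνt} dt (equation (60))."""
    integrand = np.conj(envelope(t - tau)) * envelope(t) * np.exp(1j * nu * t)
    return np.exp(-1j * omega_y * tau) * np.sum(integrand) * dt

peak = narrowband_ambiguity(0.0, 0.0)                # |A| = T at zero shift and delay
ridge = narrowband_ambiguity(-2 * gamma * 0.2, 0.2)  # a point on the delay-Doppler ridge
```

On the ridge the magnitude decays only linearly with delay (|A| = T − τ here), while off the ridge it collapses rapidly.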
3.4. Imaging in the Small-Scene Case
3.4.1. Imaging in the Small-Scene Wideband Case. Our filtered-adjoint imaging formula
reduces to

    I_∞(p, u) = ∫ e^{iω[α_u(t − |z|/c + ẑ · p/c) − |y|/c + ŷ · p/c + T_y]} Q_∞(ω, p, u, z, y) dω ψ^sc(t, z, y) dt dⁿz dᵐy ,   (75)

where we take the filter to be

    Q_∞(ω, p, u, z, y) = −(4π)² ω^{−2} |z||y| (1 − ẑ · u/c) S_y*(ω) J(ω, p, u, z, y) α_u .   (76)
To determine the point-spread function, we substitute (29) into (75) to obtain

    I_∞(p, u) = ∫ exp(iω[α_u(t − |z|/c + ẑ · p/c) − |y|/c + ŷ · p/c + T_y]) Q_∞(ω, p, u, z, y)
        × ∫ exp(−iω′[α_v(t − |z|/c + ẑ · x/c) − |y|/c + ŷ · x/c + T_y])
        × [ω′² S_y(ω′) / (2^5 π³ |z||y| (1 − ẑ · v/c))] dω′ dω q_v(x) d³x d³v dt dⁿz dᵐy .   (77)

For smooth S_y, we can apply the method of stationary phase to the t and ω′ integrations
in (77). The critical conditions are

    0 = ω α_u − ω′ α_v ,
    0 = α_v(t − |z|/c + ẑ · x/c) − |y|/c + ŷ · x/c + T_y ,   (78)
and the Hessian of the phase is −α_v². This converts the kernel of (77) into

    K_∞(p, u; x, v) ∼ ∫ exp(−iω∆τ^f_{y,z}) S_y*(ω) S_y(ω α_u/α_v) J [α_u³ (1 − ẑ · u/c) / (α_v³ (1 − ẑ · v/c))] dω dⁿz dᵐy ,   (79)

where ∆τ^f is defined by

    c∆τ^f_{y,z} = α_u[ẑ · (p − x) + (|y| − ŷ · x − cT_y)/α_v] − |y| + ŷ · p + cT_y
        = α_u ẑ · (p − x) + (α_u/α_v − 1)(|y| − cT_y) − ŷ · [(α_u/α_v) x − p]
        = α_u ẑ · (p − x) + (α_u/α_v − 1)(|y| − cT_y) − (α_u/α_v − 1) ŷ · x − ŷ · (x − p)
        = (α_u ẑ + ŷ) · (p − x) + (α_u/α_v − 1)(|y| − cT_y) − (α_u/α_v − 1) ŷ · x
        ≈ (α_u ẑ + ŷ) · (p − x) − [(u − v) · (ŷ + ẑ)/c] ŷ · x ,   (80)

where we have made the choice T_y = |y|/c in the last line and where we have used (28) to
write

    α_u/α_v ≈ 1 + (u − v) · (ŷ + ẑ)/c .   (81)
To write (79) in terms of the ambiguity function, we assume that J̃ = J/ω² is
independent of ω and use (55) to obtain

    K_∞(p, u; x, v) ∼ ∫ ṡ_y*((α_u/α_v) t − ∆τ^f_{y,z}) ṡ_y(t) J̃ [α_u (1 − ẑ · u/c) / (α_v (1 − ẑ · v/c))] dt dⁿz dᵐy .   (82)

Using (58), we obtain for the wideband small-scene point-spread function

    K_∞(p, u; x, v) ∼ ∫ A_y(α_u/α_v, ∆τ^f_{y,z}) J̃(p, u, z, y) [α_u (1 − ẑ · u/c) / (α_v (1 − ẑ · v/c))] dⁿz dᵐy .   (83)
We note that (83) no longer involves a ratio of ranges, because now the corresponding ranges
are referenced to the scene center and not to the individual point x or p.
3.4.2. Imaging in the Small-Scene, Narrowband Case. In the case of a narrowband
waveform (20), we use the matched-filter interpretation of the imaging formula:
\[
I_\infty(p,u) = \int -\frac{(4\pi)^2\,|z|\,|y|}{\omega_y^2}\, e^{-i\tilde{\varphi}_{p,u}}\, e^{+i\omega_y(1+\beta_u)t'}\, (1-\hat{z}\cdot u/c)\, \tilde{s}_y^*\big(t'+(\hat{z}+\hat{y})\cdot p/c\big)\, \psi^{\mathrm{sc}}(t',z,y)\, J(\omega_y,p,u,z,y)\, dt'\, d^nz\, d^my \tag{84}
\]
Inserting (31) into (84), we obtain for the point-spread function
\[
K_\infty^{(\mathrm{NB})}(p,u;x,v) = \int \omega_y^2\, e^{i(\tilde{\varphi}_{x,v}-\tilde{\varphi}_{p,u})}\, e^{i\omega_y(\beta_u-\beta_v)t'}\, \tilde{s}_y^*\big(t'+(\hat{z}+\hat{y})\cdot p/c\big)\, \tilde{s}_y\big(t'+(\hat{z}+\hat{y})\cdot x/c\big)\, \tilde{J}\, \frac{1-\hat{z}\cdot u/c}{1-\hat{z}\cdot v/c}\, dt'\, d^nz\, d^my\,. \tag{85}
\]
In the first exponential of equation (85) we use (32):
\[
\begin{aligned}
\tilde{\varphi}_{x,v}-\tilde{\varphi}_{p,u} &= \frac{\omega_y}{c}\Big[[(1+\beta_u)\hat{z}+\hat{y}]\cdot p - [(1+\beta_v)\hat{z}+\hat{y}]\cdot x\Big] \\
&= k_y\big[(\hat{z}+\hat{y})\cdot(p-x) + \hat{z}\cdot(\beta_u p - \beta_v x)\big]
\end{aligned}
\tag{86}
\]
where ky = ωy/c.
In (85) we also make the change of variables t = t′ + (ẑ + ŷ) · x/c, use (60) and βv = (ŷ + ẑ) · v/c to obtain
\[
K_\infty^{(\mathrm{NB})}(p,u;x,v) = \int \omega_y^2\, A\big(k_y(\hat{y}+\hat{z})\cdot(u-v),\, (\hat{z}+\hat{y})\cdot(x-p)/c\big)\, \exp\big[i(\tilde{\varphi}_{x,v}-\tilde{\varphi}_{p,u})\big]\, \exp\big({-ik_y(\beta_u-\beta_v)(\hat{z}+\hat{y})\cdot x}\big)\, \tilde{J}\, \frac{\alpha_u(1-\hat{z}\cdot u/c)}{\alpha_v(1-\hat{z}\cdot v/c)}\, d^nz\, d^my\,. \tag{87}
\]
Here we see clearly the dependence on the familiar bistatic bisector ŷ + ẑ.
4. Reduction to familiar special cases
4.1. Range-Doppler Imaging
In classical range-Doppler imaging, the range and velocity of a moving target are estimated
from measurements made by a single transmitter and coincident receiver. Thus we take z = y,
J̃ = 1, and remove the integrals in (87) or (74) to show that the point-spread function is simply
proportional to the ambiguity function. In particular, for the case of a slowly moving distant
target, (87) reduces to the familiar result
\[
K_{\mathrm{RD}}^{(\mathrm{NB})}(p,u;x,v) \propto A_y\!\Big(2\omega\,\frac{(u-v)\cdot\hat{y}}{c},\; 2\,\frac{(p-x)\cdot\hat{y}}{c}\Big)\,. \tag{88}
\]
We see that only range and down-range velocity information is obtained.
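Because the range-Doppler point-spread function is simply an ambiguity function, it can be examined numerically. The sketch below (waveform, chirp rate, and axis grids are all invented for illustration) discretizes the narrowband ambiguity function in the common convention A(ν, τ) = ∫ e^{iνt} s(t − τ) s*(t) dt; by the standard ambiguity-function inequality, |A| attains its maximum at the origin, where it equals the pulse energy.

```python
import numpy as np

def ambiguity(s, t, nus, taus):
    """Discretize A(nu, tau) = integral of e^{i nu t} s(t - tau) s*(t) dt.
    The shifted pulse is resampled by linear interpolation, zero outside."""
    dt = t[1] - t[0]
    out = np.empty((len(nus), len(taus)))
    for j, tau in enumerate(taus):
        shifted = (np.interp(t - tau, t, s.real, left=0.0, right=0.0)
                   + 1j * np.interp(t - tau, t, s.imag, left=0.0, right=0.0))
        for i, nu in enumerate(nus):
            out[i, j] = abs(np.sum(np.exp(1j * nu * t) * shifted * np.conj(s)) * dt)
    return out

t = np.linspace(-1.0, 1.0, 2001)
pulse = np.exp(1j * 20.0 * t**2)        # hypothetical chirp on [-1, 1]
nus = np.linspace(-40.0, 40.0, 81)      # Doppler axis
taus = np.linspace(-0.5, 0.5, 51)       # delay axis
A = ambiguity(pulse, t, nus, taus)      # |A| peaks at (nu, tau) = (0, 0)
```

For a chirp the support of |A| concentrates along a line ν = ητ in the delay-Doppler plane, the ridge exploited in section 4.5.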
4.1.1. Range-Doppler Imaging of a Rotating Target. If the target is rotating about a point,
then we can use the connection between position and velocity to form a target image. We
choose the center of rotation as the origin of coordinates. In this case, a scatterer at x moves
as q(O(t)x), where O denotes an orthogonal matrix. This is not linear motion, but we can
still apply our results by approximating the motion as linear over a small angle.
In particular, we write O(t) = O(0) + tȮ(0), and assume that O(0) is the identity
operator. In this case, the velocity at the point x is Ȯ(0)x, so in our notation, the phase-space
distribution is qv (x) = qȮ(0)x (x).
For the case of two-dimensional imaging, we take ŷ = (1, 0), and O is given by
\[
O(t) = \begin{bmatrix} \cos\dot{\omega}t & \sin\dot{\omega}t \\ -\sin\dot{\omega}t & \cos\dot{\omega}t \end{bmatrix}, \qquad
\dot{O}(0) = \dot{\omega}\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, \tag{89}
\]
so (88) becomes
\[
K_{\mathrm{RD}}^{(\mathrm{NB})}(p,u;x,v) \propto A_y\!\Big(2\omega\,\frac{u_1-v_1}{c},\; 2\,\frac{p_1-x_1}{c}\Big) \propto A_y\!\Big(2\omega\,\frac{\dot{\omega}(p_2-x_2)}{c},\; 2\,\frac{p_1-x_1}{c}\Big)\,. \tag{90}
\]
Thus we have the familiar result that the delay provides the down-range target components while the Doppler shift provides the cross-range components. Note that even though the velocity v₁ = ω̇x₂ can be estimated from the radar measurements, the cross-range spatial components are distorted by the unknown angular velocity ω̇.
The PSF depends on only two variables, which implies that the image obtained is an approximation to ∫ q(x₁, x₂, x₃) dx₃.
4.2. Inverse Synthetic-Aperture Radar (ISAR)
Modern ISAR methods do not actually require Doppler-shift measurements at all. Rather, the
technique depends on the target’s rotational motion to establish a pseudo-array of distributed
receivers. To show how ISAR follows from the results of this paper, we fix the coordinate
system to the rotating target. Succeeding pulses of a pulse train are transmitted from and
received at positions z = y that rotate around the target as ŷ = ŷ(θ). The pulses typically
used in ISAR are high range-resolution (HRR) ones, so that each pulse yields a ridge-like
ambiguity function Ay (ν, τ ) = Ay (τ ), with Ay (τ ) sharply peaked around τ = 0. Equation
(87) reduces to
\[
K_{\mathrm{ISAR}}^{(\mathrm{NB})}(p,u;x,v) = -\omega_0^2 \int A_y\!\Big(2\,\frac{(p-x)\cdot\hat{y}(\theta)}{c}\Big)\, \tilde{J}\,|y|\, d\theta\,, \tag{91}
\]
where we have written ω0 for ωy because the pulses typically have the same center frequency.
Note that the set of all points p such that (p − x) · ŷ(θ) = 0 forms a plane through x
with normal ŷ(θ). Consequently, since Ay (τ ) is sharply peaked around τ = 0, the support of
the integral in (91) is a collection of planes, one for each pulse in the pulse train, and ISAR
imaging is seen to be based on backprojection.
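A toy two-dimensional version of this backprojection can make the picture concrete. In the sketch below, the scatterer positions, the angular aperture, and the narrow Gaussian standing in for the sharply peaked A_y are all hypothetical; each measured projected range is smeared back along the corresponding line p · ŷ(θ) = x · ŷ(θ), the 2-D analogue of the planes just described, and the accumulated image peaks at the scatterer locations.

```python
import numpy as np

# hypothetical 2-D scene and angular aperture
scatterers = [np.array([1.0, 0.5]), np.array([-0.8, 0.2])]
thetas = np.linspace(0.0, np.pi / 2, 90)

grid = np.linspace(-2.0, 2.0, 201)
P1, P2 = np.meshgrid(grid, grid, indexing="ij")
image = np.zeros_like(P1)
width = 0.05                                  # stand-in for the sharp peak of A_y(tau)
for theta in thetas:
    yhat = np.array([np.cos(theta), np.sin(theta)])
    proj = P1 * yhat[0] + P2 * yhat[1]        # p . yhat(theta) over the image grid
    for x in scatterers:
        r = x @ yhat                          # measured projected range of this scatterer
        image += np.exp(-((proj - r) / width) ** 2)   # backproject along a line
```

Each pulse contributes a ridge; only at the true scatterer positions do all the ridges add coherently.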
Because knowledge of ŷ(θ) is needed, ISAR depends on knowledge of the rotational
motion; it is typically assumed to be uniform, and the rotation rate ω̇ must be determined by
other means.
Useful information can be obtained from the PSF as follows. If we approximate the HRR
ambiguity function by
\[
A(\tau) = \int_B \exp(i\omega\tau)\, d\omega\,, \tag{92}
\]
where B denotes the bandwidth, then we find that (91) becomes
\[
K_{\mathrm{ISAR}}^{(\mathrm{NB})}(p,u;x,v) \propto \int_B\!\!\int \exp\big(i2\omega(p-x)\cdot\hat{y}(\theta)/c\big)\, \tilde{J}\,|y|\, d\omega\, d\theta
\propto \int \exp\big(i(p-x)\cdot\xi\big)\, \Big|\frac{\partial(\omega,\theta)}{\partial\xi}\Big|\, \tilde{J}\,|y|\, d^2\xi \tag{93}
\]
where we have made the change of variables (ω, θ) ↦ ξ = 2ωŷ(θ)/c. This is a two-dimensional change of variables, so again the PSF depends on only two variables, and the image involves a projection of the third variable onto the other two.
The factor J of (87) should be chosen to be the Jacobian |∂ξ/∂(ω, θ)| = ω²J̃(θ) [i.e., the reciprocal of the factor |∂(ω, θ)/∂ξ| appearing in (93)]. The narrowband approximation uses ω ≈ ω0, so the result is
\[
K_{\mathrm{ISAR}}^{(\mathrm{NB})}(p,u;x,v) \propto \int_\Omega \exp\big(i(p-x)\cdot\xi\big)\, d^2\xi \tag{94}
\]
where Ω denotes the region in ξ-space for which we have data. This region Ω is a sector
of an annulus with width 2B and angular extent corresponding to the angular aperture. The
form (94) is useful for determining resolution of the imaging system. In particular, the Fourier
components of the target that are visible correspond to the values of ξ that are obtained as ω
ranges over the frequency band of the radar system and θ ranges over the angles of view.
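The data-collection region Ω is easy to visualize numerically. The snippet below (the frequency band and angular aperture are made-up values) samples ξ = 2ωŷ(θ)/c over a band and an angular aperture and confirms that the sampled radii span an annulus whose radial extent is set by the bandwidth.

```python
import numpy as np

c = 3.0e8
omega = 2 * np.pi * np.linspace(9.0e9, 10.0e9, 50)   # hypothetical band (rad/s)
theta = np.radians(np.linspace(-10.0, 10.0, 40))     # hypothetical angular aperture

# sample the region Omega: xi = 2 * omega * yhat(theta) / c
W, TH = np.meshgrid(omega, theta, indexing="ij")
xi1 = 2 * W * np.cos(TH) / c
xi2 = 2 * W * np.sin(TH) / c
radii = np.hypot(xi1, xi2)   # all radii lie in [2*omega_min/c, 2*omega_max/c]
```

The Fourier-domain extent of this sector of an annulus directly bounds the achievable resolution, as described above.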
4.3. Synthetic-Aperture Radar (SAR)
SAR also uses high range-resolution pulses, transmitted and received at locations along the
flight path y = y(ζ), where ζ denotes the flight path parameter. For a stationary scene,
u = v = 0, which implies that βu = βv = 0. In this case, (74) reduces to
\[
K_{\mathrm{SAR}}^{(\mathrm{NB})}(p,0;x,0) \propto \int A_y\!\Big(2\,\frac{R_{p,y(\zeta)}(0) - R_{x,y(\zeta)}(0)}{c}\Big)\, \frac{R^2_{p,y(\zeta)}(0)}{R^2_{x,y(\zeta)}(0)}\, \tilde{J}\,|\dot{y}|\, d\zeta \tag{95}
\]
where again Ay(τ) is sharply peaked around τ = 0. Thus SAR imaging reduces to backprojection over circles Rp,y(ζ)(0) = Rx,y(ζ)(0) (see figure 3).
Figure 3. For SAR, we consider a stationary scene and a sensor that (in the start-stop
approximation) does not move as it transmits and receives each pulse. Between pulses, the
sensor moves to a different location along the flight path.
If we again approximate the HRR ambiguity function by (92), then we find that (95)
becomes
\[
K_{\mathrm{SAR}}^{(\mathrm{NB})}(p,0;x,0) \propto \int \exp\Big(i2\omega\,\frac{R_{p,y(\zeta)}(0) - R_{x,y(\zeta)}(0)}{c}\Big)\, \frac{R^2_{p,y(\zeta)}(0)}{R^2_{x,y(\zeta)}(0)}\, \tilde{J}\,|\dot{y}|\, d\zeta\, d\omega
\propto \int \exp\big(i(p-x)\cdot\xi\big)\, \frac{R^2_{p,y(\zeta)}(0)}{R^2_{x,y(\zeta)}(0)}\, \Big|\frac{\partial(\omega,\zeta)}{\partial\xi}\Big|\, \tilde{J}\,|\dot{y}|\, d^2\xi \tag{96}
\]
where we have made the Taylor expansion Rp,y(ζ)(0) − Rx,y(ζ)(0) = (x − p) · R̂p,y(ζ)(0) + ··· and the change of variables (ω, ζ) ↦ ξ = 2ωR̂p,y(ζ)(0)/c. Again J should be chosen to be the reciprocal of the Jacobian |∂(ω, ζ)/∂ξ|, and we note that the reciprocal |∂ξ/∂(ω, ζ)| is indeed of the form ω²J̃, with J̃ independent of ω.
Again the fact that the PSF depends on two variables gives rise to an image of a projected
version of the target.
4.4. Doppler SAR
Doppler SAR uses a high Doppler-resolution waveform, such as an approximation to the ideal
single-frequency waveform s(t) = exp(−iω0 t) transmitted from locations along the flight
path y = y(ζ). If we transform to the sensor frame of reference, then the entire scene is
moving with velocity u = v = −ẏ(ζ) (see figure 4).
Figure 4. For Doppler SAR, we use a frame of reference in which the sensor is fixed and the
plane below is moving with a uniform velocity.
Although the stationary phase analysis of section 3.1.1 does not technically apply to
a single-frequency waveform, it does apply to the time-limited waveforms that are used in
practice. Consequently the analysis of section 3.1.1 applies to a realizable system.
For a high Doppler-resolution waveform, Ay (ν, τ ) = Ay (ν), where Ay (ν) is sharply
peaked around ν = 0. In the monostatic case, (74) reduces to
\[
K_{\mathrm{HDR}}^{(\mathrm{NB})}(p,-\dot{y}(\zeta);x,-\dot{y}(\zeta)) \propto \int A_y\big(\omega_0[\beta_p(\zeta)-\beta_x(\zeta)]\big)\, \exp\big(i\omega_0[\beta_p(\zeta)-\beta_x(\zeta)][R_{x,y}(0)-T_y]\big)\, \exp\big({-ik_0\beta_p(\zeta)[R_{x,y}(0)-R_{p,y}(0)]}\big)\, \frac{R^2_{p,y(\zeta)}(0)}{R^2_{x,y(\zeta)}(0)}\, \tilde{J}\, d\zeta \tag{97}
\]
where from (23), βx(ζ) ≡ βx,−ẏ(ζ) = −2R̂x,y(ζ)(0) · v/c = 2R̂x,y(ζ)(0) · ẏ(ζ)/c. Again the imaging process reduces to backprojection over hyperbolas R̂p,y(ζ)(0) · ẏ(ζ) = R̂x,y(ζ)(0) · ẏ(ζ).
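A toy two-dimensional illustration of this iso-Doppler backprojection (with invented geometry and an invented narrow-Gaussian stand-in for the sharply peaked A_y) accumulates, for each sensor position, the locus of image points whose Doppler direction cosine matches that of the target:

```python
import numpy as np

# hypothetical geometry: sensor positions along a straight path, horizontal velocity
target = np.array([0.4, 0.1])
positions = [np.array([-2.0 + 0.04 * k, 3.0]) for k in range(101)]
vhat = np.array([1.0, 0.0])                   # flight direction

grid = np.linspace(-1.0, 1.0, 101)
P1, P2 = np.meshgrid(grid, grid, indexing="ij")
image = np.zeros_like(P1)
width = 0.002                                 # stand-in for the sharp peak of A_y(nu)
for y in positions:
    d = y - target
    beta_x = (d @ vhat) / np.linalg.norm(d)   # measured Doppler direction cosine
    D1, D2 = y[0] - P1, y[1] - P2
    beta_p = (D1 * vhat[0] + D2 * vhat[1]) / np.sqrt(D1**2 + D2**2)
    image += np.exp(-((beta_p - beta_x) / width) ** 2)   # iso-Doppler backprojection
```

The iso-Doppler loci from different sensor positions intersect coherently only at the target position (its mirror across the flight line lies outside the grid here).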
If we again approximate the ridge-like ambiguity function by A(ν) = ∫ exp(iνt′) dt′, then (97) becomes
\[
K_{\mathrm{HDR}}^{(\mathrm{NB})}(p,-\dot{y}(\zeta);x,-\dot{y}(\zeta)) \propto \int \exp\big(it'\omega_0[\beta_p(\zeta)-\beta_x(\zeta)]\big)\, \exp\big(i\omega_0[\beta_p(\zeta)-\beta_x(\zeta)][R_{x,y}(0)-T_y]\big)\, \exp\big({-ik_0\beta_p(\zeta)[R_{x,y}(0)-R_{p,y}(0)]}\big)\, \frac{R^2_{p,y(\zeta)}(0)}{R^2_{x,y(\zeta)}(0)}\, \tilde{J}\, dt'\, d\zeta
\]
\[
\propto \int \exp\big(i(p-x)\cdot\xi\big)\, \exp\big(i\omega_0[\beta_p(\zeta)-\beta_x(\zeta)][R_{x,y}(0)-T_y]\big)\, \exp\big({-ik_0\beta_p(\zeta)[R_{x,y}(0)-R_{p,y}(0)]}\big)\, \frac{R^2_{p,y(\zeta)}(0)}{R^2_{x,y(\zeta)}(0)}\, \Big|\frac{\partial(t',\zeta)}{\partial\xi}\Big|\, \tilde{J}\, d^2\xi \tag{98}
\]
where we have made the Taylor expansion βp(ζ) − βx(ζ) = (p − x) · ∇pβp(ζ) and then made the change of variables (t′, ζ) ↦ ξ = t′ω0∇pβp(ζ). We note that again J should be |∂ξ/∂(t′, ζ)| = ω0²J̃, where J̃ is independent of ω0. Again the image involves a projection.
4.5. SAR for Other Ridge-like Ambiguity Functions
In the above scenario, SAR reconstruction can be done with a sensor moving along the path y = y(ζ) while transmitting other waveforms. In particular, waveforms (such as chirps) can be used which produce a ridge-like ambiguity function along any line ν = ητ in the delay-Doppler plane. Then (74) reduces to
\[
K_\eta^{(\mathrm{NB})}(p,-\dot{y}(\zeta);x,-\dot{y}(\zeta)) \propto \int A_y\big(\omega_0[\beta_p(\zeta)-\beta_x(\zeta)] - 2\eta(R_{p,y(\zeta)}(0)-R_{x,y(\zeta)}(0))/c\big)\, \tilde{J}\, \exp\big(i\omega_0[\beta_p(\zeta)-\beta_x(\zeta)][R_{x,y}(0)-T_y]\big)\, \exp\big({-ik_0\beta_p(\zeta)[R_{x,y}(0)-R_{p,y}(0)]}\big)\, \frac{R^2_{p,y(\zeta)}(0)}{R^2_{x,y(\zeta)}(0)}\, d\zeta\,, \tag{99}
\]
which corresponds to backprojection over the planar curve Lp(ζ) = Lx(ζ), where
\[
L_p(\zeta) = -\omega_0\, \hat{R}_{p,y(\zeta)}\cdot\dot{y}(\zeta) + \eta\, R_{p,y(\zeta)}(0)\,. \tag{100}
\]
The curve Lp(ζ) is the intersection of the imaging plane with a surface generated by rotating a limaçon of Pascal (figure 5) about the flight velocity vector. We note that for a slowly-moving sensor and a side-looking beam pattern, the curve obtained from the limaçon is almost indistinguishable from a circle.
Figure 5. A limaçon of Pascal. The origin is located at the transmitter position y.
If we approximate the ridge-like ambiguity function as A(ν − ητ) = ∫ exp[iω(ν − ητ)] dω, then (99) can be written
\[
K_\eta^{(\mathrm{NB})}(p,u;x,v) \propto \int \exp\big(i2\omega[L_p(\zeta)-L_x(\zeta)]/c\big)\, \tilde{J}\, \exp\big(i\omega_0[\beta_p(\zeta)-\beta_x(\zeta)][R_{x,y}(0)-T_y]\big)\, \exp\big({-ik_0\beta_p(\zeta)[R_{x,y}(0)-R_{p,y}(0)]}\big)\, \frac{R^2_{p,y(\zeta)}(0)}{R^2_{x,y(\zeta)}(0)}\, d\omega\, d\zeta
\]
\[
\propto \int \exp\big(i(p-x)\cdot\xi\big)\, \tilde{J}\, \exp\big(i\omega_0[\beta_p(\zeta)-\beta_x(\zeta)][R_{x,y}(0)-T_y]\big)\, \exp\big({-ik_0\beta_p(\zeta)[R_{x,y}(0)-R_{p,y}(0)]}\big)\, \frac{R^2_{p,y(\zeta)}(0)}{R^2_{x,y(\zeta)}(0)}\, \Big|\frac{\partial(\omega,\zeta)}{\partial\xi}\Big|\, d^2\xi \tag{101}
\]
where we have made the Taylor expansion Lp(ζ) − Lx(ζ) = (p − x) · ∇pLp(ζ) + ··· and have made the change of variables (ω, ζ) ↦ ξ = 2ω∇pLp(ζ)/c.
4.6. Diffraction Tomography
In this case the data is the far-field pattern (34). Our inversion formula is (84), where s̃y = 1
and there is no integration over t. The resulting formula is
\[
I_{k_0}(p) = \int e^{ik_0(\hat{z}+\hat{y})\cdot p}\, Q_{k_0}(\hat{z},\hat{y})\, F_\infty(\hat{y},\hat{z})\, d\hat{y}\, d\hat{z} \tag{102}
\]
where Qk0 is the filter chosen according to [6]:
\[
Q_{k_0} = \frac{k_0^3}{(2\pi)^4}\, |\hat{y}+\hat{z}| \tag{103}
\]
which corrects for the Jacobian that is ultimately needed. Here the factor of k₀³ corresponds to three-dimensional imaging.
The associated point-spread function is obtained from substituting (34) into (102):
\[
\begin{aligned}
I_{k_0}(p) &= \int e^{ik_0(\hat{z}+\hat{y})\cdot p}\, Q_{k_0}(\hat{z},\hat{y}) \int e^{-ik_0(\hat{z}+\hat{y})\cdot x}\, q(x)\, d^3x\, d\hat{y}\, d\hat{z} \\
&= \int \frac{k_0^3}{(2\pi)^4} \int e^{ik_0(\hat{z}+\hat{y})\cdot(p-x)}\, |\hat{z}+\hat{y}|\, d\hat{y}\, d\hat{z}\; q(x)\, d^3x \\
&= \int\!\!\int \chi_{2k}(\xi)\, e^{i\xi\cdot(p-x)}\, d^3\xi\; q(x)\, d^3x
\end{aligned}
\tag{104}
\]
where χ2k denotes the function that is one inside the ball of radius 2k and zero outside and
where we have used the Devaney identity [6]
\[
\int \chi_{2k}(\xi)\, e^{i\xi\cdot r}\, d^3\xi = \frac{k^3}{(2\pi)^4} \int_{S^2\times S^2} e^{ik(\hat{y}+\hat{z})\cdot r}\, |\hat{y}+\hat{z}|\, d\hat{y}\, d\hat{z}\,. \tag{105}
\]
The proof of this identity is outlined in the appendix. The two-dimensional version is [7]
\[
\int \chi_{2k}(\xi)\, e^{i\xi\cdot r}\, d^2\xi = \int_{S^1\times S^1} e^{ik(\hat{y}+\hat{z})\cdot r}\, \Big|\frac{\partial\xi}{\partial(\hat{y},\hat{z})}\Big|\, d\hat{y}\, d\hat{z} \tag{106}
\]
where ŷ = (cos θ, sin θ), ẑ = (cos θ′, sin θ′), and
\[
\Big|\frac{\partial\xi}{\partial(\hat{y},\hat{z})}\Big| = \sqrt{1-(\hat{y}\cdot\hat{z})^2}\,. \tag{107}
\]
4.7. Moving Target Tomography.
Moving Target Tomography has been proposed in the paper [9], which models the signal using
a simplified version of (21). For this simplified model, our imaging formula (70) reduces to
the approach advocated in [9], namely matched-filter processing with a different filter for each
location p and for each possible velocity u.
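A minimal one-dimensional sketch of such a matched-filter bank follows; the chirp waveform, propagation speed, and grids are invented toy parameters and not the signal model of [9]. For each hypothesized range p and speed u, the data are correlated against the corresponding delayed, Doppler-shifted replica, and the bank peaks at the true (p, u).

```python
import numpy as np

# hypothetical 1-D toy: one matched filter per (range p, speed u) hypothesis
c = 300.0                      # toy propagation speed
omega0 = 2 * np.pi * 10.0      # carrier (rad/s)
gamma = 5.0                    # chirp rate (rad/s^2)
t = np.linspace(0.0, 4.0, 4000)

def echo(p, u):
    """Delayed, Doppler-shifted chirp replica for a scatterer at range p, speed u."""
    tau = 2.0 * p / c                  # round-trip delay
    nu = 2.0 * omega0 * u / c          # Doppler shift
    return np.exp(1j * (gamma * (t - tau) ** 2 + nu * t))

data = echo(120.0, 15.0)               # simulated measurement

p_grid = np.linspace(100.0, 140.0, 41)
u_grid = np.linspace(0.0, 30.0, 31)
image = np.array([[abs(np.vdot(echo(p, u), data)) for u in u_grid]
                  for p in p_grid])    # matched-filter output over phase space
```

Note that a pure tone would leave the range dimension unresolved; the chirp's bandwidth is what localizes the peak in both coordinates, echoing the ridge discussion of section 4.5.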
5. Conclusions and Future Work
We have developed a linearized imaging theory that combines the spatial, temporal, and
spectral aspects of scattered waves.
This imaging theory is based on the general (linearized) expression we derived for waves
scattered from moving objects, which we model in terms of a distribution in phase space. The
expression for the scattered waves is of the form of a Fourier integral operator; consequently
we form a phase-space image as a filtered adjoint of this operator.
The theory allows for activation of multiple transmitters at different times, but the theory
is simpler when they are all activated so that the waves arrive at the target at roughly the same
time.
The general theory in this paper reduces to familiar results in a variety of special cases.
We leave for the future further analysis of the point-spread function. We also leave for the
future the case in which the sensors are moving independently, and the problem of combining
these ideas with tracking techniques.
Appendix A. The Devaney Identity in R3
This proof of (105) follows [6]. First we note that for r ∈ ℝ³,
\[
\begin{aligned}
\int \frac{\chi_{2k}(\xi)}{|\xi|}\, e^{i\xi\cdot r}\, d^3\xi &= \int_0^{2k}\!\!\int_{S^2} e^{i\xi\cdot r}\, d^2\hat{\xi}\; |\xi|\, d|\xi| \\
&= 4\pi \int_0^{2k} \frac{\sin|\xi|r}{|\xi|r}\, |\xi|\, d|\xi| \\
&= \frac{8\pi}{r^2}\, \frac{1-\cos(2kr)}{2} = \frac{8\pi}{r^2}\, \sin^2 kr
\end{aligned}
\tag{A.1}
\]
where we have written r = |r|. Using Stone's formula [14]
\[
\frac{\sin kr}{kr} = \frac{1}{4\pi} \int e^{ik\hat{y}\cdot r}\, d^2\hat{y}\,, \tag{A.2}
\]
we can write (A.1) as
\[
\int \frac{\chi_{2k}(\xi)}{|\xi|}\, e^{i\xi\cdot r}\, d^3\xi = \frac{k^2}{(2\pi)^2} \int e^{ik(\hat{y}+\hat{z})\cdot r}\, d^2\hat{y}\, d^2\hat{z}\,. \tag{A.3}
\]
Next we write
\[
\begin{aligned}
\int \chi_{2k}(\xi)\, e^{i\xi\cdot r}\, d^3\xi &= \int \frac{\chi_{2k}(\xi)}{|\xi|}\, e^{i\xi\cdot r}\, |\xi|\, d^3\xi \\
&= \frac{k^2}{(2\pi)^4} \int e^{i\xi\cdot r} \int e^{-i\xi\cdot r'} \int\!\!\int e^{ik(\hat{y}+\hat{z})\cdot r'}\, d\hat{y}\, d\hat{z}\; |\xi|\, d^3\xi\, dr'\,.
\end{aligned}
\tag{A.4}
\]
Carrying out the r′ integration results in δ(ξ − k(ŷ + ẑ)); finally, carrying out the ξ integration results in (105).
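The radial computation in (A.1) is straightforward to check numerically. The quadrature below (sample values of k and r chosen arbitrarily) verifies that 4π ∫₀^{2k} [sin(ρr)/(ρr)] ρ dρ agrees with the closed form 8π sin²(kr)/r².

```python
import numpy as np

def lhs(k, r, n=200000):
    # midpoint-rule quadrature of 4*pi * int_0^{2k} sin(rho*r)/(rho*r) * rho d(rho),
    # the spherical-angle reduction of int chi_{2k}(xi)/|xi| e^{i xi . r} d^3 xi
    h = 2.0 * k / n
    rho = (np.arange(n) + 0.5) * h
    return float(np.sum(4.0 * np.pi * np.sin(rho * r) / r) * h)

def rhs(k, r):
    # closed form from (A.1)
    return 8.0 * np.pi * np.sin(k * r) ** 2 / r ** 2
```

The two sides agree to quadrature accuracy for generic (k, r), as the exact antiderivative (1 − cos 2kr)/r² predicts.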
Acknowledgments
This work was supported by the Office of Naval Research, by the Air Force Office of Scientific
Research‡ under agreement number FA9550-06-1-0017, by Rensselaer Polytechnic Institute,
the Institute for Mathematics and Its Applications, and by the National Research Council.
References
[1] N. Bleistein, J.K. Cohen, and J.W. Stockwell, The Mathematics of Multidimensional Seismic Inversion,
(Springer: New York, 2000)
[2] B. Borden and M. Cheney, “Synthetic-Aperture Imaging from High-Doppler-Resolution Measurements,”
Inverse Problems, 21, pp. 1–11 (2005)
[3] I. Bradaric, G.T. Capraro, D.D. Weiner, and M.C. Wicks, “Multistatic Radar Systems Signal Processing”,
Proceedings of IEEE Radar Conference 2006.
[4] J. Cooper, “Scattering of Electromagnetic Fields by a Moving Boundary: The One-Dimensional Case”,
IEEE Antennas and Propagation, vol. AP-28, No. 6, pp. 791–795 (1980).
[5] C.E. Cook and M. Bernfeld, Radar Signals (Academic: New York, 1967)
[6] A.J. Devaney, “Inversion formula for inverse scattering within the Born approximation,” Optics Letters, 7,
pp. 111–112 (1982).
[7] A.J. Devaney, “A filtered backpropagation algorithm for diffraction tomography”, Ultrasonic Imaging, 4,
pp. 336–350 (1982).
[8] B. Friedlander and B. Porat, "VSAR: A High Resolution Radar System for Detection of Moving Targets,"
IEE Proc. Radar, Sonar, Navig., 144, pp. 205–18 (1997)
[9] B. Himed, H. Bascom, J. Clancy, and M.C. Wicks, “Tomography of Moving Targets (TMT)”, in Sensors,
Systems, and Next-Generation Satellites V., ed. H. Fujisada, J.B. Lurie, and K. Weber, Proc. SPIE Vol.
4540 (2001) pp. 608 – 619.
[10] R. Klemm, Principles of Space-Time Adaptive Processing, (Institution of Electrical Engineers: London,
2002)
[11] N. Levanon, Radar Principles, (Wiley, New York, 1988)
[12] M.J. Minardi, L.A. Gorham, and E.G. Zelnio, "Ground Moving Target Detection and Tracking Based on
Generalized SAR Processing and Change Detection," Proceedings of SPIE, 5808, pp. 156–165 (2005)
[13] C.J. Nolan and M. Cheney, “Synthetic Aperture Inversion,” Inverse Problems, 18, pp. 221–36 (2002)
[14] M. Reed and B. Simon, Methods of Modern Mathematical Physics. I. Functional Analysis (Academic
Press: New York, 1972)
[15] M. Soumekh, “Bistatic Synthetic Aperture Radar Inversion with Application in Dynamic Object Imaging”,
IEEE Trans. Signal Processing, Vol. 39, No. 9, pp. 2044–2055 (1991).
[16] M. Stuff, M. Biancalana, G. Arnold, and J. Garbarino, “Imaging Moving Objects in 3D From Single
Aperture Synthetic Aperture Data,” Proceedings of IEEE Radar Conference 2004, pp. 94–98
[17] D. Swick, “A Review of Wideband Ambiguity Functions,” Naval Research Laboratory Rep. 6994 (1969)
[18] T. Tsao, M. Slamani, P. Varshney, D. Weiner, H. Schwarzlander, and S. Borek, “Ambiguity Function for a
Bistatic Radar,” IEEE Trans. Aerospace & Electronic Systems, 33, pp. 1041–51 (1997)
[19] N.J. Willis, Bistatic Radar (Artech House: Norwood, MA, 1991)
[20] N.J. Willis and H.D. Griffiths, Advances in Bistatic Radar (SciTech Publishing, Raleigh, North Carolina,
2007)
[21] N.J. Willis, “Bistatic Radar,” in Radar Handbook, M. Skolnik, ed., (McGraw-Hill: New York, 1990)
‡ Consequently the U.S. Government is authorized to reproduce and distribute reprints for Governmental
purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those
of the authors and should not be interpreted as necessarily representing the official policies or endorsements,
either expressed or implied, of the Air Force Research Laboratory or the U.S. Government.
[22] P.M. Woodward, Probability and Information Theory, with Applications to Radar, (McGraw-Hill: New
York, 1953)
[23] B. Yazici, M. Cheney, and C.E. Yarman, "Synthetic-aperture inversion in the presence of noise and clutter",
Inverse Problems, 22, pp. 1705–1729 (2006)