Data-driven motion compensation techniques for noncooperative ISAR imaging

This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TAES.2017.2756518, IEEE
Transactions on Aerospace and Electronic Systems
Risto Vehmas, Student Member, IEEE, Juha Jylhä, Minna Väilä, Juho Vihonen, Member, IEEE, and Ari Visa,
Senior Member, IEEE
Abstract—We consider the data-driven motion compensation
problem in inverse synthetic aperture radar (ISAR) imaging.
We present optimization-based ISAR techniques and propose
improvements to the range alignment, time-window selection,
autofocus, time-frequency-based image reconstruction, and cross-range scaling procedures. In experiments, the improvements
reduced the computational burden and increased the image
contrast by 50 percent at best and 28 percent on average in
several test cases including changing translational and rotational
motion.
Index Terms—Inverse synthetic aperture radar, radar imaging,
motion compensation, optimization, image contrast
I. INTRODUCTION
Inverse synthetic aperture radar (ISAR) is a well-established
technique for high-resolution radar imaging of moving objects.
The principles of ISAR are analogous to synthetic aperture
radar (SAR) and have been introduced in the literature in great
detail in the 1980s (e.g., in [1]–[3]). A two-dimensional high-resolution image of an object provides beneficial information
for target recognition applications. This imaging capability
renders ISAR a valuable tool in applications such as maritime
and air surveillance.
The traditional arrangement of noncooperative ISAR is to
consider a stationary monostatic radar and a moving object.
The radar obtains a time series of high-resolution range
profiles using a suitable high-bandwidth waveform. The large
antenna required for high cross-range resolution is synthesized
by exploiting the relative motion between the radar and the object. The synthetic phased array antenna created by the object
in different spatial positions can in principle be steered and
focused to produce high cross-range resolution. However, to
be able to focus this synthetic phased array antenna correctly,
the relative motion has to be known very accurately. When
the object is noncooperative, this motion is unknown and has
to be estimated. This situation calls for what is referred to as
data-driven motion compensation.
ISAR techniques can also be used to image moving objects
using a space- or airborne SAR sensor if the relative motion
between the radar and the object is suitable. This subject was
first introduced in Raney’s paper [4] and has been studied in
the SAR literature ever since, attracting a great deal of interest
more recently [5]–[12]. These techniques have an increasing
number of applications in space-, air-, and ground-based radar imaging. The trend of this technology is moving towards increased spatial resolution, which motivates the development of new algorithms that are capable of delivering enhanced performance, preferably in real time.

Authors' address: Tampere University of Technology, Laboratory of Signal Processing, P.O. Box 553, FI-33101 Tampere, Finland, e-mail: risto.vehmas@tut.fi
A. Related work on data-driven motion compensation
Traditionally, the problem of data-driven motion compensation in ISAR is divided into several consecutive steps. The
reason for this is simple: the problem is too complex and
time-consuming to solve using a more direct approach. In a
two-dimensional setting, a direct non-approximated approach
would require three degrees of freedom to be estimated for
each element of the synthetic phased array antenna (the
position and orientation of the object). To make the problem
tractable, suitable approximations and simplifying choices are
needed in the development of an ISAR imaging algorithm.
In ISAR processing, a common practice is to decompose the
motion of the object into two parts: translational motion and
rotational motion. These concepts are defined more carefully in
Section II, but the important distinction between them is their
significance for cross-range processing. The translational motion is the undesired component, whereas the rotational motion
is essential and required for obtaining cross-range resolution.
The translational motion needs to be taken into account in the
imaging process, and its compensation is usually the first step
of the ISAR algorithm. Translational motion compensation is
usually performed in two parts: range alignment and autofocus. Range alignment is used to compensate for translational
motions larger in magnitude than the range resolution [13]. A
widely used approach is to minimize (or maximize) a quality
measure calculated from the shifted amplitude envelopes of
the range profiles [14]–[19]. After the range alignment procedure, the residual translational motions are corrected using an
autofocus algorithm. Autofocus algorithms have been studied
extensively in the literature; the most prominent techniques are
based either on the phase gradient autofocus algorithm [20],
[21] or contrast optimization autofocus [14], [22]–[25].
After the translational motion is compensated for, the
Doppler histories caused by the rotational motion can be used
to obtain a high cross-range resolution. To obtain a correctly
focused image, the nature of the rotational motion needs to be
taken into account in the image reconstruction. The process
of estimating the unknown rotational motion and using it in
the image reconstruction is generally more complicated than
the translational motion compensation. The simplest way to
perform the image reconstruction is with a one-dimensional
0018-9251 (c) 2017 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
Fourier transform, that is, to assume a linear phase progression across the synthetic aperture. However, when the range
resolution is extremely high and a similarly high resolution is
desired in the cross-range direction, this simple range-Doppler
algorithm rarely produces an acceptable result. The reason
for this is the following: as the rotational angle increases,
scatterers start to migrate through range resolution cells and
the linear phase approximation breaks down. Moreover, if
the rotational motion is not uniform, the phase progression
is nonlinear even during a short time interval. As solutions
to these problems, a technique called keystone formatting
partially compensates for the range cell migration [6], [9],
[26], [27], while time-frequency representations are used to
mitigate the effects of nonlinear phase histories [28]–[33].
The former approach was first introduced by Perry et al. [6]
in the context of airborne SAR imaging of moving objects.
The approach based on time-frequency representations was
introduced by Chen [28], [30] in the 1990s and has since
matured into a well-established technique [29]–[38] in ISAR
processing. For example, the S-method has been demonstrated
to greatly surpass the conventional Fourier transform-based
range-Doppler approach in [37].
Performing the ISAR image reconstruction based on the
aforementioned techniques eliminates the need for explicitly
estimating the rotational motion parameters of the object.
These parameters are, however, required to determine the
spatial cross-range scale of the resulting ISAR image. This
problem, which is called cross-range scaling in the ISAR literature, is generally quite difficult to solve. Existing approaches
are based on either estimating high-order phase coefficients
from the range-compressed signal [39]–[41] or a suitable
optimization approach [42], [43]. The optimization approaches
rely on dividing the synthetic aperture into two or more parts
to produce multiple ISAR images, which are scaled and rotated
versions of each other.
B. Novel contributions detailed
Producing a very high cross-range resolution (≈ 10 cm)
in the presence of complicated target motions is an open
problem within the ISAR imaging community. Previously,
several approaches have been shown to produce satisfactory
results in the individual parts of the motion compensation [15],
[17], [19], [22], [25], [42], [43], but often at the cost of a high
computational burden. However, the aforementioned papers
and references therein define several suitable approaches for
different parts of the data-driven motion compensation process.
We use and combine them into a computationally efficient noncooperative ISAR imaging algorithm based on mathematical
optimization. Our novel contributions can be listed as follows:
1) for the global range alignment method [15], we propose
using a carefully selected initial guess, after which we
derive an expression for the gradient of the loss function
and propose new loss functions. Together, these novelties
lead to increased estimation performance and reduced
computational burden;
2) we propose including the keystone formatting and autofocus procedures in the well-established time window optimization procedure [44], yielding a higher cross-range
resolution in cases with randomness in target motion over
a coherent processing interval;
3) we show that an expression for the second order partial
derivatives of the loss function can be derived, which
improves the contrast optimization autofocusing when
supplemented with low-complexity mathematical optimization;
4) we show how the rotation correlation [42] and polar
mapping methods [43] can be efficiently combined to
provide a straightforward solution to the cross-range
scaling problem; and
5) we propose using a contrast optimization procedure for
the time-frequency representations typical in ISAR imaging. This procedure reduces the spatially variant blurring
caused by the nonuniform rotational motion of the target.
Together, these contributions yield a higher spatial resolution
under complex target motion dynamics, which is experimentally validated using an X-band ISAR system. In reference to
the well-established optimization-based ISAR processing, the
performance is improved by some 28 percent as measured by
the image contrast, a standard quality measure in the cited
literature, with a lower computational cost.
The paper is organized in the following manner. Section II recalls the conventional ISAR signal model for a two-dimensional imaging geometry. Then Section III presents the
data-driven motion compensation and imaging algorithms as
well as the proposed improvements for them which lead to our
ISAR algorithm. Section IV demonstrates our ISAR algorithm
with a numerical example using measured radar data. Finally,
Section V draws conclusions.
II. ISAR SIGNAL MODEL
This Section analyzes the conventional ISAR signal model
[13], [14], [30], [34], [38], [39], [41], [42], [44] and its
underlying assumptions. The model is appropriate for a two-dimensional imaging geometry with a moving object and a
monostatic stationary radar. The model we use is extensively
discussed in references [34], [38]. The signal model is reviewed for two reasons. First, it is essential for understanding
the significance of different parts of any ISAR algorithm.
Second, as the desired resolution of the ISAR image increases,
it is important to carefully examine each simplifying approximation that is made in the analysis. Readers familiar with
basic ISAR principles may skip this Section and continue to
Section III.
Fig. 1 illustrates a two-dimensional ISAR geometry. The object of interest is moving in the (x, y) coordinate system. The (x′, y′) coordinate system (object frame of reference) is rigidly attached to the object and is a shifted and rotated version of the (x, y) system (radar frame of reference). These definitions mean that the motion of the object is completely described by the location of the origin of the object frame of reference r₀ (the line-of-sight vector) and the angle θ between any unit vector in the object frame and r₀. The origin of the (x′, y′) system is assumed to be located in the object's center of mass. As the aspect angle θ changes, the range ‖R_p(t)‖ between each different point on the object and the
radar changes uniquely. Slow-time t is the variable which
increases along the synthetic aperture and remains constant
during the formation of a single range profile. Fig. 1 represents
the imaging geometry for a fixed value of t, and as the
object moves, the primed coordinate system shifts and rotates
accordingly.
Assuming that the radar uses a pulsed waveform to produce a series of high-resolution range profiles, we make the usual start-stop approximation, meaning that the object is assumed to be stationary between the transmission and reception of a single pulse. We assume that the principle of superposition applies for the back-scattered echoes, in which case the point target response (PTR) can be used to completely describe
the received signal. Assuming that a linearly frequency modulated (LFM) waveform and pulse compression are used, the PTR $ss_p : \mathbb{R}^2 \to \mathbb{C}$ can be expressed in closed form. After range compression and quadrature demodulation, the PTR as a function of radial distance (range) r and slow-time t is

$$ss_p(r, t) = \operatorname{sinc}\!\left(\frac{2B}{c}\,(r - R_p(t))\right) e^{-i\frac{4\pi}{\lambda}R_p(t)}, \qquad (1)$$

where λ is the carrier wavelength, $R_p(t) = \lVert \mathbf{R}_p(t) \rVert$, B is the temporal frequency bandwidth of the LFM waveform, and c is the propagation speed of the radio wave. Equation (1) can be derived by using the correlation theorem and the principle of stationary phase and assuming that the transmitted signal is band-limited [45].
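As a quick numerical illustration of the PTR (1), the following sketch (a toy example; the radar parameters are assumed, not taken from the paper) confirms that the sinc envelope peaks at R_p and that its first null lies c/(2B) away, the usual range resolution:

```python
import numpy as np

# Assumed example parameters (illustrative only, not from the paper)
c = 3e8       # propagation speed [m/s]
B = 1e9       # LFM bandwidth [Hz], so c/(2B) = 0.15 m range resolution
lam = 0.03    # carrier wavelength [m], X-band-like
Rp = 1000.0   # scatterer range [m]

r = np.linspace(Rp - 2.0, Rp + 2.0, 4001)   # 1 mm range grid
# Point target response after range compression, eq. (1);
# np.sinc(x) = sin(pi x)/(pi x), so the first nulls sit at r - Rp = +-c/(2B)
ssp = np.sinc(2 * B / c * (r - Rp)) * np.exp(-1j * 4 * np.pi / lam * Rp)

peak = r[np.argmax(np.abs(ssp))]
print(peak, c / (2 * B))   # envelope peaks at Rp; resolution is 0.15 m
```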
The most important quantity in this analysis is $R_p(t)$, since it determines both the range location of the amplitude envelope and the phase of the PTR (1). From Fig. 1, $\mathbf{R}_p = \mathbf{r}_0 - \mathbf{r}_p$, from which

$$R_p^2 = r_0^2 + r_p^2 - 2\,\mathbf{r}_0 \cdot \mathbf{r}_p = r_0^2 + r_p^2 - 2 r_0 r_p \cos\!\left(\theta_p + \theta + \frac{\pi}{2}\right) = r_0^2 + r_p^2 + 2 r_0 r_p \sin(\theta_p + \theta) \qquad (2)$$

can easily be deduced. In (2) we have denoted $r_0 = \lVert \mathbf{r}_0 \rVert$ and $r_p = \lVert \mathbf{r}_p \rVert$. The aspect angle θ is defined by (see Fig. 1) $\hat{\mathbf{r}}_0 \cdot \hat{\mathbf{y}}' = \cos(\theta + \frac{\pi}{2})$, where $\hat{\mathbf{r}}_0 = \mathbf{r}_0 / r_0$ and $\hat{\mathbf{y}}'$ is a unit vector pointing in the y′-direction. The angle $\theta_p$ is the angle between $\mathbf{r}_p$ and $\hat{\mathbf{y}}'$, which is a constant because the (x′, y′) coordinate system is assumed to be rigidly attached to the object.
Notably, when the object is in the far field, we have $r_0 \gg r_p$. This observation means that we are motivated to consider $R_p$ as a function of $r_p$ and retain only terms up to the first order in its Taylor expansion. We have

$$\frac{\partial R_p}{\partial r_p} = \frac{r_0 \sin(\theta_p + \theta) + r_p}{\sqrt{r_0^2 + r_p^2 + 2 r_0 r_p \sin(\theta_p + \theta)}}, \qquad (3)$$

which leads to

$$R_p(r_p) = R_p(0) + \frac{\partial R_p(0)}{\partial r_p}\, r_p + O(r_p^2) = r_0 + r_p \sin(\theta_p + \theta) + O(r_p^2). \qquad (4)$$
The second order term in the expansion (4) gives us a criterion that can be used to evaluate the applicability of the far-field approximation. Namely, the magnitude of the second order term should be significantly smaller than half the carrier wavelength λ. This leads to the condition

$$\frac{r_p^2}{r_0} \cos^2(\theta_p + \theta) \ll \frac{\lambda}{2}. \qquad (5)$$
Expanding the sine term in equation (4) and denoting $y_p = r_p \cos\theta_p$ and $x_p = r_p \sin\theta_p$ leads to

$$R_p \approx r_0 + x_p \cos\theta + y_p \sin\theta. \qquad (6)$$

In equation (6), $x_p$ and $y_p$ are constants that are the initial coordinates of the point scatterer in the primed coordinate system. As the object moves, both $r_0$ and θ change as a function of slow-time t, so accounting for the motion of the object results in the expression

$$R_p(t) \approx r_0(t) + x_p \cos\theta(t) + y_p \sin\theta(t). \qquad (7)$$
Under this approximation, the motion of each point can thus
be divided into a spatially invariant (the same for every point
p) term r0 , which is called translational motion, and a spatially
variant (depends on p) part containing the θ-dependence,
which is called rotational motion.
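The decomposition (7) and the criterion (5) are easy to sanity-check numerically. The following sketch (with assumed geometry values chosen purely for illustration) compares the exact range from (2) with the first order approximation (6) over a small aspect angle interval:

```python
import numpy as np

# Assumed example geometry (illustrative values, not from the paper)
lam = 0.03                           # wavelength [m]
r0, rp, theta_p = 5000.0, 10.0, 0.3  # radar range, scatterer offset, fixed angle

for theta in np.linspace(0.0, 0.2, 5):
    # Exact range from the law of cosines, eq. (2)
    Rp_exact = np.sqrt(r0**2 + rp**2 + 2 * r0 * rp * np.sin(theta_p + theta))
    # First order far-field approximation, eqs. (4)/(6)
    xp, yp = rp * np.sin(theta_p), rp * np.cos(theta_p)
    Rp_approx = r0 + xp * np.cos(theta) + yp * np.sin(theta)
    # Neglected second order term; criterion (5) demands it be << lambda/2
    second_order = rp**2 / (2 * r0) * np.cos(theta_p + theta)**2
    assert second_order < lam / 2
    assert abs(Rp_exact - Rp_approx) < lam / 2
```

For this geometry the neglected term is below a centimeter, so the far-field phase model holds to well within half a wavelength.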
The Fourier transform of the PTR (1) with respect to r can be expressed in closed form as

$$Ss_p(k_r, t) = \mathcal{F}_{r \to k_r}\{ss_p(r, t)\} = \Pi\!\left(\frac{k_r c}{2\pi B}\right) e^{-i2(k_r + k_c)R_p(t)} \approx \Pi\!\left(\frac{k_r c}{2\pi B}\right) e^{-i2(k_r + k_c)[r_0(t) + x_p\cos\theta(t) + y_p\sin\theta(t)]}, \qquad (8)$$

where $k_c = 2\pi/\lambda$ is the carrier spatial frequency, $k_r \in [-\pi B/c,\, \pi B/c]$ is the baseband spatial frequency variable, and Π is the rectangle function. Next, we assume that the object has a continuous reflectivity function g, which does not depend on θ or $k_r$. Using the principle of superposition and for simplicity neglecting the effect of the band-limited nature of the signal (i.e., the rectangle function in (8)), we get

$$Ss(k_r, t) = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} g(x_p, y_p)\, Ss_p(k_r, t)\, dx_p\, dy_p = e^{-i2k_r' r_0(t)} \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} g(x, y)\, e^{-i2k_r'[x\cos\theta(t) + y\sin\theta(t)]}\, dx\, dy = e^{-i2k_r' r_0(t)}\, G(2k_r'\cos\theta(t),\, 2k_r'\sin\theta(t)) \qquad (9)$$

for the range-compressed signal as a function of (range) spatial frequency and slow-time. In (9), $G(k_x, k_y) = \mathcal{F}_{x \to k_x}\mathcal{F}_{y \to k_y}\{g(x, y)\}$ is the two-dimensional Fourier transform of the reflectivity function g, and $k_r' = k_r + k_c$ is the pass-band spatial frequency variable. Equation (9) is
interpreted as follows: under the far-field assumption (7), the measured signal in the $(k_r, t)$-domain is actually a phase-modulated slice of the two-dimensional Fourier transform of the reflectivity function g. Taking into account the band-limited nature of the signal in the $k_r$ variable, these slices actually span an annular sector in the $(k_x, k_y)$-domain. The Fourier transform relationship between the received signal and the desired reflectivity function g is extremely useful and has
been analyzed thoroughly in the context of spotlight mode
SAR [46]–[48].
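For a scene consisting of discrete point scatterers, the factorization (9) reduces to an identity between two finite sums, which the following sketch (assumed toy scene and radar parameters, for illustration only) verifies at a fixed slow-time instant:

```python
import numpy as np

# Assumed toy scene: three point scatterers with unit reflectivity
pts = [(2.0, 1.0), (-1.5, 0.5), (0.0, -2.0)]              # (xp, yp) [m]
lam, B, c = 0.03, 5e8, 3e8                                # assumed radar params
kc = 2 * np.pi / lam
kr = np.linspace(-np.pi * B / c, np.pi * B / c, 64)       # baseband spatial freq.
krp = kr + kc                                             # pass-band k_r' = kr + kc
r0, theta = 5000.0, 0.1                                   # one slow-time instant

# Superposition of far-field point responses, the integrand view of (9)
lhs = sum(np.exp(-2j * krp * (r0 + x * np.cos(theta) + y * np.sin(theta)))
          for x, y in pts)
# Factored form of (9): phase term times a slice of the 2-D spectrum G,
# which for discrete unit scatterers is a finite sum of exponentials
G = sum(np.exp(-1j * (2 * krp * np.cos(theta) * x + 2 * krp * np.sin(theta) * y))
        for x, y in pts)
rhs = np.exp(-2j * krp * r0) * G
assert np.allclose(lhs, rhs)   # the measured slice equals the modulated spectrum
```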
For noncooperative objects, both r0 and θ are unknown in
(9). Their values have to be estimated in order to reconstruct
the ideal ISAR image g. To be more precise, since the
signal is band-limited, the ideal image g cannot be exactly
reconstructed, but the ideal reconstruction yields a convolution
between g and the PTR. From this point of view, r0 is a
nuisance that does not provide any useful information (since
it is the same for every point (xp , yp )) and needs to be
compensated for. Otherwise, the result of inverse Fourier
transforming (9) will not result in a well-focused ISAR image.
The following Section presents methods to estimate or compensate for these unknown motion parameters and reconstruct
a properly focused and scaled ISAR image.
III. DATA-DRIVEN MOTION COMPENSATION
A. Range alignment
The first step in our ISAR algorithm is the well-established range alignment procedure. The purpose of this procedure is to coarsely estimate the translational motion $r_0(t)$, which is the slow-time-dependent distance between the origin of the object frame of reference and the radar. However, the estimate has to be more accurate than the range resolution if a good result is expected in the subsequent ISAR processing. In the first part of this subsection, we introduce the essential background of range alignment. Then, we describe our novel modifications.

The first important observation from equations (1) and (7) is that compensating for the translational motion $r_0(t)$ essentially means shifting each range profile $ss(r, t_m)$ in the range direction by $r_0(t_m)$, where $m = 0, \ldots, M-1$ and M is the number of samples in the slow-time direction. The idea of the global range alignment method [15] is to estimate these shifts by minimizing a loss function whose value is calculated from the shifted amplitude envelopes of ss. The value of the loss function has to quantitatively measure the quality of the translational motion compensation. The advantage of the global method is that the entire signal is used in the loss function, whereas most other methods only use quality measures based on two adjacent range profiles. Another important observation is that the main effect of the translational motion is to shift the amplitude envelope of the back-scattered signal, while the shape of the range profile is preserved during a short slow-time interval. Thus, when the translational motion is properly compensated for, adjacent range profiles should be nearly identical. Quantitative measures for describing this similarity are, for example, the contrast and entropy of the sum envelope and the sum of the squared envelope differences [15], [17], [18].

Since the range-compressed signal is sampled, shifting the range profiles requires an interpolation. The ideal band-limited interpolation is most efficiently and accurately done in the Fourier domain by utilizing the convolution theorem (or, equivalently, the shift theorem) as

$$ss(r + \Delta r_m, t_m) = \mathcal{F}^{-1}_{k_r \to r}\left\{ e^{i k_r \Delta r_m}\, \mathcal{F}_{r \to k_r}\{ss(r, t_m)\} \right\}. \qquad (10)$$

The sum envelope of the range-compressed signal is defined as

$$p(r, \Delta r) = \sum_{m=0}^{M-1} |ss(r + \Delta r_m, t_m)|, \qquad (11)$$

where $|\cdot|$ denotes the absolute value and $\Delta r = [\Delta r_0 \ldots \Delta r_{M-1}]^T$. Using (11), the sharpness of the energy distribution of the sum envelope can be quantified by using a loss function

$$L_1(\Delta r) = \int_{-\infty}^{\infty} \psi(p(r, \Delta r))\, dr, \qquad (12)$$

where $\psi : \mathbb{R} \to \mathbb{R}$ is a real valued function of a real variable; $\psi(p) = -p^2$ for the contrast loss function and $\psi(p) = -p \ln p$ for the entropy loss function. The sum of the squared envelope differences can be expressed as

$$L_2(\Delta r) = \int_{-\infty}^{\infty} \sum_{m=0}^{M-2} \left[\, |ss(r + \Delta r_{m+1}, t_{m+1})| - |ss(r + \Delta r_m, t_m)|\, \right]^2 dr. \qquad (13)$$
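The band-limited shift (10) can be sketched with the FFT as follows (an illustrative implementation assuming uniformly sampled range bins and a circular shift; `shift_bins` is in units of range bins):

```python
import numpy as np

def fourier_shift(profile, shift_bins):
    """Shift a sampled range profile by a (possibly fractional) number of
    bins using the Fourier shift theorem, as in eq. (10)."""
    n = profile.size
    kr = 2 * np.pi * np.fft.fftfreq(n)            # discrete spatial frequencies
    return np.fft.ifft(np.fft.fft(profile) * np.exp(1j * kr * shift_bins))

# A smooth test profile: shifting by +3.5 bins and then by -3.5 bins
# must reproduce the original samples exactly (up to rounding)
x = np.exp(-0.5 * ((np.arange(256) - 128) / 6.0) ** 2).astype(complex)
y = fourier_shift(fourier_shift(x, 3.5), -3.5)
assert np.allclose(x, y)
```

Because the shift is a pure phase ramp in the spatial frequency domain, fractional shifts interpolate exactly for band-limited profiles, which is what the loss functions (11)–(13) operate on.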
The range alignment problem is solved by minimizing L1
(or L2 ). By using a simple point scatterer model for g, it can be
easily deduced that these loss functions contain multiple local
minima. Thus, in this form the problem requires numerical
global optimization, which is computationally very inefficient
and does not guarantee that the optimal solution is found.
The minimum entropy method [14] deals with this problem
by using only two adjacent profiles in the definition of the
sum envelope (11) and in (12). This results in M − 2 one-dimensional optimization problems, but this method has its
drawbacks. Namely, the method still requires one-dimensional
global optimization, and target scintillation (rapid variations
in the magnitude of g as the aspect angle θ changes) and
error accumulation cannot be handled with success. The global
method [15], [16] avoids error accumulation and can handle
target scintillation but requires global optimization in a multidimensional search space even with a parametric model for
the range shifts ∆r. By using a suitable heuristic optimization
algorithm and a parametric model, the global method can be
fairly effective [19]. However, the heuristic optimization algorithms are a compromise that does not guarantee an optimal
solution and are thus better suited for offline processing than
real-time scenarios.
To overcome these issues, we propose the following modifications. First, an initial estimate $\widehat{\Delta r}_m$ for the range shifts is obtained by tracking the movement of the center of mass of the energy distribution of the range profiles, namely

$$\widehat{\Delta r}_m = \frac{\int_{-\infty}^{\infty} r\, |ss(r, t_m)|^2\, dr}{\int_{-\infty}^{\infty} |ss(r, t_m)|^2\, dr}. \qquad (14)$$
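A minimal sketch of the centroid estimate (14) on synthetic magnitude profiles (the drifting Gaussian data are assumed, purely for illustration):

```python
import numpy as np

# Synthetic magnitude range profiles: a Gaussian envelope whose centre
# drifts by a known amount from pulse to pulse (assumed toy data)
r = np.arange(512, dtype=float)
true_shift = np.array([0.0, 4.0, 8.5, 13.0])
profiles = np.array([np.exp(-0.5 * ((r - 200.0 - s) / 5.0) ** 2)
                     for s in true_shift])

# Centroid (centre of mass) of the energy distribution, eq. (14)
energy = profiles ** 2
centroids = (r * energy).sum(axis=1) / energy.sum(axis=1)
est_shift = centroids - centroids[0]
assert np.allclose(est_shift, true_shift, atol=0.1)   # drift recovered
```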
We also note that another efficient way for obtaining the initial
estimate is to use the maximum correlation method [49]. This
means that the initial estimate is obtained from

$$\widehat{\Delta r}_m = \arg\max_r \left\{\, |ss(r, t_0)| \star_r |ss(r, t_m)|\, \right\}, \qquad (15)$$

where $\star_r$ denotes one-dimensional cross-correlation with respect to the range variable r. The initial estimates (14) or
(15) are typically not accurate enough to be used for the
range alignment (as ∆r) themselves. This inaccuracy is the
case especially when the range resolution is extremely high.
Nevertheless, these estimates are accurate enough to turn the
global optimization problem into a local problem, especially
when a parametric model is used for the range shifts. A
parametric model means that

$$\Delta r_m = \sum_{j=1}^{J} a_j f_j(t_m), \qquad (16)$$
where {fj }j=1,...,J is a set of basis functions, and aj are their
coefficients. These can be chosen, for example, to be Legendre
polynomials (or any other orthogonal basis set), in which case the least squares estimates $\hat{a}_j$ for the coefficients of the initial estimate are easily obtained from

$$\hat{a}_j = \frac{1}{C_j} \sum_{m=0}^{M-1} \widehat{\Delta r}_m f_j(t_m), \qquad (17)$$

where the time variable is scaled such that $t_m \in [-1, 1]$, and the normalization factor is

$$C_j = \sqrt{\sum_k f_j^2(t_k)}. \qquad (18)$$

Solving the range alignment problem is now equivalent to minimizing the loss function with respect to the coefficients $a_j$ of the parametric model (16) using (17) as the initial guess.

Our second modification is to use first order numerical optimization to locate the minimum of the loss function. Assuming ψ is continuously differentiable, the partial derivatives of $L_1$ with respect to $a_j$ can be obtained by applying the chain rule. For (12) we get

$$\frac{\partial L_1}{\partial a_j} = \int_{-\infty}^{\infty} \frac{\partial \psi(p(r, \Delta r))}{\partial p} \sum_{m=0}^{M-1} \frac{\partial p(r, \Delta r)}{\partial \Delta r_m} \frac{\partial \Delta r_m}{\partial a_j}\, dr = \sum_{m=0}^{M-1} f_j(t_m) \frac{\partial L_1}{\partial \Delta r_m}. \qquad (19)$$

A straightforward calculation gives

$$\frac{\partial L_1}{\partial \Delta r_m} = \int_{-\infty}^{\infty} \frac{\partial \psi(p(r, \Delta r))}{\partial p} \frac{\partial |ss(r + \Delta r_m, t_m)|}{\partial r}\, dr. \qquad (20)$$

In the derivation of (20), we assume that r is a continuous variable, in which case the derivative property of the Fourier transform can be utilized. In practice, the partial derivatives $\partial|ss|/\partial r$ are evaluated using finite difference approximations. For the sum of squared envelope differences, a derivation with similar arguments gives

$$\frac{\partial L_2}{\partial \Delta r_m} = 2\int_{-\infty}^{\infty} \frac{\partial |ss(r + \Delta r_m, t_m)|}{\partial r}\, \left[\, 2|ss(r + \Delta r_m, t_m)| - |ss(r + \Delta r_{m+1}, t_{m+1})| - |ss(r + \Delta r_{m-1}, t_{m-1})|\, \right] dr. \qquad (21)$$

If $L_2$ is used as the loss function instead of $L_1$, we get the parametric gradient by replacing $L_1$ with $L_2$ in the last row of (19) (the result in the first row is not the same, but the chain rule yields the result in the last row in any case). The importance of the results in (19)–(21) becomes evident when they are combined with the first modification we proposed. The combination produces a range alignment algorithm that is computationally highly efficient.

The final modification we propose is to combine the loss functions $L_1$ and $L_2$. The loss functions (12) relying on the sum envelope work ideally when the change in θ is very small. As the change in θ during the coherent processing interval increases, scatterers start to migrate through range resolution cells, and the sum envelope becomes smeared even if $r_0$ is properly compensated for. On the other hand, the loss function (13) works even if the shape of the range profile changes due to the rotational motion, since only adjacent envelopes are subtracted from each other. However, this again means that error accumulation and scintillation are not handled satisfactorily. A way to utilize the benefits of both is to combine them as

$$L_{RA}(\Delta r) = H(L_1(\Delta r), L_2(\Delta r)), \qquad (22)$$

where $H : \mathbb{R}^2 \to \mathbb{R}$ is a function that combines $L_1$ and $L_2$ into a single loss function. For example, if $\psi(p) = -p^2$, a suitable choice would be $H(L_1, L_2) = -L_2/L_1$. Similarly, for the entropy $\psi(p) = -p \ln p$, the choices $H(L_1, L_2) = L_1 + L_2$ and $H(L_1, L_2) = L_1 L_2$ are useful. This means that in the last row of the parametric gradient (19) the expression

$$\frac{\partial L_{RA}}{\partial \Delta r} = \sum_j \frac{\partial H}{\partial L_j} \frac{\partial L_j}{\partial \Delta r} \qquad (23)$$

needs to be evaluated. In (23), $\partial L_{RA}/\partial \Delta r$ denotes the gradient vector whose mth component is $\partial L_{RA}/\partial \Delta r_m$. The loss function (22) could also be a summation of different functions H, but even a single function H brings out the benefits of both $L_1$ and $L_2$.

In summary, our range alignment algorithm consists of the following steps. First, an initial estimate is obtained using (14) or (15), after which the least squares estimates for the parametric model of order J are obtained using (17). After this, the minimum of the loss function (22) is located by using a first order numerical optimization algorithm (for example the steepest descent, nonlinear conjugate gradient, or quasi-Newton method) with a line search procedure. An iterative procedure

$$\Delta r^{k+1} = \Delta r^k - \alpha g^k \qquad (24)$$

is used, where $g^k$ is a descent direction obtained by using the gradient $\partial L_{RA}/\partial a$, and $a = [a_1 \ldots a_J]^T$. In (24), the superscripts stand for the iteration number, $\Delta r^0$ is the initial estimate (obtained by using either (14) or (15)), and the step length α is obtained by minimizing the function $\eta^k(\alpha) = L_{RA}(\Delta r^k - \alpha g^k)$ using a line search procedure.

Once the estimate $\Delta r^\star$ that minimizes the loss function $L_{RA}$ is obtained, it is used to compensate for the translational motion $r_0$. Switching back to continuous notation $\Delta r^\star \to \hat{r}_0(t)$, the coarse translational motion compensation is performed using

$$Ss_{RA}(k_r, t) = Ss(k_r, t)\, e^{i2k_r' \hat{r}_0(t)}. \qquad (25)$$
After the phase correction (25) is applied, a residual phase error caused by the residual spatially invariant translational motion,

$$\phi_e(t) = -\frac{4\pi}{\lambda}\,(r_0(t) - \hat{r}_0(t)), \qquad (26)$$

remains in the (r, t) domain, and the model for the range-aligned signal becomes

$$ss_{RA}(r, t) = \mathcal{F}^{-1}_{k_r \to r}\{Ss_{RA}(k_r, t)\} = e^{i\phi_e(t)} \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} g(x_p, y_p)\, ss_p(r, t)\, dx_p\, dy_p. \qquad (27)$$
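The structure of (26)–(27), a spatially invariant slow-time phase factor, can be illustrated with a toy Doppler history (all numbers assumed): the phase error leaves every range envelope magnitude untouched but smears the Doppler spectrum, which is why an autofocus step is still needed after range alignment:

```python
import numpy as np

# Assumed toy slow-time signal: a single scatterer at constant Doppler
M = 128
t = np.arange(M) / M
clean = np.exp(2j * np.pi * 20 * t)              # 20-cycle Doppler history
phi_e = 3.0 * np.sin(2 * np.pi * 2 * t)          # residual phase error, eq. (26)
noisy = clean * np.exp(1j * phi_e)               # model (27): pure phase factor

# The phase error does not change the range envelope magnitude ...
assert np.allclose(np.abs(noisy), np.abs(clean))
# ... but it spreads energy away from the Doppler peak
peak = lambda x: np.abs(np.fft.fft(x)).max()
assert peak(noisy) < peak(clean)
```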
B. Time window optimization
The next part of our ISAR algorithm chooses an optimal
slow-time window to be processed in the ISAR image reconstruction. This means locating a slow-time window during
which the motion of the object is as smooth as possible.
Thus, we need a suitable loss function, whose value quantifies the quality of the result obtained from reconstructing
an ISAR image using a certain slow-time window. At this
point, there are two difficulties associated with this problem.
First, the range alignment procedure compensates for r0 (t)
only coarsely, leaving a residual phase error (26) in the
range-aligned signal (27). Also, the rotational motion causes
spatially variant range cell migration (RCM), which means that
scatterers with different initial positions (xp , yp ) each have a
unique range history Rp (t) due to the rotational motion. If
the range resolution is very high and an equally high cross-range resolution is desired, these effects have to be taken into
account when choosing the optimal slow-time window.
The basic idea we use is the same as in [44]. We use the
contrast of the intensity-normalized ISAR image as the quality
measure to determine the optimal length T and position tc
for the slow-time window. To obtain the best possible result,
the two aforementioned difficulties need to be addressed.
This means that the RCM caused by the rotational motion
is partially compensated for using keystone formatting [6]. In
addition, we use the phase gradient autofocus (PGA) algorithm
[20], [21], [48] to remove the spatially invariant phase errors
φe in (27). The drawback of introducing these steps into the
time window optimization process is that the computational
complexity slightly increases. However, by suitably modifying
the PGA algorithm, using a sub-optimal interpolation in the
keystone formatting, and using a local numerical optimization
algorithm to minimize the loss function, reasonable results can
be achieved without significantly increasing the computational
burden of the original method [44].
In [50], we used similar ideas for determining an optimal
slow-time window for the keystone formatting and ISAR
image reconstruction. However, the loss function we present
here combines the two different parts of the algorithm used
in [50]. Essentially, the slow-time window for the keystone
formatting and short-time Fourier transform is determined
simultaneously in this formulation, reducing the computational
complexity.
Evaluating the loss function for given values of T and t_c proceeds as follows. First, the range-aligned signal ss_{RA} is windowed with a rectangular window of length T centered at t_c. Then, the range-Doppler image function

sS(t_c, T; r, ω) = \mathcal{F}_{t \to ω}\left\{ \Pi\!\left( \frac{t - t_c}{T} \right) ss_{RA}(r, t) \right\},    (28)

where ω is the Doppler variable, is formed. As T and t_c are
fixed in the following steps, we suppress the dependence of the
signal sS (and ss) from them to simplify the notation. Before
applying the keystone formatting operation, the residual phase
errors φe need to be estimated and removed using an autofocus
procedure. Otherwise, the nonlinear part of φe might affect
the result in an undesirable way. The standard PGA autofocus
algorithm is used here [21], [48], with one simple modification
to speed up its convergence.
The modification replaces the circular shifting operation of PGA with a more efficient procedure. The purpose
of the circular shift in the ω-domain is to remove linear
phase ramps, which would result in different phase offsets
for the derivatives of the phase in each range bin. If the
range-Doppler image is severely blurred, this procedure is
not effective and multiple iterations of the PGA algorithm are
required. This is a problem because multiple iterations increase
the computational cost since PGA is used every time the loss
function is evaluated. To speed up the convergence, we use an
additional procedure proposed originally in [51]. To remove
the phase offsets for different range bins, the mean value of
the phase of ê in each range bin is calculated as
⟨∠ê(r)⟩ = ∠\left\{ \sum_{m=0}^{M-1} ê(r, t_m) \right\},    (29)

where ⟨·⟩ denotes taking the mean with respect to the slow-time variable, ∠{·} denotes taking the phase angle of a complex number, and

ê(r, t_m) = ss(r, t_m)\, ss^{*}(r, t_{m-1}).    (30)
In (30), ss stands for the inverse Fourier transform of the
range-Doppler image (28), which is windowed symmetrically
around the strongest scatterer in each range bin. Subtracting
(29) from the phase of (30) produces
ê'(r, t_m) = ê(r, t_m)\, e^{-i⟨∠ê(r)⟩},    (31)
which removes the effect of different constant phase-offsets for
different range bins. After performing the operation in (31),
the usual procedure is used to obtain the estimate for the phase
error function, namely [21]
\hat{φ}_e(t_m) = \sum_{k=0}^{m} ∠\left\{ \int_{-\infty}^{\infty} ê'(r, t_k)\, dr \right\}.    (32)
After obtaining the maximum likelihood estimate (32), the
range-compressed signal is phase-corrected as
ss_{PGA}(t_c, T; r, t_m) = \Pi\!\left( \frac{t_m - t_c}{T} \right) ss_{RA}(r, t_m)\, e^{-i \hat{φ}_e(t_m)}.    (33)
To reduce the computational burden of the optimization procedure, only one iteration of PGA is applied in our algorithm.
The purpose of utilizing the procedure above is to make the
most of this single iteration.
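The modified single PGA iteration described above, (29)-(33), might be sketched as follows; the array shapes, the Doppler window half-width `win`, and the peak-windowing details are illustrative assumptions rather than the paper's exact implementation:

```python
import numpy as np

def pga_modified(ss, win=8):
    """One PGA iteration in which the circular-shift step is replaced by
    the per-range mean-phase-offset removal of (29)-(31).

    ss  : complex array (n_range, M), range-compressed slow-time signal
    win : half-width of the Doppler window around the strongest scatterer
    Returns the phase-corrected signal and the phase-error estimate."""
    n_range, M = ss.shape
    sS = np.fft.fft(ss, axis=1)

    # Window each range bin symmetrically around its strongest Doppler
    # cell; no circular shift, so linear ramps remain and are handled below.
    masked = np.zeros_like(sS)
    peak = np.argmax(np.abs(sS), axis=1)
    for r in range(n_range):
        idx = np.arange(peak[r] - win, peak[r] + win + 1) % M
        masked[r, idx] = sS[r, idx]
    ss_w = np.fft.ifft(masked, axis=1)

    # e_hat(r, t_m) = ss(r, t_m) ss*(r, t_{m-1}), cf. (30)
    e_hat = ss_w[:, 1:] * np.conj(ss_w[:, :-1])
    # remove the per-range mean phase offset, cf. (29) and (31)
    e_hat *= np.exp(-1j * np.angle(e_hat.sum(axis=1, keepdims=True)))
    # integrate over range, accumulate over slow time, cf. (32)
    phi = np.concatenate(([0.0], np.cumsum(np.angle(e_hat.sum(axis=0)))))
    return ss * np.exp(-1j * phi)[None, :], phi
```

The estimate is, as usual for PGA, only defined up to a constant and linear phase, which do not affect the focus of the image.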
After this single iteration of PGA, keystone formatting is
performed to produce an intermediate result
ss_K(t_c, T; r, τ) = \mathcal{F}^{-1}_{k_r \to r}\left\{ \mathcal{K}^{n}_{t \to τ}\{ S_{sPGA}(k_r, t) \} \right\}.    (34)

The keystone formatting operation \mathcal{K}^{n}_{t \to τ}\{·\} is the change of variables

t = \left( \frac{k_c}{k_r + k_c} \right)^{1/n} τ,    (35)

which in practice is carried out by using a band-limited interpolation technique. The effect of the keystone formatting operation (35) for n = 1 is to remove the linear component of the RCM of every scatterer p simultaneously. This can be deduced by considering the Taylor expansion of the phase of Ss_p with respect to t [6].
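For n = 1, the keystone resampling (35) can be sketched per range spatial frequency as below; plain linear interpolation stands in for the band-limited (and, in the time window search, deliberately sub-optimal) interpolator, and the grid conventions are assumptions:

```python
import numpy as np

def keystone_format(Ss, kr, kc, n=1):
    """Keystone formatting (34)-(35): for each range spatial frequency
    k_r, resample slow time according to t = (kc/(kr+kc))**(1/n) * tau.

    Ss : complex (n_freq, M) signal in the (k_r, slow-time) domain
    kr : (n_freq,) range spatial frequencies, kc : carrier wavenumber."""
    n_freq, M = Ss.shape
    grid = np.arange(M)
    tau = grid - M / 2.0                      # output slow-time axis
    out = np.empty_like(Ss)
    for i in range(n_freq):
        # change of variables (35), expressed in sample indices
        t_idx = (kc / (kr[i] + kc)) ** (1.0 / n) * tau + M / 2.0
        out[i] = (np.interp(t_idx, grid, Ss[i].real)
                  + 1j * np.interp(t_idx, grid, Ss[i].imag))
    return out
```

After the resampling, a scatterer phase term proportional to (k_r + k_c)t becomes proportional to k_c τ, i.e., the same for every k_r, which is exactly the removal of the linear RCM component.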
The ISAR image after PGA and keystone formatting is obtained from sS_K(t_c, T; r, ω) = \mathcal{F}_{τ \to ω}\{ ss_K(t_c, T; r, τ) \}. The intensity-normalized image is defined as

\hat{I}(t_c, T; r, ω) = \frac{I(t_c, T; r, ω)}{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} I(t_c, T; r', ω')\, dr'\, dω'},    (36)
where I(t_c, T; r, ω) = |sS_K(t_c, T; r, ω)|^2. The loss function is chosen so that it quantitatively describes the quality of the result \hat{I}. This is a familiar problem in the context of contrast optimization autofocus [14], [22]–[25], [52], and a simple and useful choice is to define the loss function as

L_{TW}(t_c, T) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} ψ(\hat{I}(t_c, T; r, ω))\, dr\, dω,    (37)

where ψ is a function of the same type as that used in the range alignment problem.
The optimal slow-time window is obtained by minimizing the loss function (37). Typically, the pulse repetition frequency (PRF) of the radar is high compared to the time scale on which the motion parameters of the object change significantly. Intuitively, this means that the loss function (37) behaves relatively smoothly, and thus a computationally efficient local optimization algorithm can be used to find a suitable time window that locally minimizes L_{TW}. The time window optimization process thus produces the optimal window parameters

[t_c^⋆ \; T^⋆]^T = \arg\min_{t_c, T} L_{TW}(t_c, T).
The windowed signal is then autofocused using PGA and
keystone formatted. After these procedures, the signal is
denoted as ssT W .
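A minimal sketch of one loss evaluation, combining the windowing (28), normalization (36), and loss (37), is given below; PGA and keystone formatting are omitted from the sketch, and the entropy kernel ψ(x) = -x ln x is an illustrative choice for ψ:

```python
import numpy as np

def time_window_loss(ss_ra, tc, T):
    """Evaluate the time-window loss (37) for given (tc, T): window the
    range-aligned signal, form the range-Doppler image (28), normalize
    the intensity image (36), and sum psi over it."""
    M = ss_ra.shape[1]
    m0 = min(max(int(tc - T // 2), 0), M - int(T))
    windowed = ss_ra[:, m0:m0 + int(T)]              # rectangular window
    I = np.abs(np.fft.fft(windowed, axis=1)) ** 2    # intensity image
    I_hat = I / I.sum()                              # normalization (36)
    nz = I_hat[I_hat > 0]
    return -(nz * np.log(nz)).sum()                  # psi = -x ln x

# a local search over (tc, T) would then minimize this loss
```

A focused image concentrates the normalized intensity into few cells, so its entropy-type loss is low; a window containing erratic motion spreads the energy and raises the loss.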
C. Contrast optimization autofocus
We proposed using the PGA in the time window optimization due to its simplicity and computational efficiency
compared to other possible approaches. However, the contrast
optimization autofocus (COA) approach has been demonstrated to produce very good results [14], [22]–[25] and can
perform better than PGA or any other autofocus algorithm
in certain situations. This is especially true in
ISAR imaging, since the object to be imaged typically has at
least some strong approximately point-like features, which are
emphasized by the loss functions used in the COA.
The PGA and COA are in a certain sense complementary.
This means that after the optimal time window is selected,
autofocused with PGA, and keystone formatted, it can be
beneficial to perform COA. We show here that when the
residual phase errors after performing PGA are small, COA
can be applied in a highly efficient manner. The reason for
this is explained in [25]. Stated briefly, the loss function can
be regarded as separable when the phase error estimates are
relatively small in magnitude. Thus, COA essentially reduces to a series of one-dimensional optimization problems,
which can be solved in parallel. Because the loss function is
only approximately separable, typically an iterative procedure
is required.
In COA, the input signal is phase-corrected with an estimate φ_m of the phase errors to produce an intermediate signal ss_C(r, τ_m) = ss_{TW}(r, τ_m) e^{-iφ_m}, which is used to produce an intensity image I(φ; r, ω) = |sS_C(φ; r, ω)|^2, where φ = [φ_0 \ldots φ_{M-1}]^T and sS_C(φ; r, ω) = \mathcal{F}_{τ_m \to ω}\{ ss_C(r, τ_m) \}. When a loss function of the form

L_{AF}(φ) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} ψ(I(φ; r, ω))\, dr\, dω,    (38)
is used, an expression for the gradient can be obtained from
\frac{∂L_{AF}}{∂φ} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{∂ψ(I(φ; r, ω))}{∂I}\, \frac{∂I(φ; r, ω)}{∂φ}\, dr\, dω.    (39)
This result is due to Fienup [23], [24], and a straightforward
derivation yields
\frac{∂L_{AF}}{∂φ_m} = 2 \int_{-\infty}^{\infty} \mathrm{Im}\{ ss_C(r, τ_m)\, M\, Φ^{*}(r, τ_m) \}\, dr,    (40)

for the mth component, where

Φ(r, τ_m) = \mathcal{F}^{-1}_{ω \to τ_m}\left\{ \frac{∂ψ(I(φ; r, ω))}{∂I}\, sS_C(r, ω) \right\}.    (41)
In Fienup's original papers [23], [24], a first-order numerical optimization algorithm (the nonlinear conjugate gradient method) was used to minimize L_{AF}. This kind of approach requires a line search procedure (or alternatively, trust region methods), which increases the computational cost, because every time the loss function (38) (or the gradient (40)) is evaluated, a Fourier transform is needed to obtain the range-Doppler image function sS_C.
To further increase the computational efficiency of COA, we
propose the following method when the residual phase errors
are small in magnitude (as they will be after applying PGA).
Assuming ψ is twice continuously differentiable, the Hessian matrix can be evaluated, and its (m, n) element is

\frac{∂^2 L_{AF}}{∂φ_m ∂φ_n} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \left[ \frac{∂^2 ψ}{∂I^2}\, \frac{∂I}{∂φ_m}\, \frac{∂I}{∂φ_n} + \frac{∂ψ}{∂I}\, \frac{∂^2 I}{∂φ_m ∂φ_n} \right] dr\, dω,    (42)

where the shorthand notation I = I(φ; r, ω) is used. In principle,
(42) could be used to solve the autofocus problem with the
Newton-Raphson method. However, this is not suitable for
two reasons. First, evaluating and inverting the Hessian (42) is
numerically far more costly than merely evaluating the gradient
(39). Second, the loss function can be approximated to be a
sinusoid in each coordinate direction φm [25], which means
that far away from the minimum, the Hessian is not positive
definite and is close to singular.
When the problem is close to being separable (meaning L_{AF}(φ) is approximately a sum of one-dimensional functions), the essential curvature information is in the diagonal elements of the Hessian (42). This means that we can use the one-dimensional Newton-Raphson method for each phase error component separately. It proceeds iteratively, calculating the phase error estimates and updating them according to

φ_m^{k+1} = φ_m^{k} - \frac{G_m(φ^{k})}{H_{mm}(φ^{k})},    (43)
where the superscripts denote the iteration number, G_m = ∂L_{AF}/∂φ_m, and H_{mm} = ∂^2 L_{AF}/∂φ_m^2. The diagonal elements of the Hessian can be obtained by straightforward differentiation, which yields
\frac{∂^2 L_{AF}}{∂φ_m^2} = 2 \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{∂ψ(I)}{∂I}\, |ss_C(r, τ_m)|^2\, dr\, dω
- 2 \int_{-\infty}^{\infty} \mathrm{Re}\{ ss_C(r, τ_m)\, M\, Φ^{*}(r, τ_m) \}\, dr    (44)
+ 4 \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{∂^2 ψ(I)}{∂I^2} \left[ \mathrm{Im}\{ ss_C(r, τ_m)\, e^{-iωτ_m}\, sS_C^{*}(r, ω) \} \right]^2 dr\, dω,
where the term in the last row can be expanded to avoid
unnecessary multiplications in the double integral. In our
algorithm, after PGA, a few iterations according to (43) are
used to minimize L_{AF}. The signal ss_{TW}(r, τ_m) is then phase-corrected with the obtained estimate as in (33). Denoting φ^⋆ = \arg\min_φ L_{AF}(φ), the time-windowed, keystone-formatted, and autofocused signal is obtained from

ss_{AF}(r, τ_m) = ss_{TW}(r, τ_m)\, e^{-i φ_m^⋆}.    (45)
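The gradient (40)-(41) and the coordinate-wise update (43) might be sketched as follows; a central finite difference of the gradient stands in for the closed-form diagonal curvature (44), and ψ and all array shapes are caller-supplied assumptions:

```python
import numpy as np

def coa_gradient(ss_tw, phi, dpsi):
    """Gradient (40)-(41) of L_AF = sum psi(I) with respect to the
    per-pulse phases phi; the integrals become sums on the sampled grid.
    dpsi evaluates psi'(I)."""
    M = ss_tw.shape[1]
    ss_c = ss_tw * np.exp(-1j * phi)[None, :]
    sS_c = np.fft.fft(ss_c, axis=1)                    # range-Doppler image
    Phi = np.fft.ifft(dpsi(np.abs(sS_c) ** 2) * sS_c, axis=1)    # (41)
    return 2.0 * np.imag(ss_c * M * np.conj(Phi)).sum(axis=0)    # (40)

def coa_newton_diag(ss_tw, dpsi, iters=2, eps=1e-3):
    """Coordinate-wise Newton update (43); a finite difference of the
    gradient stands in for the closed-form diagonal curvature (44)."""
    M = ss_tw.shape[1]
    phi = np.zeros(M)
    for _ in range(iters):
        g = coa_gradient(ss_tw, phi, dpsi)
        h = np.empty(M)
        for m in range(M):
            e = np.zeros(M)
            e[m] = eps
            h[m] = (coa_gradient(ss_tw, phi + e, dpsi)[m]
                    - coa_gradient(ss_tw, phi - e, dpsi)[m]) / (2 * eps)
        # update only where the curvature is usefully positive, cf. (43)
        pos = h > 1e-8
        phi[pos] -= g[pos] / h[pos]
    return phi
```

In the paper's setting the residual errors after PGA are small, so few iterations suffice; the closed form (44) would remove the finite-difference loop entirely.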
D. Cross-range scale factor estimation
When the ISAR image is reconstructed from the signal
ss_{AF} using a Fourier transform with respect to τ, the cross-range dimension of the ISAR image is the angular Doppler
frequency variable ω ∈ [−πfP R , πfP R ], where fP R is the
sampling frequency in the slow-time direction (which is the
PRF of the radar and is assumed to be a constant). To produce
an image with a known spatial scale (i.e. the size of the image
in spatial units) in both dimensions, a relationship between ω
and the cross-range variable y needs to be established.
The problem of determining the spatial scale of the ISAR
image in the cross-range dimension is referred to as cross-range scaling in the ISAR literature [39]–[43]. Thoroughly
understanding the problem requires understanding how the
aspect angle θ is related to the cross-range scale of the
ISAR image. Re-examining the signal model (9) helps answer this question. Since the keystone formatting operation is a change of variables (35), it alters the relationship between the slow-time variable and θ. For this reason, we perform cross-range scaling using the range-aligned and autofocused signal
which is not keystone-formatted. After the range alignment
and autofocus procedures, the spatially invariant phase term is
assumed to be compensated for and ideally we have
S_{sAF}(k_r, t) = \mathcal{F}_{r \to k_r}\{ ss_{AF}(r, t) \} = G(2k_r \cos θ(t),\, 2k_r \sin θ(t)).    (46)
According to (46), the autofocused signal S_{sAF} is the two-dimensional Fourier transform of g (the ideal ISAR image) evaluated at the points where k_y = 2k_r \sin θ(t) and
kx = 2kr cos θ(t) in the two-dimensional spatial frequency
domain. Stated another way, in this model, the slow-time
variable t actually corresponds to the cross-range spatial
frequency variable ky . Thus, the cross-range scale of the
ISAR image can be deduced if the sample spacing in the
ky variable is known. This follows from applying the Fourier
uncertainty relations, which mean that Y ∆ky = 2π, where
Y is the cross-range support of the ISAR image and ∆ky
is the sample spacing in the cross-range spatial frequency
variable. Assuming the signal is relatively narrow-band, we
can approximate ky = 2kc sin θ(t). The sample spacing is
thus approximately ∆ky = (4π/λ)∆θ(t), where ∆θ(t) is the
sample spacing in the aspect angle, which is not necessarily a
constant. This means that the cross-range support of the signal
is approximately

Y = \frac{λ}{2 Δθ}.    (47)
Thus, to determine the cross-range scale of the ISAR image,
we need to estimate the aspect angle sample spacing ∆θ.
The approach we use to estimate ∆θ is a combination of
the rotation correlation [42] and polar mapping [43] methods.
They are based on minimizing a certain loss function whose
value depends on the estimate for Δθ. When two different parts of the signal in the t (i.e., k_y) direction are utilized, (46) implies that we are using two mutually rotated versions of the Fourier transform G in the image reconstruction.
Fourier transform states that rotating the Fourier transform
by an angle θ and inverse transforming back to the spatial
domain rotates the original function g by θ. This means that
the resulting two images will simply be rotated and scaled
versions of each other.
If the ISAR image is simply rotated in the spatial domain,
the location of the center of the effective rotational motion
needs to be known. This leads to the main drawback of the
original rotation correlation method [42], which is the additional degree of freedom incurred by this process. However,
the location of the center of rotation does not need to be known
if the image is rotated in the frequency domain utilizing the
rotation and shift properties of the Fourier transform.
The sample spacing ∆θ can be estimated by determining the
angular shift between two ISAR images that are made from
different parts of the signal. The angular shift between the two
images is estimated by maximizing the intensity correlation
of the two sub-images. Suppose we use two different parts of
the signal ssAF (r, t) to reconstruct two distinct ISAR images.
The two distinct signals ss1 and ss2 are obtained by applying
rectangular windows as
ss_j(r, t) = \Pi\!\left( \frac{t - t_{cj}}{T} \right) ss_{AF}(r, t),    (48)
0018-9251 (c) 2017 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TAES.2017.2756518, IEEE
Transactions on Aerospace and Electronic Systems
9
where j = 1, 2, tcj are the time centers of the windows
and T is the length of the time window which is obtained
by minimizing (37). Next, an estimate for the change of θ
between tc1 and tc2 is made. We denote this estimate with θ̂.
The second signal ss2 is Fourier transformed with respect to
r to produce the signal Ss2 (kr , t) ≈ G2 (kx , ky ). Using (46),
we obtain
G1 (kx , ky ) = R(θ(tc1 ) − θ(tc2 ))G2 (kx , ky ),
(49)
where R(θ) denotes the operation which rotates a function by
the angle θ. The support of the range spatial frequency variable
is kr ∈ [−π/∆r, π/∆r], and the corresponding Cartesian
variable is approximated to be kx = 2kr cos θ(t) ≈ 2kr .
To rotate the image correctly by utilizing the properties of
the Fourier transform, the support of the ky variable has to
be known. Using the estimate θ̂, an estimate for the angular
spacing is obtained from

\widehat{Δθ} = \frac{|t_{c2} - t_{c1}|}{T}\, \hat{θ}.    (50)

Consequently, the support of the k_y variable is estimated as

k_y \in \left[ -\frac{2π\,\widehat{Δθ}}{λ},\; \frac{2π\,\widehat{Δθ}}{λ} \right].    (51)
Once the estimate \widehat{Δθ} is made, it is used in (51). After this, the Fourier transform Ss_2 = G_2 is rotated by the angle θ̂. A
highly efficient and accurate way to do this is to decompose
the rotation into a series of three shears, which can all be
implemented using one-dimensional operations. This idea has
been introduced in [53] and elaborated in detail in [54]. The rotated coordinates (k_x', k_y') are obtained using the sequence of transformations

\begin{bmatrix} k_x' \\ k_y' \end{bmatrix} =
\begin{bmatrix} 1 & -\tan\frac{\hat{θ}}{2} \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 \\ \sin\hat{θ} & 1 \end{bmatrix}
\begin{bmatrix} 1 & -\tan\frac{\hat{θ}}{2} \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} k_x \\ k_y \end{bmatrix}.    (52)
The inverse Fourier transform of the rotated spectrum G_2(k_x', k_y') = R(θ̂) G_2(k_x, k_y) produces the rotated and scaled image as

g_2(θ̂, x, y) = \mathcal{F}^{-1}_{k_x \to x}\left\{ \mathcal{F}^{-1}_{k_y \to y}\{ G_2(k_x', k_y') \} \right\}.    (53)
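A sketch of the three-shear rotation follows, with each shear realized as FFT-based fractional shifts of rows or columns; the grid-center convention (index N/2) is an assumption:

```python
import numpy as np

def _shear_x(img, a):
    """Shift row j horizontally by a*(j - N/2) using FFT phase ramps
    (an exact fractional shift for periodic, band-limited data)."""
    N, M = img.shape
    kx = np.fft.fftfreq(M)
    shifts = a * (np.arange(N) - N / 2.0)
    F = np.fft.fft(img, axis=1)
    ramp = np.exp(-2j * np.pi * kx[None, :] * shifts[:, None])
    return np.fft.ifft(F * ramp, axis=1)

def _shear_y(img, b):
    # a vertical shear is a horizontal shear of the transposed image
    return _shear_x(img.T, b).T

def rotate_three_shears(img, theta):
    """Rotate a complex image by theta via the decomposition (52):
    R(theta) = Sx(-tan(theta/2)) Sy(sin theta) Sx(-tan(theta/2)),
    each shear implemented with one-dimensional operations only."""
    t = -np.tan(theta / 2.0)
    s = np.sin(theta)
    return _shear_x(_shear_y(_shear_x(img, t), s), t)
```

Because every step is a set of one-dimensional FFTs, the cost is O(N^2 log N) with no two-dimensional resampling, which is the attraction of the shear decomposition.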
The loss function to be minimized is the negative intensity correlation between g_2 and g_1, defined as

L_{CR}(θ̂) = -\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} |g_1(x, y)|^2\, |g_2(θ̂, x, y)|^2\, dx\, dy,    (54)

where g_1(x, y) = \mathcal{F}^{-1}_{k_x \to x}\{ \mathcal{F}^{-1}_{k_y \to y}\{ G_1(k_x, k_y) \} \} and G_1 = Ss_1. Once θ^⋆ = \arg\min_{θ̂} L_{CR}(θ̂) is obtained, the angular spacing can be deduced using (50), and consequently, the cross-range support is obtained using (47).
The above formulation is valid when the sub-images g1
and g2 are made from relatively short slow-time intervals
during which the aspect angle θ changes linearly. This method
thus only applies if the change in θ is quasi-linear. The
advantage of our formulation compared to the polar mapping
method [43] is that neither a two-dimensional resampling nor a complex iterative procedure is needed. The process that
we describe above thus combines the desirable qualities of
the rotation correlation [42] and polar mapping [43] methods
and overcomes some of their individual shortcomings and
limitations.
E. Time-frequency imaging approach
The final stage in our ISAR imaging algorithm is modifying
the ISAR image resulting from the range alignment, time
windowing, keystone formatting, and autofocus procedures.
This is achieved by replacing the Fourier transform with a
high-resolution time-frequency representation. The Cohen’s
class is a collection of bilinear time-frequency representations,
which are defined as a two-dimensional convolution between
a low-pass kernel function and the Wigner-Ville distribution
of the signal [55]. An equivalent definition can also be made
in terms of the ambiguity function, in which case the two-dimensional Fourier transform of the representation belonging
to the Cohen’s class is obtained by multiplying the ambiguity
function with a suitable kernel function.
Several different time-frequency representations have been
successfully used in the ISAR image reconstruction [27], [29]–
[33], [36], [37], [56]. Our choice is again motivated by simplicity and computational efficiency. Choosing the optimal kernel
function for the time-frequency representation is not trivial,
and in this case depends on the motion of the noncooperative
object. According to our principles, an optimization procedure
should be used to determine the parameters of the optimal
kernel function, which results in the quantitatively best possible image. The quantitative measure describing the quality
of the result is again chosen to be the contrast of the intensity-normalized ISAR image. We have previously introduced this
methodology in [50].
The above reasoning leads us to use the S-method [57]
to produce the final ISAR image. The S-method was first
demonstrated in ISAR in [37], where it was shown to surpass
the conventional Fourier transform-based image reconstruction
by improving blurred and distorted ISAR images due to
various target motions. Importantly, the advantages of the S-method-based approach include, e.g., the compensation of high-order phase terms due to both rotational and translational motions, and computational efficiency.
The choice is simple and efficient because, by utilizing the S-method, we can start with the range-Doppler image sS, which is then used to calculate the S-method time slice

SM(r, ω) = \int_{-\infty}^{\infty} P(ν)\, sS_{AF}(r, ω + ν)\, sS_{AF}^{*}(r, ω - ν)\, dν.    (55)
The simplest choice is to use P (ω) = Π (ω/Ω), which means
that the kernel of the S-method is described by only one
parameter, the length of the integration window in (55). This
choice results in
SM(Ω; r, ω) = \int_{-Ω/2}^{Ω/2} sS_{AF}(r, ω + ν)\, sS_{AF}^{*}(r, ω - ν)\, dν.    (56)
To deal with the spatially variant defocusing caused by the
rotational motion, we propose using a contrast optimization
procedure to determine an optimal range-dependent integration
window for the S-method. This means that Ω = Ω(r) is not a
constant, but different for each range resolution cell. For this
purpose, the loss function to be minimized for range cell r is
L_{SM}(Ω(r); r) = \int_{-\infty}^{\infty} ψ(\widehat{SM}(Ω(r); r, ω))\, dω,    (57)

where

\widehat{SM}(Ω(r); r, ω) = \frac{|SM(Ω(r); r, ω)|}{\int_{-\infty}^{\infty} |SM(Ω(r); r, ω')|\, dω'}    (58)

is the intensity-normalized cross-range profile at range r. The
optimal range-dependent integration window Ω(r) is obtained
by minimizing (57) for each range cell r.
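A discrete sketch of (56)-(58) for one range line follows; the entropy kernel for ψ matches the choice reported in the experiments of Section IV, while the window search range and the circular edge handling are assumptions:

```python
import numpy as np

def s_method(sS_row, L):
    """Discrete S-method (56) for one range line:
    SM[w] = |sS[w]|^2 + 2 Re sum_{l=1}^{L} sS[w+l] sS*[w-l].
    Circular indexing at the spectrum edges is an implementation
    convenience, not part of the definition."""
    SM = np.abs(sS_row) ** 2
    for l in range(1, L + 1):
        SM = SM + 2.0 * np.real(np.roll(sS_row, -l) * np.conj(np.roll(sS_row, l)))
    return SM

def profile_entropy(p):
    """Entropy of a normalized nonnegative profile, cf. (57)-(58)."""
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def optimal_window(sS_row, L_max=10):
    """Range-dependent window: pick the L minimizing the entropy of the
    normalized magnitude cross-range profile (58)."""
    losses = [profile_entropy(np.abs(s_method(sS_row, L)))
              for L in range(L_max + 1)]
    return int(np.argmin(losses))
```

Since L is a small integer searched per range cell, the amount of computation is modest, in line with the observation below about ω being a sampled variable.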
The contrast optimization approach is well-suited for determining the optimal kernel function (in this case Ω) for two
reasons. First, if the S-method is able to enhance the resolution
of the image in the cross-range direction, then the contrast of
the image increases. As the window Ω starts to lengthen, cross-terms start to appear between the scatterers, which blur the
ISAR image and decrease its contrast. Thus, minimizing LSM
will result in a compromise between enhanced resolution and
cross-terms. Second, the optimal length for Ω will be relatively
short when there are strong scattering centers nearby. Since
ω is in reality a sampled variable, a very small amount of
computation is thus needed for minimizing LSM .
We note that the slow-time window optimization procedure
presented in Section III-B essentially performs part of
determining the kernel function of the time-frequency representation. When using the S-method, only a single time sample
(ISAR image frame) of the time-frequency representation can
be calculated very efficiently using (56). A further procedure
which can be used to enhance the contrast of the ISAR image
is the time-frequency reassignment method [58]. We have used
this procedure in a previous version of our algorithm [50], and
its use in ISAR has been demonstrated with experimental data
in [59], [60]. However, the procedure we use in this paper
automatically locates a slow-time interval during which the
motion of the object is as smooth as possible. So, while the
reassignment method possibly increases the contrast slightly,
the increase in image quality does not warrant the extra computation required to calculate the reassigned image function.
Furthermore, if the reassignment operation is performed, it
can be done only in the frequency direction ω, since the full
time-frequency representation is not calculated in our ISAR
algorithm.
Fig. 2 presents a summary of the ISAR processing. It
also briefly describes the proposed improvements, which are
decomposed into several successive steps for clarity.
IV. N UMERICAL RESULTS
A. Experimental setup
We demonstrate the ISAR algorithm presented in Fig. 2
with a numerical example. The data we use were obtained
using a turntable and a radar system with a stepped-frequency
waveform with horizontal polarization. For benchmarking our
algorithm, we produced a noncooperative scenario with objects
having demanding and unknown motion states. We induced artificial
translational motions in the turntable data, after which we
resampled the data in the slow-time direction to produce a
nonlinear change in the aspect angle θ. The purpose of this
experiment is to test challenging motion compensation scenarios and demonstrate the performance of our ISAR algorithm
using real data while knowing the ground truth of the motions.
Then we are able to compare the obtained results with the
ground truth.
The measurements were conducted at Ylöjärvi, Finland, at
an open space measurement range, approximately 240 meters
away from the target and at an elevation angle of 0 degrees.
The hardware used in the measurement campaign included a
radar manufactured by System Planning Corporation, USA,
and a turntable manufactured by Dynaset Oy, Finland. We
used five measurements of different objects. The measured
vehicles can be seen in Fig. 3. The length of the slow-time window for the range-compressed signal prior to ISAR
processing was chosen to be 1000 samples corresponding to
an integration time of approximately 30 seconds. The burst
repetition frequency of the radar was approximately 35 Hz
and the objects rotated approximately 60 degrees during the
chosen slow-time interval. The frequency bandwidth of the
waveform was B = 1 GHz producing a range resolution of
δr = 0.15 m, while the carrier frequency was set to fc = 8.5
GHz. However, for our noncooperative scenario, the data were resampled so that they follow the artificial motions with
an adequate pulse repetition frequency avoiding cross-range
ambiguities.
For each vehicle, four different trajectories were emulated.
The range trajectories were polynomials of randomly chosen
order (between second and fifth) with random uniformly
distributed coefficients. The range trajectories were normalized
to make the difference between the maximum and minimum
values of r0 slightly smaller than the unambiguous range
window of 70 meters. The angular trajectories were emulated
similarly, the only difference being that the polynomials were
chosen to be monotonically increasing. As an example, both
the translational and rotational motions emulated for the fifth
vehicle are shown in Fig. 4. Altogether we have 20 different
target trajectories for the subsequently considered tests.
B. Results
Referring back to Fig. 2, our ISAR algorithm starts with
the range alignment procedure. We used the steepest descent
method with a fifth order Legendre polynomial parametrization and a line search procedure to solve it. The progress
of the numerical optimization algorithm is demonstrated for
the mean squared envelope difference loss function in Fig. 5,
which shows the loss values and gradient norms after each
iteration.
The quality of the estimation result can be analyzed by
calculating the standard deviation of the residual error r_e(t) = r_0(t) - \hat{r}_0(t) as

σ_e = \sqrt{ \left\langle \left( r_e(t) - \langle r_e(t) \rangle \right)^2 \right\rangle }.
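As a sketch, σ_e can be computed directly from the true and estimated range trajectories:

```python
import numpy as np

def residual_std(r0, r0_hat):
    """Standard deviation sigma_e of the residual range error
    r_e(t) = r0(t) - r0_hat(t); for good range alignment this should
    stay below half the range resolution delta_r."""
    re = np.asarray(r0, dtype=float) - np.asarray(r0_hat, dtype=float)
    return float(np.sqrt(np.mean((re - re.mean()) ** 2)))
```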
TABLE I: Mean residual errors σ_e (in meters) for the different initial guesses and loss functions. The loss functions L1 and L2 are the benchmarks, while L1 + L2 and L1 L2 are the proposed new loss functions.

  Loss function / Initial guess    σ_e
  Initial guess (14)               0.5313
  Initial guess (15)               0.3044
  L1                               0.1674
  L2                               0.1512
  L1 + L2                         0.1076
  L1 L2                           0.0979
Ideally, this value should be smaller than half the range
resolution δr. We compared the performance of two different
initial guesses and four different loss functions in terms of
the residual errors. The initial guesses we compared were the
center of mass (14) and the maximum correlation (15), and
the loss functions we considered were the entropy of the sum
envelope (L1 ), the mean squared envelope difference (L2 ),
and the two combinations which are the sum L1 + L2 and the
product L1 L2 of these two. The results are presented in Table
I. The maximum correlation method was more accurate as the
initial guess and the product LRA = L1 L2 was slightly more
accurate than the sum, although the difference between them
was only one centimeter. As seen from the results in Table I,
the proposed loss functions surpassed the benchmarks by 35
percent in terms of the residual estimation error. This validates
the partially heuristic justification made in Section III-A,
where we introduced these new loss functions. The intensity
of the range-aligned signal for one trajectory realization after
using the obtained estimate to align the range profiles is shown
in Fig. 6 (b).
After the range alignment, the optimal slow-time window
was located by minimizing LT W . In this part of our ISAR
algorithm, it is of interest to compare the result with the maximum contrast time window optimization procedure in [44].
In [44], an image contrast-based autofocusing—essentially the
same algorithm as COA—is performed prior to evaluating the
loss function of negative image contrast. The most important
factor in this comparison is the motion compensation included
in the loss evaluation. To ensure that the results of different
motion compensation strategies were comparable, we used the
same optimization algorithm for all of them. The optimization
algorithm we used was a simple coordinate descent search,
where one-dimensional line searches are performed successively in different coordinate directions. This optimization
algorithm is slightly more computationally intensive than the
one used in [44] but it also avoids the need for choosing
the threshold value included in [44], to which the algorithm
is sensitive and which can significantly affect the length
of the time-window. The results for four different motion
compensation strategies in the time-window optimization are
shown in Table II. As seen from these results, combining the PGA with the keystone formatting produces the best image contrast value: the inclusion of the keystone formatting increased the contrast by 6–10 percent compared to both benchmarks, COA and PGA. Additionally, the proposed strategy of using
TABLE II: Mean values for the optimal window lengths T⋆, loss values, and relative computational speeds for different motion compensation strategies in the time-window optimization. The rows COA and PGA are the benchmarks.

  Motion compensation          T⋆    Image contrast   Relative speed
  COA                          336   77.57            1
  COA + Keystone formatting    352   82.55            0.79
  PGA                          320   90.30            7.5
  PGA + Keystone formatting    363   99.89            4.8
TABLE III: The image contrasts for three different cases:
the range-Doppler image without the S-method modification
(L = 0), the S-method image with a fixed window length L
(benchmark), and the S-method image with a range dependent
window length L(r).
S-method window
L=0
Fixed L
Range-dependent L = L(r)
Image contrast
99.89
121.34
128.73
the PGA in conjunction with the keystone formatting was
almost five times faster than the benchmark based on COA
alone.
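The coordinate descent search used in the time-window optimization can be sketched as follows. This is a minimal illustration with a hypothetical quadratic loss `quad`; the bracketing interval, shrink factor, and iteration counts are assumptions, not the exact settings used in the experiments.

```python
import numpy as np

def coordinate_descent(loss, x0, step=1.0, n_iter=10, n_line=10):
    """Minimize `loss` by successive one-dimensional line searches
    along the coordinate directions (a simple coordinate descent)."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iter):
        for d in range(x.size):
            # crude line search: sample the loss along coordinate d
            offsets = np.linspace(-step, step, n_line)
            candidates = [x.copy() for _ in offsets]
            for c, o in zip(candidates, offsets):
                c[d] += o
            values = [loss(c) for c in candidates]
            x = candidates[int(np.argmin(values))]
        step *= 0.5  # shrink the search interval after each sweep
    return x

# Example: minimize a smooth quadratic bowl with minimum at (1, -2)
quad = lambda x: float((x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2)
x_opt = coordinate_descent(quad, [0.0, 0.0])
```

As in the text, each iteration consists of successive line searches in the coordinate directions, capped at a fixed number of loss evaluations per search.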
After the optimal slow-time window was located, the signal
was autofocused using PGA, keystone formatted, and autofocused using COA. In our example, the COA consisted of a
single iteration of (43). A single iteration sufficed because the
residual phase errors after performing PGA were very small
in magnitude. After these procedures, the S-method kernel
optimization and cross-range scaling procedures followed. The
optimal integration window for the S-method was obtained by
minimizing the entropy of the S-method cross-range profiles.
The entropy of the cross-range profile containing the highest
total energy for the first vehicle and trajectory realization is
shown in Fig. 7 (a). In Table III, the resulting image contrasts
for a fixed integration window L and a range-dependent
L are presented alongside that of the range-Doppler
(Fourier-transform-based) image (L = 0). As seen in Table III,
our modification increased the image contrast by six percent
compared to a fixed L.
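The per-range-bin kernel selection can be sketched for a single range bin as follows, assuming a simplified S-method applied directly to the Doppler spectrum; the chirp test signal and `L_max` are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def s_method(X, L):
    """S-method profile from a Doppler spectrum X of one range bin:
    SM(k) = |X(k)|^2 + 2 Re sum_{l=1..L} X(k+l) X*(k-l)."""
    sm = np.abs(X) ** 2
    for l in range(1, L + 1):
        term = np.roll(X, -l) * np.conj(np.roll(X, l))
        sm = sm + 2.0 * term.real
    return np.maximum(sm, 0.0)  # clip small negative cross terms

def entropy(p):
    """Entropy of a profile treated as a normalized distribution."""
    p = p / (p.sum() + 1e-12)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def best_window(X, L_max=8):
    """Window length L minimizing the entropy of the S-method
    cross-range profile of this range bin."""
    losses = [entropy(s_method(X, L)) for L in range(L_max + 1)]
    return int(np.argmin(losses))

# Example: a linear chirp in slow-time; the S-method concentrates
# its spread Doppler spectrum, lowering the entropy.
N = 64
n = np.arange(N)
X = np.fft.fft(np.exp(1j * np.pi * 0.5 * n ** 2 / N))
L_opt = best_window(X, L_max=N // 2)
```

Repeating `best_window` per range bin yields the range-dependent L(r) discussed above.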
The cross-range scaling problem was solved by dividing
the optimal slow-time window into two non-overlapping parts.
Then the second part was rotated in the spatial frequency
domain, and the rotational angle which minimizes the intensity
correlation loss function (54) (shown in Fig. 7 (b)) was used
to evaluate the cross-range scale of the ISAR image. This
concluded our ISAR algorithm, and the results obtained for
each vehicle and one of the trajectories are shown in Fig. 3
on homogeneous two-dimensional Cartesian coordinate grids
in the spatial domain.
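The rotation correlation idea can be sketched as follows. As assumptions, a nearest-neighbor image rotation stands in for the spatial-frequency-domain rotation, and a negative normalized correlation stands in for the exact intensity correlation loss (54).

```python
import numpy as np

def rotate_nn(img, theta):
    """Rotate an image about its center by theta (radians) using
    nearest-neighbor sampling (a stand-in for the frequency-domain
    rotation used in the paper)."""
    n, m = img.shape
    yc, xc = (n - 1) / 2.0, (m - 1) / 2.0
    y, x = np.mgrid[0:n, 0:m]
    c, s = np.cos(theta), np.sin(theta)
    ys = yc + c * (y - yc) - s * (x - xc)  # inverse-rotated sample grid
    xs = xc + s * (y - yc) + c * (x - xc)
    yi = np.clip(np.round(ys).astype(int), 0, n - 1)
    xi = np.clip(np.round(xs).astype(int), 0, m - 1)
    out = img[yi, xi]
    out[(ys < 0) | (ys > n - 1) | (xs < 0) | (xs > m - 1)] = 0.0
    return out

def rotation_angle(img_a, img_b, angles):
    """Angle best aligning two sub-aperture intensity images:
    minimize the negative normalized correlation (a stand-in for
    the intensity correlation loss)."""
    a = img_a - img_a.mean()
    best, best_loss = None, np.inf
    for th in angles:
        b = rotate_nn(img_b, th)
        b = b - b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
        loss = -float((a * b).sum() / denom)
        if loss < best_loss:
            best, best_loss = th, loss
    return best

# Demo: a blob image and a copy rotated by 0.3 rad
y, x = np.mgrid[0:64, 0:64]
blob = np.exp(-((y - 10.0) ** 2 + (x - 40.0) ** 2) / (2 * 3.0 ** 2))
blob_rot = rotate_nn(blob, 0.3)
angles = np.linspace(-0.5, 0.5, 21)
ang = rotation_angle(blob, blob_rot, angles)
```

The recovered angle between the two sub-aperture images, divided by the sub-aperture time separation, gives the rotation rate needed for the cross-range scale.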
For reference, we implemented an ISAR algorithm using
the same well-established optimization steps that we use but
without the modifications proposed in this paper. The results for
the first vehicle and trajectory realization without and with our
modifications are shown in Figs. 8 (a) and (b), respectively.
For both images, we calculated a widely used measure
of image contrast: the ratio between the standard
deviation and the mean of the image intensities. As can
be seen, the use of the proposed improvements produced
an ISAR image with a notable 50 percent higher contrast
value than that in the reference for this trajectory realization.
Taking into account all the 20 different use-cases, the image
contrast increased by 28 percent on average. The blurring is
also less spatially variant. It should be noted that the result
in Fig. 8 (b) represents one of the best outcomes of the
performed experiments. When the motion of the object was
significantly simpler due to the randomly emulated trajectory,
the proposed optimization steps were naturally not able to
increase the image contrast as much as above. Thus, the
advantages of the proposed improvements are the greatest
when the noncooperative object exhibits changing translational
and rotational motions during the ISAR data acquisition.
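The quality measure used above, the ratio of the standard deviation to the mean of the image intensities, can be computed as follows; the test images are illustrative.

```python
import numpy as np

def image_contrast(image):
    """Image contrast: ratio of the standard deviation to the mean
    of the image intensities (|image|^2 for a complex ISAR image)."""
    intensity = np.abs(image) ** 2
    return float(intensity.std() / intensity.mean())

# A more concentrated (better focused) image has higher contrast:
flat = np.ones((64, 64), dtype=complex)       # uniform image
sharp = np.zeros((64, 64), dtype=complex)     # single dominant scatterer
sharp[32, 32] = 64.0
```

Maximizing this ratio is exactly the criterion behind the contrast-based optimization steps of the algorithm.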
The results of well-focused ISAR imaging in Fig. 3 contain
plenty of useful information for noncooperative target recognition (NCTR) or automatic target recognition (ATR) purposes.
Importantly, the size, the shape, and the dominant scatterers of
the objects in Figs. 3 (f), (g), (i), and (j) are clearly distinguishable because of the high quality of the motion compensation
and cross-range scaling. A more difficult case is presented in
Fig. 3 (h), where the aspect from the front of the car makes
it more challenging to determine the shape or the size of the
object. Notice that the color scalings in Figs. 3 (f)–(j) are based
on the maximum image intensity. This makes some of their
features harder to interpret visually. In many current NCTR
and ATR approaches, major challenges are related to image
formation due to clutter and the unknown motions of the
imaged object. Another challenge is the problem of limited
training data concerning all the necessary object classes, radar
parameters, and aspect angles. Overcoming these challenges
is the subject of current NCTR/ATR research but is out of the
scope of this paper.
C. Computational load analysis
To consider the computational load of our ISAR algorithm,
we evaluated the computational cost of each stage of the
algorithm using the analysis and expressions we derived for
the different loss functions and their derivatives. The order
of magnitude and relative costs between the different parts of
the algorithm are of interest rather than the exact number of
arithmetic operations. Table IV shows order-of-magnitude
approximations for the
total number of arithmetic operations required for evaluating
each loss function or its derivative. In Table IV, M is the
number of samples in the slow-time direction, and N is the
number of samples in the range direction. In this analysis, we
have not taken into account the fact that the number of samples
M in the slow-time direction changes in different parts of the
algorithm. However, we assume that the order of magnitude of
the number of samples does not change after the time window
optimization procedure, in which case the values in Table IV
provide a useful comparison.
The computational costs are similar for all loss functions
except for the derivatives of L_AF, which are included in
G_m/H_mm. However, to calculate the total cost, we need to
take into account how many times each of the expressions
TABLE IV: The approximate number of arithmetic operations
required for evaluating each expression.

  Expression   | Operation count   | Relative cost (M = N)
  L_RA         | MN log N          | 0.03
  ∂L_RA/∂∆r    | MN log N          | 0.03
  L_TW         | MN log(M²N)       | 0.08
  L_AF         | MN log M          | 0.03
  G_m/H_mm     | MN(log M + M)     | 1
  L_CR         | MN log(N²M)       | 0.08
in Table IV is evaluated in our ISAR algorithm. Since COA
is performed after PGA to remove the residual phase errors,
G_m/H_mm is evaluated only once in our ISAR algorithm;
every other loss function and derivative in Table IV is used in
a local optimization algorithm, which includes a line search
procedure. Considering this and the numerical results of this
Section, we see that these loss functions need to be evaluated
far more often. Typically the algorithms are set to run a
maximum of ten iterations, and each iteration includes a line
search procedure with a maximum of ten evaluations of the
loss or gradient function. In conclusion, the computational
costs of the individual parts of the ISAR algorithm are actually
nearly identical.
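With M = N, the operation counts in Table IV (as interpreted here) can be compared numerically; constant factors are omitted, so the ratios agree with the tabulated relative costs only in order of magnitude. N = 512 is an illustrative size.

```python
import numpy as np

# Approximate operation counts with M = N (constants omitted)
N = 512
M = N
log2 = np.log2

cost_LRA = M * N * log2(N)           # range alignment loss
cost_LAF = M * N * log2(M)           # autofocus loss
cost_GH = M * N * (log2(M) + M)      # COA derivatives G_m/H_mm
cost_LTW = M * N * log2(M ** 2 * N)  # time-window loss
cost_LCR = M * N * log2(N ** 2 * M)  # cross-range scaling loss

# Costs relative to the most expensive expression, G_m/H_mm:
rel = {k: v / cost_GH for k, v in [
    ("L_RA", cost_LRA), ("L_AF", cost_LAF),
    ("L_TW", cost_LTW), ("L_CR", cost_LCR)]}
```

The dominance of G_m/H_mm, and the factor-of-three gap between the L_TW/L_CR and L_RA/L_AF rows, follow directly from the logarithmic terms.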
To illustrate the computational load of our ISAR algorithm,
the algorithms were implemented using MATLAB and tested
using a laptop with a 2.60 GHz dual-core processor and 8
GB random access memory. In the experiments performed in
this Section, our algorithm took about 25 seconds to complete
on average. The most time-consuming part was the time-window optimization procedure, which took approximately
half of the total computation time. The ISAR algorithm using
the same well-established optimization steps without our modifications averaged at 45 seconds. The most significant speedup was achieved in the range alignment phase of the ISAR
algorithm, where our modification increased the computational
speed by an order of magnitude. This can be attributed to the
fact that we use a carefully selected initial guess and first-order
optimization, whereas the original method [15] used black-box
numerical optimization. Regarding the time-window
optimization procedure, the computational speeds of different
motion compensation strategies were listed in Table II. These
results show that using PGA and keystone formatting rather
than COA was five times faster.
The comparison between the computation times should
be considered only indicative, since not all the optimization
steps in Fig. 2 have been considered together as a well-established approach before. Moreover, the exact implementations of the optimization algorithms are not fully described in
all the references, which presents a challenge for a meaningful
comparison of the overall ISAR algorithm. Also, as seen
from Table II, the motion compensation strategy used in the
time window optimization significantly affects the computation time (PGA was used in these experiments). However,
considering the results of the performed tests, the proposed
improvements resulted in increased computational efficiency
in the ISAR processing overall.
D. Summary of the results
The key results of this paper can be summarized as follows:
1) The computational cost of the global range alignment
method was reduced by an order of magnitude, and the
estimation performance was enhanced by 35 percent (see
Table I). These results were achieved by using an initial
guess for the range shifts (14), deriving an expression
for the gradient of the loss function (19) and using
combinations of the loss functions as in (22).
2) Including keystone formatting in the time-window optimization procedure increased the image contrast by 6–10
percent compared to the benchmarks. In addition, the proposed improvement performed the motion compensation
and loss evaluation five times faster than COA.
3) A new expression for the second order partial derivatives
of the loss function used in COA was derived in (38)–
(44). This expression provides a practical solution to the
long-standing contrast maximization problem typically
involving heuristic numerical optimization. In particular,
the solution not only reduces the computational complexity but also leads to a simple algorithmic implementation,
as discussed in Section III-C.
4) The rotation correlation method, which fundamentally
assumes a known center of target rotation, and the polar
mapping method, which relies on a two-dimensional
resampling operation, are typically used separately to
solve the cross-range scaling problem. However, since the
two methods rely on a cost function of equivalent nature,
see (54), combining them was empirically validated to
yield a low discrepancy of 4–8 percent in terms of the
true width of the imaged objects. It is noteworthy that
no knowledge of the target center was required, and only
computationally efficient one-dimensional interpolations
were used.
5) The proposed contrast optimization procedure, which
determines the optimal kernel for the time-frequency
representation per range bin (55)–(58), enhanced the
contrast of the ISAR image reconstruction by six percent
compared to the traditional S-method approach with
a fixed-length kernel. Evidently, the spatially variant
blurring caused by the nonuniform rotational motion of
the car was significantly reduced, as shown in Fig. 8.
In this way, our modifications to the optimization-based
data-driven motion compensation techniques were shown
to increase both computational efficiency and image quality.
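As a sketch of the keystone formatting step used throughout, the slow-time axis of each fast-frequency bin can be resampled by the factor f0/(f0 + f), which removes the frequency dependence of linear range migration. The carrier f0, frequency grid, and point-scatterer model below are illustrative assumptions.

```python
import numpy as np

def keystone(S, f0, freqs, t):
    """Keystone formatting: resample the slow-time axis of each
    fast-frequency bin f at t * f0 / (f0 + f) using linear
    interpolation of the real and imaginary parts."""
    out = np.empty_like(S)
    for i, f in enumerate(freqs):
        tau = (f0 / (f0 + f)) * t  # rescaled slow-time instants
        out[i] = (np.interp(tau, t, S[i].real)
                  + 1j * np.interp(tau, t, S[i].imag))
    return out

# Demo: a scatterer with constant radial velocity produces the
# phase -2*pi*(f0 + f)*alpha*t in each bin; after the keystone
# resampling, the phase becomes -2*pi*f0*alpha*tau for every bin.
f0 = 100.0
freqs = np.linspace(0.0, 50.0, 8)
t = np.linspace(0.0, 1.0, 256)
alpha = 0.05
S = np.exp(-2j * np.pi * np.outer(f0 + freqs, alpha * t))
K = keystone(S, f0, freqs, t)  # rows become (nearly) identical
```

After this rescaling, a Fourier transform over fast frequency no longer smears the scatterer across range cells as slow-time progresses.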
V. CONCLUSION
We have described in detail how optimization can be
used in every part of data-driven motion compensation and
ISAR image reconstruction. Firstly, the performance of the
global range alignment method was enhanced by combining
previously suggested loss functions and utilizing first-order
numerical optimization. After that, the time-window optimization process was enhanced by performing autofocus and
keystone formatting before evaluating the contrast of the ISAR
image. Then, the contrast optimization autofocus approach
was improved by deriving an expression for the second-order
partial derivatives of the loss function. The time-frequency
based imaging approach was also enhanced by choosing the
optimal kernel for the time-frequency representation based on
the image contrast. Finally, the cross-range scaling problem
was solved by combining the rotation correlation and polar
mapping methods. By combining all the proposed findings,
a computationally efficient ISAR algorithm was demonstrated
to improve the imaging performance under complicated target
motion dynamics. These optimization steps not only improve
ISAR imaging but may also be applicable, for example, to the
computed tomography (CT) widely used in the medical field.
In this sense, by viewing the above findings for radar in the
context of image processing, this work continues the trend that
began in the 1980s.
ACKNOWLEDGEMENT
The authors would like to thank the anonymous reviewers
for their helpful comments, the Finnish Scientific Advisory
Board for Defence (MATINE) for funding this study, and the
Finnish Defence Research Agency for providing the experimental data.
REFERENCES
[1] C. C. Chen and H. C. Andrews, “Target-motion induced radar imaging,”
IEEE Transactions on Aerospace and Electronic Systems, vol. AES-16,
no. 1, pp. 2–14, Jan. 1980.
[2] J. L. Walker, “Range-Doppler imaging of rotating objects,” IEEE Transactions on Aerospace and Electronic Systems, vol. AES-16, no. 1, pp.
23–52, Jan. 1980.
[3] D. A. Ausherman, A. Kozma, J. L. Walker, H. M. Jones, and E. C. Poggio, “Developments in radar imaging,” IEEE Transactions on Aerospace
and Electronic Systems, vol. AES-20, no. 4, pp. 363–400, Jul. 1984.
[4] R. K. Raney, “Synthetic aperture imaging radar and moving targets,”
IEEE Transactions on Aerospace and Electronic Systems, vol. AES-7,
no. 3, pp. 499–505, May 1971.
[5] S. Werness, W. Carrara, L. Joyce, and D. Franczak, “Moving target
imaging algorithm for SAR data,” IEEE Transactions on Aerospace and
Electronic Systems, vol. 26, no. 1, pp. 57–67, Jan. 1990.
[6] R. P. Perry, R. C. DiPietro, and R. L. Fante, “SAR imaging of moving
targets,” IEEE Transactions on Aerospace and Electronic Systems,
vol. 35, no. 1, pp. 188–200, Jan. 1999.
[7] J. R. Fienup, “Detecting moving targets in SAR imagery by focusing,”
IEEE Transactions on Aerospace and Electronic Systems, vol. 37, no. 3,
pp. 794–809, Jul. 2001.
[8] F. Zhou, R. Wu, M. Xing, and Z. Bao, “Approach for single channel
SAR ground moving target imaging and motion parameter estimation,”
IET Radar, Sonar and Navigation, vol. 1, no. 1, pp. 59–66, Feb. 2007.
[9] D. Kirkland, “Imaging moving targets using the second-order keystone
transform,” IET Radar, Sonar and Navigation, vol. 5, no. 8, pp. 902–910,
Oct. 2011.
[10] M. Martorella, E. Giusti, F. Berizzi, A. Bacci, and E. D. Mese, “ISAR
based techniques for refocusing non-cooperative targets in SAR images,”
IET Radar, Sonar and Navigation, vol. 6, no. 5, pp. 332–340, Jun. 2012.
[11] T. K. Sjögren, V. T. Vu, M. I. Pettersson, A. Gustavsson, and L. M. H.
Ulander, “Moving target relative speed estimation and refocusing in
synthetic aperture radar images,” IEEE Transactions on Aerospace and
Electronic Systems, vol. 48, no. 3, pp. 2426–2436, Jul. 2012.
[12] C. Noviello, G. Fornaro, and M. Martorella, “Focused SAR image
formation of moving targets based on Doppler parameters estimation,”
IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 6,
pp. 3460–3470, Jun. 2015.
[13] H. Wu, D. Grenier, G. Y. Delisle, and D.-G. Fang, “Translational motion
compensation in ISAR image processing,” IEEE Transactions on Image
Processing, vol. 4, no. 11, pp. 1561–1571, Nov. 1995.
[14] L. Xi, L. Guosui, and J. Ni, “Autofocusing of ISAR images based on
entropy minimization,” IEEE Transactions on Aerospace and Electronic
Systems, vol. 35, no. 4, pp. 1240–1252, Oct. 1999.
[15] J. Wang and D. Kasilingam, “Global range alignment for ISAR,” IEEE
Transactions on Aerospace and Electronic Systems, vol. 39, no. 1, pp.
351–357, Jan. 2003.
[16] J. Wang and X. Liu, “Improved global range alignment for ISAR,” IEEE
Transactions on Aerospace and Electronic Systems, vol. 43, no. 3, pp.
1070–1075, Jul. 2007.
[17] D. Zhu, L. Wang, Y. Yu, Q. Tao, and Z. Zhu, “Robust ISAR range
alignment via minimizing the entropy of the average range profile,” IEEE
Geoscience and Remote Sensing Letters, vol. 6, no. 2, pp. 204–208, Apr.
2009.
[18] V. J. van Rensburg, A. Mishra, and W. Nel, “Quality measures for
HRR alignment based ISAR imaging algorithms,” in 2013 IEEE Radar
Conference, May 2013, pp. 1–4.
[19] R. Vehmas, J. Jylhä, M. Väilä, and J. Kylmälä, “A computationally
feasible optimization approach to inverse SAR translational motion
compensation,” in Proceedings of the 12th European Radar Conference,
Sep. 2015, pp. 17–20.
[20] P. H. Eichel, D. C. Ghiglia, and C. V. Jakowatz, “Speckle processing
method for synthetic-aperture-radar phase correction,” Optics Letters,
vol. 14, no. 1, pp. 1–3, Jan. 1989.
[21] D. E. Wahl, P. H. Eichel, D. C. Ghiglia, and C. V. Jakowatz, “Phase
gradient autofocus - a robust tool for high resolution SAR phase correction,” IEEE Transactions on Aerospace and Electronic Systems, vol. 30,
no. 3, pp. 827–835, Jul. 1994.
[22] F. Berizzi and G. Corsini, “Autofocusing of inverse synthetic aperture radar images using contrast optimization,” IEEE Transactions on
Aerospace and Electronic Systems, vol. 32, no. 3, pp. 1185–1191, Jul.
1996.
[23] J. R. Fienup, “Synthetic aperture radar autofocus by maximizing sharpness,” Optics Letters, vol. 25, no. 4, pp. 221–223, Feb. 2000.
[24] J. R. Fienup and J. J. Miller, “Aberration correction by maximizing
generalized sharpness metrics,” The Journal of the Optical Society of
America A, vol. 20, no. 4, pp. 609–620, Apr. 2003.
[25] R. L. Morrison, M. N. Do, and D. C. Munson, “SAR image autofocus
by sharpness optimization: a theoretical study,” IEEE Transactions on
Image Processing, vol. 16, no. 9, pp. 2309–2321, Sep. 2007.
[26] M. Xing, R. Wu, J. Lan, and Z. Bao, “Migration through resolution cell
compensation in ISAR imaging,” IEEE Geoscience and Remote Sensing
Letters, vol. 1, no. 2, pp. 141–144, Apr. 2004.
[27] M. Xing, R. Wu, and Z. Bao, “High resolution ISAR imaging of
high speed moving targets,” IEE Proceedings on Radar, Sonar and
Navigation, vol. 152, no. 2, pp. 58–67, Apr. 2005.
[28] V. C. Chen, “Reconstruction of inverse synthetic aperture radar image
using adaptive time-frequency wavelet transform,” in SPIE’s 1995 Symposium on OE/Aerospace Sensing and Dual Use Photonics. International Society for Optics and Photonics, 1995, pp. 373–386.
[29] L. C. Trintinalia and H. Ling, “Joint time-frequency ISAR using adaptive
processing,” IEEE Transactions on Antennas and Propagation, vol. 45,
no. 2, pp. 221–227, Feb. 1997.
[30] V. C. Chen and S. Qian, “Joint time-frequency transform for radar range-Doppler imaging,” IEEE Transactions on Aerospace and Electronic
Systems, vol. 34, no. 2, pp. 486–499, Apr. 1998.
[31] Y. Wang, H. Ling, and V. C. Chen, “ISAR motion compensation
via adaptive joint time-frequency technique,” IEEE Transactions on
Aerospace and Electronic Systems, vol. 34, no. 2, pp. 670–677, Apr.
1998.
[32] F. Berizzi, E. D. Mese, M. Diani, and M. Martorella, “High-resolution
ISAR imaging of maneuvering targets by means of the range instantaneous Doppler technique: modeling and performance analysis,” IEEE
Transactions on Image Processing, vol. 10, no. 12, pp. 1880–1890, Dec.
2001.
[33] L. J. Stankovic, T. Thayaparan, M. Dakovic, and V. Popovic, “S-method
in radar imaging,” in Proceedings of the 14th European Signal Processing
Conference, Sep. 2006, pp. 1–5.
[34] V. C. Chen and H. Ling, Time-frequency transforms for radar imaging
and signal analysis. Artech House, 2002.
[35] I. Djurović, T. Thayaparan, and L. Stanković, “Adaptive local polynomial Fourier transform in ISAR,” EURASIP Journal on Applied Signal
Processing, vol. 2006, pp. 129–129, 2006.
[36] L. J. Stankovic, T. Thayaparan, V. Popovic, I. Djurovic, and M. Dakovic,
“Adaptive S-method for SAR/ISAR imaging,” EURASIP Journal on
Advances in Signal Processing, vol. 2008, pp. 1–10, 2008.
[37] T. Thayaparan, L. Stankovic, C. Wernik, and M. Dakovic, “Real-time
motion compensation, image formation and image enhancement of
moving targets in ISAR and SAR using S-method-based approach,” IET
Signal Processing, vol. 2, no. 3, pp. 247–264, 2008.
[38] L. Stanković, M. Daković, and T. Thayaparan, Time-frequency signal
analysis with applications. Artech House, 2014.
[39] M. Martorella, “Novel approach for ISAR image cross-range scaling,”
IEEE Transactions on Aerospace and Electronic Systems, vol. 44, no. 1,
pp. 281–294, Jan. 2008.
[40] F. Prodi, “ISAR cross-range scaling using a correlation based functional,” in IEEE RADAR 2008, May 2008, pp. 1–6.
[41] C.-M. Yeh, J. Yang, Y.-N. Peng, and X.-M. Shan, “Rotation estimation
for ISAR targets with a space-time analysis technique,” IEEE Geoscience and Remote Sensing Letters, vol. 8, no. 5, pp. 899–903, Sep.
2011.
[42] C.-M. Yeh, J. Xu, Y.-N. Peng, and X.-T. Wang, “Cross-range scaling
for ISAR based on image rotation correlation,” IEEE Geoscience and
Remote Sensing Letters, vol. 6, no. 3, pp. 597–601, Jul. 2009.
[43] S.-H. Park, H.-T. Kim, and K.-T. Kim, “Cross-range scaling algorithm
for ISAR images using 2-D Fourier transform and polar mapping,” IEEE
Transactions on Geoscience and Remote Sensing, vol. 49, no. 2, pp.
868–877, Feb. 2011.
[44] M. Martorella and F. Berizzi, “Time windowing for highly focused ISAR
image reconstruction,” IEEE Transactions on Aerospace and Electronic
Systems, vol. 41, no. 3, pp. 992–1007, Jul. 2005.
[45] M. A. Richards, J. A. Scheer, W. A. Holm, and W. L. Melvin, Principles
of Modern Radar: Basic Principles. SciTech Publishing, 2010.
[46] D. C. Munson, J. D. O’Brien, and W. K. Jenkins, “A tomographic
formulation of spotlight-mode synthetic aperture radar,” Proceedings of
the IEEE, vol. 71, no. 8, pp. 917–925, Aug. 1983.
[47] M. D. Desai and W. K. Jenkins, “Convolution backprojection image
reconstruction for spotlight mode synthetic aperture radar,” IEEE Transactions on Image Processing, vol. 1, no. 4, pp. 505–517, Oct. 1992.
[48] C. V. Jakowatz Jr., D. Wahl, P. Eichel, D. Ghiglia, and P. Thompson,
Spotlight-mode Synthetic Aperture Radar: A signal processing approach.
Boston, MA: Kluwer Academic Publishers, 1996.
[49] B. D. Steinberg, “Microwave imaging of aircraft,” Proceedings of the
IEEE, vol. 76, no. 12, pp. 1578–1592, Dec. 1988.
[50] R. Vehmas, J. Jylhä, M. Väilä, and A. Visa, “ISAR imaging of noncooperative objects with non-uniform rotational motion,” in 2016 IEEE
Radar Conference, May 2016, pp. 932–937.
[51] S. A. Fortune, “Phase error estimation for synthetic aperture imagery,”
Ph.D. dissertation, University of Canterbury, Christchurch, New Zealand,
2005.
[52] T. J. Schulz, “Optimal sharpness function for SAR autofocus,” IEEE
Signal Processing Letters, vol. 14, no. 1, pp. 27–30, Jan. 2007.
[53] A. W. Paeth, “A fast algorithm for general raster rotation,” in Graphics
interface ’86 - Vision interface ’86, May 1986, pp. 77–81.
[54] M. Unser, P. Thevenaz, and L. Yaroslavsky, “Convolution-based interpolation for fast, high-quality rotation of images,” IEEE Transactions
on Image Processing, vol. 4, no. 10, pp. 1371–1381, Oct. 1995.
[55] L. Cohen, Time-frequency analysis: theory and applications. NJ, USA:
Prentice-Hall, 1995.
[56] S. Stankovic, I. Orovic, and A. Krylov, “Two-dimensional Hermite S-method for high-resolution inverse synthetic aperture radar imaging
applications,” IET Signal Processing, vol. 4, no. 4, pp. 352–362, Aug.
2010.
[57] L. Stankovic, “A method for time-frequency analysis,” IEEE Transactions on Signal Processing, vol. 42, no. 1, pp. 225–229, Jan. 1994.
[58] F. Auger and P. Flandrin, “Improving the readability of time-frequency
and time-scale representations by the reassignment method,” IEEE
Transactions on Signal Processing, vol. 43, no. 5, pp. 1068–1089, May
1995.
[59] R. Wang and Y. C. Jiang, “ISAR ship imaging based on reassigned
smoothed pseudo Wigner-Ville distribution,” in 2010 International Conference on Multimedia Technology, Oct. 2010, pp. 1–3.
[60] P. Suresh, T. Thayaparan, and K. Venkataramaniah, “ISAR imaging
of moving targets based on reassigned smoothed pseudo Wigner-Ville
distribution,” in Proceedings of International Radar Symposium India,
IRSI-2011, 2011.
Risto Vehmas (S’17) is a Ph.D. student at the Tampere University of Technology (TUT) in Tampere,
Finland. He received his M.Sc. degree in theoretical
physics from the University of Oulu, Oulu, Finland,
in 2014. He has been working at the Department
of Signal Processing at TUT since 2015 on topics
related to synthetic aperture radar signal processing.
Juha Jylhä is a researcher and project manager in
industrial projects at Tampere University of Technology. He received his M.Sc. degree in electrical
engineering from Tampere University of Technology,
Tampere, Finland, in 2005. His current research
interests include system modeling, signal processing,
and artificial intelligence in remote sensing and
military applications. Jylhä is a candidate for a Ph.D.
in computing and electrical engineering at Tampere
University of Technology.
Ari Visa (M’79, SM’96) received the M.S. degree in computer science and technology from the
Linköping University of Technology, Linköping,
Sweden, in 1981, and the Doctor of Technology
degree in information science from Helsinki University of Technology (Aalto), Espoo, Finland, in
1990. He is also a Docent in image analysis at
Aalto University. Dr. Visa is currently a Professor
in Digital Signal Processing and the Vice Dean of
the Faculty of Computing and Electrical Engineering
at Tampere University of Technology.
From 1981 to 1983, he was with the Linköping University of Technology
as a Research and Teaching Assistant in the Computer Vision Laboratory. He
was involved in several image processing projects. From 1984 to 1985, he
was employed by the Finnish Research Centre, as a researcher in the field
of image analysis in paper quality control. From 1985 to 1988, he was a
senior member in technical staff in industrial applications of image analysis
at Jaakko Pöyry Corporation. From 1988 to 1993, he was a senior researcher
and a project manager in the field of image analysis with neural networks at
the Finnish Research Centre. In 1993, he joined the laboratory of Computer
and Information Science. From 1996 to 2000 he was professor in Computer
Science at Lappeenranta University of Technology. Currently, Dr. Visa is a
professor in Signal Processing at Tampere University of Technology. He is
co-author of more than 200 refereed international papers. He holds several
patents. He is active in many conference program committees and evaluates
research papers and proposals. Dr. Visa's current research interests are in multimedia, adaptive systems, wireless communications, distributed computing,
soft computing, computer vision, knowledge mining, and knowledge retrieval.
Dr. Visa is the former President and Vice President of the Pattern Recognition Society of Finland and the former chairman of IAPR Workgroup TC3
Machine Learning.
Minna Väilä is a researcher at the Tampere University of Technology in Finland. She received her
M.Sc. degree in signal processing from the Tampere University of Technology, Tampere, Finland, in
2010. She is currently a doctoral student in computing and electrical engineering at Tampere University
of Technology. Her research interests are in topics
relating to radar, including target recognition, radar
response simulation, and performance modeling.
Juho Vihonen (M’15) received his M.Sc. in automation engineering and D.Sc. (Tech.) in signal
processing from Tampere University of Technology
(TUT), Finland, in 2003 and 2009, respectively.
From 2001 to 2017, he was with the Department
of Signal Processing (DSP), TUT, where he held the
position of a Senior Research Fellow. Currently, he
is with Cargotec, where his responsibilities have focused on AI capabilities and analytics architectures
for digital services. Cargotec is a leading provider
of cargo and load handling solutions. His research
interests lie in the fields of machine learning and real-time processing of
multi-sensor signals for control of heavy-duty robotic manipulators.
Fig. 1: Illustration of a two-dimensional imaging geometry for
ISAR for a fixed instant of slow-time. The primed coordinate
system is rigidly attached to the object which moves in the
unprimed system. The coordinate systems are related by a shift
and a rotation about the axis perpendicular to the (x, y)-plane.
Fig. 2: State-of-the-art ISAR processing decomposed into algorithmic steps. The novel contributions proposed are indicated
below the steps. In a typical use-case, for example, the time window is kept fixed without optimization.
Fig. 3: The noncooperative objects used in the ISAR experiments (a)–(e) and the corresponding images produced by our ISAR algorithm (f)–(j).
(a) The emulated translational motions for the fifth vehicle.
(b) The emulated rotational motions for the fifth vehicle.
Fig. 4: The artificially induced translational (a) and rotational (b) motions of the car in the noncooperative scenario presented
as functions of slow-time t. Note the introduced accelerations and decelerations, which present a challenge for any data-driven
motion compensation.
Fig. 5: The loss values (left) and gradient norms (right) after each iteration of the range alignment problem. Note the steep reduction of the errors after just a few iterations, which contributes to the computational efficiency discussed earlier.
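The per-iteration behavior plotted in Fig. 5 can be reproduced in miniature. The sketch below is illustrative only, not the paper's exact loss or optimizer: it treats range alignment as minimization of the entropy of the mean range envelope over per-pulse fractional shifts, using finite-difference gradients with a backtracking step size, and records the loss value and gradient norm at every iteration. The names `shift_profile`, `mean_envelope_entropy`, and `align_by_descent` are hypothetical.

```python
import numpy as np

def shift_profile(profile, delta):
    """Fractional circular shift of one range profile via the Fourier
    shift theorem (linear phase ramp in the frequency domain)."""
    freqs = np.fft.fftfreq(profile.size)
    return np.fft.ifft(np.fft.fft(profile) * np.exp(-2j * np.pi * freqs * delta))

def mean_envelope_entropy(profiles, deltas):
    """Entropy of the mean range envelope after applying per-pulse shifts.
    A sharper (better aligned) envelope has lower entropy."""
    env = np.mean([np.abs(shift_profile(p, d))
                   for p, d in zip(profiles, deltas)], axis=0)
    q = env / env.sum()
    return float(-np.sum(q * np.log(q + 1e-12)))

def align_by_descent(profiles, iters=40, lr=20.0, eps=0.1):
    """Gradient descent on the per-pulse shifts, recording the loss and
    gradient norm at every iteration (the two curves of Fig. 5)."""
    m = len(profiles)
    deltas = np.zeros(m)
    losses, grad_norms = [], []
    for _ in range(iters):
        loss = mean_envelope_entropy(profiles, deltas)
        # finite-difference gradient with respect to each shift
        grad = np.array([(mean_envelope_entropy(
            profiles, deltas + eps * np.eye(m)[i]) - loss) / eps
            for i in range(m)])
        # backtracking: shrink the step until the loss actually decreases
        step = lr
        while step > 1e-6:
            cand = deltas - step * grad
            if mean_envelope_entropy(profiles, cand) < loss:
                deltas = cand
                break
            step *= 0.5
        losses.append(loss)
        grad_norms.append(float(np.linalg.norm(grad)))
    return deltas, losses, grad_norms
```

On synthetic misaligned Gaussian-shaped profiles this loss drops steeply within the first few iterations, mirroring the qualitative behavior in Fig. 5.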
(a) Before range alignment.
(b) After range alignment.
Fig. 6: The intensity of the range-compressed signal (a) before and (b) after range alignment. Without range alignment, the translational motion visible in (a) would impair the image reconstruction. Compensating for the translational motion yields the smooth, accurate result shown in (b), corresponding to the best entry in Table I.
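For intuition about the alignment step in Fig. 6, the classical baseline aligns each range profile to a reference by the integer range-bin shift that maximizes their circular cross-correlation. This envelope-correlation sketch is not the optimization-based method of the paper, and `range_align` is an illustrative name:

```python
import numpy as np

def range_align(profiles):
    """Align each range profile to the first one by the integer range-bin
    shift that maximizes the circular cross-correlation of the envelopes.
    Classical baseline for coarse translational motion compensation."""
    profiles = np.asarray(profiles)
    aligned = np.empty_like(profiles)
    aligned[0] = profiles[0]
    ref_fft = np.fft.fft(np.abs(profiles[0]))
    for i in range(1, profiles.shape[0]):
        env_fft = np.fft.fft(np.abs(profiles[i]))
        # circular cross-correlation via the FFT correlation theorem
        corr = np.fft.ifft(ref_fft * np.conj(env_fft)).real
        shift = int(np.argmax(corr))
        aligned[i] = np.roll(profiles[i], shift)
    return aligned
```

Integer-bin alignment of this kind is what straightens the tracks in Fig. 6(b); finer subbin residuals are then left to the subsequent autofocus stage.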
(a) The entropy of the S-method cross-range profile.
(b) The graph of the intensity correlation loss function (54).
Fig. 7: As (a) shows, the optimal length of the S-method window is four samples; hence only a small amount of computation is required to compensate for the blurring caused by the rotational motion. Recall also that the cross-range scale of the ISAR image is determined by the minimum of LCR in (b), which is easy to locate because the graph is smooth.
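The selection criterion of Fig. 7(a) can be sketched generically: evaluate the S-method for candidate window half-lengths L and keep the one minimizing the entropy of the resulting cross-range profile. The sketch below applies the standard S-method combination SM[f] = |F[f]|^2 + 2 Re sum_k F[f+k] F*[f-k] to a single spectrum column with circular indexing; the function names are illustrative, and negative cross-term values are clipped before the entropy is computed.

```python
import numpy as np

def profile_entropy(profile):
    """Shannon entropy of a nonnegative profile normalized to unit sum."""
    p = np.clip(profile, 0.0, None)
    p = p / p.sum()
    return float(-np.sum(p * np.log(p + 1e-12)))

def s_method_profile(F, L):
    """S-method combination of one spectrum column F:
    SM[f] = |F[f]|^2 + 2 Re sum_{k=1..L} F[f+k] conj(F[f-k]),
    with circular indexing. L = 0 is the plain periodogram; larger L
    concentrates chirp-like components toward the Wigner distribution."""
    sm = np.abs(F) ** 2
    for k in range(1, L + 1):
        sm += 2.0 * np.real(np.roll(F, -k) * np.conj(np.roll(F, k)))
    return sm

def select_window(F, max_L=8):
    """Pick the window half-length L that minimizes the entropy of the
    S-method cross-range profile (the criterion of Fig. 7a)."""
    ents = [profile_entropy(s_method_profile(F, L)) for L in range(max_L + 1)]
    return int(np.argmin(ents)), ents
```

For a single chirp-like component, some L > 0 sharpens the profile and lowers the entropy; for multicomponent signals, larger L eventually introduces cross terms, which is why a finite optimum such as the four samples of Fig. 7(a) emerges.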
(a) ISAR image obtained without our algorithmic improvements. (b) ISAR image obtained with our algorithmic improvements.
Fig. 8: The result of an ISAR algorithm without our improvements for the noncooperative object (the Ford Focus depicted in Fig. 3(a)) is shown in (a), while the result of our ISAR algorithm is shown in (b). Note the improved contrast value in (b).
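The contrast comparison in Fig. 8 relies on an image-contrast measure. A widely used definition in ISAR autofocus, which the paper may refine, is the ratio of the standard deviation of the image intensity to its mean; a minimal sketch under that assumption:

```python
import numpy as np

def image_contrast(image):
    """Image contrast as commonly defined for ISAR focus evaluation:
    standard deviation of the image intensity divided by its mean.
    A sharper, better-focused image yields a higher contrast value."""
    intensity = np.abs(image) ** 2
    mu = intensity.mean()
    return float(np.sqrt(np.mean((intensity - mu) ** 2)) / mu)
```

A well-focused image concentrates energy in few pixels and scores high; defocusing spreads the energy and drives the contrast toward zero, which is the sense in which the improved algorithm raises the value in Fig. 8(b).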