Integration of IR Sensor Clutter Rejection Techniques with
Pixel Cluster Frame Manipulation
Allison Floyd, Sabino Gadaleta, Dan Macumber, and Aubrey Poore
Numerica Corporation, P.O. Box 271246, Fort Collins, CO 80527
ABSTRACT
Track initiation in dense clutter can result in severe algorithm runtime performance degradation, particularly when using
advanced tracking algorithms such as the Multiple-Frame Assignment (MFA) tracker. This is due to the exponential
growth in the number of initiation hypotheses to be considered as the initiation window length increases. However,
longer track initiation windows produce significantly improved track association. In balancing the need for robust track
initiation with real-world runtime constraints, several possible approaches might be considered. This paper discusses
basic single and multiple-sensor infrared clutter rejection techniques, and then goes on to discuss integration of those
techniques with a full measurement preprocessing stage suitable for use with pixel cluster decomposition and group
tracking frameworks. Clutter rejection processing inherently overlaps the track initiation function; in both cases, candidate
measurement sequences (arcs) are developed that then undergo some form of batch estimation. In considering clutter
rejection at the same time as pixel processing, we note that uncertainty exists in the validity of the measurement (whether
or not the measurement is of a clutter point or a true target), in the measurement state (position and intensity), and in
the degree of resolution (whether a measurement represents one underlying object, or multiple). An integrated clutter
rejection and pixel processing subsystem must take into account all of these processes in generating an accurate sequence
of measurement frames, while minimizing the amount of unrejected clutter. We present a mechanism for combining
clutter rejection with focal plane processing, and provide simulation results showing the impact of clutter processing on
the runtime and tracking performance of a typical space-based infrared tracking system.
Keywords: Infrared Sensor Surveillance, Clutter Rejection, Pixel (Clump) Cluster Tracking, Pixel-Cluster Decomposition, Image Processing
1. INTRODUCTION
We consider the problem of geosynchronous IR satellites observing multiple ballistic missile launch events in dense clutter. Most of the techniques described here are extensible to LEO IR satellites performing below-the-horizon and Earth
limb observation; above-the-horizon viewing will tend not to exhibit the same dense clutter problem, and is therefore
not addressed in this discussion. The multitarget tracking software we utilize as a reference is Numerica’s Multi-Frame
Assignment (MFA) Tracker, 1 although the problem at hand will be present in any multiple frame or multiple hypothesis
tracking system. The MFA Tracker typically uses a six-frame window for track initiation on IR angle-only measurements. While this process does result in high association accuracy, even in the presence of multiple target tracks initiating
near-simultaneously, its runtime performance suffers in the presence of dense clutter. Accordingly, we have developed a
preprocessor system that merges elements of clutter rejection, pixel cluster decomposition, and group clustering to synthesize revised measurement frames which mitigate the runtime bottleneck of track initiation. This preprocessor is intended
to reside in the mission data processing sequence between signal processing and tracking; its input and output can both be
specified as measurement frames with auxiliary data (Figure 1). This processing reduces the number of measurements in
a frame that the tracking system must consider for both track initiation and association while running in a separate process
space, reducing the computational demands on the tracking system and improving runtime performance.
Further author information: (Send correspondence to A.F. or A.P.)
A.F.: E-mail: lafloyd@numerica.us, Telephone: (970) 419 8343 x15
A.P.: E-mail: abpoore@numerica.us, Telephone: (970) 419 8343 x20

[Figure 1: Mission Data Processing Architecture. The sensor image passes through image processing to produce a raw measurement frame; clutter rejection and cluster processing yield an edited measurement frame for the tracker, which feeds track states back to the preprocessor.]

Clutter rejection techniques can be loosely divided into kinematic and feature-aided components. Kinematic clutter rejection relies on the ability to identify dynamic characteristics of targets of interest (persistence, particular ranges of
velocity and acceleration) that are unlikely to be shared by clutter returns. These properties are dependent on the scene
phenomenology as well as the target type. The kinematic clutter rejection process is thus analogous to the gating process
within a target tracker; measurements which do not pass gating tests with any other measurement over a requisite number
of frames are assumed to be false alarms, i.e. clutter, and are removed from the output measurement frame. In order for
a preprocessor to be able to make use of the full range of gating tools, it must have the ability to accumulate frames over
time, and it must also receive feedback in the form of current track states.
Feature-aided clutter rejection incorporates a priori knowledge of expected target feature information; in this case, in-band infrared intensity information. For many systems, clutter returns that can be differentiated from target returns on the
basis of single- or multiple-frame intensity information will have been removed as part of signal processing. Accordingly,
we do not assume that clutter returns can be immediately removed on the basis of feature data. However, if a model can
be generated of the expected time variation in feature data for target returns, it is possible to utilize feature data as part of
a gating distance metric, thus further restricting the number of measurement arcs under consideration. Clutter rejection
and more generalized gating techniques are discussed in Section 3.
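As an illustration, a feature-augmented gating distance of the kind described above can be sketched as a kinematic Mahalanobis term plus a normalized intensity-residual term. The residuals, variances, threshold, and function names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gate_distance(kin_residual, kin_cov, intensity_residual, intensity_var):
    """Combined gating distance: kinematic Mahalanobis term plus a
    normalized intensity-residual term from the target feature model."""
    kin_residual = np.asarray(kin_residual, dtype=float)
    d2_kin = float(kin_residual @ np.linalg.solve(kin_cov, kin_residual))
    d2_feat = intensity_residual ** 2 / intensity_var
    return d2_kin + d2_feat

def passes_gate(d2, threshold):
    """A pairing passes when the combined distance falls below a
    chi-square threshold chosen for the total degrees of freedom."""
    return d2 <= threshold
```

With a two-dimensional angular residual and one intensity feature, the combined distance is chi-square with three degrees of freedom, so a 99% gate would use the corresponding chi-square quantile as the threshold.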
Basic point-to-point clutter rejection suffices when the clutter is sparse enough that the average clutter point does not
tend to gate with any other clutter point or true target, for a sufficient set of gates. Spatially correlated clutter resulting
from landmass edges, cloud formations, and the terminator may be intractable using these techniques, however. These
regions of dense clutter require a different approach. We apply our group cluster tracking technology 2 to the problem of
tracking temporally persistent, spatially correlated clutter regions. We discuss the application of group cluster tracking to
the clutter environment in Section 4.2.
The final component of our complexity reduction preprocessor is not itself aimed at clutter rejection, but rather at
merged measurement processing. The infrared tracking problem from geosynchronous altitude is predominantly a boost
phase tracking problem, and thus does not tend to have the same level of closely-spaced object challenge as the midcourse
tracking problem. However, due to the long range from target to sensor, it is still possible for one or more observers
of a scene to perceive multiple launch events as a single pixel cluster. While it is not necessary to address the pixel
cluster tracking problem at the same time as clutter rejection, the introduction of feedback data into the preprocessor as
part of the clutter rejection process suggests that some additional time can be saved by simultaneously addressing the
track-to-measurement multiassignment problem and applying pixel-cluster decomposition to those measurements which
have been identified as candidate CSOs. In this fashion, the preprocessor handles all of the mechanics required to take a
raw measurement frame and convert it to a reduced-clutter frame in which the remaining clutter measurements have been
clustered to reduce processor load, and the hypothesized target measurements have been reassessed to minimize CSO
conditions. We discuss the implications of pixel cluster decomposition on the GEO IR tracking problem in Section 4.1.
2. MATHEMATICAL PROBLEM FORMULATION
Before describing the preprocessing algorithms, we discuss the underlying modeling assumptions for clutter and target
characteristics. For this section, we adopt the following notation:
u, v: horizontal and vertical axes in the focal plane coordinate system
ρ: distance from a point target location to the current focal plane
I(ρ): intensity (W/cm²) at a specified distance from a point target location
σ_psf: radial (two-dimensional) standard deviation of the optical point spread function (PSF)
I_0: radiant intensity of the point target at the aperture (W/cm²)
c_i: random clutter point i
m_ij: Markov clutter point i at time j
2.1. Clutter Model
Scene clutter for an EO sensor can be modeled as a superposition of a random component, a spatially uncorrelated
persistent (Markov) component, and a spatially correlated (specular reflection point and edge effects) component. The first
component, if not dense compared to the average true target frame to frame motion, can be readily addressed via clutter
rejection processing. The second component may be addressed via clutter rejection if sufficient data exists to differentiate
target motion from stationary clutter. The third component, typically, must be handled separately (see Section 2.2). We
describe the random and Markov clutter components as follows:
  c_i ∈ U(< [−π/2, π/2], [0, 2π] >)
  m_ij = m_i + ν_j,   m_i ∈ U(< [−π/2, π/2], [0, 2π] >),   ν_j ∈ N(0, R)   (1)
for a number of random and Markov clutter points i determined by a clutter density estimate, and Markov time sequence j
determined by the Markov transition probability for persistent clutter. Both the random and the Markov clutter components
are assumed to be uniformly distributed in latitude and longitude at a reference altitude (U(< [−π/2, π/2], [0, 2π] >)); the
Markov clutter, rather than being exactly fixed with respect to time, is normally distributed with variance corresponding
to the measurement noise R (N (0, R)).
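The clutter model of Equation (1) can be sampled with a short sketch like the following. The seed, counts, and function names are illustrative, and the Markov transition (birth/death) probability is omitted for brevity: every Markov track persists for all frames.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def sample_frames(n_random, n_markov, n_frames, noise_std):
    """Per Eq. (1): random clutter is redrawn uniformly in (lat, lon) each
    frame; Markov clutter keeps a fixed uniform base point per track and
    adds zero-mean Gaussian measurement noise each frame."""
    def uniform_points(n):
        lat = rng.uniform(-np.pi / 2, np.pi / 2, n)
        lon = rng.uniform(0.0, 2.0 * np.pi, n)
        return np.column_stack([lat, lon])

    random_frames = [uniform_points(n_random) for _ in range(n_frames)]
    base = uniform_points(n_markov)  # m_i, fixed over time
    markov_frames = [base + rng.normal(0.0, noise_std, base.shape)
                     for _ in range(n_frames)]
    return random_frames, markov_frames
```

The distinction matters downstream: the random frames decorrelate completely from frame to frame, while the Markov frames stay within the measurement noise of their base points and so survive pair gating.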
2.2. Group Cluster Model
While the group clustering techniques were originally designed to address dense target environments, they can also be
applied to spatially correlated persistent clutter. We expect spatially correlated clutter to fall into two general categories:
clutter present in a Gaussian distribution around a central point (e.g., the specular reflection point), and clutter present in
a Gaussian distribution around some edge phenomenon. We assume that these edges can be described as one or more
quadrics of the form
  a_1 u² + a_2 uv + a_3 v² + b_1 u + b_2 v + c = 0

or, in vector notation,

  xᵀ A x + bᵀ x + c = 0,   x = [u, v]ᵀ,   (2)

expressed in focal plane coordinates.
2.3. CSO Model
The unresolved CSO problems addressed in Section 1 are the result of finite focal plane resolution, imaging blur due to
optics and jitter, atmospheric turbulence blur, and thermal, shot, and background noise. In general, the optical point spread
function (PSF) is not Gaussian; however, for many applications a Gaussian approximation suffices.
The two-dimensional Gaussian approximation to the optical PSF is then given by

  I(ρ) = I_0 / (2π σ_psf²) · exp( −ρ² / (2 σ_psf²) )   (3)
The σ_psf resulting from optical blur is typically measured experimentally for a given telescope. We obtain pixel intensity values by integrating Equation (3) over the pixel dimensions. If the target is δ_x and δ_y from the edge of the pixel, then for i = 1 : n:

  X(i) = erf( (i − (n+1)/2 − δ_x + 0.5) / σ_x ) − erf( (i − (n+1)/2 − δ_x − 0.5) / σ_x )
  Y(i) = erf( (i − (n+1)/2 − δ_y + 0.5) / σ_y ) − erf( (i − (n+1)/2 − δ_y − 0.5) / σ_y )   (4)

Given a target point intensity at the aperture of I_0, the blur submatrix is then

  I_blurred(i, j) = X(i) · Y(j) · I_0   (5)
The individual submatrices for each point target are accumulated into the final image sum. Multiple targets present within
a small region of space results in overlapping blur matrices, which can lead to merged measurements after signal and
image processing. 3
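A minimal sketch of the pixel integration in Equations (3)–(5), generating a blur submatrix for one point target, might look as follows. The explicit 0.5 and √2 factors normalize each 1-D profile so it integrates the unit Gaussian exactly; the paper's σ_x, σ_y may absorb such constants, so treat the scaling convention as an assumption.

```python
import numpy as np
from math import erf, sqrt

def blur_profile(n, delta, sigma):
    """Fraction of a unit 1-D Gaussian (std sigma, offset delta from the
    grid reference) falling in each of n pixels, via differences of erf."""
    out = np.empty(n)
    for i in range(1, n + 1):
        u = i - (n + 1) / 2.0 - delta  # pixel center relative to the target
        hi = erf((u + 0.5) / (sqrt(2.0) * sigma))
        lo = erf((u - 0.5) / (sqrt(2.0) * sigma))
        out[i - 1] = 0.5 * (hi - lo)
    return out

def blur_submatrix(n, dx, dy, sx, sy, i0):
    """Separable blur submatrix, as in Eq. (5): I(i, j) = X(i) * Y(j) * I0."""
    return np.outer(blur_profile(n, dx, sx), blur_profile(n, dy, sy)) * i0
```

Summing such submatrices for two nearby targets and re-detecting peaks is a quick way to reproduce the merged-measurement (CSO) condition described above.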
3. CLUTTER REJECTION APPROACH
The clutter rejection approach involves subjecting successive frames of data to a sequence of gates, which produce arc
suggestions to which various kinematic tests can be applied. Output frames contain arc suggestions which pass the tests.
3.1. Gating
Given a sequence of input frames, we utilize a gating strategy composed of several layers of gates applied in the order of
increasing processing time. The basic series involves a cell (or bin) gate, followed by a pair velocity gate, and concluding with a standard filter gate 4 ; we do not utilize higher-order measurement gates (three point gates) for this application
because the typical measurement noise from a GEO observer makes direction estimation from a small number of measurements fragile. We have developed a cell and pair gate for infrared observations that functions regardless of whether
the successive frames are from a single observer or from multiple observers, as described below.
3.2. Altitude Intersection Gate (Mono or Stereo View)
The gate developed in this section differs from previously developed gates in that it is effective for either stereo or mono view satellite data. This is possible because this gate transforms a measured line of sight vector ℓ̂_1 from the satellite at t_1 to the target at t_1 into a mean possible line of sight vector ℓ̂_2 from the satellite at t_2 to the target at t_1, as seen in Figure 2 (all vectors in this section are expressed in ECI coordinates; X_s1 and X_s2 denote the satellite positions at t_1 and t_2). In order to precisely define the vector ℓ̂_2, the position of the target must be known, which it is not, because the range r_1 to the target along ℓ̂_1 is unknown. However, this range is bounded by two cases: r_1,surf when the target is at its minimum possible altitude (geocentric radius r_surf) and r_1,alt when the target is at its maximum possible altitude (geocentric radius r_alt).
These two bounding ranges may be solved from the following equations

  |X_s1 + r_1,surf ℓ̂_1| = r_surf
  |X_s1 + r_1,alt ℓ̂_1| = r_alt

which, treated as quadratics in the range, have the solutions

  r_1,surf = −X_s1ᵀ ℓ̂_1 ± √( (X_s1ᵀ ℓ̂_1)² − X_s1ᵀ X_s1 + r_surf² )
  r_1,alt = −X_s1ᵀ ℓ̂_1 ± √( (X_s1ᵀ ℓ̂_1)² − X_s1ᵀ X_s1 + r_alt² )
When solving any quadratic equation there are some special cases which must be accounted for. First, each range equation has two solutions: one being the intersection of the line of sight vector with the desired Earth radius closest to the satellite, and the other being the more distant intersection. In this problem we select the closest intersection, which is written

  r_1,surf = −X_s1ᵀ ℓ̂_1 − √( (X_s1ᵀ ℓ̂_1)² − X_s1ᵀ X_s1 + r_surf² )   (6)

  r_1,alt = −X_s1ᵀ ℓ̂_1 − √( (X_s1ᵀ ℓ̂_1)² − X_s1ᵀ X_s1 + r_alt² )   (7)

[Figure 2: Intersection of the satellite line of sight vector with r_surf and r_alt, showing the bounding target positions X_s1 + r_1,min ℓ̂_1 and X_s1 + r_1,max ℓ̂_1 and the corresponding line of sight vectors ℓ̂_2,min and ℓ̂_2,max from the second satellite position X_s2.]
The second condition to be careful of when solving these equations is that the value under the square root may be negative, giving an imaginary solution for the range to the target. When solving for r_1,alt this condition means that the given line of sight vector does not intersect the maximum altitude shell and thus cannot represent a real target. When solving for r_1,surf it means that the given line of sight vector does not intersect the Earth; this case should not be ignored. To ensure that true associations are not gated out, when a line of sight does not intersect the Earth the second position considered corresponds to the farther intersection with the maximum altitude:

  r_1,surf = −X_s1ᵀ ℓ̂_1 + √( (X_s1ᵀ ℓ̂_1)² − X_s1ᵀ X_s1 + r_alt² )   (8)
Once the maximum and minimum ranges to the target along ℓ̂_1 are known, the bounding line of sight vectors from the second satellite position to the possible target positions are

  ℓ̂_2,surf = (X_s1 + r_1,surf ℓ̂_1 − X_s2) / r_2,surf,   r_2,surf = |X_s1 + r_1,surf ℓ̂_1 − X_s2|

  ℓ̂_2,alt = (X_s1 + r_1,alt ℓ̂_1 − X_s2) / r_2,alt,   r_2,alt = |X_s1 + r_1,alt ℓ̂_1 − X_s2|

All other possible target line of sight vectors lie in the plane between ℓ̂_2,surf and ℓ̂_2,alt. The mean of these values is then

  ℓ̂_2 = ( ℓ̂_2,surf + ℓ̂_2,alt ) / 2

and the maximum error in ℓ̂_2 is

  Δℓ̂_2 = | ℓ̂_2,surf − ℓ̂_2,alt | / 2
Then, if the target's maximum change in position over the frame interval is V_max|Δt|, the difference between the true line of sight vector ℓ_2 at t_2 and the projected vector ℓ̂_2 from t_1 is bounded by

  | ℓ_2 − ℓ̂_2 | ≤ Δℓ̂_2 + V_max|Δt| / min(r_2,surf, r_2,alt)   (9)

If instead of the true line of sight vector we have a measured line of sight vector ℓ̃, the difference between ℓ̃_2 and ℓ_2 is bounded by the measurement noise ν:

  ν_2,max = ν_max
By a geometrical argument we see that the maximum difference between ℓ_2 and ℓ̂_2 due to errors in ℓ_1 is bounded by

  ν_1,max ≤ max( |atan( (r_1,surf / r_2,surf) tan(ν_max) )|, |atan( (r_1,alt / r_2,alt) tan(ν_max) )| )
         ≤ max( (r_1,surf / r_2,surf) ν_max, (r_1,alt / r_2,alt) ν_max )

where the second line uses the small angle assumption for ν_1,max. Combining these bounds with Equation (9),

  | ℓ̃_2 − ℓ̂_2 | ≤ | ℓ_2 − ℓ̂_2 | + ν_1,max + ν_2,max   (10)
              ≤ Δℓ̂_2 + V_max|Δt| / min(r_2,surf, r_2,alt) + ν_1,max + ν_2,max   (11)
For any vector, (a_i − b_i)² + (a_j − b_j)² ≤ (a_i − b_i)² + (a_j − b_j)² + (a_k − b_k)². Therefore, the bound developed in Equations (10) and (11) holds for comparisons between only the i and j components of ℓ̃_2 and ℓ̂_2, and it is valid to rotate ℓ̃_2 and ℓ̂_2 into satellite 2's local sensor frame at t_2 and perform either the radial gate

  √( (iᵀℓ̃_2 − iᵀℓ̂_2)² + (jᵀℓ̃_2 − jᵀℓ̂_2)² ) ≤ Δℓ̂_2 + V_max|Δt| / min(r_2,surf, r_2,alt) + ν_1,max + ν_2,max   (12)

or the cuboid gate

  | iᵀℓ̃_2 − iᵀℓ̂_2 | ≤ Δℓ̂_2 + V_max|Δt| / min(r_2,surf, r_2,alt) + ν_1,max + ν_2,max   (13)

  | jᵀℓ̃_2 − jᵀℓ̂_2 | ≤ Δℓ̂_2 + V_max|Δt| / min(r_2,surf, r_2,alt) + ν_1,max + ν_2,max   (14)
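The bounded-range computation of Equations (6)–(8), including both special cases, can be sketched as follows. The radii, units (km), and GEO geometry in the usage example are illustrative assumptions.

```python
import numpy as np

R_SURF = 6378.0  # minimum-altitude geocentric radius (km), illustrative
R_ALT = 7378.0   # maximum-altitude geocentric radius (km), illustrative

def bounded_ranges(x_s1, los_hat, r_surf=R_SURF, r_alt=R_ALT):
    """Closest-intersection ranges along the unit line of sight los_hat
    from the satellite at x_s1 to the minimum- and maximum-altitude
    spheres.  Returns None when the line of sight misses the
    maximum-altitude shell (the measurement cannot be a real target)."""
    b = float(np.dot(x_s1, los_hat))
    xx = float(np.dot(x_s1, x_s1))
    disc_alt = b * b - xx + r_alt * r_alt
    if disc_alt < 0.0:
        return None
    r1_alt = -b - np.sqrt(disc_alt)        # Eq. (7)
    disc_surf = b * b - xx + r_surf * r_surf
    if disc_surf >= 0.0:
        r1_surf = -b - np.sqrt(disc_surf)  # Eq. (6): closest Earth intersection
    else:
        r1_surf = -b + np.sqrt(disc_alt)   # Eq. (8): LOS misses the Earth
    return r1_surf, r1_alt
```

For a nadir-pointing line of sight from GEO (satellite at roughly 42164 km geocentric radius), the two ranges bracket the possible target positions: the maximum-altitude intersection is the nearer bound and the surface intersection the farther one.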
3.3. Arc Generation
All sequences that pass cell and pair gates are considered for further analysis. Those measurements which also pass a
filter gate with an existing track need no further investigation; they are immediately added to an output frame. Sequences
which do not correspond to an existing track must be considered for track initiation.
The system under consideration is assumed to have a known minimum probability of detection for true targets. Based
on this probability of detection, we assume that for any N frames of data, true targets will be detected at least M times.
As additional frames are considered, the successive pairs of observations are assembled into arcs. An arc consists of a
sequence of points from consecutive frames. The points can either be measurements, or a marker representing the fact
that no measurement passed the pair gate with the arc in the frame. On receipt of a new frame, the clutter rejection filter
applies its gating sequence to the measurements in the incoming frame; if the last point of an arc passes gating with the
new measurement, that measurement is added to the arc. If more than one measurement gates to the arc, a new arc is
added to the arc set with the second matched measurement. If the measurement does not pass the proximity test with any
of the existing arcs, a new arc is added containing only the new measurement. At the end of frame processing, arcs which
have not been updated within the last N frames are deleted. Figure 3 depicts this process for a sequence of N frames.
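The arc bookkeeping described above can be sketched as follows. The gate predicate, the use of None as the miss marker (the worked figures use 0), and the branching and expiry details are illustrative assumptions rather than the production implementation.

```python
def update_arcs(arcs, frame, gate, n_window):
    """One frame of arc maintenance: extend arcs that gate with a new
    measurement (branching when several gate), pad unmatched arcs with a
    miss marker, start new arcs for unmatched measurements, and expire
    arcs with no hit in the last n_window slots.  `gate(arc, m)` is a
    caller-supplied pair-gate predicate."""
    updated, matched = [], set()
    for arc in arcs:
        hits = [m for m in frame if gate(arc, m)]
        if not hits:
            updated.append(arc + [None])   # miss marker
        for m in hits:                     # one arc copy per gated hit
            updated.append(arc + [m])
            matched.add(id(m))
    for m in frame:
        if id(m) not in matched:
            updated.append([m])            # brand-new arc
    return [a for a in updated if any(x is not None for x in a[-n_window:])]
```

Calling this once per incoming frame reproduces the grow-branch-expire behavior of the filter: clutter arcs that stop gating accumulate misses and drop out after n_window frames, while true-target arcs keep extending.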
As an optimization, the interface between the clutter rejection preprocessor and the tracker can include the full arc
set, rather than just the measurements in the current output frame. This prevents the tracker from re-running gating, as it
already possesses the full set of arc information. Measurement arcs not corresponding to an existing track are run through
track initiation. Measurements which gate with an existing track are candidates for track update or spawning.
3.4. Target Dynamics
If the system is expected to perceive target motion sufficient to dominate the measurement noise, an additional dynamics
test can be applied to arcs which pass the M of N test. As previously noted, it is not typical for target motion to be
guaranteed perceptible within the usual N frames; accordingly, application of a motion test requires accumulation of
arbitrary-length arcs. Once the target motion exhibits characteristics noticeably different from stationary clutter, the entire
sequence of measurements can be submitted to track initiation. This process can dramatically reduce the initiation of
tracks on persistent clutter, at the expense of lag in track initiation on true targets.
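A crude form of such a motion test, assuming a scalar per-axis noise level and an illustrative threshold multiplier k, might be:

```python
import numpy as np

def exhibits_motion(arc_positions, noise_std, k=3.0):
    """Persistence test sketch: declare perceptible motion when the
    largest excursion of the arc from its mean position exceeds k times
    the measurement noise.  k is an illustrative tuning parameter."""
    pts = np.asarray(arc_positions, dtype=float)
    excursions = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    return bool(excursions.max() > k * noise_std)
```

Tuning k trades the two effects noted above: a larger k rejects more persistent clutter but delays track initiation on slowly moving true targets.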
4. PIXEL CLUSTER AND GROUP CLUSTER GENERATION
4.1. Pixel Cluster Decomposition
The pixel cluster decomposition algorithm we utilize takes feedback from stereo tracking to assist in detecting and resolving CSO measurements. This algorithm has been discussed in detail elsewhere 5 ; in this context we note that the primary
advantage to performing pixel cluster decomposition as part of the preprocessing stage is that all of the necessary feedback
and assignment information has already been computed, and therefore pixel cluster decomposition is a trivial additional
step. This step reduces duplication of effort between tracker and preprocessor.
4.2. Group Cluster Tracking
Group cluster tracking involves the replacement of a large number of measurements with some much smaller number
of characteristic measurements using one of several possible algorithms. Group clusters may be formed from any set of
measurements which gate with one another. 6 Our previous work on group cluster tracking 2 has predominantly utilized
Expectation-Maximization (EM) clustering with an assumption of Gaussian density functions. While this process will
adequately represent specular reflection point clutter, cloud and landmass edge clutter 7 exhibits characteristics that are
more consistent with distributions around a polynomial function. Fuzzy clustering schemes are more adroit at extracting
clusters exhibiting these characteristics. 8 In particular, we tailor the Fuzzy C Quadric Shells (FCQS) algorithm to this
problem. The algorithm can be summarized as follows:
• Hypothesize clusters j ∈ 1, …, M
• Optimize cluster states
  – Given measurements x_i = [u_i, v_i], i ∈ 1, …, N
  – Minimize Σ_{i=1}^{N} Σ_{j=1}^{M} w_ij² (x_iᵀ A_j x_i + b_jᵀ x_i + c_j)²
  – subject to the constraints Σ_{j=1}^{M} w_ij = 1, i = 1, …, N, w_ij ∈ [0, 1]
• Iterate over range of possible cluster values to minimize residual

[Figure 3: Example showing the clutter rejection filter methodology (M = 3, N = 4).
First frame: target T1 and two clutter points C1, C2, each with its search neighborhood for the M of N test. Arcs: [C1], [T1], [C2]; shifted at end of frame processing to [C1 0], [T1 0], [C2 0].
Second frame: the clutter search neighborhoods contain no measurement, while the target neighborhood does; new clutter points C3, C4 appear. Arcs: [C1 0], [T1 T1], [C2 0], [C3], [C4]; shifted to [C1 0 0], [T1 T1 0], [C2 0 0], [C3 0], [C4 0].
Third frame: a second target T2 appears near T1, along with clutter points C5, C6. Arcs: [C1 0 0], [T1 T1 T1], [C2 0 0], [C3 0], [C4 0], [T1 T1 T2], [C5], [C6]; shifted to [C1 0 0 0], [T1 T1 T1 0], [C2 0 0 0], [C3 0 0], [C4 0 0], [T1 T1 T2 0], [C5 0], [C6 0].
Fourth frame: the second target passes the proximity test; clutter points C7, C8 appear. Arcs: [C1 0 0 0], [T1 T1 T1 T1], [C2 0 0 0], [C3 0 0], [C4 0 0], [T1 T1 T2 T2], [C5 0], [C6 0], [C7], [C8]. After frame shifting, the first-frame clutter points expire, leaving [T1 T1 T1 0], [C3 0 0 0], [C4 0 0 0], [T1 T2 T2 0], [C5 0 0], [C6 0 0], [C7 0], [C8 0].]
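Evaluating the FCQS cost above for a candidate set of quadrics can be sketched as follows; the data layout and function name are assumptions for illustration.

```python
import numpy as np

def fcqs_objective(points, weights, quadrics):
    """Fuzzy quadric-shell cost
    sum_i sum_j w_ij^2 (x_i^T A_j x_i + b_j^T x_i + c_j)^2
    for measurements `points` (N x 2), memberships `weights` (N x M), and
    cluster parameters `quadrics` = [(A_j, b_j, c_j), ...]."""
    x = np.asarray(points, dtype=float)
    w = np.asarray(weights, dtype=float)
    cost = 0.0
    for j, (a, b, c) in enumerate(quadrics):
        # algebraic residual of each point against quadric shell j
        resid = np.einsum('ni,ij,nj->n', x, a, x) + x @ b + c
        cost += float(np.sum(w[:, j] ** 2 * resid ** 2))
    return cost
```

Measurements lying exactly on a hypothesized shell contribute zero cost, which is what lets the iteration favor quadrics that trace cloud and landmass edges.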
Each measurement is then assigned to its maximal weight cluster. The cluster centroid and extent are computed by
taking the covariance-weighted mean and second moment of the measurements in the cluster. This centroid and extent are
then reported as the measurement to be sent to tracking. Some care must be taken to ensure that low-probability cluster
associations, which may represent true targets emerging from a dense clutter region, are not collected in the cluster.
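A sketch of the centroid-and-extent computation using membership weights follows; the paper's covariance weighting would additionally fold in per-measurement covariances, so treat this simplification as an assumption.

```python
import numpy as np

def cluster_measurement(points, memberships):
    """Membership-weighted centroid and second central moment (extent) of
    the measurements assigned to one cluster; this pair replaces the raw
    measurements in the frame reported to the tracker."""
    pts = np.asarray(points, dtype=float)
    w = np.asarray(memberships, dtype=float)
    w = w / w.sum()                 # normalize memberships
    centroid = w @ pts
    dev = pts - centroid
    extent = dev.T @ (dev * w[:, None])  # weighted second central moment
    return centroid, extent
```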
Table 1: Statistics showing the performance of the clutter rejection filter.

  Sensor     Clutter Rejected   Clutter Passed   Target Rejected   Target Passed
  Sensor 1   388000 (99.95%)    194              40 (12.08%)       291
  Sensor 2   387925 (99.93%)    269              60 (17.65%)       280
5. RESULTS
This section presents results from a simulated ballistic missile scenario in the presence of Earth background
clutter. The scenario contains two missile launches observed by two GEO satellites using a notional unclassified sensor
design. The clutter rejection algorithm component, in this scenario, has been tuned to aggressively reject stationary clutter
by requiring perceptible motion over the system noise.
Since this is a simulated scenario, it is possible to determine when the clutter rejection algorithm rejects or passes
a clutter observation. Table 1 shows the rejection statistics for the clutter rejection algorithm. With this set of tuning
parameters, the clutter rejection algorithm does an excellent job removing most of the clutter, at the expense of also
removing some of the early target observations.
From the viewing geometry present in this scenario, one of the observers perceives the two targets as a single CSO
measurement for a significant fraction of the scenario. As a result, the pixel cluster decomposition algorithm component
is required in order to provide effective stereo track accuracy. For this scenario, image data was not available; when this
occurs, the pixel cluster decomposition algorithm is designed to utilize a measurement multi-assignment strategy. Figure 4
shows the x coordinate of the estimated tracks in ECEF with and without the pixel cluster decomposition algorithm.
A total of nine tracks are observed. Two of the tracks are tracking the truth objects. One track is a persistent multiview clutter track, i.e., a persistent track on clutter measurements viewed by both sensors. This track is on the cluster of
measurements at the solar specular reflection point. The other six tracks are clutter tracks formed on data reported by a
single sensor. The nature of each track is indicated in Figure 4. The tracks on the truth targets are more accurate when
using the pixel cluster decomposition algorithm. This can be seen in the RMSE metric shown in Figure 5.
[Figure 4. Scenario B1 with clutter. x coordinate of estimated tracks in ECEF (a) without pixel cluster decomposition and (b) with pixel cluster decomposition. Each panel marks the truth target tracks, the multi-sensor clutter track, the single-sensor clutter tracks, and the stage 1, stage 2, stage 3, and burnout boundaries over 0–180 seconds.]
The data for this scenario contained approximately 2000 clutter points per frame for 194 seconds at an update rate of one frame per second. This scenario was run 10 times, and the average time to process this data was approximately 196 seconds.*

* All tests were performed on a dual-processor 3.0 GHz Intel Xeon with 8 GB of memory running a 2.6.9 Mandrake Linux kernel.
[Figure 5. Scenario B1 with clutter. (a) RMSE position error for truth target 1. (b) RMSE position error for truth target 2. In each panel, the red curve is obtained using pixel cluster decomposition and the black dashed curve is obtained without the pixel cluster decomposition method; RMSE position (m, log scale) is plotted against time, with the stage and burnout boundaries marked.]
6. CONCLUSION
We present an algorithm architecture for integrating clutter mitigation techniques with measurement preprocessing using feedback from stereo tracking. This mechanism allows a multi-frame assignment algorithm to operate in real time
despite the presence of dense clutter, while maintaining good track accuracy in CSO conditions. The preprocessor architecture, while optimized for use with the Numerica MFA tracker, has been developed so as to be useful with any desired
downstream tracking algorithm.
ACKNOWLEDGMENTS
This work was supported in part by Lockheed Martin.
REFERENCES
1. A. B. Poore, S. Lu, and B. J. Suchomel, "Data association using multiple frame assignments," in Handbook of Multisensor Data Fusion, CRC Press LLC, 2001.
2. S. Gadaleta, M. Klusman, A. B. Poore, and B. J. Slocumb, "Multiple frame cluster tracking," in SPIE Vol. 4728, Signal and Data Processing of Small Targets, pp. 275–289, 2002.
3. D. Macumber, S. Gadaleta, A. Floyd, and A. Poore, "Hierarchical closely-spaced object (CSO) resolution for IR sensor surveillance," to appear in Proceedings of SPIE 5913, 2005.
4. S. Blackman and R. Popoli, Design and Analysis of Modern Tracking Systems, Artech House, Boston, London, 1999.
5. S. Gadaleta, A. Poore, and B. Slocumb, "Pixel-cluster decomposition tracking for multiple IR-sensor surveillance," in SPIE Vol. 5204, Signal and Data Processing of Small Targets, pp. 270–282, 2003.
6. S. Gadaleta, A. B. Poore, S. Roberts, and B. J. Slocumb, "Multiple hypothesis clustering multiple frame assignment tracking," in SPIE Vol. 5428, Signal and Data Processing of Small Targets, pp. 294–307, August 2004.
7. M. L. Hartless, "Likelihood ratio test using edge information for false alarm mitigation," in Proceedings of SPIE 1954, pp. 104–114, 1993.
8. S. Theodoridis and K. Koutroumbas, Pattern Recognition, Academic Press, 1999.