Short Tutorial
Engineering Statistics for
Multi-Object Tracking
Ronald Mahler
Lockheed Martin NE&SS Tactical Systems
Eagan, Minnesota, USA
651-456-4819 / ronald.p.mahler@lmco.com
IEEE Workshop on Multi-Object Tracking
Madison WI
June 21, 2003
This work was supported by the U.S. Army Research Office under contracts DAAH04-94-C-0011 and DAAG55-98-C-0039,
and by the U.S. Air Force Office of Scientific Research under contract F49620-01-C-0031. The content does not
necessarily reflect the position or the policy of the Government. No official endorsement should be inferred.
Top-Down vs. Bottom-Up Data Fusion
usual “bottom-up” approach to data fusion:
– heuristically integrate a patchwork of optimal or heuristic algorithms that address specific functions: detection, tracking, I.D., fuzzy-logic data association, sensor mgmt
– data fusion mostly done by operators
– handles diverse, ill-characterized data piecemeal
“top-down,” system-level approach:
– multiple sensors, multiple targets = single system
– specify states of system = multisensor-multitarget Bayesian-probabilistic formulation of problem
– requires random set theory
– reformulate in “Statistics 101,” engineering-friendly terms
– low-level functions subsumed in optimally integrated algorithms
Our Topic: “Finite-Set Statistics” (FISST)
• Systematic probabilistic foundation for multisensor, multitarget problems
• Preserves the “Statistics 101” formalism that engineers prefer
• Corrects unexpected difficulties in multitarget information fusion
• Potentially powerful new computational techniques (1st-order moment filtering)
• Unifies much of multisource, multitarget, multiplatform data fusion
(diagram: FISST at the hub, unifying detection, tracking, identification, sensor management, performance estimation, group targets, fuzzy logic, Dempster-Shafer, rules, Bayes, information theory, and control theory into unified Bayesian data fusion)
FISST-Related Books and Monographs
– Mathematics of Data Fusion, I.R. Goodman, R.P.S. Mahler, and H.T. Nguyen, Kluwer Academic Publishers, Theory and Decision Library (book, out of print)
– Random Sets: Theory and Applications, J. Goutsias, R.P.S. Mahler, and H.T. Nguyen (eds.), Springer, Volume 97 (hardcover proceedings)
– “An Introduction to Multisource-Multitarget Statistics and its Applications”, R. Mahler, Lockheed Martin monograph, March 15, 2000
– Handbook of Multisensor Data Fusion, D.L. Hall and J. Llinas (eds.), chapter 14
– SPIE Milestone Series, Volume MS-124
Scientific Workshops
– 1995 ONR/ARO/LM Workshop
– Invited Session: SPIE AeroSense ’99
Invited Presentations
– MDA/POET, AFIT, NRaD, Harvard, Johns Hopkins, U. Massachusetts, New Mexico State, U. Wisconsin
– IDC2002, ATR Working Group, USAF Correlation Symp., SPIE AeroSense Conf., Nat’l Symp. on Data Fusion, IEEE Conf. Dec. & Contr., Optical Discr. Alg’s Conf., ONR Workshop on Tracking
Current FISST Technology Transition
(diagram: basic, applied, and completed research projects and their sponsors)
– finite-set statistics; basic research in data fusion, tracking, & NCTI
– robust joint tracking and NCTI using HRRR and track radar; robust INTELL fusion; anti-air multisource radar fusion
– BMD technology; robust multitarget ASW/ASUW fusion & NCTI; robust ASW acoustic target ID
– scientific performance estimation for Level 1 fusion; scientific performance estimation for Levels 2, 3, 4 data fusion
– unified collection & control for UAVs; robust SAR ATR against ground targets
sponsors: AFOSR, AFRL, ARO, LMTS, MDA, MRDEC, SSC/ONR
AFOSR = Air Force Office of Scientific Research
AFRL = Air Force Research Laboratory
ARO = Army Research Office
LMTS = Lockheed Martin Tactical Systems
MDA = Missile Defense Agency
MRDEC = Army Missile Research, Dev & Eval Center
ONR = Office of Naval Research
SSC = SPAWAR Systems Center
Topics
1. Introducing “Dr. Dogbert,” Critic of FISST
2. What is “engineering statistics”?
3. Finite-set statistics (FISST)
4. Multitarget Bayes filtering
5. Brief introduction to applications
Dr. Dogbert, Critic of FISST
Dogbert: “The multisource-multitarget engineering problems addressed by FISST actually require nothing more complicated than Bayes’ rule… which means that FISST is of mere theoretical interest at best. At worst, it is nothing more than pointless mathematical obfuscation.”
(FISST = multitarget calculus + multitarget statistics + multitarget modeling)
What is extraordinary about this assertion is not the ignorance it displays regarding FISST but, rather, the ignorance it displays regarding Bayes’ rule!
The Bayesian Iceberg
(“Nothing deep here!”: cartoon of Bayes’ rule as the visible tip of an iceberg, with non-standard applications below the waterline)
The optimality and simplicity of the Bayesian framework can be taken for granted only within the confines of standard applications addressed by standard textbooks. When one ventures out of these confines one must exercise proper engineering prudence, which includes verifying that standard textbook assumptions still apply.
This is especially true in multitarget problems!
How Multi- and Single-Object Statistics Differ
(“I sit astride a mountaintop!”: Bayes’ rule, unaware of what multitarget problems require)
– multitarget maximum likelihood estimator (MLE) is defined, but the multitarget maximum a posteriori (MAP) estimator is not!
– multitarget expected values are not defined!
– multitarget Shannon entropy cannot be defined!
– multitarget L2 (i.e., RMS distance) metric cannot be defined!
– new multitarget state estimators must be defined and proved optimal!
– need explicit methods for constructing provably true multisensor-multitarget likelihood functions from explicit sensor models!
– need explicit methods for constructing provably true multitarget Markov transition densities from explicit motion models!
Dr. Dogbert (Ctd.)
Dogbert: “I, the great Dr. Dogbert, have discovered a powerful new research strategy! Strip off the mathematics that makes FISST systematic, general, and rigorous, copy its main ideas, change notation, and rename it Dogbert Theory! This is a great advance because it’s much simpler: it doesn’t need all that ugly math (Blech!) that no one understands anyway!”
A mere change of name and notation does not constitute a great advance. Dogbert Theory is “straightforward” precisely because it avoids specifying explicit general methods and foundations, yet it still claims to be fully general and derived from first principles! It is easy to claim invention of a simple and yet elastically all-subsuming theory if one does so by neglecting, avoiding, or glossing over the specifics required to actually substantiate puffed-up boasts of generality and “first principles.”
What is “Engineering Statistics”?
• Engineering statistics, like all engineering mathematics, is a
tool and not an end in itself. It must have two (inherently
conflicting) characteristics:
– Trustworthiness: Constructed upon systematic, reliable mathematical
foundations, to which we can appeal when the going gets rough.
– Fire and forget: These foundations can be safely neglected in most
applications, leaving a parsimonious and serviceable mathematical
machinery in their place.
• The “Bar-Shalom Test” defines the dividing line between
the serviceable and the simplistic:
– “Things should be as simple as possible—but no simpler!” (Y. Bar-Shalom, quoting Einstein)
• If foundations are:
– so mathematically complex that they cannot be taken for granted in
most situations, then they are shackles and not foundations!
– so simple that they repeatedly lead us into engineering blunders, then
they are simplistic, not simple!
Single-Sensor / Target Engineering Statistics
• Progress in single-sensor, single-target applications has been
greatly facilitated by the existence of a systematic, rigorous,
and yet practical engineering statistics that supports the
development of new concepts in this field.
(parallel columns: observations z vs. states x)
– random observation vector Z; random state vector X
– sensor measurement model: Zk = hk(x) + Wk; target Markov motion model: Xk+1 = Fk(x) + Vk
– probability-mass functions: pk(S) = Pr(Zk ∈ S); pk+1|k(S) = Pr(Xk+1 ∈ S)
– likelihood function fk(z|x); Markov transition density fk+1|k(y|x)
– posterior & prior densities fk|k(x|Zk), fk+1|k(x|Zk)
– integro-differential calculus: d/dz, ∫ dx
– miss distances ||z1 − z2||, ||x1 − x2||
– expectations & other moments: E[Z], E[X]
– Bayes-optimal filtering; Bayes-optimal state estimators x^k|k
What is Single-Target Bayes Statistics?
random observation-vector Z in observation space Z; random state-vector X in state space X
sensor likelihood function*: Lz(x) = f(z|x) = Pr(Z = z|X = x)
prior distribution* of X: f0(x) = Pr(X = x)
Classical statistics: the unknown state-vector x of the target is a fixed parameter.
Bayesian statistics: the unknown state is always a random vector X, even if we do not explicitly know either it or its distribution f0(x) (the prior). The single-sensor, single-target likelihood function is a conditional probability density.
* equalities temporarily assume discrete state & observation spaces for ease of argument
Single-Target Bayes Statistics: Distributions
– specify the single-sensor/target measurement space Z and its measure structure
– specify the single-target state space X and its measure structure
– random variables (e.g. random vectors)
– f(z|x) = Pr(Z = z|X = x) signifies a random draw from a single measurement space
– f0(x) = Pr(X = x) signifies a random draw from a single state-space
The Recursive Bayes Filter
1. time-update step via prediction integral
the current posterior fk|k(x|Zk), conditioned on the old data-stream Zk = {z1,…, zk}, is predicted to the time of the new datum:
  fk+1|k(y|Zk) = ∫ fk+1|k(y|x) fk|k(x|Zk) dx
  (assuming fk+1|k(y|x, Zk) = fk+1|k(y|x))
random target state Xk|k = x → predicted random state Xk+1|k = y via interim target state-change
true Markov transition density from the motion model Xk+1|k = Fk(x) + Vk:
  fk+1|k(y|x) = fVk(y − Fk(x))
The Recursive Bayes Filter
2. data-update step via Bayes’ rule
new posterior, conditioned on all data Zk+1 = {z1,…, zk+1}, from the prior (= predicted posterior):
  fk+1|k+1(x|Zk+1) ∝ fk+1(zk+1|x) fk+1|k(x|Zk)
  (assuming fk+1(z|x, Zk) = fk+1(z|x))
predicted random state Xk+1|k → data-updated random state Xk+1|k+1 via the sensor observation zk+1
true likelihood function from the measurement model Zk = hk(x) + Wk:
  fk(z|x) = fWk(z − hk(x))
The Recursive Bayes Filter
3. estimation step via Bayes-optimal state estimator
current best estimate of xk+1:
  x^k+1|k+1 = argsupx fk+1|k+1(x|Zk+1)   (maximum a posteriori, MAP)
or
  x^k+1|k+1 = ∫ x fk+1|k+1(x|Zk+1) dx   (expected a posteriori, EAP)
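As a concrete illustration, the three steps above can be sketched on a discretized one-dimensional state space. This is a minimal sketch, not the tutorial's own implementation; the grid and the noise levels sigma_v and sigma_w are assumed values.

```python
import math

# Minimal sketch of one cycle of the recursive Bayes filter on a discretized
# 1-D state space. Grid, motion noise and sensor noise are assumed values.
GRID = [0.1 * i for i in range(100)]   # state grid covering [0, 10)
DX = 0.1                               # grid cell width

def gauss(u, sigma):
    return math.exp(-0.5 * (u / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def predict(posterior, sigma_v=0.3):
    # 1. time update: f_{k+1|k}(y|Z^k) = integral of f_{k+1|k}(y|x) f_{k|k}(x|Z^k) dx
    return [sum(gauss(y - x, sigma_v) * p * DX for x, p in zip(GRID, posterior))
            for y in GRID]

def update(prior, z, sigma_w=0.5):
    # 2. data update (Bayes' rule): posterior proportional to f(z|x) * prior
    unnorm = [gauss(z - x, sigma_w) * p for x, p in zip(GRID, prior)]
    c = sum(unnorm) * DX
    return [u / c for u in unnorm]

def estimate(posterior):
    # 3. estimation: MAP = arg sup of the posterior; EAP = posterior mean
    x_map = GRID[max(range(len(GRID)), key=lambda i: posterior[i])]
    x_eap = sum(x * p * DX for x, p in zip(GRID, posterior))
    return x_map, x_eap

# one cycle from a uniform posterior, with a measurement near x = 5
post = update(predict([1.0 / (len(GRID) * DX)] * len(GRID)), z=5.0)
x_map, x_eap = estimate(post)
```

The data-update normalizes the posterior, so both estimators stay well-defined at every cycle.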
Basic Issues in Single-Target Bayes Filtering
– sensor models f(z|x): without a “true” likelihood for both target & background, the Bayes-optimality claim is hollow; “over-tuned” likelihoods may be non-robust w/r/t deviations between model & reality
– motion models fk+1|k(x|y): poor motion-model selection leads to performance degradation; without a “true” Markov density, good model selection is wasted
– state estimation/convergence x^: poorly chosen state estimators may have hidden inefficiencies (erratic, inaccurate, slowly convergent, etc.); without guaranteed convergence, an approximate filter may be unstable and non-robust
– computability of fk|k(y|Zk): the filtering equations are computationally nasty; “textbook” approaches create computational inefficiencies, e.g. numerical instability due to central finite-difference schemes
Bayes-Optimal State Estimators
pre-specified cost function: C(x, y)
state estimator x^(z1,…, zm): a family of state-valued functions of the observations z1,…, zm, m > 0
Bayes risk = expected value of the total posterior cost:
  RC(x^, m) = ∫ C(x, x^(z1,…, zm)) f(z1,…, zm|x) f0(x) dz1 ⋯ dzm dx
A state estimator x^ is “Bayes-optimal” with respect to cost C if it minimizes the Bayes risk.
Without a Bayes-optimal estimator, we aren’t Bayes-optimal!
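A small numeric sketch of the definition above: on an assumed discrete posterior, brute-force minimization of the posterior expected cost recovers the posterior mean for squared-error cost (the EAP) and the posterior mode for 0-1 cost (the MAP). All numbers are hypothetical.

```python
# Brute-force sketch of Bayes-optimality on an assumed discrete posterior:
# the estimator minimizing the posterior expected cost E[C(x, xhat)] is the
# posterior mean for squared-error cost (EAP) and the mode for 0-1 cost (MAP).
states = [0.0, 1.0, 2.0, 3.0, 4.0]
posterior = [0.1, 0.2, 0.4, 0.2, 0.1]

def expected_cost(xhat, cost):
    return sum(p * cost(x, xhat) for x, p in zip(states, posterior))

def best_estimate(cost, candidates):
    return min(candidates, key=lambda xhat: expected_cost(xhat, cost))

# squared-error cost over a dense candidate grid -> posterior mean
x_sq = best_estimate(lambda x, y: (x - y) ** 2, [0.01 * i for i in range(401)])
mean = sum(x * p for x, p in zip(states, posterior))

# 0-1 cost over the states themselves -> posterior mode
x_01 = best_estimate(lambda x, y: 0.0 if x == y else 1.0, states)
mode = states[max(range(len(states)), key=lambda i: posterior[i])]
```

Different cost functions thus pick out different "optimal" estimators from the same posterior, which is exactly why the cost must be pre-specified.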
True Likelihood Functions & Markov Densities
Bayesian modeling, Step 1: construct the model.
  OBSERVATIONS: Zk = hk(x) + Wk (random observation = deterministic sensor model + measurement noise process)
  MOTION: Xk+1|k = Fk+1|k(x) + Vk (random state of moved target = deterministic motion model + motion noise process)
Step 2: use a probability-mass function to describe the model statistics.
  pk(S|x) = Pr(Zk ∈ S) = probability that the observation is in S
  pk+1|k(S|x) = Pr(Xk+1|k ∈ S) = probability that the moved target is in S
Step 3: construct the “true” density function that faithfully describes the model.
  true likelihood function: fk(z|x) = dpk/dz = Radon-Nikodým derivative of pk
  true Markov transition density: fk+1|k(y|x) = dpk+1|k/dy = Radon-Nikodým derivative of pk+1|k
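Steps 1-3 can be checked numerically for a simple hypothetical additive-noise model: the difference quotient of the probability-mass function recovers the "true" likelihood density. The model h(x) = 2x and the noise level SIGMA are assumptions for illustration.

```python
import math

# Numeric check of Steps 1-3 for an assumed additive-noise model
# Z = h(x) + W, h(x) = 2x, W ~ N(0, SIGMA^2): the derivative of the
# probability-mass function p(S|x) recovers f(z|x) = f_W(z - h(x)).
SIGMA = 0.5

def h(x):
    return 2.0 * x

def gauss_cdf(u):
    return 0.5 * (1.0 + math.erf(u / (SIGMA * math.sqrt(2.0))))

def p_mass(a, b, x):
    # Step 2: p([a, b] | x) = Pr(h(x) + W in [a, b])
    return gauss_cdf(b - h(x)) - gauss_cdf(a - h(x))

def f_true(z, x):
    # Step 3: f(z|x) = f_W(z - h(x)), the Radon-Nikodym derivative of p
    u = z - h(x)
    return math.exp(-0.5 * (u / SIGMA) ** 2) / (SIGMA * math.sqrt(2 * math.pi))

x, z, dz = 1.0, 2.3, 1e-5
numeric = p_mass(z, z + dz, x) / dz   # difference quotient dp/dz
exact = f_true(z, x)
```

For small dz the difference quotient and the closed-form density agree to several digits, which is the sense in which the density "faithfully describes" the model.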
Finite-Set Statistics (FISST)
• One would expect that multisensor, multitarget applications rest upon a similar engineering statistics
• Not so, despite the long existence of point process theory (PPT)
• FISST is in part an “engineering-friendly” version of PPT
(the question marks on the slide mark the multitarget counterparts that must be constructed)
(parallel columns: multisensor-multitarget observations, random observation-set Σ, vs. multitarget states, random state-set Ξ)
– multitarget measurement model: Σk = hk(X) ∪ Ck; multitarget Markov motion model: Ξk+1 = Fk(X) ∪ Bk
– belief-mass functions: βk(S) = Pr(Σk ⊆ S); βk+1|k(S) = Pr(Ξk+1 ⊆ S)
– multitarget likelihood fk(Z|X); multitarget Markov transition density fk+1|k(Y|X)
– multitarget posterior & prior densities fk|k(X|Z(k)), fk+1|k(X|Z(k))
– FISST integro-differential calculus: δ/δZ, ∫ dX
– multi-observation & multitarget miss distances d(Z1, Z2), d(X1, X2)
– PHDs & other moments: DΣ(z), DΞ(x)
– Bayes-optimal filtering; Bayes-optimal multitarget state estimators X^k|k
FISST, I
systematic, probabilistic framework for modeling uncertainties in data:
– likelihood function f(z|x): ∫ f(z|x) dz = 1
– generalized likelihood function r(z|x): ∫ r(z|x) dz ≠ 1
common probabilistic framework: random closed sets
– statistical radar report → random set model
– fuzzy datalink attribute → random set model
– imprecise English-language report → random set model
– contingent rule → random set model
each yields a poorly characterized likelihood
Dealing With Ill-Characterized Data
Observation = “Gustav is NEAR the tower”
(random set model Θ of the natural-language statement: nested interpretations of “NEAR”, with probabilities p1, p2, p3, p4 for each interpretation)
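One hedged way to code the picture above: treat Θ as a random set that equals one of four nested disk interpretations of "NEAR" with probabilities p1,…, p4 (all values assumed), and score a candidate target position x by the probability that x falls inside Θ.

```python
import math

# Hypothetical sketch of the "Gustav is NEAR the tower" model: the random set
# Theta takes one of four nested disk interpretations of "NEAR", with assumed
# probabilities; the generalized likelihood at position x is r(x) = Pr(x in Theta).
TOWER = (0.0, 0.0)
INTERPRETATIONS = [           # (radius_km, probability) -- assumed values
    (1.0, 0.4), (2.0, 0.3), (3.0, 0.2), (5.0, 0.1),
]

def generalized_likelihood(x):
    d = math.dist(x, TOWER)
    # sum the probabilities of the interpretations whose disk contains x
    return sum(p for radius, p in INTERPRETATIONS if d <= radius)

r_close = generalized_likelihood((0.5, 0.0))   # inside all four disks
r_far = generalized_likelihood((4.0, 0.0))     # inside only the widest disk
```

Because the disks are nested, the score decreases with distance from the tower, behaving like a fuzzy membership function for "NEAR".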
FISST, II
reformulate the multi-object problem as a generalized single-object problem:
– sensors → “metasensor” with multisensor state X*k = {x*1,…, x*s}
– diverse observations → “metaobservation”: multisensor-multitarget observation Z = {z1,…, zm}, random observation-set Σ
– targets → “metatarget”: multitarget state X = {x1,…, xn}, random state-set Ξ
FISST, III
multisource-multitarget “Statistics 101”: the Almost-Parallel Worlds Principle (APWOP)
single-sensor/target ↔ multi-sensor/target
– sensor ↔ metasensor
– target ↔ metatarget
– vector observation z ↔ finite-set observation Z
– vector state x ↔ finite-set state X
– derivative dpZ/dz ↔ set derivative δβΣ/δZ
– integral ∫ f(x) dx ↔ set integral ∫ f(Z) δZ
– prob.-mass func. pZ(S) ↔ belief-mass func. βΣ(S)
– likelihood fZ(z|x) ↔ multitarget likelihood fΣ(Z|X)
– prior PDF f0(x) ↔ multitarget prior PDF f0(X)
– information theory ↔ multitarget information theory
– filtering theory ↔ multitarget filtering theory
FISST, IV
systematic, rigorous foundation for:
– ill-characterized data & likelihoods
– multitarget filtering and estimation
– multigroup filtering & sensor management
– computational approximation
(diagram: FISST at the hub, unifying detection, tracking, identification, sensor management, performance estimation, group targets, fuzzy logic, Dempster-Shafer, rules, Bayes, information theory, and control theory into unified Bayesian data fusion)
What is Bayes Multitarget Statistics?
random observation-set Σ in observation space Z; random state-set Ξ in state space X
multisensor-multitarget likelihood function*: LZ(X) = f(Z|X) = Pr(Σ = Z|Ξ = X)
multitarget prior distribution* of Ξ: f0(X) = Pr(Ξ = X)
Classical statistics: the unknown state-set X of the targets is a fixed parameter.
Bayesian statistics: the unknown state-set is always a random finite set Ξ, even if we do not explicitly know either it or its distribution f0(X) (the prior). The multisensor-multitarget likelihood function is a conditional probability density.
* equalities temporarily assume discrete state & observation spaces for ease of argument
Multitarget Bayes Statistics: Distributions
multitarget likelihoods and distributions must have a specific form
– specify the multisensor-multitarget measurement space and its measure structure, e.g. finite subsets of Z = Z1 ∪ … ∪ Zs (the union of the measurement spaces of the individual sensors)
– specify the multitarget state space and its measure structure (e.g. finite subsets of the single-target state space); subtle issues are involved here!
– random variables (e.g. random sets)
– f(Z|X) = Pr(Σ = Z|Ξ = X) signifies a random draw from a single multisensor-multitarget measurement space
– f0(X) = Pr(Ξ = X) signifies a random draw from a single multitarget state-space
strictly speaking: re-writing f0(X) as a family of ordinary densities (one for each choice of target number) is not Bayesian, because then it does not describe a random draw from a single multitarget state-space:
  f0^0(∅), f0^1(x1), f0^2(x1, x2), …, f0^n(x1,…, xn), …
Point Processes / Random Finite Sets
a point process is a random finite multi-set
– finite set of pairs: {(ν1, x1),…, (νn, xn)}
– finite multi-set (= set with repetition of elements): {x1,…, x1,…, xn,…, xn} (ν1 copies of x1, …, νn copies of xn)
– Dirac mixture density: ν1 δx1(x) + … + νn δxn(x)
– counting measure: ν1 δx1(S) + … + νn δxn(S), where δx(S) = ∫S δx(y) dy
– simple Dirac mixture density on pairs: δ(ν1,x1)(ν, x) + … + δ(νn,xn)(ν, x)
– simple counting measure on pairs: δ(ν1,x1)(T) + … + δ(νn,xn)(T)
Set Integral (Multi-Object Integral)
Set integral:
  ∫ f(Y) δY = f(∅) + Σn≥1 (1/n!) ∫ f({y1,…, yn}) dy1 ⋯ dyn
(the outer sum runs over all possible numbers of targets; each inner integral sums over all multi-object states with n objects)
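A quick numeric check of the formula, using the standard Poisson ("i.i.d. intensity") multitarget density as an assumed example: because the inner integrals factor, the set integral collapses to a Poisson sum and the normality condition holds.

```python
import math

# Numeric sketch: for an assumed Poisson multitarget density
#   f({y1,...,yn}) = exp(-mu) * D(y1) * ... * D(yn),  mu = integral of D,
# the n-fold inner integrals factor into mu^n, so the set integral reduces to
#   exp(-mu) * sum_n mu^n / n!  = 1  (the normality condition).
GRID = [0.05 * i for i in range(200)]   # 1-D grid covering [0, 10)
DX = 0.05

def D(x):
    # assumed intensity ("PHD"): targets expected near x = 3 and x = 7
    return math.exp(-0.5 * (x - 3.0) ** 2) + math.exp(-0.5 * (x - 7.0) ** 2)

mu = sum(D(x) * DX for x in GRID)       # expected number of targets

# set integral = f(empty) + sum over n of (1/n!) * (n-fold integral)
set_integral = sum(math.exp(-mu) * mu ** n / math.factorial(n)
                   for n in range(40))
```

Truncating the sum at n = 40 is harmless here because the Poisson tail beyond that is negligible for this mu.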
Set Derivative (Multi-Object Derivative)
Frechét derivative of a functional G[h] (of a function-variable h(x)):
  (δG/δg)[h] = limε→0 (G[h + ε·g] − G[h]) / ε, provided g ↦ (δG/δg)[h] is linear and continuous
functional derivative (from physics): set g = δx, the Dirac delta function:
  (δG/δδx)[h] = limε→0 (G[h + ε·δx] − G[h]) / ε
set derivative of a set-function β(S) = G[1S], for X = {x1,…, xn}:
  (δβ/δX)(S) = (δnβ/δxn ⋯ δx1)(S) = (δnG/δδxn ⋯ δδx1)[1S]
Probability Law of a Random Finite Set
probability generating functional (p.g.fl.): GΞ[h] = E[h^Ξ], where h^X = Πx∈X h(x) and h is a bounded real-valued test function
belief-mass function (Choquet capacity theorem): βΞ(S) = GΞ[1S] = Pr(Ξ ⊆ S) = probability that all targets are in S
multitarget probability distribution (= Janossy densities), for X = {x1,…, xn}:
  fΞ(X) = (δβΞ/δX)(∅) “= Pr(Ξ = X)”
Multitarget Posterior Density Functions
normality condition: ∫ fk|k(X|Z(k)) δX = 1
multitarget posterior fk|k(X|Z(k)), conditioned on the measurement-stream Z(k): Z1,…, Zk
multisensor-multitarget measurements: Zk = {z1,…, zm(k)} = individual measurements collected at time k
multitarget state argument:
– fk|k(∅|Z(k)) (no targets)
– fk|k({x1}|Z(k)) (one target)
– fk|k({x1, x2}|Z(k)) (two targets)
– …
– fk|k({x1,…, xn}|Z(k)) (n targets)
The Multitarget Bayes Filter
(diagram: random observation-sets Z produced by targets in observation space; random state-sets and random multi-target motion in state space, e.g. three targets → five targets)
random state-sets: Ξk|k → Ξk+1|k → Ξk+1|k+1
multitarget Bayes filter: … → fk|k(X|Z(k)) → fk+1|k(X|Z(k)) → (new data Zk+1) → fk+1|k+1(X|Z(k+1)) → …
Issues in Multitarget Bayes Filtering
– sensor models f(Z|X): how do we construct multisensor-multitarget measurement models and multisensor-multitarget likelihood functions? what does a “true” multisensor-multitarget likelihood even mean?
– motion models fk+1|k(X|Y): multitarget motion models must account for changes and correlations in target number and motions; how do we construct multitarget motion models and true multitarget Markov densities? what does a “true” multitarget Markov density even mean?
– state estimation X^k|k: multitarget analogs of the usual Bayes-optimal estimators are not even defined! new multitarget state estimators must be defined and shown to be well-behaved
– computability of fk|k(Y|Z(k)): new mathematical tools are even more essential than in the single-target case
True Multitarget Likelihoods & Markov Densities
Bayesian modeling, Step 1: construct the model.
  OBSERVATIONS: Σk = Tk(X) ∪ Ck (random observation-set = target-generated measurements ∪ clutter process)
  MOTION: Ξk+1|k = Fk+1|k(X) ∪ Bk (random state-set of old targets ∪ new targets)
Step 2: use a belief-mass function to describe the model statistics.
  βk(S|X) = Pr(Σk ⊆ S) = probability that every observation is in S
  βk+1|k(S|X) = Pr(Ξk+1|k ⊆ S) = probability that every moved target is in S
Step 3: construct the “true” density function that faithfully describes the model.
  true likelihood function: fk(Z|X) = (δβk/δZ)(∅|X) = set derivative of βk
  true Markov transition density: fk+1|k(Y|X) = (δβk+1|k/δY)(∅|X) = set derivative of βk+1|k
Failure of Classical Bayes Estimators
simple posterior distribution (state vs. posterior probability):
  f(∅) = 1/2 (no-target state)
  f({x}) = 1/4 km−1 for position states x from 0 to 2 km
naïve MAP: compare f(∅) = 1/2 (a unitless probability) with f({x}) = 1/4 km−1 (a density): incommensurable!
naïve EAP: ∫ X f(X) δX = ∅·f(∅) + ∫0^2 x f({x}) dx = “1/2 · (∅ + 1 km)”: undefined!
Some problems go away if continuous states are discretized into cells, BUT:
– the approach is no longer comprehensive
– the power of continuous mathematics is lost
New multitarget state estimators must be defined and proved optimal, consistent, etc.!
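The unit-dependence can be made concrete: under an assumed re-scaling of the length unit, the naive comparison between f(∅) and f({x}) changes its answer, which is exactly the incommensurability on the slide.

```python
# Sketch of why a naive multitarget MAP fails on this slide's posterior:
# f(empty) = 1/2 is a unitless probability, while f({x}) = 1/4 km^-1 is a
# probability *density*; which one is "bigger" depends on the unit of length.
P_EMPTY = 0.5                  # posterior probability of the no-target state
DENSITY_PER_KM = 0.25          # f({x}) on [0, 2] km, in units of km^-1

def density_in(units_per_km):
    # re-express 1/4 km^-1 in another length unit
    # (e.g. units_per_km = 0.1 means measuring in 10-km units)
    return DENSITY_PER_KM / units_per_km

# measured in km: 1/2 > 1/4, so the naive MAP says "no target"
naive_map_km = "empty" if P_EMPTY > density_in(1.0) else "one target"
# measured in 10-km units: the same density becomes 2.5, and the answer flips
naive_map_10km = "empty" if P_EMPTY > density_in(0.1) else "one target"
```

The "estimate" depends on an arbitrary unit choice, so it is not a well-defined estimator at all.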
Application 1: Poorly Characterized Data
SAR features, LRO features, MTI features, ELINT features, SIGINT features
→ interpretation by operators
→ sent out on coded datalink in alphanumeric format
→ messages corrupted by unknowable variations (“fat-fingering”, fatigue, overloading, differences in training & ability)
Modeling Ill-Characterized Data
model “ambiguous data” as random closed subsets Θ of observation space
choose a specific modeling technique: vague (fuzzy), imprecise (Dempster-Shafer), contingent (rules), or general
generalized (i.e., unnormalized) likelihood function: r(Θ|x) = Pr(Σx ∈ Θ) (CAUTION! non-Bayesian)
practical implementation via “matching models” (CAUTION! partially heuristic)
→ single- or multi-target Bayes filters
Generalized Likelihood of Datalink Message Line
SAR line from formatted datalink message:
IMA/SAR/22.0F/29.0F/14.9F/ Y / - / Y / - / - / T / 6 / - //
– convert features to fuzzy sets g1,…, g11
– compare to fuzzy templates f1,x,…, f11,x in database
– compute attribute likelihoods r1,…, r11
generalized likelihood of the SAR message line:
  rSAR(ΘSAR|x) = r1(x) ◇ r2(x) ◇ ⋯ ◇ r10(x) ◇ r11(x), where a ◇ b = a + b − ab
the ‘◇’ operation models statistical correlation intermediate between perfect correlation and independence
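A minimal sketch of the fusion rule above, with assumed attribute-likelihood values; the fold is implemented as the probabilistic sum a + b − ab.

```python
from functools import reduce

# Sketch of the slide's fusion rule with assumed attribute-likelihood values.
# The fold combines the per-attribute generalized likelihoods of one SAR
# message line via a + b - a*b.
def diamond(a, b):
    # for inputs in [0, 1] the result is >= max(a, b) and <= 1
    return a + b - a * b

attribute_likelihoods = [0.9, 0.7, 0.8, 0.6, 0.5, 0.9, 0.4, 0.7, 0.8, 0.6, 0.5]

r_sar = reduce(diamond, attribute_likelihoods)   # fused message-line likelihood
```

The fold is associative and equals 1 minus the product of the (1 − ri), so strong support from any single attribute dominates the fused value.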
Application 2: Poorly Characterized Likelihoods
statistically uncharacterizable “extended operating conditions” (EOCs):
– dents, wet mud, turret articulations
– non-standard equipment (e.g. hibachi welded on motor)
– irregular placement of standard equipment (e.g. grenade launchers)
Modeling Ill-Characterized Likelihoods
hedge the likelihood value with error bars: best-guess, somewhat hedged, more hedged, very hedged
– nominal likelihood function (intensity in pixel i): ℓiz(c) = fi(z|c)
– random error bar on the pixel likelihood (random interval): ℓiz(c) ∈ Σiz(c)
– random error bar on the image likelihood (interval arithmetic): ℓz(c) ∈ Σz(c) = Σ1z(c) ⋯ ΣNz(c)
– fuzzy-valued error bar on the image likelihood: μz,c(x) = Pr(x ∈ Σz(c))
Application 3: Scientific Performance Estimation
evaluate a multisensor-multitarget algorithm via:
– ordinary metrics: miss distance, I.D. miss, track purity
– multitarget miss distance
– mathematical information produced by the algorithm (multitarget Kullback-Leibler)
Multitarget Miss Distance
Oliver Drummond et al. have proposed optimal truth-to-track association for evaluating multitarget tracker algorithms: feed data to the multi-target tracker, then find the best association π(i) between the g ground-truth targets gi ∈ G and the n tracks xi ∈ X.
The “natural” approach works only when the numbers of truths and tracks are equal:
  d(G, X) = minπ (1/g) Σi=1..g ||gi − xπ(i)||
The Wasserstein distance rigorously and tractably generalizes this intuitive approach.
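For the equal-count case, the "natural" association distance above can be brute-forced over all permutations (fine for small g; a real implementation would use an optimal-assignment algorithm). The truth and track positions are assumed examples.

```python
import itertools
import math

# Brute-force version of the "natural" truth-to-track distance for the
# equal-count case: minimize the average truth-to-track distance over all
# permutations pi. Positions are assumed 2-D examples.
def assignment_distance(truths, tracks):
    g = len(truths)
    assert g == len(tracks), "the natural approach needs equal counts"
    return min(
        sum(math.dist(t, tracks[j]) for t, j in zip(truths, perm)) / g
        for perm in itertools.permutations(range(g))
    )

truths = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
tracks = [(0.2, 0.0), (0.0, 5.1), (5.0, 0.3)]
d = assignment_distance(truths, tracks)   # best of the 3! = 6 associations
```

Note that the minimum automatically pairs each truth with its nearby track regardless of the order in which the tracks were reported.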
Application 4: Group-Target Filtering
– detect (i.e., this target group is actually a group target)
– classify (e.g., this group target is an armored brigade)
– track (e.g., this group target is moving towards objective A)
(diagram: individual targets clustered into group target 1 (armored infantry) and group target 2 (tank brigade))
Approximate 1st-Order Multi-Group Bayes Filter
(diagram: random observation-sets Z produced by targets in group targets, in observation space; random group-sets with multi-group motion in state space, e.g. five groups → three groups)
random group-sets: Gk|k → … → Gk+1|k+1
multigroup Bayes filter: fk|k(Xk|Z(k)) → fk+1|k(Xk+1|Z(k)) → fk+1|k+1(Xk+1|Z(k+1))
1st-moment “group PHD” filter: Dk|k(g, x|Z(k)) → Dk+1|k(g, x|Z(k)) → Dk+1|k+1(g, x|Z(k+1))
Application 5: Sensor Management
– multitarget system: all targets regarded as a single system (possibly undetected & low-observable)
– multisensor-multitarget observation (data, attributes, language, rules): all observations regarded as a single observation
– multiple sensors on multiple platforms: all sensors on all platforms regarded as a single system
– goal: choose sensor and platform controls to best detect, track, & identify all targets in the predicted multitarget system
Multi-Sensor / Target Mgmt Based on MHC’s
use a multi-hypothesis correlator (MHC) as the filter part of sensor management
multitarget Bayes filter: fk|k(X|Z(k)) → fk+1|k(X|Z(k)) → fk+1|k+1(X|Z(k+1)), with each step (k|k, k+1|k, k+1|k+1) approximated by an MHC-like algorithm (prediction, data-update)
multitarget motion: Xk|k → Xk+1|k → Xk+1|k+1; multisensor motion: X*k|k → X*k+1|k → X*k+1|k+1
FoV of the i-th sensor: pDi(x, x*i,k+1); multisensor FoV:
  pD(x) = 1 − (1 − pD1(x, x*1,k+1)) ⋯ (1 − pDs(x, x*s,k+1))
optimize the Csiszár information objective function, or optimize a geometric objective function: multisensor FoV & predicted tracks
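The multisensor field-of-view formula can be sketched directly; the Gaussian-shaped per-sensor detection profile below is an assumption for illustration, not the tutorial's sensor model.

```python
import math

# Sketch of the multisensor FoV formula: a target at x is detected if at
# least one of the sensors detects it, assuming conditionally independent
# detections. The Gaussian-shaped per-sensor profile is an assumed model.
def pD_i(x, sensor_center, width=2.0, peak=0.9):
    # hypothetical single-sensor detection probability, peaked at the aimpoint
    return peak * math.exp(-0.5 * ((x - sensor_center) / width) ** 2)

def pD_multisensor(x, sensor_centers):
    # pD(x) = 1 - (1 - pD^1(x)) * ... * (1 - pD^s(x))
    miss = 1.0
    for c in sensor_centers:
        miss *= 1.0 - pD_i(x, c)
    return 1.0 - miss

centers = [0.0, 4.0]
p_between = pD_multisensor(2.0, centers)   # covered by both sensors
p_far = pD_multisensor(10.0, centers)      # far from both aimpoints
```

Overlapping fields of view raise the combined detection probability above any single sensor's, which is what a sensor-management objective can exploit when placing the aimpoints.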
Summary / Conclusions
• “Finite-set statistics” (FISST) provides a systematic,
Bayes-probabilistic foundation and unification for
multisource, multitarget data fusion
• Is an “engineering friendly” formulation of point
process theory, the recognized theoretical
foundation for stochastic multi-object systems
• Has led to systematic foundations for previously
murky problems: e.g. group-target fusion
• Is leading to new computational approaches, e.g.
PHD filter; multistep look-ahead sensor management
• Currently being investigated by other researchers:
– Defense Sciences Technology Office (Australia), U. Melbourne
(Australia), Swedish Defense Research Agency, Georgia Tech
Research Institute, SAIC Corp.