LIDS-P-887 (revised)
February 21, 1979
ESTIMATION AND CONTROL FOR A SENSOR MOVING ALONG
A ONE-DIMENSIONAL TRACK
by
Pooi Yuen Kam*
and
Alan S. Willsky**
Abstract

We consider the problem of estimating a random process defined along a one-dimensional track using measurements from a sensor which traverses this track. The effects of sensor motion and motion blur on the estimation problem are considered, and in the particular case of a linear model for the random process and deterministic sensor motion, these effects are analyzed and discussed in detail. In this special case we also consider the problem of controlling the motion of the sensor in order to optimize some measure of the accuracy of our estimates along the track.
* Department of Electrical Engineering, University of Singapore, Kent Ridge, Singapore 5, Singapore.

** Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Mass. 02139.
This work was performed in part at the MIT Laboratory for Information
and Decision Systems with partial support from NSF under Grant
GK-41647 and from AFOSR under Grant 77-3281. The work of A.S. Willsky
was in part performed at the Department of Computing and Control,
Imperial College of Science and Technology, London, England, under a
Senior Visiting Fellowship from the Science Research Council of Great
Britain.
I. Introduction
In this paper we consider the problem of recursive estimation of a random process defined along a one-dimensional track traversed by a moving sensor. Problems of this type arise in a variety of applications. For example, small variations in the gravitational field of the earth are often measured and mapped using data obtained from ships which travel along prescribed trajectories [1,2]. Another important context in which this kind of problem arises is in the remote sensing of atmospheric variables using instruments carried in a satellite [3-6], and a final related application is the processing of blurred images obtained from moving cameras [7].
In our work we focus our attention on sensor motion along a one-dimensional track, on which the process to be estimated can be modeled as the output of a finite-dimensional shaping filter. While our general formulation allows for a nonlinear shaping filter, most of our attention will focus on the linear case. As mentioned in the preceding paragraph, models of this type are of use in several applications. On the other hand, by restricting attention to one-dimensional tracks, we can expect to gain only some insights into the issues involved in mapping spatially-distributed random processes. The multidimensional problem clearly raises many questions which we have not considered and which must be addressed in the future. Nevertheless, we feel that our study is a valuable step in gaining some understanding into problems of this type. In particular, the ideas and results that we have developed concerning the effect of sensor motion on the estimation problem are of some importance and, in fact, represent the major focus of our work.
The assumption that the random process to be estimated can be modeled as the output of a linear shaping filter is clearly an idealization. However, it is one that has found great use in practice [1-7]. For example, linear-Gaussian models for the deviation of a gravitational field from some idealized reference have been developed using both physically-based models and statistical parameter identification techniques [1,2]. The further assumption that the shaping filter model is finite dimensional is also an approximation. For example, physically-based models for the power spectral density of random gravity fluctuations are not rational [1,2], and, furthermore, except in certain special cases, the power spectral density along a track across a random field will not be rational even if the spectrum for the entire field is rational. Nevertheless, the assumption of finite-dimensionality is one that has met with success in applications, and we have chosen to use this assumption for this reason as well as for the reason of obtaining detailed solutions. The effects of sensor motion on these solutions are particularly clear, and this has facilitated our gaining an understanding of some of the issues that arise in processing data from moving sensors.
A final point concerning the formulation and perspective adopted in this paper relates to the focus on recursive techniques. One of the largest problems to be faced in the analysis of spatially-distributed random data is that of efficient handling of the large amounts of data involved. Since model-based recursive estimation techniques have proven to be extremely efficient for processing time series data, it is natural to ask whether analogs of such techniques exist for spatial data. Thus the main goal of our work has been to gain some understanding into problems of mapping spatially-distributed random fields by considering the one-dimensional problem using the tools of recursive estimation theory. In the next section we formulate the basic problem and indicate how sensor speed affects the measurements, while the specialization to the linear case is the topic addressed in Section III. The results of Section III are used in Section IV to formulate an optimal control problem for controlling sensor motion to achieve the best map possible. This formulation is very much in the spirit of the work in [8] on optimal search strategies. In Section V we extend the results of Section III to include the possibility of motion blur in the observations. Most of the detailed analysis through Section V is for the case of deterministic sensor motion. In Section VI we discuss the effects of random sensor motion, and the paper concludes with a discussion in Section VII of some of the issues we have raised and open problems that need to be examined.
II. Problem Formulation
Let s denote distance along the one-dimensional track, and let the (possibly vector-valued) spatial random process to be estimated be denoted by ξ(s). Our basic assumption is that ξ can be modeled as the output of a spatial shaping filter, that is, a stochastic differential equation in s
dx(s) = f(x(s),s)ds + g(x(s),s)dw(s),   s > 0                         (2.1)

ξ(s) = h(x(s),s)                                                      (2.2)

where x(0) is a given random variable, independent of the Brownian motion process w which has covariance

E[w(s)w'(σ)] = ∫₀^{min(s,σ)} Q(τ)dτ                                   (2.3)

Note that if ξ(s) has a rational power spectral density, we can always find a linear, space-invariant model of this type.
The spatial process is observed through a sensor that moves in the direction of increasing s with velocity v(t). The velocity may be deterministic or random but is assumed to be positive for all t with probability 1. The equation of motion of the sensor then is

ds(t) = v(t)dt,   s(0) = 0                                            (2.4)

The value of the process ξ being observed at time t then is ξ(s(t)), and the measurements are modeled by*

dz₁(t) = r(ξ(s(t)),t)dt + dξ₁(t)                                      (2.5)

where ξ₁ is a Brownian motion process with

E[ξ₁(t)ξ₁'(s)] = I min(t,s)                                           (2.6)

We assume that {ξ₁(τ₁) - ξ₁(τ₂), τ₁ > τ₂ ≥ t} is independent of {s(τ), v(τ), w(s(τ)), 0 ≤ τ ≤ t} and x(0), and hence of {ξ(τ), 0 ≤ τ ≤ t}.

* We include the subscript "1" here, as we will introduce a second set of observations in Section VI.
Since v(t) is positive, s(t) is monotonically increasing and we can define t(s) as the inverse of s(t). We will assume that w(s₁) - w(s₂), s₁ > s₂ ≥ s, is independent of {s(t) ∧ s, t ≥ 0} ∪ {v(t(s')), 0 ≤ s' ≤ s}.*

* A simpler but more restrictive condition would be that ξ₁ is independent of v, w, and x(0) and that w is independent of v. The less restrictive condition given in the text is included since it allows for the possibility that the sensor velocity v might be chosen to depend upon past observations.
Since ξ is a memoryless function of x, we can combine equations (2.2) and (2.5) to obtain

dz₁(t) = c(x(t),s(t),t)dt + dξ₁(t)                                    (2.7)

where

x(t) = x(s(t))                                                        (2.8)

c(x(t),s(t),t) = r[h(x(t),s(t)),t]                                    (2.9)

Our problem then is to estimate the spatial shaping filter state x(s), which satisfies (2.1), (2.3), given the measurements z₁ specified by (2.6)-(2.8) and the sensor motion equation (2.4).
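To make the formulation concrete, the following sketch (ours, not from the paper) simulates a scalar instance of the shaping filter (2.1)-(2.2), the sensor motion (2.4), and the observation process (2.5) by Euler-Maruyama discretization. The choices f(x,s) = -ax, g = b, h(x,s) = x, r(ξ,t) = ξ, unit-strength noises, and a constant speed v are illustrative assumptions only.

    import numpy as np

    # Minimal sketch (not from the paper): simulate the scalar shaping filter,
    # the sensor motion, and the integrated observation process.
    rng = np.random.default_rng(0)
    a, b, v, dt, T = 1.0, 1.0, 2.0, 1e-3, 5.0

    x = 0.0          # shaping filter state x(s), with x(0) = 0
    s = 0.0          # sensor position s(t)
    z1 = 0.0         # integrated observation z1(t)
    for _ in range(int(T / dt)):
        ds = v * dt                                   # (2.4): distance swept this step
        # (2.1): dx = f ds + g dw, with E[dw^2] = Q ds (Q = 1 here)
        x += -a * x * ds + b * np.sqrt(ds) * rng.standard_normal()
        s += ds
        # (2.5): dz1 = r(xi(s(t)), t) dt + d(xi_1), unit-strength observation noise
        z1 += x * dt + np.sqrt(dt) * rng.standard_normal()
    print("s(T) =", s, " field value at s(T) =", x, " z1(T) =", z1)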
In order to solve this estimation problem, it is necessary to describe the evolution of x(t). To do this
we must utilize the change of time scale formula for diffusion processes [9,10]. An application of this result, which requires v(t) > 0, ∀t, w.p.1, gives us

dx(t) = f̃(x(t),t)v(t)dt + g̃(x(t),t)v^{1/2}(t)dη(t)                   (2.10)

where η is a Brownian motion process with

E[dη(t)dη'(t)] = Q(s(t))dt                                            (2.11)

and

f̃(·,t) = f(·,s(t))                                                    (2.12)

g̃(·,t) = g(·,s(t))                                                    (2.13)
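As a quick numerical check of the change of time scale (ours, not from the paper), one can simulate the time-domain equation (2.10) for a scalar linear model with a time-varying speed and compare the sample variance at time T with the variance obtained by propagating the spatial model out to the distance s(T). The constants and speed profile below are hypothetical.

    import numpy as np

    # Sketch (not from the paper): Monte Carlo check of (2.10)-(2.11) for the
    # scalar linear case f(x,s) = A*x, g = B, Q = 1.
    rng = np.random.default_rng(1)
    A, B = -0.5, 1.0
    dt, T, npaths = 1e-3, 2.0, 20000
    v = lambda t: 1.0 + 0.5 * np.sin(2.0 * t)      # illustrative speed profile

    x = np.zeros(npaths)
    s, t = 0.0, 0.0
    for _ in range(int(T / dt)):
        vt = v(t)
        x += A * vt * x * dt + B * np.sqrt(vt * dt) * rng.standard_normal(npaths)
        s += vt * dt
        t += dt

    # Spatial variance ODE dPi/ds = 2*A*Pi + B^2 with Pi(0) = 0, evaluated at s(T).
    Pi = (B**2 / (2 * A)) * (np.exp(2 * A * s) - 1.0)
    print("Monte Carlo variance:", x.var(), " spatial-model variance at s(T):", Pi)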
The estimation of x(t) is now a standard nonlinear filtering problem, which thus has all of the difficulties associated with that type of problem. A discussion of the general nonlinear case is given in [10]. For the remainder of this paper we will concentrate on the linear case.
III. Estimation of Linear Spatial Processes with Deterministic Sensor Motion
Suppose that we have a linear process model

dx(s) = A(s)x(s)ds + B(s)dw(s)                                        (3.1)

and linear observations

dz₁(t) = C(s(t),t)x(t)dt + dξ₁(t)                                     (3.2)
In this case the evolution of x(t) is given by

dx(t) = A(s(t))v(t)x(t)dt + B(s(t))v^{1/2}(t)dη(t)                    (3.3)

Assuming that v(t) is deterministic and that x(0) is Gaussian with mean x̄(0) and variance P(0), the conditional mean x̂(t) of x(t) given z₁(τ), τ ≤ t, can be computed using the Kalman filter

dx̂(t) = A(s(t))v(t)x̂(t)dt + P(t)C'(s(t),t)[dz₁(t) - C(s(t),t)x̂(t)dt]      (3.4)

The covariance P(t) of the estimation error (x(t) - x̂(t)) can be computed off-line from the Riccati equation

dP(t)/dt = v(t)[A(s(t))P(t) + P(t)A'(s(t))] + v(t)B(s(t))Q(s(t))B'(s(t)) - P(t)C'(s(t),t)C(s(t),t)P(t)      (3.5)
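A minimal sketch (ours, not from the paper) of how (3.4)-(3.5) can be propagated by Euler discretization for a scalar, space-invariant model with a deterministic speed profile; all numerical values are hypothetical.

    import numpy as np

    # Sketch (not from the paper): Euler discretization of the Kalman filter (3.4)
    # and Riccati equation (3.5) for a scalar model with constant A, B, C, Q.
    rng = np.random.default_rng(2)
    A, B, C, Q = -1.0, 1.0, 1.0, 1.0
    dt, T = 1e-3, 4.0
    v = lambda t: 1.5 + np.cos(t)          # deterministic, positive speed profile

    x, xhat, P = 1.0, 0.0, 1.0             # true state, estimate, error covariance
    t = 0.0
    for _ in range(int(T / dt)):
        vt = v(t)
        # simulate the true state (3.3) and the observation increment (3.2)
        x += A * vt * x * dt + B * np.sqrt(Q * vt * dt) * rng.standard_normal()
        dz = C * x * dt + np.sqrt(dt) * rng.standard_normal()
        # filter update (3.4) and covariance propagation (3.5)
        xhat += A * vt * xhat * dt + P * C * (dz - C * xhat * dt)
        P += (2 * A * vt * P + vt * B * Q * B - (P * C) ** 2) * dt
        t += dt
    print("estimate:", xhat, " true state:", x, " covariance:", P)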
Note that because of the assumption of deterministic sensor motion, the estimates x̂(t) can be directly transformed into estimates of the field x(s). That is, x̂(t(s)) is the optimal estimate of x(s) given data up to the point s, or, equivalently, time t(s). The covariance of this estimate is obviously

M(s) = P(t(s))                                                        (3.6)

and, differentiating (3.6), we obtain

dM(s)/ds = A(s)M(s) + M(s)A'(s) + B(s)Q(s)B'(s) - M(s)C'(s,t(s))C(s,t(s))M(s)/v(t(s))      (3.7)
Examining (3.5) and (3.7) we can see how the speed of the sensor affects the performance of the estimator. The first two terms on the right-hand sides of (3.5) and (3.7) are the covariance propagation dynamics without measurements. Intuitively the matrix A(s) controls the "correlation distance" in the process x(s), while A(s(t))v(t) determines the correlation time for x(t). For example, in the scalar, space-invariant (A = constant) case, 1/|A| is the correlation distance for x(s) and 1/(|A|v) is the correlation time for x(t). Thus we have the physically correct feature that the faster we move, the faster the fluctuations we see in the observed process. Also, we would intuitively expect that the quality of the measurements would decrease as the sensor velocity is increased. This feature can be deduced from (3.7), where we see that the term that tends to decrease M(s) to account for the observations is inversely proportional to v.
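This velocity dependence is easy to exhibit numerically. The sketch below (ours, not from the paper) integrates the spatial Riccati equation (3.7) for a scalar, space-invariant model at several hypothetical constant speeds; the steady-state map error M(s) increases with v because the measurement term enters with the factor 1/v.

    import numpy as np

    # Sketch (not from the paper): spatial Riccati equation (3.7), scalar case,
    # integrated by Euler steps for several constant sensor speeds.
    A, B, C, Q = -1.0, 1.0, 1.0, 1.0
    ds, s_end = 1e-3, 10.0

    for v in (0.5, 1.0, 2.0, 4.0):          # hypothetical constant speeds
        M = 1.0                              # M(0) = P(0)
        for _ in range(int(s_end / ds)):
            dM = 2 * A * M + B * Q * B - (C * M) ** 2 / v
            M += dM * ds
        print(f"v = {v:4.1f}  steady-state map error M = {M:.4f}")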
IV. Optimal Mapping via Sensor Motion Control
As we have seen, the motion of the sensor affects the quality of the observations being taken and hence the accuracy of the estimates. An interesting problem then is the control of sensor speed in order to optimize some measure of the quality of the spatial map that the observations produce. In this section we look at this problem and formulate an optimal control problem that captures the important features to be considered. We consider only the linear model - deterministic motion problem examined in the preceding section, and, for simplicity, we consider only the scalar case. Extension to the vector case is immediate using the matrix version of the minimum principle [14].

Suppose we define our measure of the quality of the spatial map on the interval [0,s₀] by

∫₀^{s₀} q(s)M(s)ds                                                    (4.1)

where q(s) is a positive weighting function which we specify a priori. We also include a cost on sensor speed to reflect penalties for large velocities, and we assume that we have a fixed time interval [0,T] in which we must traverse the spatial interval [0,s₀]. Then, transforming (4.1) to a time integral, we obtain the following optimal control problem. Given the dynamics
dP(t)/dt = 2A(s(t))v(t)P(t) + v(t)B²(s(t))Q(s(t)) - C²(t)P²(t)        (4.2)

ds(t)/dt = v(t)                                                       (4.3)

with given initial conditions

P(0) = P₀,   s(0) = 0                                                 (4.4)

determine the sensor velocity time history that minimizes

J = ∫₀^T [q(s(t))v(t)P(t) + r(t)v²(t)]dt                              (4.5)

subject to

s(T) = s₀                                                             (4.6)

v(t) ≥ ε   ∀t                                                         (4.7)
Here, r(t) is a specified positive time function, and ε is an arbitrary but fixed positive number, included to insure the positivity of the velocity.

This optimal control problem can be solved by a direct application of the minimum principle [12,13]. We will consider this application with the inclusion of one more terminal condition:

P(T) = P_T                                                            (4.8)

i.e., a type of "target" terminal estimation error. This terminal condition helps to simplify the two-point boundary value problem that must be solved to determine the optimal control. The free terminal condition problem can, of course, also be considered, but for our demonstration purposes we need only consider the simpler problem.
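Before writing down the necessary conditions, it may help to view the problem computationally. The sketch below (ours, not from the paper) forward-integrates (4.2)-(4.3) and evaluates the cost (4.5) for a few admissible velocity profiles meeting s(T) = s₀; it only compares candidate controls numerically and does not solve the optimal control problem. The constants are those of the special case (4.11) introduced later, and the horizon data are hypothetical.

    import numpy as np

    # Sketch (not from the paper): evaluate the cost (4.5) for candidate velocity
    # profiles by forward-integrating (4.2)-(4.3).
    A, B2Q, C2 = 0.0, 1.0, 0.5            # constants of the special case (4.11)
    q, r = 1.0, 0.5
    T, s0, P0, dt = 4.0, 4.0, 1.0, 1e-3
    tgrid = np.arange(0.0, T, dt)

    def cost(v_of_t):
        P, s, J = P0, 0.0, 0.0
        for t in tgrid:
            vt = v_of_t(t)
            J += (q * vt * P + r * vt**2) * dt                    # (4.5)
            P += (2 * A * vt * P + vt * B2Q - C2 * P**2) * dt     # (4.2)
            s += vt * dt                                          # (4.3)
        return J, s

    profiles = {
        # both profiles cover the distance s0 by time T and stay bounded away from 0
        "constant": lambda t: s0 / T,
        "ramp":     lambda t: 0.5 * s0 / T + s0 * t / T**2,
    }
    for name, prof in profiles.items():
        J, s = cost(prof)
        print(f"{name:8s}: J = {J:.4f}, s(T) = {s:.3f}")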
The Hamiltonian for our problem can now be written as

H = D₀[q(s(t))P(t)v(t) + r(t)v²(t)] + D₁(t)[2A(s(t))v(t)P(t) + v(t)B²(s(t))Q(s(t)) - P²(t)C²(t)] + D₂(t)v(t) + μ(t)[ε - v(t)]      (4.9)

where

μ(t) ≥ 0 if ε - v(t) = 0;   μ(t) = 0 if ε - v(t) < 0                  (4.10)

(See [13].) The variables D₀, D₁(t), D₂(t) and μ(t) are costate variables. The optimal control problem can now be solved in principle by applying the minimum principle [12] to obtain the necessary conditions that characterize the optimal velocity v*(t) and the optimal estimation error covariance P*(t). It is evidently impossible to obtain any algebraic simplification on the set of necessary conditions, which, in practice, usually have to be solved numerically on a computer.
There are, however, special cases in which an explicit solution can be obtained, and we now present one such example. Assume the following constant conditions:

A = 0,   B²Q = 1,   C² = 1/2,   r = 1/2,   q = 1                      (4.11)

In this case the process x(s) is a Wiener process, and the choice of q(s) = 1 means that we give equal weight to the accuracy of all parts of our spatial map. Now, assume that the terminal conditions on P and s are so given that they can be met with more than one velocity profile v(t), 0 ≤ t ≤ T. Then, in the case in which v(t) > ε ∀t, we can derive the following expression for P*(t):
(dP*(t)/dt)² = D₂*(0)P*²(t) + P*³(t) + (1/4)P*⁴(t) + C                (4.12)

where

C = (dP*(0)/dt)² - [D₂*(0)P*²(0) + P*³(0) + (1/4)P*⁴(0)]              (4.13)
The derivation of (4.12) is presented in the Appendix. By writing equation (4.12) as

(dP*(t)/dt)² = (1/4)(P* - α)(P* - β)(P* - γ)(P* - δ)                  (4.14)

where α, β, γ, δ denote the roots of the quartic in P* on the right-hand side of (4.12),
the solution is given in closed form in terms of the Jacobi elliptic function sn{·,·} [15]. In particular, P*(t) can be written as a ratio of expressions quadratic in

Y = sn{Mt, k}                                                         (4.16)

with the constant coefficients of this ratio, the modulus k, and the scale factor M in (4.15)-(4.19) determined by the roots α, β, γ, δ and the initial condition P*(0). The function sn{·,·} is an elliptic function known as the sinus amplitudinus function [15] and it is tabulated in [16].
We have now
obtained a closed form solution for P*(t), and this enables us to
obtain the optimal velocity v*(t) from the Riccati equation, which
is given in this case by
dP*(t)/dt = v*(t) - (1/2)P*²(t)                                       (4.20)
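As an alternative to evaluating the elliptic-function solution, the sketch below (ours, not from the paper) integrates the second-order equation (A.13) of the Appendix for hypothetical values of D₂*(0), P*(0) and dP*(0)/dt, recovers the optimal velocity from (4.20), and checks that the first integral (4.12) is approximately conserved along the numerical solution.

    import numpy as np

    # Sketch (not from the paper): numerical integration of (A.13), recovery of
    # v*(t) from (4.20), and a check of the first integral (4.12)-(4.13).
    D20, P, dP = 0.2, 1.0, 0.5              # hypothetical D2*(0), P*(0), dP*(0)/dt
    dt, T = 1e-4, 1.0
    Cconst = dP**2 - (D20 * P**2 + P**3 + 0.25 * P**4)    # constant C from (4.13)

    for _ in range(int(T / dt)):
        d2P = D20 * P + 1.5 * P**2 + 0.5 * P**3           # (A.13)
        P += dP * dt
        dP += d2P * dt
    vstar = dP + 0.5 * P**2                                # (4.20) rearranged
    residual = dP**2 - (D20 * P**2 + P**3 + 0.25 * P**4) - Cconst
    print("P*(T) =", P, " v*(T) =", vstar, " first-integral drift:", residual)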
V. The Inclusion of Motion Blur
We now suppose that because of its own dynamics, the sensor is not capable of making instantaneous, point measurements. Rather, the sensor output at time t involves a blurring of that part of the spatial process already swept:

dz₁(t) = [∫₀^t H(t-τ)x(τ)dτ]dt + dξ₁(t)                               (5.1)

where we have assumed, for simplicity, a time invariant blur model. Models of this type were considered in the discrete time case in [7]. Suppose that the matrix blurring function H is realizable as the impulse response of a finite dimensional linear system:

H(t-τ) = Ce^{F(t-τ)}G                                                 (5.2)

Then we can write

dz₁(t) = Cy(t)dt + dξ₁(t)                                             (5.3)

dy(t) = Fy(t)dt + Gx(t)dt                                             (5.4)
We now have an estimation problem with an augmented state, consisting of x and y, and the optimal filtering equations are

d[x̂(t); ŷ(t)] = [A(s(t))v(t)  0; G  F][x̂(t); ŷ(t)]dt + P(t)[0; C']{dz₁(t) - Cŷ(t)dt}      (5.5)
where P(t), the error covariance for the augmented state estimation error, can be computed from

dP(t)/dt = [A(s(t))v(t)  0; G  F]P(t) + P(t)[v(t)A'(s(t))  G'; 0  F'] + [v(t)B(s(t))Q(s(t))B'(s(t))  0; 0  0] - P(t)[0  0; 0  C'C]P(t)      (5.6)
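A minimal sketch (ours, not from the paper) of the augmented-state construction for scalar x and a first-order blur H(σ) = Ce^{Fσ}G, propagating the augmented Riccati equation (5.6) by Euler steps; the values of A, B, Q, F, G, C and the speed profile are hypothetical.

    import numpy as np

    # Sketch (not from the paper): augmented Riccati equation (5.6) for scalar x
    # and a first-order blur model (5.2)-(5.4).
    A, B, Q = -1.0, 1.0, 1.0
    F, G, C = -5.0, 5.0, 1.0                 # blur dynamics and readout
    v = lambda t: 1.0 + 0.5 * np.sin(t)      # deterministic sensor speed
    dt, T = 1e-3, 5.0

    P = np.eye(2)                            # augmented covariance, state (x, y)
    Cbar = np.array([[0.0, C]])              # observations see only y, cf. (5.3)
    t = 0.0
    for _ in range(int(T / dt)):
        vt = v(t)
        Fa = np.array([[A * vt, 0.0],
                       [G,      F  ]])       # augmented dynamics matrix
        W = np.array([[vt * B * Q * B, 0.0],
                      [0.0,            0.0]])  # process-noise intensity
        # (5.6): dP/dt = Fa P + P Fa' + W - P Cbar' Cbar P
        P += (Fa @ P + P @ Fa.T + W - P @ Cbar.T @ Cbar @ P) * dt
        t += dt
    print("augmented steady-state covariance:\n", P)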
VI. The Effect of Imperfectly Known Sensor Motion
The analysis in the last few sections has been aided by the assumption that the trajectory of the sensor was known or perfectly controllable. In this section we indicate some of the complications that arise if this is not the case. We assume that the spatial process is modeled as in (3.3), which is repeated here for convenience:

dx(t) = A(s(t))v(t)x(t)dt + B(s(t))v^{1/2}(t)dη(t)                    (6.1)

and we assume that the motion of the sensor can be described by

ds(t) = v(t)dt                                                        (6.2)

dv(t) = u(t)dt + k(v(t),t)dζ(t)                                       (6.3)
Here u(t) represents the known part of the sensor's acceleration, while the other term models the unknown random perturbations in the velocity. Here ζ is a standard Brownian motion process. Note that for our formulation, possible choices of k are restricted to those for which v(t) > 0 ∀t with probability 1. For example, the bilinear model

k(v) = -av                                                            (6.4)

with the assumption u > 0, v(0) > 0 satisfies the positivity condition. In general, we must have k dependent upon v to satisfy the constraint, and this rules out a linear model. Of course if u is large compared with the disturbance, we may be able to use the linear model in practice.
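A short simulation (ours, not from the paper) illustrating the positivity claim for the bilinear model (6.4): with u > 0 and v(0) > 0, Euler-Maruyama sample paths of (6.3) with k(v) = -av remain positive (the discrete scheme only approximates the exact almost-sure positivity); the values of a, u and the horizon are hypothetical.

    import numpy as np

    # Sketch (not from the paper): sample paths of dv = u dt - a v d(zeta).
    rng = np.random.default_rng(3)
    a, u, dt, T, npaths = 0.5, 0.2, 1e-3, 10.0, 1000
    v = np.full(npaths, 1.0)                 # v(0) > 0
    for _ in range(int(T / dt)):
        v += u * dt - a * v * np.sqrt(dt) * rng.standard_normal(npaths)
    print("minimum simulated velocity over all paths:", v.min())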
Given the model (6.1)-(6.3), we assume that we observe

dz₁(t) = c(t)x(t)dt + dξ₁(t)                                          (6.5)

dz₂(t) = v(t)dt + dξ₂(t)                                              (6.6)

dz₃(t) = s(t)dt + dξ₃(t)                                              (6.7)

where ξ₂ and ξ₃ are independent Wiener processes, both independent of ξ₁. Our goal is to obtain a spatial map of the process x(s) given the observations

Z_t = {z₁(τ), z₂(τ), z₃(τ), τ ≤ t}

Unfortunately, two types of problems occur. First of all, the optimal estimation of x, s, and v is a nonlinear filtering problem, and this is the case even if A, B, and Q do not depend on s and we assume a linear model in (6.3).
The problem is the product terms in (6.1), since v is now random. Note also that all of the observations contain information about all of the states. For example, the observation z₁ does yield information concerning the velocity v (and hence the position s). In fact, it is precisely this information that is used in map-matching navigation systems [1,20], in which position and velocity are deduced by correlating an a priori map of the process x(s) with the observed process z₁(t).
The second problem centers around the issue of mapping itself. Recall that

x̂(t) = E[x(t) | z₁(τ), τ ≤ t]                                        (6.8)

When s(t) was known perfectly, we could associate this estimate with a specific spatial point. That is,

x̂(s) = E[x(s) | z₁(τ), τ ≤ t(s)] = x̂(t(s))                           (6.9)

However, when s itself is unknown and must be estimated, we do not have such a simple relationship, and, in fact, we cannot exactly associate x̂(t) with the estimate of x(s) at any specific point.
To overcome this difficulty, one might consider estimating x(ŝ(t)), where ŝ(t) is measurable with respect to Z_t (and hence is known when we know the measurements). Such an approach leads to some extremely complex technical problems. For example, one might consider trying to estimate x(ŝ(t)), where

ŝ(t) = E[s(t) | Z_t]                                                  (6.10)

However, we cannot obtain a differential equation for x(ŝ(t)) as we did for x(s(t)). The problem is that in the latter case we changed the time scale of a diffusion process with an increasing process s(t). In the case of x(ŝ(t)) we want to change the time scale of a diffusion process using another diffusion process. We refer the reader to [10,17] for further discussion of these technical problems and several other approaches.
In the remainder of this section we describe one suboptimal estimation scheme that arises naturally from our formulation and from the analysis of the preceding sections. This scheme decouples the sensor location and field estimation problems. Suppose we compute the estimates of v(t) and s(t) using only the observations z₂ and z₃. If we make the assumption that (6.3) is linear (k(v(t),t) = g), these estimates are calculated by a Kalman filter

d[ŝ(t); v̂(t)] = [v̂(t); u(t)]dt + K(t)[dz₃(t) - ŝ(t)dt; dz₂(t) - v̂(t)dt]      (6.11)

where, assuming that ξ₂ and ξ₃ are unit strength and independent, K(t) satisfies the Riccati equation

dK(t)/dt = [0  1; 0  0]K(t) + K(t)[0  0; 1  0] + [0  0; 0  g²] - K²(t)       (6.12)
Having the estimates ŝ(t) and v̂(t), we now devise an estimate for x(t) assuming that these values of ŝ(t) and v̂(t) are, in fact, the true values. That is, we implement the Kalman filter of Section III with v and s replaced by v̂ and ŝ. This yields the filter equations

dx̂(t) = A(ŝ(t))v̂(t)x̂(t)dt + P(t)C'(ŝ(t),t)[dz₁(t) - C(ŝ(t),t)x̂(t)dt]      (6.13)

dP(t)/dt = v̂(t)[A(ŝ(t))P(t) + P(t)A'(ŝ(t))] + v̂(t)B(ŝ(t))Q(ŝ(t))B'(ŝ(t)) - P(t)C'(ŝ(t),t)C(ŝ(t),t)P(t)      (6.14)

Note that the Riccati equation (6.14) must be solved on-line, as the quality of the measurements -- as dictated by sensor speed -- is estimated on-line. We also associate the estimate x̂(t) with the point ŝ(t) on our spatial map. In theory, there is no guarantee that ŝ(t) is monotonically increasing, but in practice it is very likely to be so because position estimates can often be made very accurately. An evaluation of the performance of this estimator and the development of alternative schemes, including those that attempt to extract velocity and position information from the observations z₁, remain for the future.
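A minimal sketch (ours, not from the paper) of the decoupled scheme for a scalar field: a linear Kalman filter (6.11)-(6.12) estimates position and velocity from z₂ and z₃ under the linear approximation k(v,t) = g, and the resulting ŝ(t), v̂(t) are substituted into the field filter (6.13)-(6.14); all constants are hypothetical, and v̂(t) is assumed to remain positive over the run.

    import numpy as np

    # Sketch (not from the paper): decoupled suboptimal estimator of Section VI.
    rng = np.random.default_rng(4)
    A, B, Q, C = -1.0, 1.0, 1.0, 1.0
    g, u = 0.2, 0.5                          # velocity disturbance level, known input
    dt, T = 1e-3, 5.0

    s, vtrue, x = 0.0, 1.0, 0.0              # true position, velocity, field state
    sh, vh = 0.0, 1.0                        # estimates of s and v
    K = np.eye(2)                            # covariance/gain of the (s, v) filter
    xh, P = 0.0, 1.0                         # field estimate and its covariance
    Fm = np.array([[0.0, 1.0], [0.0, 0.0]])
    Wm = np.array([[0.0, 0.0], [0.0, g**2]])

    for _ in range(int(T / dt)):
        # simulate the truth, (6.1)-(6.3), and the observations (6.5)-(6.7)
        x += A * vtrue * x * dt + B * np.sqrt(Q * vtrue * dt) * rng.standard_normal()
        s += vtrue * dt
        vtrue += u * dt + g * np.sqrt(dt) * rng.standard_normal()
        dz1 = C * x * dt + np.sqrt(dt) * rng.standard_normal()
        dz2 = vtrue * dt + np.sqrt(dt) * rng.standard_normal()
        dz3 = s * dt + np.sqrt(dt) * rng.standard_normal()
        # position/velocity Kalman filter (6.11)-(6.12)
        innov = np.array([dz3 - sh * dt, dz2 - vh * dt])
        sh, vh = np.array([sh + vh * dt, vh + u * dt]) + K @ innov
        K += (Fm @ K + K @ Fm.T + Wm - K @ K) * dt
        # field filter (6.13)-(6.14) with s, v replaced by their estimates
        xh += A * vh * xh * dt + P * C * (dz1 - C * xh * dt)
        P += (2 * A * vh * P + vh * B * Q * B - (P * C) ** 2) * dt
    print("true x:", x, " estimate:", xh, " map point s-hat:", sh)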
VII. Conclusions
In this paper we have formulated and studied the problem of estimating a one-dimensional time invariant spatial random process given observations from a moving point sensor. Our formulation has allowed us to study the effects of sensor motion on the quality of
the observations and on the estimation problem itself.
This has led
us to consider the problem of optimally controlling the velocity of
the sensor and to study the effects of uncertainties in our knowledge
of sensor location and speed.
In addition, we have shown how our
formulation can be extended to allow for the effects of sensor
blurring.
As mentioned in the introduction, our purpose here has been to
expose some of the key issues involved and to provide a foundation
for further, more advanced studies.
Several extensions and related
problems directly come out of the questions we have studied.
An
obvious area for further work is in the study of the nature and
structure of the optimal velocity control problem discussed in Section
IV.
In addition, one might also wish to consider the problem in which
the control variable is sensor acceleration.
In this case v is a state
variable, and, because of (4.7), we have a state-constrained optimal
control problem.
Also, in the nonlinear case or the uncertain motion
problem of Section VI, the optimal velocity or acceleration problem becomes one of on-line stochastic control.
The structure of such
controllers should be investigated, as should the performance of the
estimator suggested in Section VI either by analysis or by simulations.
Another variation, one that brings us closer to a realistic formulation for many problems, is to replace the filtered covariance P(t) in the mapping criterion (4.5) with the smoothed covariance, i.e., we
utilize the entire measurement history z₁(τ), τ ∈ [0,T], to obtain an accurate spatial map over the region [0,s₀]. In this case we lose the causal structure (the smoothed error covariance depends upon the entire velocity history), and the study of the nature of optimal sensor trajectories in this case is an interesting problem.
Also we
can consider extending our analysis by allowing the sensor to reverse
direction.
The deterministic analysis of Section III can clearly be
extended in this case, although the optimal estimator immediately
becomes a smoother once the sensor goes into reverse.
In the case of
random sensor motion, even the time of the reversal of direction is
unknown, and hence we do not even know when to start smoothing. The study of this is open.
Intuitively, if we use a criterion based on
smoothed error covariances, one would expect that any performance
achievable by a trajectory with reverse motion can also be achieved
by a monotone trajectory.
The study of problems such as these remains
for the future.
In the introduction we mentioned that the sensor motion control
problem is similar in spirit to the results in [8] on optimal search
problems.
In the formulations in [8] one is interested in determining
strategies for searching a region for some object, given a specification
of the probability of the detection of the object in a subset of the
region as a function of the amount of energy put into searching that
subset.
In our formulation the velocity-estimation error covariance
relationship plays the role of the search energy-probability of
detection specification. Given this observation, an interesting problem is the following: suppose we modify the description of x(s) as in (3.1) by allowing for one or more jumps in the value of x(s) at unknown locations; determine the optimal search procedure -- i.e., velocity profile -- to locate these jumps. Here again one might imagine on-line procedures, where we may choose to reverse direction to look at a given
region more carefully once we've satisfied ourselves that no jumps are
present outside that region.
In this case some of the techniques for
the detection of failures and other abrupt changes may be of value
[19].
As mentioned in Section VI, the problem of estimation when sensor
motion is uncertain represents a difficult challenge.
Not only should
the suboptimal estimator discussed be studied, but there is certainly a
need for the development of other estimation systems.
Of particular
importance is the problem of estimating s(t) and v(t) given the sensor
measurements zl(t).
As we discussed earlier, this is a problem of
potentially great practical significance for map-matching navigation
systems.
Another important possibility is to allow the spatial process
to directly affect sensor motion [10,18].
This might arise, for example
if the spatial process were a force field (such as a gravitational field)
and our only observations were of the motion of the "sensor" (i.e., only z₂ and z₃ of Section VI). In this case it is the field x(s) which is observed only indirectly through its influence on v(t) and s(t).
Finally, there are the extensions of these ideas to processes that vary in several spatial dimensions and possibly in time.
Problems
such as estimation given data along one or more tracks each of which can
contain changes of direction, curves, crossings, etc., are of importance
in applications such as gravity mapping and meteorological analysis.
In
these as in many applications involving multidimensional processes, two
of the central problems are the development of efficient procedures for
assimilating all of the data and the design of efficient strategies for
deciding where to gather data or where to search.
The results in this
paper are aimed at the simplest of problems of these types and thus merely
form an initial step.
In order for the more general cases to be considered,
a substantial effort is needed in obtaining useful multidimensional
probabilistic models and formulations.
APPENDIX
Derivation of Equation (4.12)
For the special case given by equation (4.11), the necessary conditions are:

(a)

dP*(t)/dt = v*(t) - (1/2)P*²(t);   P*(T) = P_T                        (A.1)

ds*(t)/dt = v*(t);   s*(0) = 0,   s*(T) = s₀                          (A.2)

D₀* ≥ 0                                                               (A.3)
dD₁*(t)/dt = -∂H/∂P = -D₀*v*(t) + P*(t)D₁*(t)                         (A.4)

dD₂*(t)/dt = -∂H/∂s = 0                                               (A.5)

(b)
Minimization of H with respect to v:
∂H/∂v = D₀*P*(t) + D₀*v*(t) + D₁*(t) + D₂*(t) - μ*(t) = 0             (A.6)
Since we assume that the terminal conditions on P and s are so given that they can be met with more than one velocity profile v(t), 0 ≤ t ≤ T, we can set

D₀* = 1                                                               (A.7)

Thus, because ∂²H/∂v² = 2rD₀* > 0,
we conclude that v* obtained from equation (A.6) must necessarily
minimize H.
Equation (A.6) gives us only one solution for v*
so this must necessarily be a global minimum.
In the case when v(t) > ε, we set μ*(t) = 0. Equation (A.6) then gives

v*(t) = -P*(t) - D₁*(t) - D₂*(t)                                      (A.8)

Differentiate this and substitute from (A.1), (A.4) and (A.5) to obtain
dv*(t)/dt = (1/2)P*²(t) - P*(t)D₁*(t)                                 (A.9)

Using D₁*(t) from (A.8) and noting that

D₂*(t) = D₂*(0)                                                       (A.10)

we find that

dv*(t)/dt = (3/2)P*²(t) + P*(t)[v*(t) + D₂*(0)]                       (A.11)
Next use v*(t) from (A.1) to obtain

dv*(t)/dt = D₂*(0)P*(t) + (3/2)P*²(t) + (1/2)P*³(t) + P*(t)dP*(t)/dt  (A.12)

Finally, differentiate (A.1) and substitute from (A.12) to get
d²P*(t)/dt² = D₂*(0)P*(t) + (3/2)P*²(t) + (1/2)P*³(t)                 (A.13)

This is a differential equation in P*(t). Multiplying the left side by 2(dP*(t)/dt)dt and the right side by 2dP*(t) gives

d[(dP*(t)/dt)²] = [2D₂*(0)P*(t) + 3P*²(t) + P*³(t)]dP*(t)             (A.14)

An integration gives equation (4.12).
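The integration step can be checked symbolically (ours, not from the paper): differentiating the right-hand side of (4.12) with respect to t and dividing by 2 dP*/dt must reproduce the right-hand side of (A.13).

    import sympy as sp

    # Symbolic check (not from the paper) that (4.12) is a first integral of (A.13).
    t = sp.symbols('t')
    D20, Cc = sp.symbols('D20 C')
    P = sp.Function('P')(t)

    rhs_412 = D20 * P**2 + P**3 + sp.Rational(1, 4) * P**4 + Cc
    lhs_A13 = sp.diff(rhs_412, t) / (2 * sp.diff(P, t))     # d/dt[(P')^2] = 2 P' P''
    rhs_A13 = D20 * P + sp.Rational(3, 2) * P**2 + sp.Rational(1, 2) * P**3
    print(sp.simplify(lhs_A13 - rhs_A13))                   # prints 0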
REFERENCES
1. Nash, R.A. and Jordan, S.K., "Statistical Geodesy -- An Engineering Perspective," Proc. IEEE, Vol. 66, No. 5, May 1978, pp. 532-550.

2. Larimore, W.E., "Statistical Inference on Stationary Random Fields," Proc. IEEE, Vol. 65, No. 6, June 1977, pp. 961-970.

3. Staelin, D.H., et al., "Microwave Spectrometer on the Nimbus 5 Satellite: Meteorological and Geophysical Data," Science, Vol. 182, December 1973, pp. 1339-1341.

4. Staelin, D.H., et al., "Microwave Sensing of Atmospheric Temperature and Humidity from Satellites," COSPAR paper VI.2.3, June 1975.

5. McGarty, T.P., "The Estimation of the Constituent Densities of the Upper Atmosphere by Means of a Recursive Filtering Algorithm," IEEE Trans. on Auto. Cont., Vol. AC-16, December 1971, pp. 817-823.

6. Ledsham, W.H. and Staelin, D.H., "An Extended Kalman-Bucy Filter for Atmospheric Temperature Profile Retrieval Using a Passive Microwave Sounder," J. Applied Meteorology, to appear.

7. Aboutalib, A.O. and Silverman, L.M., "Restoration of Motion Degraded Images," IEEE Trans. Cir. and Sys., Vol. CAS-22, No. 3, March 1975, pp. 278-286.

8. Stone, L.D., Theory of Optimal Search, Academic Press, New York, 1975.

9. McKean, H.P., Jr., Stochastic Integrals, Academic Press, New York, 1969.

10. Kam, P.Y., "Modeling and Estimation of Space-Time Stochastic Processes," Ph.D. Thesis, Department of Electrical Engineering and Computer Science, MIT, October 1, 1976.

11. Jazwinski, A.H., Stochastic Processes and Filtering Theory, Academic Press, New York, 1970.

12. Varaiya, P.P., Notes on Optimization, Van Nostrand Reinhold Notes on Systems Sciences, New York, 1971.

13. Bryson, A.E. and Ho, Y.C., Applied Optimal Control, Ginn and Company, Waltham, Mass., 1969.

14. Athans, M., "The Matrix Minimum Principle," Inf. and Control, Vol. 11, Nov. 1967, pp. 592-606.

15. Ames, W.F., Nonlinear Ordinary Differential Equations in Transport Processes, Academic Press, New York, 1968.

16. Standard Mathematical Tables, CRC Press.

17. Kam, P.Y. and Willsky, A.S., "Estimation of Time-Invariant Random Fields via Observations from a Moving Point Sensor," Proc. 1977 JACC, San Francisco, Calif., June 1977.

18. Willsky, A.S., Digital Signal Processing and Control and Estimation Theory - Points of Tangency, Areas of Intersection and Parallel Directions, The M.I.T. Press, to appear in 1979; see also "Relationships Between Digital Signal Processing and Control and Estimation Theory," Proc. IEEE, Vol. 66, No. 9, Sept. 1978, pp. 996-1017.

19. Willsky, A.S., "A Survey of Design Methods for Failure Detection in Dynamic Systems," Automatica, Vol. 12, 1976, pp. 601-611.

20. Nash, R.A., Chmn., Invited Session on Correlation Guidance Systems, Proc. 1976 IEEE Conf. on Decision and Control, Clearwater Beach, Florida, Dec. 1976, pp. 774-808.