Emerging Technologies, Robotics and Control Systems

advertisement
International Society for Advanced Research
Emerging Technologies,
Robotics and Control Systems
Second edition
Editor: Salvatore Pennacchio
www.internationalsar.org
Emerging Technologies,
Robotics and Control Systems
Second edition
Published by INTERNATIONALSAR
Edited by Salvatore Pennacchio
^
Neither this book nor any part may be reproduced or transmitted in any form or by
any means, electronic, or mechanical, including photocopying, microfilming, and
recording, or by any information storage or retrieval system, without prior permission
in writing from the publisher.
All papers have been published after review by 2 independent reviewers.
Each paper has been reproduced exactiy as received, on authors' own responsibility.
Trademark notice: product or corporate names may be trademarks and are used only
for identification and explanation, without intend to infringe.
ISBN 978-88-901928-5-2
Copyright © 2008, by INTERNATIONALSAR
Printed by F O T O G R A F , Palermo, Italy, June 2008
INTERNATIONALSAR
'Via Montepellegrino, 72
90142, Palermo
ITALY
www.internationalsar.org
Shaping the Perceptual Robot Vision and
Multiresolutional Kalman Filtering Implementation
Ozer Ciftcioglu
Delft University o f Technology, Faculty o f Architecture, 2628CR Delft, The
the
Abstract-
S h a p i n g the perceptual robot vision and
forward direction
Cauchy
processing
density.
In
the
this
shape
of which
context,
higher
h u m a n v i s i o n is a x i o m a t i c a l l y m o d e l l e d i n t h i s w a y as t o a
of
a
perceptual
robot.
One
of
the
robot,
important
c h a r a c t e r i s t i c s of the h u m a n vision is the augmented
attention
such
a
model
can
be
definition.
as
means
proportionally
navigation
by
known
is d e s c r i b e d . T h e processed
for the
attention,
is
density
perception i n f o r m a t i o n by multiresolutional K a l m a n filtering
i n f o r m a t i o n is intended
more
Netherlands
subject
to
Although
modifications
a c c o r d i n g t o t h e i n t e n d e d g o a l s . I f t h e r o b o t is r e q u i r e d t o
in the f o n v a r d direction and this is c h a r a c t e r i z e d by C a u c h y
have
density. F o r v i s u a l perception properties of a robot the vision
i n f o n n a t i o n t o its b e h a v i o u r , i t is r e f e r r e d t o as
rays emanating f r o m the robot c a n be shaped in s u c h a w a y
r o b o t i c s i n c l u d i n g s o m e c o g n i t i v e q u a l i t i e s . A l t h o u g h , the
that
robot
vision
can
have
different
perception
properties
i n c l u d i n g the emulation of h u m a n v i s u a l perception.
research
the
measured
human
vision
vision
having
information
of
been
environment
is
robotics
the
by
accomplished
robot
by
navigation
Kalman
are presented
fdtering
in
the
emerging
another
robotics,
of humanoids.
and
properties
reflecting
this
perceptual
intelligent technologies
research
domain
in
into
robotics
as
c o g n i t i v e q u a l i t i e s are m u c h d e s i r a b l e i n
F r o m the h u m a n - l i k e b e h a v i o u r v i e w p o i n t ,
is
perceptual
r o b o t i c s is o n e o f t h e t y p e s a m o n g
reality
emotional
robotics,
a n d this
virtual
visual
such r o b o t i c exercises to i m p r o v e the h u m a n - l i k e qualities
optimally in the sense of m i n i m u m v a r i a n c e estimation. T h e
employment
o f the
yield
inlelligenl
processed
computational details of a design for h u m a n perception
human-like
integration
I n this
established,
some
found
a
others
like
number
a p p l i c a t i o n areas o f p e r c e p U i a l r o b o t i c s f a l l i n t o t w o p a r t s as
the
reality and virUial reality. I n the r e a l - l i f e , a u t o n o m o u s
without
encountering
hindrance.
The
a n d true t r a j e c t o r y is reported
and a c c u r a t e autonomous robot navigation
by perception
movement
is
with
obstacle
Categorically,
avoidance
is
a
the
of
used for the estimation of a reference t r a j e c t o r y along w h i c h
navigates
[12].
in
applications
robot
practice
is
environment. A s to robot navigation, the vision i n f o r m a t i o n is
c o m p a r i s o n of the estimated
in
which
big
main
robot
challenge.
Perceptual r o b o t i c s can easily address this issue w i t h m a j o r
demonstrated by c o m p u t e r experiments.
improvements
the
distance
a l o n g this line. For instance,
to
by
measuring
obstacles directly in real-time
using
extended K a l m a n fdtering, multiresolution, sensor fusion
e n v i r o n m e n t . I n the v i r t u a l r e a l i t y , percepttial robotics can
1.
Introduction
robotics
[1-5].
In
the
fields
present
o f study i n
work,
considered
different
as
forms
reference
are
and
presented.
shaping
Each
of
vision
this
vision
into
the
modes
can
m a y be m o r e a t t e n t i o n i n the f o r w a r d d i r e c t i o n , f o r instance.
dimension
o f vision
in
the
with
cognition
autonomous
refined
human
vision
human
i t is easy t o r e a l i z e
vision
emulations.
the
[6-10]
is
robotics
Namely,
another
for
although
same v i s i o n
more
for
environment
f r o m d i f f e r e n t v a n t a g e p o i n t , i t is n o t t h e s a m e f o r a r o b o t
since the v i s u a l i n f o r m a t i o n about
t h e s c e n e has
ehanged.
T h e l a t t e r is p a r t i c u l a r l y i m p o r t a n t i n t h e c a s e o f h u m a n o i d
robotics w i t h human-like navigation. C i f t c i o g l u
et a l . [ 1 1 ]
c o n s i d e r s t h e p r o b a b i l i t y as a m e a s u r e o f h u m a n
perception
which
is t h e
integration o f attention. Thus,
in their
work
a t t e n t i o n is e x p r e s s e d as a p r o b a b i l i t y d e n s i t y f u n c t i o n a n d
h u m a n v i s i o n is a x i o m a t i c a l l y c o n s i d e r e d
probability
density
with
respect
to
to h a v e
viewing
uniform
angle.
estimations
static
with
as w e l l as p r e c i s i o n , l i k e d i s t a n c e e s t i m a t i o n , f o r
instance.
These
are
demanded
features i n game
industry,
s i m u l a t i o n , as w e l l as c o n s i s t e n t s p a t i a l d e s i g n q u a l i t y
where the application o f a novel probabilistic human vision
represent a particular type o f h u m a n v i s i o n m o d a l i t y w h i c h
Combination
a
D e s c r i p t i o n o f a p e r e e p U i a l r o b o t is p r e s e n t e d e a r l i e r [ 1 3 ] ,
p r o c e s s i n r o b o t i c s is c o n s i d e r e d b y s h a p i n g t h e v i s i o n r a y s
is
important means o f perception-based
in
assessment i n architectural design.
vision
o f the robot. I n this novel approach basically h u m a n
algorithms
accuracy
flight
R o b o t n a v i g a t i o n is o n e o f t h e m a j o r
autonomous
localization
of
K e y w o r d s : h u m a n vision, perception, p e r c e p t u a l robotics,
be
different
instead
The
i m p l i c a t i o n o f t h i s a x i o m is t h e h i g h e r p r o b a b i l i t y d e n s i t y i n
212
theory
is
introduced
and
application
to
r o b o t i c s is g i v e n i n a m u l t i - r e s o l u t i o n a l
an
autonomous
framework,
thanks
t o w a v e l e t s . V i s i o n , o f t h e r o b o t is s h a p e d i n t h e f o r m o f a
v i s i o n c o n e w i t h a l i m i t e d v i s i o n c o n e angle i n the f o r w a r d
direction w i t h o u t any reference
that cone.
to perception properties
in
I n contrast to that w o r k , i n the present w o r k t w o
e s s e n t i a l e l a b o r a t i o n s are r e p o r t e d as o r i g i n a l c o n t r i b u t i o n s ;
firstly,
the v i s i o n properties i n the f r a m e w o r k o f perception
is i n v e s t i g a t e d . I t s h e d s m o r e
light o n the previous
work
a b o u t s h a p i n g t h e r o b o t i c p e r c e p t i o n as d e s i r e d . F o r instance
biased
human
vision
or any
other
t y p e o f v i s i o n can
be
c o n s i d e r e d . S e c o n d l y , the w o r k elaborates o n the n a v i g a t i o n
dynamics in continuous time underlying K a l m a n
discrete
form
and
also
elaborates
about
the
filtenng
m
step-wise
i m p l e m e n t a t i o n o f the m u l t i r e s o l u t i o n a l K a l m a n
filtering
different
resolution
systematic
approach
due
decomposition.
to
the
levels
that
it
needs
p e c u l i a r i t y o f the
Eventtially,
a
in
multiresolutiona
some demonstrative
computer
International Journal of Factory Automation, Robotics and Soft Computing, Issue 3, July 2008
ISSN 1828-6984
Shaping the Perceptual Robot Vision and
Multiresolutional Kalman Filtering Implementation
Ozer Ciftcioglu
Delft University of Technology, Faculty o f Architecture, 2628CR Delft, The Netherlands
the forward direction the shape o f which is known as
Shaping the perceptual robot vision and processing
Cauchy density. In this context, higher density means
perception information by multiresolutional Kalman filtering proportionally more attention, by definition. Although
is described. The processed information is intended for the
human vision is axiomatically modelled in this way as to a
navigation of a perceptual robot. One of the important
robot, such a model ^'can be subject to modifications
characteristics of the human vision is the augmented attention
according to the intended goals. I f the robot is required to
in the forward direction and this is characterized by Cauchy
have some human-like visual properties reflecting this
density. For visual perception properties of a robot the vision
information to its behaviour, it is referred to as perceptual
rays emanating from the robot can be shaped in such a way
robotics including some cognitive qualities. Although, the
that robot vision can have different perception properties
integration of the emèrging intelligent technologies into
including the emulation of human visual perception. In this
research the human vision having been established, the
robotics yield another research domain in robotics as
measured vision information of environment is processed
intelligent robotics, cognitive qualities are much desirable in
optimally in the sense of minimum variance estimation. The
such robotic exercises to improve the human-like qualities
computational details of a design for human perception and
of humanoids. From the human-like behaviour viewpoint,
employment by robot navigation are presented and this is
perceptual robotics is one o f the types among others like
accomplished by Kalman filtering in the virtual reality
emotional robotics, which is found in a number o f
environment. As to robot navigation, the vision information is
applications in practice [12]. Categorically, the main
used for the estimation of a reference trajectory along which
application areas of perceptual robotics fall into two parts as
the robot navigates without encountering hindrance. The
reality and virtual reality. In the real-life, autonomous robot
comparison of the estimated and true trajectory is reported
movement with obstacle avoidance is a big challenge.
and accurate autonomous robot navigation by perception is
demonstrated by computer experiments.
Perceptual robotics can easily address this issue with major
improvements along this line. For instance, by measuring
the distance to obstacles directly in real-time instead of
Keywords: human vision, perception, perceptual robotics,
using different localization algorithms in a static
extended Kalmanfiltering,multiresolution, sensor fusion
environment. In the virtual reality, perceptual robotics can
be important means o f perception-based estimations with
accuracy as well as precision, like distance estimation, for
instance.. These are demanded features in game industry,
1. Introduction
flight simulation, as wéll as consistent spatial design quality
Robot navigation is one o f the major fields o f study in
assessment in architectural design.
autonomous robotics [1-5]. In the present work, vision
Description of a perceptual robot is presented earlier [13],
process in robotics is considered by shaping the vision rays
where
the application o f a novel probabilistic human vision
o f t h e robot. I n this novel approach basically human vision
theory is introduced and application to an autonomous
is considered as reference and shaping this vision into
robotics is given in a multi-resolutional framework, thanks
different forms are presented. Each o f the modes can
to
wavelets. Vision o f the robot is shaped in the form o f a
represent a particular type o f human vision modality which
vision cone with a limited vision cone angle in the forward
may be more attention in the forward direction, for instance.
direction without any reference to perception properties in
Combination o f vision with cognition [6-10] is another
that
cone. In contrast to that work, in the present work two
dimension in the autonomous vision robotics for more
essential elaborations are reported as original conttibutions;
refined human vision emulations. Namely, although for
firstly, the vision properties in the framework o f perception
human it is easy to realize the same vision environment
is investigated. It sheds more light on the previous work
from different vantage point, it is not the same for a robot
about shaping the robotic perception as desired. For instance
since the visual information about the scene has changed.
biased
human vision or any other type o f vision can be
The latter is particularly important in the case o f humanoid
considered. Secondly, the work elaborates on the navigation
robotics with human-like navigation. Ciftcioglu et al. [11]
dynamics in continuous time underlying Kalman filtering in
considers the probability as a measure o f human perception
discrete
form and also elaborates about the step-wise
which is the integration o f attention. Thus, in their work
implementation of the multiresolutional Kalman filtering in
attention is expressed as a probability density function and
different resolution levels that it needs a systematic
human vision is axiomatically considered to have uniform
approach due to the peculiarity o f the muttiresolutional
probability density with respect to viewing angle. The
decomposition. Eventually, some demonstrative computer
implication of this axiom is the higher probability density in
Abstract-
62
International Journal of Factory Automation, Robotics and Soft Computing, Issue 3, July 2008
ISSN 1828-6984
experiments are presented. The organisation of the work is
as follows. Section two describes the shaping of the robot
vision for some specific cases. Section three gives robot
dynamics in state-space form in continuous time and derives
the Kalman equations in discrete time which includes also
the extended Kalman filter equations since one o f the state
variables is nonlinear with respect to the system dynamics.
Section four comprises some computer experiments for
robot navigation followed by conclusions.
2.
Vision with Perception
Fig. 2
Geometry of visual perception; an observer is looking
at a vertical xz- plane with a vision lying in the xy-plane;
perception measurements by avatafrin real-time is also shown.
2.1. Perception
The basic probabilistic human perception process is
shown in Figure 1. Human vision has uniform probability
density function (pdf) with respect to the vision angle 6 and
the probability density function along the axis y is given by
[14]
(1)
for the interval - oo < ^ <co. The probability density
is
known as Cauchy density and it is defined as attention in the
terminology of cognition.
Further, to shape the vision rays in any desirable pdf form,
for instance in an augmented human vision form i n the
forward direction, it is important to understand how to deal
with the pdfs accordingly. For this purpose we consider that
the robot vision system is on the xy-plane and it sends rays
impinging on the ;cz-plane while the y direction is forward.
Without affecting the generality, for simplicity, we consider
only the x component o f perception in the xy-plane. The z
direction is a twin independent counterpart of the x
direction. Below we present several different examples o f
interest.
2.2. Shaping the Perceptual Robot Vision
2.2.1.
Two Gaussians
Along the x axis we consider a zero-mean Gaussian pdf with
a width ov
1
(2)
27T CT^
and along the y axis we consider a Gaussian pdf with a
width cTy and mean iriy
1
f^iy)
—r(.v-?",.)
=- ^ e ^ ^ '
ITT cr„
•
(3)
This is shown in Figure 3a.
Fig. 1 The geometry of visual perception from a top view, where
P represents the position of eye, looking at a vertical plane with a
distance /„ to the plane;/vö^; is the probability density flraction in
>'-direction.
y
modified
In the robot navigation, the robot probes the environment by
sending rays. To endow it with human-hke vision, the
probability density of these rays should be favorably
uniform with respect to 6. To obtain this in the virtual reality
we use independent probability density components in a
three dimensional space where the independent densities are
suitably chosen. The resulting perceptual vision is indicated
in figure 2, where the vision is represented by rays in a
horizontal plane as is the case in figure 1.
63
pdf
nty-
i Cauchy
pdf
my
x
(a)
(b)
Fig. 3 Two Gaussians probability densities shaping the vision
one of which has non-zero mean (a); The probability density
fy(v) representing v=tm(9)) =x/y (b)
International Joumal of Factory Automation, Robotics and Soft Computing, Issue 3, July 2008
We assume that the f / y ) is negligibly small in negative y
direction. The probability density of the rays in relation to
the angle /9is determined by the random variable v
v = - = tan(é')
(4)
y
where x and ƒ are also random variables. The values o f x and
y determine the direction of the vision ray, so that the
probability densities of x and y determine the probability
density of v. The region of the xy plane such that
— <v
y
is the shaded area in figure 4. Hence
//'
if
>>0
y<0
(5)
x<vy
x>yy
(6)
and the cumulative probability
distribution) is given by
density
^.(v)= ]
]f^ix,y)dxdy
\f,yix,y)dxdy
1
-e
'
e
ISSN 1828-6984
'
(10)
From (8),
0^
X ( v ) = - ^ •;ye '
„ 2
: ï
j
dy(11)
•1
I-
7^
Jye
, 2
*
'
'
dy
This integral is computed from the standard integral table,
which reads [15]
'
xe-"'-"''dx
-—J~e'' 1 - < D ( - ^ )
=—
(12)
(probability
where <I)(.) is probability integral given by
+' ƒ
(7)
Differentiating with respect to v, we obtain, with simple
calculus
^(x)=^
\e-''dt
(13)
I n v i e w o f (12) and(13), (11) becomes
/,(v)=T
y
(14)
/ x =v
/
dz
y
/
/.(v) =
1
(T
e
'
•
2
Ï-
where n is given by
/
Fig. 4
(15)
t
(16)
2(T.^
Formation of Gaussian vision in forward direction
Defining a as a constant and è as a function o f v o f the form
I f U x , y )=
a=——e
7ca„
U{-x-y)
'
(17)
Then (7) and (8) yields
h{v) = - ^ \ - e
/;,(v) = 2 ƒ
(15) becomes
\f,{x,y)dxdy
''*(-^)
.V=() . T = ^
(9)
f{v) = 2
Jyf^(vy,y)dy
For the independent Gaussians in figure 3a f / x . y ) is given
by
64
Uv)=^-^[\
a
a.
ty,
+ b{v)\
(18)
The variation o f the p t i f f f v ) is illustrated in Figure 3b. It is
approximately a Cauchy curve.
It is interesting to note that, in (18) for my=0, i.e., zero
mean fyfy), the pdf fy(v) is exact as Cauchy. With an
additional term (18) becomes a modified pdf; namely, it is
International Journal of Factory Automation, Robotics and Soft Computing, Issue 3, July 2008
more peaked as <jy becomes smaller or altematively nty
becomes larger. This is illustrated in figure 3b. I t is to be
noted that, (18) gives the pdf on the xy-plane and v is equal
to tan(9)=x/y, by defmition. The pdf of tan(9) is Cauchy
which implies that the pdf o f 0 is uniform for crj=ay. This
is seen as follows. For my=Q and o-/=c^-, (18) becomes
X ( v ) = ^
(19)
The density of 0=arctan(v)
such that
can be derived as follows. For 9
given by (23). In this section we want to investigate the
biased human vision; namely the attention is given by
(26)
which satisfies
ƒ, {v)dv = - arctan(-) I" = 1
Tt
m
<nll
(20)
we write [16]
AO
(21)
(27)
k
This is illustrated in figure 6a.
-
-7tll<e
ISSN 1828-6984
A
h<l (biased
\
0
m
viaiott)
k^l (human
vLfhn)
Tt
V
(a)
di/|
Fig. 6
JL.
2
'2
0^
(b)
Variation of/vfvj with respect to the parameter k
which gives
l/;r
tan(ö)^ +1
(22)
1
Now we wish to compute the fe(0) for the case k < l which is
the case of biased human vision at the forward direction.
With the standard procedure as described in the preceding
sub-section, we write
kljc
ian(dy- + k-
kin
+ k'
and finally
TC
which shows t h a t f g / d ) is a unifonn density:
(28)
1
d0 I
dv
(23)
l + tan(ö)^
which gives
(24)
k
M 0 )
=
(29)
n l + (A'-l)cos(ö)'
For this case, i.e., my=Q the Gaussians are illustrated in
figure 5 a
y
f;>,(.)rfö=i——
(r-i)cos(ö)-
m
my=0
1
arctan
J
Fig. 5
=1
(b)
Variation of two Gaussians both with zero-mean
For the case o f spatial solid angle in three dimensions
which is the rotation of 0 around y axis we should consider
the z-axis together with the x-axis where the pdf along the zaxis is given by a zero-mean Gaussian of the form
The variation o f f i / 0 ) is shown in figure 6b. From the figure
one sees that with the variation of k, the visual attention can
be shaped, easily. It is interesting to note that, (29) can be
written in the form of
\-k^
rr
/.(z) =
2;rcr.
- d e
J
(25)
Above, f / z ) is the counterpart of the x-direction so that x
and ^-directions together form a spatial vision.
2.2.2.
(30)
tan(g)
= -^-arctanj
(a)
tan(g)
^ 4^ + {k-~\)
(31)
-+sin(i9)'
For small values of (9(29) approximately becomes
k
1
Biased Human Perception
(32)
Considering \^tan(0) in (4), the human attention at v
within the inflnitesimally small interval dv and per unit v is
given by (19) and so that the attention fe(0) is uniform as
65
x-k"-
which is Cauchy. It may be o f value to point out that, with
regard to (18), for m^=0, it becomes
International Journal of Factory Automation, Robotics and Soft Computing, Issue 3, July 2008
3.
0/^)(<r,/o-,)
ISSN 1828-6984
Navigation
(33)
3.1. Kalman Filtering
SO that
(34)
The above derivations are presented to show how to shape
the vision according to the desired forms o f perception. The
perception measurements with avatar in the virtual reality
are shown in figure 7 where for human vision perception,
6) is be uniform. More about these measurements can be
found in [17].
In the preceding section how vision can be shaped and
integrated into a perceptual robot is presented. In this
section how a robot can process this visual information for
autonomous movement will be presented. It is important to
note that, human vision has maximum attention in the
forward direction and because o f this reason by means of
similar vision a robot effectively moves in the forward
directioil avoiding obstacle hindrance for example.
Kalman filtering theory and its applications are well
treated in literature [18-27] . In order to apply Kalman
filtering to a robot movement the system dynamics must be
described by a set o f differential equations which are in
state-space form, in general
dx
:Fx + Gu-hw
dt '
(35)
where x is a column vector with the states o f the system, F
the system dynamics matrix, u is the control vector , and w
is a white noise process vector. The process noise matrix Q
is related to the process-noise vector according to
(36)
e=E[ww^]
The measurements are linearly related to the states
according to
z = Hx + v
(37)
where z is the measurement vector, H is the measurement
matrix, and v is measurement noise vector which is elementwise white. The measurement noise matrix R is related to
the measurement noise vector v according to
i?=E[w^]
'
(38)
In discrete form, the Kalman filtering equations become
Fig. 7
Real-time perception measurements m virtual reality
where perceptions are probabilities as to the rays impinging
on the objects; perpective view (a) and top view (b)
In the perceptual robotics appUcation which is the subject
matter o f the next section, the forward-biased perception is
used. In this case the situation similar to that shown in figure
7 where k « l is of concem. This is depicted in figure 8.
4-, + Kk (Zk - HO.,x,, - HG,u,,) + G,u,,
=
(39)
where Ö\ system transition matrix, Kk represents he Kalman
gain matrix and Gt is obtained from
T
G,. = J<I>(r)Gdr
(40)
0
In the robot navigation, there is no control vector so that the
Kalman filtering equation becomes
=
^k-x +
(Zt - H<I>,.Xk.,)
(41)
The Kalman gains are computed, while the filter is
operating, from the matrix Riccati equations:
K, = M,H\HM,H''
Fig. 8
Perception measurement in the virtual reality. Perception
is determined as probability by means of the rays impinging
on an object.
-h
(42)
where
is a covariance matrix representing errors in the
state estimates after an update and A4 is the covariance
matrix representing errors in the state estimates before an
66
International Journal of Factory Automation, Robotics and Soft Computing, Issue 3, July 2008
update. The discrete process noise matrix
can be found
from the continuous process-noise matrix Q according to
X, =
a = J*«Q*'(r)dr
X, = rsin(0t)
rcos{at)
X,
(43)
ISSN 1828-6984
=-rmsin((Ot)
(50)
0
X2 =rcooos((Dt)
When robot movement is along a straight line with a
constant speed v„ the x component of the system dynamic
model is given by
x = a„+
vt
So that the system dynamics in state-space form in continuous time
is given by
(44)
0 10
0'
0 0 0 -o
Please note that the angular speed (0=0. The system
dynamics in state-space form is given by
(51)
0 0 0 1
0 (a 0 0
X
r
'0
X
(45)
0 0_
X
The system transition matrix <ff°ji is computed from inverse
Laplace transform of the form
X
..
.
Where the system dynamics matrix F is given by
ro
0
r
=
<b;{t)
L-'{{sI-F„r]=/-
0
where {sI-FJ is given by
The system transition matrix 0 is computed from inverse
laplace transform of the form
sI-F„
"j'
-1
0
s
0
0
=
m=L-%i-Fr]=e''
1 T
* , =4>(r) =
0
1
0
r
0
1 0
0
0
0
1
J
1
0
0
y
Q
s
s(s'+c»')
0
CO
r
]_
1
0
(49)
(54)
1
\\
s{s^+C0'^)
s +ca
and the inverse Laplace transform of (54) gives the system
transition matrix as
sm{caT)
CO
01--
cos(or)
cos(a)7')-I
CO
sm{<aT)
cosjmT)-!
CO
-sinicoT)
sinjcuT)
(55)
CO
cos{ü>T)
During the robot navigation we have obtained two
system dynamics models; namely rectilinear straight-ahead
and angular rotation cases. To endow the robot to be
autonomous the angular velocity should be computed during
the navigation. I f the perception measurements yield a
significant angular velocity, the system dynamics model
should switch from linear to non-linear. It is interesting to
note that i f in (55) a)=0, then it reduces to (48) which is the
transition matrix for linear case. In other words, (55)
represents inherently the linear case, as well as the rotational
r coscoT
Fig. 9
CO
s'+co^
CD
v^]
I '
0 (global .coordinate
system)
CO
0
(48)
In the case o f m^, i.e., circular movement with an angular
velocity, we consider the geometry shown in Figure 9.
direction
at!
(53)
s^+co^
and the corresponding state vector X i s given by
X = \_x
-1
s +m
0
0
1
5
The inverse of (53) yields
1
0
0
0 m
0 -a
(47)
Above T is the sampling time interval of the perception
measurements. In two dimensional navigation space, i.e., xy
plane, the system transition matrix becomes
0
(52)
(46)
X
Robot navigation deviating from a straight line
At the local coordinate system, the state variables are
67
International Journal of Factory Automation, Robotics and Soft Computing, Issue 3, July 2008
robot navigation. I f tiie angular velocity is computed at each
step of navigation and i f it is non-zero, the robot moves
along a non-linear trajectory with each time a deviation 9
from linear trajectory. The linear and non-linear cases are
illustrated in Figure 10 and Figure 11.
To compute the angular velocity co at each step, it is also
selected as a state variable, so that the state variables vector
given by (49) is modified to
(56)
However, in this case the system transition matrix in (55)
becomes non-linear with respect to co. I n this case Kalman
filtering equations should be linearized. This is a form
known as extended Kalman filtering.
ISSN 1828-6984
trajectory x may then be written as
X
= x„ + Ax
(59)
Hence, (57) and (58) become
x„-f Ax = / ( x „ +Ax) + >f
(60)
z = h{x„ -I- Ax) 4- V
The Taylors series expansion yields
x„-)-Ax« ƒ ( x J - ^
Ax + w
dx_
(61)
z = h{x,)
Ax'^v
where
Mi.
dx^ 8Xi
dx
global coordinate
ystem)
Fig. 10
=F=
S/i,
'
at,
Mi.
dh
dXj
Sx
Bll,
, =H =
(62)
dx, dX2
If the reference trajectory x„ is chosen to satisfy the differential
equation
Robot navigation deviating ftom a straight line
(63)
Ax = /(x„)
then, from (61) the linearized model becomes
x„-i-Ax«/(X(,)-f-
Sf
Ax + w
(64)
z = hixj +
0 (Blobal coordlnata
system)
Fig. 11
Ax + v
In view o f (63) and (64), the system dynamics matrix 0 in
discrete form for extended Kalman filtering becomes
'Y
Robot navigation deviating from a straight line
sin(i»r)
0
cos((oT)-l
3.2. Extended Kalman Filtering
The non-linear state-space form as a set of first-order nonlinear differential equations is given by
0
cos(0T)
^
cosjcoT)-!
0
J
-sm(0T)
sinjmT)
CO
(57)
0
where x is a vector of the system states,/(3c^ is a non-linear
function of those states, and w is a random zero-mean
process. The measurement equation is considered to be a
non-linear fimction of the states according to
0
z
=
h(x)
+ V
sin(or)
0
co^
^
CO
0
cos((yr)
0
0
(65)
''
co.
1
Above, fo^, cOy, .. are given by
(58)
r., = - | (Tecs « , r - ^ ) ; + ( - r sin
where h(x) is a non-hnear measurement matrix, v is a zeromean random process.
We assume that an approximate trajectory Xo is available.
This is referred to as the reference trajectory. The achial
68
CO _
(66)
International Journal of Factory Automation, Robotics and Soft Computing, Issue 3, July 2008
o. = - sin aT) x- cos aT
y
(67)
\T
ISSN 1828-6984
optimal estimation of the state variables. The relevant
Kalman filtering equations are as follows.
x{k + l\k) =
A{k)x{k\k)
P(k + l\k) = A{k)P{k I
k)A{kf,
(68)
(73)
K{k -I-1) = P(* -I-11 k)C{k + \f {C{k + l)P{k + \\ k)
C{k + l\kY
COS (oTx-
(69)
sin o r ƒ
In the extended Kalman filter operation, three measurements
are considered. Referring to Figure 9 these are
and,
x{k + \\k + \) = [I-K{k
+ \)C{k + l)]x(/t + \\k)
+ K{k + l)z(k + \),
P{k + l\k + \)=-[I-K{k
6 = arctg{y Ix) = arctg{x^ Ix^)
angle
r = -^x^ + y^ = yfxf+xf
distance
V = ^v] + vl
velocity
=^Jxf+xf
(70)
+R{k + \)r'
(74)
V
+ \)C{k + V)]P{k + \\k)
where I is the unity matrix; P is the error variance; K ,
Kalman gain.
A n N sensor multiresolutional dynamic system can be
described by
From (43), the linearized measurement matrix in terms of
state variables becomes
z"'
i =
H=
^"^^-^
Jx; + x;
) = Ct'l
) + vf-l { k , )
(75)
\,...,N
where i=N is the highest resolution level, so that
E [ w ^ ' ' \ k , ) . . ^ ' \ l , Y ] = Q ^ ' \ k , ) ,
.
Above xi, X2, Xs, X4 are the state variables which are defmed
as xi=x, X2=y, X3=Vx, and X4=Vy, respectively.
=0
k = l
(76)
k^^l
Referring to the measurements
at different
resolution levds, we write
3.3. Multiresolutional Kalman Filtering
In the preceding subsection the system dynamics in
discrete form having been established, in this section the
multiresolutional implementation of Kalman filtering will be
described. The multiresolutional decomposition is due to
discrete wavelet transformation. Wavelets theory is well
treated in the literature [27-31]. The research concerns
multisensor fiision and some similar works are reported in
the literature [32, 33]. However, application to autonomous
robotics is a novel application, according to the best
knowledge of the author.
For the sake of simplicity of representation, we consider
the following dynamic system
x(k +1) = A{k)x{k)
z{k) = C(k)xik)
+
+ v{k)
B{k)wik)
(72)
where k being the discrete time, x(k) is state variable; A(k)
is system matrix; B(k), process noise input matrix. The noise
sources w{k) and v(k) have the properties as given in (36)
and (38). The objective of Kalman filtering is to combine
the measurements information in order to generate an
69
E \ w ^ ' \ k . ) ] = Q ,
E [ w ^ ' \ k , ) w ^ ' \ l , Y ] = Q ^ ' \ k , \
= 0
k
= l
i l l )
k ^ l
and
E [ v ' ' \ k , ) ] = 0 ,
E [ v ^ < \ k y \ l , f ] = R ^ ' \ k ^ ) ,
= 0
k = l
(78)
k ^ l
The measurements at different resolution levels are shown
in figure 12 and the wavelet decomposition of state variables
in a data block is shown in figure 13. Please note that
propagation as to state estimation occurs block-wise and the
highest resolution level. Decomposition of the states by
wavelet transform and reconstruction by inverse wavelet
transform is effective in this scheme due to better estimation
Intemational Joumal of Factory Automation, Robotics and Soft Computing, Issue 3, July 2008
of the error covariance as to the state estimation. This is
explained in the text.
data block i
data block 1 data block
1
0
0
2
1
2
3
4
5
i
i
ISSN 1828-6984
•^'"^(0)
yl"'''(0)
^'*''(1)^""(0)
+ B,QX
(82)
i=l
1=2
J-1
where Bo and Qo are given by
0
Fig. 12
1 2
3
4
5
6
time
1=3
7
Measurements at different resolution levels
^i'^'(l)fl""(0)
0
0
5"'J(1)
0
(83)
data
block
xl"(ki)
1=1
(84)
1=2
x!''(kM)
and cartying out the multiphcations, we obtain
x"l(k3)
xl'l(k3^i)
time
xl'l(k3+2)
index
xl'l(k,»)
i=3
'x^'^O)
Av
Fig. 13 Wavelet decomposition of state variables in a data
block
x^'^l)
(85)
X o\o
x^'\2)
Within a data block, the state variables at resolution level i
are designated as
'x^%(ï)
(79)
x"/t(z + 2'-')
where Bo is given by
x^^kii)
(80)
=
0
5™(0)
0
0
B„ =
An
(^131)3^(31
where m is data block index and s is the number of state
variables. For the recursive implementation of the
muhiresolutional Kahnan filtering is step-wise described
below.
A.
X
^131
^131^P1
^13)5(3]
0
0
(87)
0
513)
Initialisation:
(81)
Qo
70
Q'''
0
0
Q'''
0
0
0
0
0
0
gm
=
0
0
(88)
Intemational Joumal of Factory Automation, Robotics and Soft Computing, Issue 3, July 2008
It is to be noted that above, the zero elements in Q„ and B„
are sub-matrices with the same dimensions as the error
variance Q'^' and the and the non-zero elements in 5 „ . The
propagation from one data block to other consecutive block
is given below.
ISSN 1828-6984
Where the matrix elements bpg are given by
=YlAmM
+ j+u-l)B{mM
+p +
q-2)
(93)
The error variance matrix is computed from
B.
Propagation
from data block m to m+1:
ö['''(OTM),(o),efi(wM+i),
Here, we are concemed with the propagation of the whole
data block X^im^ and its error covariance P,„),F^ at the
highest resolution level which is 3, has to be propagated to
X„,+i\„P'-' sndP„,+,i„,l^l This is carried out as follows.
(94)
From above equations ^ , / "
m+llm
^
^ m\m
'{A^^Y
pi^
-
A^P'^^AA^^
+ B^O\B^1
(90)
^ 3 _ | 0
^n+l|ra - A« ^m\m V^m J ^ ""m tim \Pm J
where iV and M being # = J and M='^''=3,
'^A^^\mM
+ j),A^''\mM
respectively
+ j + l),
A: = diag
i„
A^''\mM
i „ ... i , „
0
b„,
(^131)3^13]
0
'HH*
GH'
0
0
{A'^Y
0
0
0
0
0
0
{A'^f
(95)
0
(A^^f
0
(92)
i „ ... A,
(^P])2^[3,
=
0
0
0
0
(^131)1 5 P I
(^(31)2 ^[3]
(^[31 y
£13]
0
(^[3])l5[3]
(^I3])2 g[3]
0
5l31
(^my^Pi
.^Pi
h = [h, h,] = Q.S[42 V 2 ]
g = { g , ^ i ] = 0.5[V2
GG* _
- V 2 ]
and
H*
=H'^.
^, li-n
. [i-2]
, Li-31
m
G
'\
0
(96)
0
5P1
+ G*G = I
HG*'
0
0
respectively.
The predicted state variables Xn,+i|m''' and error
covariances Pxxm+iim having been computed at the resolution
levels i , they are updated with the measurements Z„+i''' at
the respective resolution level. The resolution levels are
obtained by means of wavelet decomposition. The
decomposition scheme is shown in figure 14 where H and G
are low-pass and high-pass decomposition filters. The
reconstruction scheme is shown in figure 15 where H and
G* are low-pass and high-pass reconstmction filter filters.
They satisfy the quadrature mirror filter (QMF) property
which is
H'H
0
+j + M - l )
(^[3])3 ^[3,
0
B
are given by
(91)
ji
0
m.AB„f^
(89)
X [i]
Q
G
m
X U-l]
1
m
where I is the unit (identity) matrix. The wavelet matrix
operator H and the scaling matrix operator G in the
decomposition contain two-tap Haar filters where
.[i-2i
m
.[«]
m
Fig. 14
block
71
Wavelet decomposition of state variables in a data
International Journal of Factory Automation, Robotics and Soft Computing, Issue 3, July 2008
[i]
-^m
As
m
m
G
\i-3\
X
m
m
Fig. 15 Wavelet reconstruction of state variables in a data
block
Estimate
JC„„|,/^ and Y,„u\nP,
=diag
C"[(/n + l)2'-'],C"[(/n + l)2'-' - 2 ' - ' +1],.
As
X,„+,|„/^ and K„,+7|,/^, Y,„^,J'^"
are correlated, the
covariance matrices PxYm-n\nP and Pyxm^i\«P are also
updated. However, the wavelet coefficients Ym^i^J'^ and
their covariance matrices Prvnu-iiJ'^ are not updated. The
minimum variance Kalman gain matrix K„+i''' at each level,
is determined by
fr w _ p
t
•^m+; Int+i
update with Z
•
X 1
-^m+ilm+/
^m+i
reverse wavelet iransforn
wavelet i^raiisfon
update with Z
1
nAl Im
state variables
p
vvir
K^m+\
['lp
Wr vv .jf mV'
.\,TnH-l|nj ^m+i
•'^m-t-l /
t,10Jy
to
'generate
X
1
error
P
,
X .m+J\m+J
Fig. 16 Wavelet decomposition of state variables in a data
block
Y
and
error
covariances
X,„^;|,„^./'^''' and
[UJ] for/=7,2,..,A'; are determined, they must be fused
X..„„./''^^
propagation
tV
~ ^XXm*\\m ^m+\
Once, within the moving window, the sequences of updated
1
fusion ^n^i 1'"+/
(102)
,«W[(OT4-l) + 2'-'+1]
fusion
\
P
(101)
,C"'[(m-H) + 2'-'-l-l]
i?W[(OT +1)2'-'],i?"[(OT +1)2'-' - 2'-' +1],
Update with Z
^mt;lm
Y„„.,i„f'*" are correlated, the
where the measurent matrix Cn,=i''^ and Rm+i^'' are given by
updating:
The basic update scheme for dynamic multiresolutional
filtering is shown in Fig. 16 where at each resolutional level,
when the measurement is available, the state variables are
updated and when the block is complete the inverse wavelet
transform and fusion is performed. During the inverse
transformation wavelet coefficients Ym+i\„n-i saved aside
are used.
P
(99)
covariance matrices Pxrm+j\,J'^ and Prxm+i]nP^ are also
updated. However, the wavelet coefficients
and
their covariance matrices PYY,„*I\J'^
however are not
updated. The minimum variance Kalman gain matrix Km+i''^
at each level, is determined by
['J_p
I ' V ^v{r Mp
Wp I ' F . p MV'
'^m*\ -"a-m+l|»i "-m+I (<-»,+! t^XXmt-^m ^m*\ +•"»/+! /
(100)
It is to note that after the state variable estimations are
carried out at the respective resolution levels, the fiision o f
the information is carried out through the error covariance as
described below. However, normally for the estunation o f
the highest level state variables reconstruction is not
necessary since it is already available.
C.
in
• XXm+Ilm
' XXm+\\m+l
m
,[/-5]
ISSN 1828-6984
an
optimal
Jr,,,^,^,,^./''"'' and
. For the minimum fusion
and X .
covariance
X,
X,,^^,^}""^ •
INF]
the
fused
estimate
is calculated as
['*'i
(104)
Explicitly
p
l'i p
['•]
p
['] p
in
where the minimum fusion error covariance P„+i\f„+i
[/]
m+ljm
(97)
becomes
{p..,..ry=tk.w'ï-(N-l){p^j'^'Y.
(=j
Explicitly
-*^m+l|»+l =-^».+%,
-^•m+lh )
[NF]
(98)
(105)
The fused estimate -^^,„+7^+7'''^^'' is a weighted sumrnation
and
of both predicted X^^j^J'^^
and updated X,„^j^„,J^-'^
,
for 1=1,2,...N. The sum o f the weight factors equal to the
72
Intemational Joumal of Factory Automation, Robotics and Soft Computing, Issue 3, July 2008
identity
I.
This
-^m+;|m+/'^'^^ given
X.^^^J'"'
can
be
above
seen
by
substitution of
into
the
expression
ISSN 1828-5984
trajectoryref[rj, EKF [b] and measurements [g]
of
in (104).
4. Autonomous Robot Navigation
The experiments have been carried out with the
simulated experiments in the virtual reality. The state
variables vector is given by (107) while taking ; - N .
(106)
k { N ) -
J.N]
Fig, 17
The overall Robot trajectory with measurement (* signs),
and Kalman filtering estimation (• signs).
Explicitly,
trajectory ref [r], E K F [b] and measurements [g]
where m is the angular rate and it is estimated during the
move. When the robot moves in a straight line, the angular
rate becomes zero. The other state variables are x and y
coordinates and the respective velocities.
Vision rays interact with the environment which includes
side walls and hindrance along the forward direction. For an
environment a permanent fictive vertical planes at certain
threshold distances are considered. I f a distance is more than
the threshold value it is taken to be the threshold value. This
means i f the real environment is in front of that fictive plane
the actual distance measured by the sensor is considered as
distance.
I f the measured distances are beyond the
predetermined threshold distances the robot moves forward
without any angular velocity. The position estunation of
robot is accomplished by means o f Kalman filtering via the
states ofthe robot navigation dynamics as presented earlier.
The overall robot trajectory is shown in figure 17 where
there are three lines plotted but only two of them are visible.
The line marked by * sign represents the measurement data
set. The line marked by • is the extended Kalman filtering
estimation. The Hne indicated by o sign is the reference
trajectory. These lines are not explicitly seen in the figure.
For explicit illustration o f the experimental outcomes the
same figure with a different zooming ranges and the
zooming powers are given in figures 18-22. From the
experiments it is seen that, the Kalman filtering is effective
for estimation of the trajectory from perception
measurement. Estimations are more accurate in the straightahead mode seen in figure 22, compared to the bending
modes. This can be explained by noting that in the straightahead mode the angular velocity is zero and the system
matrix is fixed. However in the non-linear mode the system
matrix is approximate due to the estimation error o f the
angular velocity, In this case the difference between the
reference trajectory and the estimated trajectory is markedly
separated due to the approximation error caused by the
Taylor's series expansion and ensuing linearization.
73
Fig, 18
Enlarged robot trajectory, measurement, and Kalman
filtering estimation. The * sign is for measurement, o sign for
the reference and • sign is the estimated trajectory.
trajectory ref [r], E K F [bJ and measurements [g]
Fig. 19
Enlarged Robot trajectory, measurement Kalman
filtering in bending mode. The * sign is for measurement, o
sign for the reference and • sign is the estimated trajectory.
International Journal of Factory Automation, Robotics and Soft Computing, Issue 3, July 2008
trajectory ref [r], E K F [bj and measurements jg]
1
-
1
1
-1
-
-1
-
-[
1
1
1
J
!
r
1
_ 1
1
^
1
4
1
1
1
1
J
1
1
1
--
- T -
-
1
8.8
1
i
1
1
1
i
_
1
1
'
'^eT'u - U J l
- 1
"
_ 1
1
1
>
1
J
5; .
^ii"r.^-
'"'/'
I'j
f/l
é' 91
4- -
•J--'''
!
_
- . T , . ?
-t) '
1
J
1
1
1
._.U>-
1
1
i
_ ^- 1 , p . i r * * ; . ' T t - - -
1 —
- -
-ZS: T -
5. D i s c u s s i o n a n d C o n c l u s i o n
i
1
1
1
1
1
1
f
1
9.2
9.4
j
1
1
- f
'
1
" ï
-
'
1
l
1
1
i
1
l
ll
i
1
f
1
_
.
j
1
t
1
to
9.S
1
1
1
9.8
10
10.2
Fig. 20
Enlarged Robot trajectory, measurement Kalman
filtering in bending mode. The * sign is for measurement, o
sign for the reference and • sign is the estimated trajectory.
trajectory ref [rj, E K F [b] and measurements [gJ
l' if
1 .
i
4
-6.21
1
1
1
1
1
1
_ _ _ _ _
; : :
m
fiy
L.
1
1
J
- -
-
_
_|_
1
1
1
_
8.5
1
1
'
1
1
1'
1
-
-pi_t,>*>^'r-H-l
l
l
l
—
1
A .
-1-\'—
- r - - 1 - - \-jSjat^t-
1
_ _ : _ _
I
"
1
1
1
-7,
1
- - j i a i * ^ ' - ^
Cl
'
'
1
1
1
!
1
1
X ^ + t •¬
V
ni'
• • ,J
i X
r -
-
t'.
-
1
1
1
1
1
I
1
I
9
9.5
10
10.5
11
Fig. 21
Enlarged Robot trajectory, measurement Kalman
filtering in bending mode. The * sign is for measurement, o
sign for the reference and • sign is the estimated trajectory.
trajectory ref [rJ, E K F [bj and measurements [g]
.2.451
-2.5
I
ij. •
1i i r / . a 1 7 : 4 Ï - 7 6 '
i
ISSN 1828-6984
!
* ' 1 7 . 7 : / . 8 "
Fig. 22
Enlarged Robot trajectory, measurement Kalman
filtering in bending mode. The * sign is for measurement, o
sign for the reference and • sign is the estimated trajectory.
74
Perception is an important concept in human vision. In this
work, the detailed description of Kalman filtering of
processing perceptual information is described for a
perceptual robot. Especially, the perception is considered to
be a probabilistic concept and defined as such [11] so that
the probabilistic consideration in perceptual robot is
necessary for human-like operations. In this way the robotic
movement can, be coupled with cognition of the
environment by a robot in a relatively more convenient way.
In this context, Bayesian perception and cognition studies
[34] can be more substantiated with more accurate priors.
This is especially important when one deals with a certain
scene but from a different vantage point.
In vision robotics autonomous navigation is a challenging
issue. Vision robot can navigate autonomously by
perception-based sensory information so that it can react
better with respect to the environmental and circumstantial
situations. Better is in the sense of obstacle avoidance and
way finding without crash.
Another interesting feature of the perceptual robot
navigation study is the observer form [35] of the
measurements referring to distance (r), angle (0) and the
scalar velocity (v). These may partly be computed ftom the
estimated state variables, in the case of unavailable actual
measurement
counterparts. The work is a clear
demonstration o f the working state observers and therefore
an important hint for the physical robotics exercises. It is
reminded that, the present work is an autonomous robotics
simulation implemented in a virtual reality and the results
reported are the computer experiments.
Applications o f vision robot are coimtless and they are
well documented in the literature. Perception is a new
dimension in a v|sion robot as such its vision can be shaped
for application dependent vision properties. In this sense,
perception can be seen as a measure of vision quality of a
vision robot. The investigations presented in this work are
motivated by measuring the perception aspects of an
architectural design from different vantage points and
altogether to obtain a compromise among them, as to a final
decision-making for that design. Such a task in one hand is a
routine work for a robot in virtual reality and the actual
(real) scene having been represented in the virtual reality the
scene can be perceived in many different ways as desired.
The same task for a human is firstly extremely tedious but
more importantly subjective in the sense o f perception
measurement and therefore can be deemed to be
inconclusive due to probable inconsistencies. This is
because of the complexity o f the task due to the number of
objects involved in the scene. Here the objects may be a
number of dwelling units subject to marketing by a building
project developer. Since a number of locations and
architectural requirements condition on perception aspects,
the task is a multi-objective optimization subject to these
impositions as constraints. Also, in this situation one is
interested in a solution on a Pareto front. Such complexities
are today effectively dealt with thanks to evolutionary
algorithms [36-38]. Sorae endeavours in the context of
architecture are reported in [39, 40]. The present research is
another endeavour along this line.
I n t e r n a t i o n a l J o u r n a l o f Factory A u t o m a t i o n , R o b o t i c s a n d S o f t C o m p u t i n g , Issue 3, July 2 0 0 8
[17]
6.
References:
[1]
M. Beetz, T, Arbuckle, T. Belker, A. B. Cremers, D.
Schuiz, M. Bennewitz, W. Burgard, D. Hahnel, 0. Fox,
and H. Grosskreutz, "Integrated, plan-based control of
autonomous robots in human environments," IEEE
Intelligent Systems vo]. 16, pp. 56-65,2001.
G. Oriolio, G. Ulivi, and M. Vendittelli, "Real-time map
building and navigation for autonomous robots in
unknown environments," IEEE Trans, on Systems, Man
and Cybernetics - Part B: Cybemetics, vol. 28, pp. 316¬
333, 1998.
M. Wang and J. N. K. Liu, "Online path searching for
autonomous robot navigation," presented at I E E E Conf
on Robotics, Automation and Mechatronics, Singapore,
2004.
F . Cuesta and A. Ollero, Intelligent Mobile Robot
Navigation. Berlin: Springer, 2005.
U. Nehmzow, Scientific Methods in Mobile Robotics.
Berlin: Springer, 2006.
E . Ahle and D. Söffker, "A concept for a cognitiveoriented approach to build autonomous systems,"
presented at 2005 I E E E Int. Conf. on Systems, Man and
Cybemetics, Big Island, Hawaii, 2005.
E . Ahle and D. Söfïker, "A cognitive-oriented
architecture to realize autonomous behaviour - part II:
Application to mobile robots," presented at 2006 I E E E
Conf on Systems, Man, and Cybernetics, Taipei,
Taiwan, 2006.
E . Ahle and D. Söfiker, "A cognitive-oriented
architecture to realize autonomous behaviour - part I:
Theoretical background," presented at 2006 I E E E Conf
on Systems, Man, and Cybernetics, Taipei, Taiwan,
2006.
C. Burghart, R. Mikut, R. Stiefelhagen, T. Asfour, H.
Holzapfel, P. Steinhaus, and R. Dillmann, "A cognitive
architecture for a humanoid robot: A first approach,"
presented at 2005 5th l E E E - R A S Int. Conf on
Humanoid Robots, Tsukuba, Japan, 2005.
D. Söfïker, "From human-machine-interaction modeling
to new concepts constructing autonomous systems: A
phenomenological
engineering-oriented
approach.,"
Journal of Intelligent and Robotic Systems, vol. 32, pp.
191-205, 2001.
Ö. Ciftcioglu, M. S. Bittermann, and I. S. Sariyildiz,
"Towards computer-based perception by modeling visual
perception: a probabilistic theory," presented at 2006
I E E E Int. Conf on Systems, Man, and Cybemetics,
Taipei, Taiwan, 2006.
B. Adams, C. Breazeal, R. A. Brooks, and B. Scassellati,
"Humanoid robots: a new kind of tool," Intelligent
Systems and Their Applications, IEEE [see also IEEE
Intelligent Systems], vol. 15, pp. 25-31, 2000.
Ö. Ciftcioglu, M. S. Bittermann, and I. S. Sariyildiz,
"Visual
perception theory underlying perceptual
navigation," in Emerging Technologies, Robotics and
Control Systems vol. 1, S. Pennacchio, Ed. Palermo:
Intemational Society for Advanced Research, 2007, pp.
139-153.
Ö. Ciftcioglu, M. S. Bittermann, and I. S. Sariyildiz,
"Multiresolutional fiision of perceptions applied to robot
navigation," Journal
of Advanced
Computational
Intelligence and Intelligent Informatics (JACIII), vol. 11,
pp. 688-700, 2007.
I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals
Series and Products. New York: Academic Press, 1965,
A. Papoulis, ProbabiUty,
Random
Variables
and
Stochastic Processes. New York: McGraw-Hill, 1965.
[2]
[3]
[4]
[5]
[6]
[7]
[8]
[9]
[10]
[11]
[12]
[13]
[14]
[15]
[16]
75
[18]
[19]
[20]
[21]
[22]
[23]
[24]
[25]
[26]
[27]
[28]
[29]
[30]
[31]
[32]
[33]
[34]
[35]
[36]
[37]
[38]
[39]
[40]
ISSN 1828-6984
M. S. Bittermann, I, S. Sariyildiz, and Ö. Ciftcioglu,
"Blur in Human Vision and Increased Visual Realism in
Virtual Environments," in Lecture Notes on Computer
Science: Springer Verlag, 2007,
A, Gelb, J. F. Kasper, R. A. Hash, C. F . Price, and A, A.
Sutherland, Applied Optimal Estimation. Cambridge,
MA: MIT Press, 1974.
M. S. Grewal and A, P. Andrews, Kalman
Filtering:
Theory and Practice Using MATLAB. New York: Wiley,
2001.
A, H, ISLZViinski,, Stochastic Processes and Filtering
Theory. New York Academic Press, 1970,
P. S. Maybeck, Stochastic Models, Estimation
and
Control, Voli. New York: Academic Press, 1979.
P. S. Maybeck, Stochastic Models, Estimation
and
Control, Ko///. New York: Academic Press, 1982.
J. M, Mendel, Kalman Filtering and
OlherDigital
Estimation Techniques: Individual leaming
Package.
New York: I E E E , 1987.
D. Simon, Optimal State Estimation. New Jersey: Wiley
Interscience, 2006.
H. W, Sorenson, Kalman Filtering:
Theory
and
Application. New York: I E E E Press, 1985.
T, Kailath, Lectures on Wiener and Kalman Filtering.
New York: Springer-Verlag, 1981.
Y . B. Shalom, X. R. L i , and T. Kirubarajan, Estimation
and Application to Tracking and Navigation. Montreal:
Wiley, 2006,
S. Mallat, A Wavelet Tour of Signal Processing.
New
York: Associated Press, 1999,
S, G, Mallat, "A theory for multiresolution signal
decomposition:the wavelet representation," IEEE Trans.
on Pattern Analysis and Machine Intelligence, vol, 11,
pp, 674-693,1989.,
T, Ogden, Essential Wavelets for Statistical
Applications
and Data Analysis. Boston: Birkhauser, 1997.
D, B, Percival and A. T. Walden, Wavelet Methods for
Time Series Analysis: Cambridge University Press, 2000.
L . Hong, "Multiresolutional filtering using wavelet
transform," IEEE Trans on Aerospace and Electronic
Systems, vol. 29, pp. 1244-1251,1993.
H. C. Hsin and A. C. L i , "Wavelet-Based Kalman
Filtering in Scale Space for Image Fusion," in Pattern
Recognition and Computer. Vision, C. H. Chen and P. S.
P. Wang, Eds. Singapore: World Scientific, 2006.
D, C, Knill and W, Richards, Perception as Bayesian
Inference. Cambridge: Cambridege University Press,
1996,
B. Friedland, Control System Design. London: McGrawHill, 1986.
C, A. C, Coello, D, A. Veldhuizen, and G, B, Lamont,
Evolutionary Algorithms for Solving
Multiobjective
Problems. Boston: Kluwer Academic Publishers, 2003,
K, Deb, Multiobjective Optimization using Evolulionaiy
Algorithms: John Wiley & Sons, 2001,
H, Sato, H. E . Aguirre, and K . Tanaka, "Controlling
dominance area of solutions and its impact on the
performance of MOEAs," in Lectza-e Notes on Computer
Science, Evolutionary
Multi-Criterion
Optimization.
Heidelberg: Springer, 2007,
M, Bittermann, I, S, Sariyildiz, and O. Ciftcioglu,
"Performance-Based Pareto Optimal Design," presented
at Tools and Methods of Competitive Engineering
(TMCE), Izmir, 2008.
O, Ciftcioglu and M, S, Bittermann, "Solution Diversity
in Multi-Objective Optimization: A study in Virtual
Reality," presented at I E E E World Congress on
Cornputational Intelligence, Hong Kong, 2008.
e x p e r i m e n t s a r e p r e s e n t e d . T h e o r g a n i s a t i o n o f t h e w o r l < is
as f o l l o w s . S e c t i o n t w o d e s c r i b e s
vision
the s h a p i n g o f the robot
f o r s o m e s p e c i f i c cases. S e c t i o n t h r e e
gives
robot
d y n a m i c s i n state-space f o r m i n c o n t i n u o u s t i m e and derives
the K a l m a n equations
in discrete t i m e w h i c h includes also
the e x t e n d e d K a l m a n
filter
e q u a t i o n s s i n c e o n e o f the s t a t e
v a r i a b l e s is n o n l i n e a r w i t h r e s p e c t t o t h e s y s t e m
Section
four
comprises
some
computer
dynamics.
experiments
for
robot navigation f o l l o w e d by conclusions.
2.
Vision with Perception
F i g . 2'-
G e o m e t r y o f v i s u a l perception; an observer is l o o k i n g
at a vertical xi-
plane w i l h a v i s i o n l y i n g in the .y).-plane;
perception measurements b y avatar i n real-time is also s h o w n .
2.1. Perception
F u r t h e r , to shape the v i s i o n rays i n any desirable p d f f o r m ,
The
basic
probabilistic human
perception
s h o w n i n F i g u r e 1. H u m a n v i s i o n h a s u n i f o r m
process
is
probability
density f u n c t i o n ( p d f ) w i t h respect to the v i s i o n angle 9
and
t h e p r o b a b i l i t y d e n s i t y f u n c t i o n a l o n g t h e a x i s y is g i v e n b y
[14]
for
instance
i n an
augmented
human
vision
form
in
the
f o r w a r d d i r e c t i o n , i t is i m p o r t a n t t o u n d e r s t a n d h o w t o d e a l
w i t h the p d f s a c c o r d i n g l y . F o r this purpose w e consider that
t h e r o b o t v i s i o n s y s t e m is o n t h e .v)'-plane a n d i t s e n d s r a y s
i m p i n g i n g o n t h e .vz-plane w h i l e t h e y d i r e c t i o n is f o r w a r d .
W i t h o u t a f f e c t i n g the g e n e r a l i t y , f o r s i m p l i c i t y , w e c o n s i d e r
o n l y t h e .v c o m p o n e n t o f p e r c e p t i o n i n t h e . v y - p l a n e . T h e z
ƒ,.(;') =
direcdon
is
a
twin
independent
direction. B e l o w w e present
the
.Y
several d i f f e r e n t examples
counterpart
of
of
interest.
f o r t h e i n t e r v a l - co <
k n o w n as Cauchy
< CD . T h e p r o b a b i l i t y
d e n s i t y i s
d e n s i t y a n d i t is d e f i n e d as attention
i n the
2.2. Shaping the Perceptual Robot Vision
tenninology o f cognition.
2.2.1.
Two Gaussians
A l o n g the x axis w e consider a zero-mean Gaussian p d f w i t h
a w i d t h (T,
/.(•v) =
(2)
cr.
2n
and a l o n g the y
axis w e consider a Gaussian
pdf with
w i d t h CJ,, a n d m e a n
1
AO')--
(3)
T h i s is s h o w n i n F i g u r e 3 a
Fig. 1 The geometry o f v i s u a l perception f r o m a top v i e w , where
P represents the p o s i t i o n o f eye, l o o k i n g at a vertical plane w i t h a
distance /„ to the plane; jj.ö»; is the p r o b a b i l i t y density f u n c t i o n in.
;'-direction.
M y )
I n the r o b o t n a v i g a t i o n , the r o b o t p r o b e s the e n v i r o n m e n t b y
modified
sending
Cauchy
rays.
probability
To
endow
density
of
it w i t h
these
human-like
rays
should
be
vision,
the
pdf
pdf
favorably
u n i f o r m w i t h r e s p e c t t o 6. T o o b t a i n t h i s i n t h e v i r t u a l r e a l i t y
we
use
independent
probability
density
components
in a
t h r e e d i m e n s i o n a l s p a c e w h e r e t h e i n d e p e n d e n t d e n s i t i e s are
s u i t a b l y c h o s e n . T h e r e s u l t i n g p e r c e p U i a l v i s i o n is i n d i c a t e d
in
figure
2, w h e r e the v i s i o n
h o r i z o n t a l p l a n e as is t h e c a s e i n
is r e p r e s e n t e d
figure
b y rays
in a
1.
(a)
Fig. 3
(b)
T w o Gaussians p r o b a b i l i t y densities shaping Ihe vision
one o f w h i c h has non-zero mean ( a ) ; The p r o b a b i l i t y density
f,(v)
213
representing
v=lan(e)) =x/y
(b)
a
W e a s s u m e that t h e f , ( y ) is n e g h g i b l y s m a l l i n n e g a t i v e y
d i r e c t i o n . T h e p r o b a b i l i t y d e n s i t y o f the r a y s i n r e l a t i o n
to
(10)
lM<y)--
2^(7.(7,.
t h e a n g l e ö is d e t e r m i n e d b y t h e r a n d o m v a r i a b l e v
From (8),
V = ^ = tan(ö)
w h e r e x and
y
(4)
dy-
are a l s o r a n d o m v a r i a b l e s . T h e v a l u e s o f - Y a n d
determine
the
direction
o f the
vision
ray,
so
p r o b a b i l i t y d e n s i t i e s o f .Y a n d y
d e t e r m i n e the
d e n s i t y o f v. T h e r e g i o n o f t h e xy
plane such that
that
27ra,<y„
the
(11)
probability
1
jye
2;ra-,CT,
(5)
T h i s i n t e g r a l is c o m p u t e d f r o m t h e s t a n d a r d i n t e g r a l
y
table,
w h i c h reads [ 1 5 ]
is t h e s h a d e d a r e a i n f i g u r e 4 . H e n c e
if
y>0
if
y<0
and
x<.y
X >
the
xe '
dx =
cumulative
probability
density
(probability
(12)
l - 0 ( - ^ )
—
2M
vy
V/' .
2/,^/;
w h e r e <t)(.) is p r o b a b i l i t y i n t e g r a l g i v e n b y
d i s t r i b u t i o n ) is g i v e n b y
F„(v)=
j
'\fjx,y)dxdy+
Differentiating with
|
j/„(x<y)dxdy
r e s p e c t to v, w o
obtain,
with
<i,{x)=-^]e-'dt
(7)
(13)
I n v i e w o f ( 1 2 ) a n d ( 1 3 ) , (11) becomes
simple
calculus
^
ƒ,(") =
is".»)l + 0 ( - ^ ) )
y'
(14)
-0(^^)1
X
<vy
/
x = v
ƒ,•(") =
"/
(15)
w h e r e / / is g i v e n b y
x>vy
(16)
M = M(y) -Fig. 4
Formation o f Gaussian v i s i o n i n foiAvard direction
f,M=
]yfJ^y.y)dy-
2cr
D e f i n i n g a as a c o n s t a n t a n d i as a f u n c t i o n o f v o f the f o n
'lyfJvo')dy
(8)
cr,
2(r:
a-=—e
•
(17)
Then (7) and (8) yields
f„(v) = 2
ƒ
(15) becomes
'\f^^Xx,y)dxdy
(9)
/,(v) = 2
/,(v) = -
^
[
i
+ K.')]
(18)
\yf„{yy,y)dy
F o r t h e i n d e p e n d e n t G a u s s i a n s i n f i g u r e 3 a f„(x,y)
is g i v e n
by
T h e v a r i a t i o n o f t h e p d t f / v ) is i l l u s t r a t e d i n F i g u r e 3 b . It is
approximately a Cauchy curve.
It is interesting
mean f / y ) , the
to n o t e t h a t , i n ( 1 8 ) f o r my=0,
p d f f,.(v)
is
exact
as
Cauchy.
i.e., z e r o
With
an
a d d i t i o n a l t e r m ( 1 8 ) b e c o m e s a m o d i f i e d p d f ; n a m e l y , i t is
214
more
peaked
as CT,, b e c o m e s
smaller
o r a l t e m a t i v e l y m,,
g i v e n b y ( 2 3 ) . I n this s e c t i o n w e w a n t to investigate
b e c o m e s l a r g e r . T h i s is i l l u s t r a t e d i n f i g u r e 3 b , I t is t o b e
n o t e d that, ( 1 8 ) g i v e s t h e p d f o n t h e A y - p l a n e a n d v is e q u a l
t o tan(e)=x/y.
by definition. The p d f o f
tonis
k i n
, , ,
f,i.v) = -
Cauchy
, , ,
CT/=CT,,^,
(26)
+k
w h i c h i m p l i e s that the p d f o f i9is u n i f o r m f o r O : / = C T / . T h i s
i s seen as f o l l o w s . F o r m , , = 0 a n d
the
b i a s e d h u m a n v i s i o n ; n a m e l y t h e a t t e n t i o n is g i v e n b y
w h i c h satisfies
(18) becomes
l/;r
^'('') = 7 7 T
r V v M c / f = -!-arctan(-^) I " = 1
(19)
T h e d e n s i t y o f S=arclan(v)
(27)
c a n b e d e r i v e d as f o l l o w s . F o r 6
s u c h that
fW
k<l (hiiiseil
- ^ / 2 < 0 < ! r / 2
m
vision)
(20)
\
we write [16]
dö
Id.
1
n
-JL
2
(21)
Flg, 6
nu=0
JL2
0 '
(a)
(b)
V a r i a t i o n o f f . ( v ) w i t h respect to the parameter k
w h i c h gives
N o w w e w i s h t o c o m p u t e t h e fg(e)
l/slan(g)' +1
f o r t h e c a s e k < l w h i c h is
t h e c a s e o f b i a s e d h u m a n v i s i o n at t h e f o r w a r d
(22)
direction.
W i t h t h e s t a n d a r d p r o c e d u r e as d e s c r i b e d i n t h e p r e c e d i n g
sub-section, w e write
_
kl7,
and finally
(23)
w h i c h s h o w s t h a t f / d ) is a u n i f o r m d e n s i t y :
k i n
lan(e)' + / t ^
foiO)-
(28)
I
I + lan(fl)^
dv
w h i c h gives
(24)
ƒ„{(?) =
k
(29)
1-f (A' - i ) c o s ( e ) '
F o r t h i s case,
i.e., my=0
t h e G a u s s i a n s are
illustrated in
f i g u r e 5a.
y
,jmde
=-
1
m
tan(e)
)
mf=0
^
" V r + ( t - - I)
/ f / y )
F o r the
(30)
tan(e)
n_
Fig.5
\/i+(*'-ïy
0^
(a)
(b)
V a r i a t i o n o f t w o Gaussians both w i t h zero-mean
case o f s p a t i a l
solid
angle
i n three
The variation
dimensiohs
office)
is s h o w n i n f i g u r e 6 b . F r o m t h e f i g u r e
o n e sees t h a t w i t h t h e v a r i a t i o n o f k, t h e v i s u a l a t t e n t i o n c a n
w h i c h is t h e r o t a t i o n o f 6 a r o u n d y a x i s w e s h o u l d c o n s i d e r
be s h a p e d , e a s i l y . I t is i n t e r e s t i n g t o n o t e t h a t , ( 2 9 ) c a n
t h e z - a x i s t o g e t h e r w i t h t h e .v-axis w h e r e t h e p d f a l o n g t h e z-
w r i t t e n in the f o n n o f
be
a x i s is g i v e n b y a z e r o - m e a n G a u s s i a n o f t h e f o r m
y»(ö) = * * ]
I n
-de
(31)
(25)
c7.
A b o v e , f ( z ) is t h e c o u n t e r p a r t o f t h e , ï - d i r e c t i o n so t h a t x
For small values o f 0 ( 2 9 ) a p p r o x i m a t e l y becomes
and j'-directions together f o r m a spatial vision.
2.2.2.
Biased Human Perception
Considering
v=tan(e)
in
( 4 ) , the
°
human
a t t e n t i o n at
~-(l-*^)ö^^-*L
(32)
v
w i t h i n t h e i n f l n i t e s i m a l l y s m a l l i n t e r v a l d v a n d p e r u n i t v is
w h i c h is C a u c h y . I t m a y be o f v a l u e t o p o i n t o u t that, w i t h
g i v e n b y ( 1 9 ) a n d so t h a t t h e a t t e n t i o n f i f 6) is u n i f o n n as
regard to ( 1 8 ) , f o r m,,=0, i t b e c o m e s
215
3.
(l/rtc.'o-,)
Navigation
(33)
3.1. Kalman Filtering
I n the p r e c e d i n g s e c t i o n h o w v i s i o n can be shaped
so t h a t
integrated
into
a
perceptual
robot
is
presented.
a u t o n o m o u s m o v e m e n t w i l l be presented.
to s h o w h o w to shape
the v i s i o n a c c o r d i n g to the desired f o n n s o f perception. T h e
note
that,
avatar
i n the virUial
are s h o w n i n f i g u r e 7 w h e r e f o r h u m a n
vision
reality
be
found in [17].
vision
has
I t is i m p o r t a n t t o
maximum
attention
vision
a
robot
effectively
moves
in
the
by means o f
in
the
forward
d i r e c t i o n a v o i d i n g obstacle hindrance f o r example.
perception,
f f d ) is b e u n i f o r m . M o r e a b o u t t h e s e m e a s u r e m e n t s c a n
human
f o r w a r d d i r e c t i o n and because o f this reason
similar
perception measurements w i t h
this
section h o w a r o b o t can process this visual i n f o r m a t i o n f o r
(34)
T h e a b o v e d e r i v a t i o n s are p r e s e n t e d
and
In
Kalman
treated
filtering
in literature
filtering
theory
and
its
[18-27] . In
applications
order
to
are
apply
well
Kalman
to a r o b o t m o v e m e n t the s y s t e m d y n a m i c s m u s t be
described
by
a set
o f differential equations
which
are
in
stale-space f o r m , i n general
(35)
= f X + Gu + w
*
dl
w h e r e x is a c o l u m n v e c t o r w i t h t h e states o f t h e s y s t e m , F
t h e s y s t e m d y n a m i c s m a t r i x , H is t h e c o n t r o l v e c t o r , a n d
w
is a w h i t e n o i s e p r o c e s s v e c t o r . T h e p r o c e s s n o i s e m a t r i x
Q
is r e l a t e d t o t h e p r o c e s s - n o i s e v e c t o r a c c o r d i n g t o
e = E[ww'']
The
(36)
measurements
are
lineariy
related
to
the
states
according to
: Hx + V
(37)
w h e r e z is t h e m e a s u r e m e n t
v e c t o r , H is t h e
measurement
m a t r i x , a n d v is m e a s u r e m e n t n o i s e v e c t o r w h i c h is e l e m e n t wise white. T h e measurement
n o i s e m a t r i x R is r e l a t e d t o
the m e a s u r e m e n t noise v e c t o r v a c c o r d i n g to
R^E[vv^]
(38)
I n discrete f o r m , the K a l m a n
filtering
equations
become
•V, = * i - v . - , + K t ( Z j - H < t , x ^ , - H G ^ U j . , ) + G ^ u , ,
Fig. 7
R e a l - t i m e perception measurements i n v i r t u a l
(39)
reality
= fix,
w h e r e perceptions are p r o b a b i l i t i e s as to the rays i m p i n g i n g
o n the objects; perpective v i e w (a) and t o p v i e w ( b )
In the perceptual
robotics application w h i c h
is t h e
where
subject
+ V,
0^ s y s t e m t r a n s i t i o n m a t r i x , Ki
represents he K a l m a n
g a i n m a t r i x a n d G j . is o b t a i n e d f r o m
m a t t e r o f t h e n e x t s e c t i o n , t h e f o r w a r d - b i a s e d p e r c e p t i o n is
used. I n this case the s i t u a t i o n s i m i l a r to that s h o w n i n f i g u r e
7 where k « l
G,
=
(40)
|o(r)Gdr
is o f c o n c e m . T h i s is d e p i c t e d i n f i g u r e 8.
I n t h e r o b o t n a v i g a t i o n , t h e r e is n o c o n t r o l v e c t o r so that t h e
Kalman
filtering
equation becomes
-''i. = < t ' i J » - i + K | , ( z j - H t p , X t _ , )
The
Kalman
gains
are
(41)
computed,
while
the
filter
is
operating, f r o m the m a t r i x Riccati equations:
M , ='i>,7',-,oi+a
K , = M , H \ H M , H ^ + R , r '
Fig. 8
Perception measurement
i n the v i r t u a l r e a l i t y . Perception
(42)
w h e r e P j is a c o v a r i a n c e m a t r i x r e p r e s e n t i n g en-ors i n the
is d e t e r m i n e d as p r o b a b i l i t y b y means o f the rays i m p i n g i n g
state
on an object.
estimates
a f t e r an
matrix representing
216
update
and
M-
is t h e
e r r o r s i n t h e state e s t i m a t e s
covariance
b e f o r e an
update.
T h e discrete process n o i s e m a t r i x g(, can be f o u n d
/•cos(ty/)
f r o m the c o n t i n u o u s process-noise m a t r i x Q a c c o r d i n g to
Ö* =
jcI'(r)QO''(r)dr
(50)
(43)
-V: =
When
robot
movement
is
along
a
straight
line
c o n s t a n t s p e e d y,-, t h e ,v c o m p o n e n t o f t h e s y s t e m
with
dynamic
m o d e l is g i v e n b y
+ V
Please
rü}cos((ül)
a
So that the system dynamics i n stale-space f o r m i n continuous t i m e
is g i v e n by
(44)
note
that
the
angular
speed
co=0.
The
system
0
1 0
0
0
0
'0
r
_0
0
-a
(51)
O O O I
d y n a m i c s i n s t a t e - s p a c e f o n n is g i v e n b y
X
0
0
ffl
0
0
X
(45)
k
T h e s y s t e m t r a n s i t i o n m a t r i x 0"t
is c o m p u t e d f r o m i n v e r s e
L a p l a c e t r a n s f o n n o f the f o r m
W h e r e t h e s y s t e m d y n a m i c s m a t r i x F is g i v e n b y
f =
ro
0
r
<t>:(,) = L - ' { { s / - F J - ] } .
(52)
(46)
0
where (sl-FJ
The system transition matrix 0
is c o m p u t e d
from
inverse
is g i v e n b y
-1
".5
0
0 "
laplace t r a n s f o r m o f the f o r m
si -
(I)(t) = r ' { ( i / - F ) - ' } = e "
0
F.
0
1-0
0
J
-1
0 -fo
0
.Ï
.Ï
0
T'
'1
0
(47)
(53)
1
The inverse o f (53) yields
Above
T is t h e s a m p l i n g t i m e i n t e r v a l o f t h e
perception
m e a s i u e m o n t s . I n t w o d i m e n s i o n a l n a v i g a t i o n s p a c e , i.e., xy
.V
plane, the system transition m a t r i x becomes
"0
r
0
1 0
0
0
0
0
s'
+ (O
1
0
0
s{s
-V CO )
0
(54)
17-1
0
1
a n d t h e c o r r e s p o n d i n g state v e c t o r X i s g i v e n b y
and the inverse Laplace
h-ansition m a t r i x as
t r a n s f o r m o f (54) gives
the
syslen
I n t h e c a s e o f o ) ^ , i.e., c i r c u l a r m o v e m e n t w i t h an a n g u l a r
v e l o c i t y , w e c o n s i d e r t h e g e o m e t r y s h o w n i n F i g u r e 9.
J
sin(f3r)
a
cos(fl)r)-l
0
COS((M7")
-sin(fflr)
g
cos(t»r)-l
(55)
sin((y7')
CO
0
sin(a)7')
During
the
0
robot
cos(ai7')
navigation
we
have
system dynamics models; namely rectilinear
and
angular
rotation
cases.
To
endow
the
obtained
two
straight-ahead
robot
lo
bc
a u t o n o m o u s the a n g u l a r v e l o c i t y s h o u l d b e c o m p u t e d d u r i n g
the
0 (Blobal coordinale
syslem)
navigation.
I f the
significant angular
perception
velocity,
the
measurements
system
yield
dynamics
a
model
s h o u l d s w i t c h f r o m l i n e a r t o n o n - l i n e a r . I t is i n t e r e s t i n g to
Fig. 9
R o b o t navigation d e v i a t i n g f r o m a straight line
n o t e t h a t i f i n ( 5 5 ) ffl=0, t h e n i t r e d u c e s t o ( 4 8 ) w h i c h is the
transition
A t the local coordinale system, the slate variables are
matrix
for
linear
case.
In
other
words,
(55)
r e p r e s e n t s i n h e r e n t l y t h e l i n e a r case, as w e l l as t h e r o t a t i o n a l
217
r o b o t n a v i g a t i o n . I f t h e a n g u l a r v e l o c i t y is c o m p u t e d at e a c h
step o f n a v i g a t i o n
and
i f it is n o n - z e r o ,
along a non-linear
trajectory with
f r o m linear trajectory. T h e
robot
moves
each t i m e a d e v i a t i o n
linear and
i l l u s t r a t e d i n F i g u r e 10 a n d F i g u r e
the
non-linear
cases
9
t r a j e c t o r y .v m a y t h e n b e w r i t t e n as
.V = .v„ + A,v
(59)
are
Hence, (57) and (58) become
11.
T o c o m p u t e t h e a n g u l a r v e l o c i t y a> at e a c h s t e p , i t is a l s o
s e l e c t e d as a s t a t e v a r i a b l e , so t h a t t h e s t a t e v a r i a b l e s
vector
g i v e n b y ( 4 9 ) is m o d i f i e d t o
[.V
y
V,
However,
v„
.v„ + AA- = ƒ (.v„ + A A ) -t- iv
z = h{x.,
co]
(56)
T h e T a y l o r s series e x p a n s i o n
i n t h i s case t h e s y s t e m t r a n s i t i o n m a t r i x i n ( 5 5 )
A-„-^-AA-«ƒ(A„)-^
b e c o m e s n o n - l i n e a r w i t h r e s p e c t t o ca. I n t h i s case K a l m a n
filtering
equations
should
be
k n o w n as e x t e n d e d K a l m a n
linearized.
This
is
a
(60)
+ txx) + V
[9/1
yields
......
f L9A
(61)
form
z = h(x\+
filtering.
—
^x + v
where
'dh,
Sf,
dx,
0
^
(qlobal coordinate
syslem)
¥L
Sf,
s.v,
dv.
dh,
dx.
Sx,
dx.
.
dl,
dh.
dh.
dx,
dx,
, = H =
Sx '••
(62)
I f the reference t r a j e c t o r y x„ is chosen to satisfy the d i f f e r e n t i a l
equation
F i g . 10
R o b o t n a v i g a t i o n d e v i a t i n g f r o m a straight line
(63)
A.v = y(.v„)
then, f r o m (61) the linearized m o d e l becomes
A „ + Ax
a f { x j
Av -I- H'
(64)
z = A(.vJ +
0 (Qlobal coordinate
•yitem)
X
AY + V
I n v i e w o f ( 6 3 ) and ( 6 4 ) , the system d y n a m i c s
discrete f o r m f o r extended K a l m a n
Fig. 11
filtering
cos{coT) -1
C0S{<3}T)T h e n o n - l i n e a r s t a t e - s p a c e f o r m as a set o f
first-order
0
non-
cos{coT) -1
l i n e a r d i f f e r e n t i a l e q u a t i o n s is g i v e n b y
1
-sin(^y^)
s'm(coT)
(65)
m
CO
,ï = ƒ (.v) + VI'
w h e r e .v is a v e c t o r o f t h e s y s t e m s t a t e s , f ( x ) is a
function
o f those
process. T h e
non-linear
s'm(ü}T)
(57)
states,
measurement
fimction
and
w
is
equation
a
random
t o be
A b o v e , («v, ojy, .. are g i v e n by
(58)
is a n o n - l i n e a r m e a s u r e m e n t m a t r i x , v is a z e r o -
mean r a n d o m process.
is r e f e r r e d
t o as
the
0).
a
o f the states a c c o r d i n g to
W e a s s u m e t h a t a n approximate
cos(coT)
0
zero-mean
is c o n s i d e r e d
= -
This
0
y
0
non-linear
+ V
w h e r e h(x)
co^
CO
CO
Extended Kalman Filtering
= h(.x)
in
Robot n a v i g a t i o n d e v i a t i n g f f o m a straight line
Ü
3.2.
matrix 0
becomes
t r a j e c t o r y jr„ is a v a i l a b l e .
reference trajectory. T h e
actual
218
sinfyr •
(7- cos ror - ^
) x+l-TsmcoT
...
cosöjr-1
•
)>
(66)
OJ.
-
- sin
(oT)
X -
1 /T- •
T
O, ^ — (FsincoT
(oT y
cos
(67)
l-cosryT" * , _
...
).v+ {-TcoscoT
siniyT" '
)y
opfimal
estimation
Kalman
filtering
of
the
state
variables.
.x(k + l\k)
= A{k).x(k I k)
P{k + \\k)
relevant
=
A{k)P(k\k)A{kY,
(68)
(73)
K{k + 1) = ƒ>(* + 1 [ k)C{k + \Y[C(k
+ \)P(k +
C(k + \\kY
coswT x-sin
The
e q u a t i o n s are as f o l l o w s .
a>T y
+R(_k +
\\k)
\)Y'
(69)
and,
I n t h e e x t e n d e d K a l m a n filter o p e r a t i o n , t h r e e m e a s u r e m e n t s
x{k + I'l /( + 1 ) = [ / - K(k + \)C{k + \)]x(k + \\k)
are c o n s i d e r e d . R e f e r r i n g t o F i g u r e 9 t h e s e are
+ K(k + \)z(k + \),
P(k + \\k + \) = [I-K{k
= arclg {y I x) = arctg{x^ I .v,)
(74)
+ \)C(k + \)]P(k + \\k)
angle
V
distance
(70)
w h e r e 1 is t h e u n i t y m a t r i x ; P is t h e e r r o r v a r i a n c e ; K ,
K a l m a n gain.
velocity
An
N sensor m u l t i r e s o l u t i o n a l d y n a m i c system can
be
described b y
F r o i n ( 4 3 ) , the l i n e a r i z e d m e a s u r e m e n t m a t r i x i n tenris o f
state v a r i a b l e s b e c o m e s
.T'"'(^,.+l)='^"^'(^Jx'^'l(/t,),
(75)
O
0
O
O
i =
(VI)
l,...,N
w h e r e i = N is the h i g h e s t r e s o l u t i o n l e v e l , s o t h a t
O
^k'"(^A.)]=o,
E W ' ^ - \ k , Y r \ l , Y ] - Q ' " \ k , } , k =I
A b o v e Xl, Xl, xj, . V j are t h e s t a t e v a r i a b l e s w h i c h are d e f i n e d
as .v,=.v, X2=y, X}=v„
a n d x^=v„
= 0
(76)
k*l
respectively.
R e f e r r i n g t o t h e m e a s u r e m e n t s z ' ' ' ( k j ) at d i f f e r e n t
resolufion levels, w e write
3.3. Multiresolutional Kalman Filtering
In
the
preceding
subsection
the
system
discrete f o n n h a v i n g been established,
multiresolutional implementation o f Kalman
described.
discrete
treated
The
the
filtering
multiresolutional decomposition
wavelet
in
dynamics
i n this section
transformation. Wavelets
literaUire
[27-31].
The
w i l l be
is d u e
theory
research
in
the
is
£:[w"'(/t,)]=o,
E[w^'\ki)\'J-'\liY]=Q^'\ki),
k =l
to
= 0
(77)
k ^ l
well
concerns
: and
m u l t i s e n s o r f u s i o n a n d s o m e s i m i l a r w o r k s are r e p o r t e d i h
the l i t e r a t u r e [ 3 2 , 3 3 ] . H o w e v e r , a p p l i c a t i o n to
robotics
is
a
novel
applicahon,
according
autonomous
to
the
best
k n o w l e d g e o f the author.
E[y'-'\ky''{lf]=R''\k.), k =l
F o r the sake o f s i m p l i c i t y o f representation, w e
= 0
the f o l l o w i n g d y n a m i c s y s t e m
.ï(/c + 1 ) = A(k)x(k)
z(k)
= C(k)x(k)
+
B(k)w(k)
(72)
+ v(lc)
is s y s t e m m a t r i x ; B(k),
and
process noise input m a t r i x . The noise
v(k) h a v e t h e p r o p e r t i e s as g i v e n i n ( 3 6 )
and (38). T h e o b j e c t i v e o f K a l m a n
the
measurements
information in
k*l
T h e m e a s u r e m e n t s at d i f f e r e n t r e s o l u t i o n l e v e l s are
w h e r e k b e i n g t h e d i s c r e t e t i m e , x ( k ) is state v a r i a b l e ; A ( k )
s o u r c e s \v(k)
(78)
consider
filtering
order
is t o
to
combine
generate
an
219
in
figure
in
a data
shown
12 a n d t h e w a v e l e t d e c o m p o s i t i o n o f s t a t e v a r i a b l e s
b l o c k is s h o w n
in
figure
13. P l e a s e n o t e
that
p r o p a g a t i o n as t o state e s t i m a t i o n o c c u r s b l o c k - w i s e a n d t h e
highest
resolufion lovel.
wavelet
transform and
Decomposifion
reconstruction
by
o f the
inverse
states
by
wavelet
t r a n s f o n n is e f f e c f i v e i n t h i s s c h e m e d u e t o b e t t e r e s t i m a t i o n
o f t h e e r r o r c o v a r i a n c e as t o t h e s t a t e e s t i m a t i o n . T h i s i s
data block
A^''\0)
(0)
e x p l a i n e d i n the text.
data block
data block
1
2
(l)/('"''(0)
(l)/(l"'l(0)
i
+ B,Q,Bl
0
0
2
1
4
3
(82)
1=1
5
i=2
w h e r e B o a n d Q „ are g i v e n b y
0
1 2
F i g . 12
4
3
5
i=3
6 7
time
^l»l.(l)al»l(0)
Measurements a! different resolution levels
0
0
fl"''(l)
0
(83)
data
block
1=1
xi'l(ki)
a=^'agb"^i(o),e""(i), e'"'(2),e''''(3).
X"l(k!)
(84)
and c a r r y i n g o u t the m u l t i p l i c a t i o n s , w e o b t a i n
xl'l(k,)
xl>i(kM)
time
x"l(k,n)
index
xi>l(kM)
lu
(^IA'1)2
F i g . 13
W a v e l e t d e c o m p o s i t i o n o f state variables i n a dala
block
(85)
(^lA-l^J
W i t h i n a dala b l o c k , the state v a r i a b l e s a l r e s o l u t i o n l e v e l i
_{A^'^y_
are d e s i g n a t e d as
x^'^k{i)
x^'^kii
+1)
(79)
(^fV,)2
Pl
x''''A:(/ + 2 ' - ' )
+ B„QX,
(^I«l)3
(86)
(^[«1)4
w h e r e Bo i s g i v e n b y
(80)
x"Vc(/):
0
0
g[3]
0
g[31
^[31^131
w h e r e m i s d a l a b l o c k i n d e x a n d s is t h e n u m b e r o f state
variables.
For
multiresolutional
the
recursive
Kalman
implementation
filtering
is s t e p - w i s e
of
0
0
(87)
0
B
the
described
below.
A.
Initialisation:
gI31
X.
fV]
0
0
0
g(3,
gI3,
0
gl3,
0
0
g(3,
0
0
0
(88)
(81)
n^'"'(y)
_y=o
220
0
It is t o b e n o t e d t h a t a b o v e , t h e z e r o e l e m e n t s i n g „ a n d
are
sub-matrices
with
the
same dimensions
as
the
B„
W h e r e t h e m a t r i x e l e m e n t s b„
are g i v e n b y
error
.y-i
v a r i a n c e Q ' ' ' a n d the a n d the n o n - z e r o e l e m e n t s i n S „ . T h e
~ n -^(""^ + j
p r o p a g a t i o n f r o m one data b l o c k to o t h e r c o n s e c u t i v e b l o c k
\)B(mM
+ 11-
+ p
(93)
+ q - 2 )
is g i v e n b e l o w .
T h e e r r o r v a r i a n c e m a t r i x is c o m p u t e d f r o m
B.
P r o p a g a t i o n f r o m
d a t a
blocli
m
lo
m + 1 :
H e r e , w e are c o n c e m e d
w i t h the p r o p a g a t i o n o f the w h o l e
d a t a b l o c k X„,|„,f^'' a n d
its e r r o r c o v a r i a n c e
P,,,/,,,""
at t h e
h i g h e s t r e s o l u t i o n l e v e l w h i c h is 3, has t o b e p r o p a g a t e d
X.n+ji,,/"-' a n d Pm+iiJ"'-
"e''"l(mjW),(0),ö''"'(mA^ + l),
QÜ, =
I
(94)
to
T h i s is c a r r i e d o u t as f o l l o w s .
'and 7
F r o m a b o v e e q u a t i o n s zf,„
= 4 : c k r + 5 : ö „ , k f
w h e r e N a n d A / b e i n g N=3
'
.. ^
e'"l('nM + 2 W - 2 )
diag
a n d M=2"''=3,
(90)
respectively
/4[">(mM-i-y),/l'"'(mM + y + l),
0
(zll^l)"
0
0
0
0
are g i v e n b y
0
0
0
0
(95)
0
0
(91)
= d i a g
\ - \
A''''\mM
+ j
+
M - \ )
0
(92)
6,, ..
(^[3]y
g[3i
(^[31)3 ^[31
0
0
5[3i
(^[3|yg[31
(^[3])lfi[31
(^[31)3 ^[3,
•(^[31)2 ^[3,
0
0
(z4l")°5[^l
(^[3iy5t3i
0
g[31
0
0
(^[31yg(31
(96)
gl31
respectively.
The
predicted
state
variables
X„,+||,„'''
and
error
/!,] = 0.5[V2 V2]
h=[h„
c o v a r i a n c e s Pxxm+i|m h a v i n g b e e n c o m p u t e d at t h e r e s o l u t i o n
l e v e l s i , t h e y are u p d a t e d w i t h t h e m e a s u r e m e n t s Z „ + i ' ' ' at
the
respective
obtained
by
resolution
means
level.
of
The
wavelet
resolution
levels
decomposition.
The
d e c o m p o s i t i o n s c h e m e is s h o w n i n f i g u r e 14 w h e r e H a n d
are
and
filters.
The
and
are l o w - p a s s
They
high-pass
decomposition
and high-pass reconstruction filter filters.
satisfy the quadrature
m i r r o r f i l t e r {QMF)
-
V2]
&m=(-l)*/',o-*,
>
k:
0,1
p
r e c o n s t r u c t i o n s c h e m e is s h o w n i n f i g u r e 15 w h e r e H
G'
low-pass
g = [g, g,] = 0.5^2
are
and
G '
= G''
H' = HY
property
w h i c h is
[i-n
H'H + G'G = I
'HH'
GH'
HG'~
GG'
']
.[/]
0
0
.[/-/]
1
H
-[/-2]
.ii-3]
w h e r e I is t h e u n i t ( i d e n t i t y ) m a t r i x . T h e w a v e l e t m a t r i x
operator H and the scaling m a t r i x operator G i n the
F i g . 14
decomposition contain two-tap Haar filters where
block
221
W a v e l e t d e c o m p o s i t i o n o f slate variables in a •
y[i-2\
[i-3] -^m
As
G
m
.[«]
updated.
W a v e i e t reconstruction
o f state variables
in a
are correlated, the
and
also
and
not
T h e m i n i m u m variance K a l m a n g a i n m a t r i x K„,+|'''
= /',«..,.,'''C.,,''>^(C.,,'''P,,,,„,„„"C.,7"^ + R j ' Y
(100)
data
block
It
Y„,^nJ'*"
Pxi;,niJ''
matrices
at e a c h l e v e l , is d e t e n n i n e d b y
K j '
F i g . 15
(99)
Prxm*i\J'' a r e
u p d a t e d . H o w e v e r , t h e w a v e l e t c o e f f i c i e n t s Y,„mJ''
their covariance
matrices
Pn„ti\J''
however
are
m
[i-2\
X
m
X,„+,|,„''7 and Y,„^,J'',
covariance
[/-/]
H
!'=U-Kj'cJ')Pxx„.J'
''x.,,.
-^m
w h e r e the m e a s u r e n t m a t r i x C „ , . i ' ' ' a n d R „ + i ' ' ' are g i v e n b y
is t o n o t e
that
after the
state v a r i a b l e e s t i m a t i o n s
are
Ci"[(m + l ) 2 ' - ' ] , C l " [ ( m + l)2'-' - 2 ' - ' +1],
c a r r i e d o u t at t h e r e s p e c t i v e r e s o l u h o n l e v e l s , t h e f u s i o n o f
t h e i n f o r m a t i o n is c a r r i e d o u t t h r o u g h t h e e r r o r c o v a r i a n c e as
,Cl"[(m + l) + 2 ' - ' + 1 ]
(101)
described b e l o w . H o w e v e r , n o n n a l l y f o r the estimation o f
the
highest
level
state
variables
reconstruction
is
not
n e c e s s a r y s i n c e i t is a l r e a d y a v a i l a b l e .
C.
E s t i m a t e
The
basic
u p d a t i n g :
update
scheme
fl"'[(m
+ l ) 2 ' - ' ] , « i " [ ( m + l)2'-
-1], •
1(102)
, / ; " ' [ ( / » + 1) + 2 ' - ' + 1 ]
for dynamic multiresolutional
f i l t e r i n g is s h o w n i n F i g . 16 w h e r e at e a c h r e s o l u t i o n a l l e v e l ,
w h e n the m e a s u r e m e n t
is a v a i l a b l e , t h e s t a t e v a r i a b l e s
are
u p d a t e d a n d w h e n t h e b l o c k is c o m p l e t e t h e i n v e r s e w a v e l e t
transform
and
fiision
is
transfonnafion wavelet
perfonned.
During
c o e f f i c i e n t s Y,„^,^„^/''
the
inverse
saved
aside
As
X„^nJ"
covariance
updated.
and Y,„^nJ«,
However,
F,,,,,,,,/'*'' a r e c o n e l a t e d , t h e
Pxrm*iyJ''
matrices
the
wavelet
are
also
c o e f f i c i e n t s Y,„^nJ''
t h e i r c o v a r i a n c e m a t r i c e s Pnm*i\nP
are u s e d .
Prxm*i\J''
and
and
are n o t u p d a t e d .
The
m i n i m u m v a r i a n c e K a l m a n g a i n m a t r i x K „ + | l ' ' at each
level,
is d e t e r m i n e d b y
|A'|
X
update wilh Z
fusion
1
m+l lni+/
P
1
1''"'
t
'^...1
-
' .V.Vm.II,.
L,,.!
':Ï,V„„I|„
t-,„l
+n„,|
j
(103)
Tvelet transfon
|A'-/|
update with Z
P
I'l
I'l
V
-^ni+l \m
P
fusion
m+l Im+Z
p
-* ni+/
1
update witli Z
m*l lni+/
p
-^"'+'!'"+
O n c e , w i t h i n the m o v i n g w i n d o w , the sequences o f u p d a t e d
state
7;,„/h*'''''"
1
to
I'l
and
error
1=1.2,...N,
propagation
1
'"^
in a
an
[NF]
d
covariance
^„ni\m+i^'^'"^
•* m+2 \ni+l
W a v e l e t d e c o m p o s i t i o n o f state variables
generate
X...
enor
P
F i g . 16
variables
C
covariances
X
^„ ^ / * ' ' "
and
are d e t e n n i n e d , t h e y m u s t be f u s e d
V
^
I
l'V-/|
wavelet transfi
1
optimal
x
X.,
,
and
^
•
For the m i n i m u m f u s i o n
X...
,
the
fused
estimate
is c a l c u l a t e d as
data
block
(104)
Explicitly
in
p
XXm+\\m
p
li]
p
^
p
m
[NF]
XYm+\\»,
in
w h e r e Ihe m i n i m u m f u s i o n error c o v a r i a n c e
(97)
becomes
Explicitly
y
A,„„|,„,|
- V
''I J . [ ' V 7
I'J
!'•) V
I'U
-^,„„^,„
+ y f , „ , , (Z,„,,
- C , „ ^ , A-,,,^,,,,, )
/J„+j|,„+;
(105)
[NF]
(9g^
T h e fused estimate
and
o f both predicted
f o r 1=1,2,..,N.
222
X
m+l\i,n-l
('VI
Z,,,,,.,!,,/'''
is a w e i g h t e d s u m m a t i o n
and updated
Jt',„^.,|,„^.,
i^.n
T h e s u m o f the w e i g h t f a c t o r s equal to the
identity
/.
This
can
-^i;+;[m+/'given
X,..
be
seen
above
into
by
subshtution
of
the
expression
of
trajectory ref (r), E K F [bj anrJ measurements
in ( 1 0 4 ) .
so
and
not
4.
Autonomous
Robot
Navigation
[i]
The
experiments
simulated
have
experiments
been
in
the
carried
virtual
out
with
reality.
The
the
state
v a r i a b l e s v e c t o r is g i v e n b y ( 1 0 7 ) w h i l e t a k i n g ( = N .
"jrVl
(106)
.I'V]
F i g . 17
The overall Robol Irajectory with measurement {* signs),
and Kalman filtering estimation (• signs).
Explicitly,
Irajeclory ref [rj, E K F [bj and measurements [gj
.v'''''i(A') = [.ï,.v,y,j;,o]
where
a
is t h e a n g u l a r
rate and
i t is e s t i m a t e d
during
m o v e . W h e n the r o b o t m o v e s i n a s t r a i g h t l i n e , the
rate becomes
zero.
The
other
c o o r d i n a t e s a n d the r e s p e c t i v e
state variables
the
angular
are .v a n d
y
velocihes.
V i s i o n rays interact w i t h the e n v i r o n m e n t w h i c h
includes
side w a l l s a n d h i n d r a n c e a l o n g the f o r w a r d d i r e c t i o n . F o r an
environment
a p e r m a n e n t f i c t i v e v e r t i c a l p l a n e s at
certain
t h r e s h o l d d i s t a n c e s a r e c o n s i d e r e d . I f a d i s t a n c e is m o r e t h a n
t h e t h r e s h o l d v a l u e i t is t a k e n t o b e t h e t h r e s h o l d v a l u e . T h i s
m e a n s i f t h e r e a l e n v i r o n m e n t is i n f r o n t o f t h a t f i c t i v e p l a n e
t h e a c t u a l d i s t a n c e m e a s u r e d b y t h e s e n s o r is c o n s i d e r e d
distance.
If
predetermined
without
any
the
measured
distances
are
beyond
as
the
threshold distances the r o b o t m o v e s f o r w a r d
angular
velocity. The
position
estimation
of
r o b o t is a c c o m p l i s h e d b y m e a n s o f K a l m a n f i l t e r i n g v i a t h e
s l a t e s o f t h e r o b o t n a v i g a h o n d y n a m i c s as p r e s e n t e d e a r l i e r .
T h e o v e r a l l r o b o t t r a j e c t o r y is s h o w n i n f i g u r e 17
where
Fig.' 18
Enlarged robot trajectory, measurement, and K a l m a n
filtering
estimation. The * sign is f o r measurement, o sign f o r
the reference and • sign is the estimated t r a j e c t o r y .
t h e r e a r e t h r e e l i n e s p l o t t e d b u t o n l y t w o o f t h e m are v i s i b l e .
enls [gj
T h e l i n e m a r k e d b y * s i g n represents the m e a s u r e m e n t data
set. T h e l i n e m a r k e d b y •
estimation. The
is t h e e x t e n d e d K a l m a n f i l t e r i n g
line indicated
by
o sign
is the
reference
t r a j e c t o r y . T h e s e l i n e s are n o t e x p l i c i t l y s e e n i n t h e f i g u r e .
For explicit
same
zooming
with
powers
experiments
for
i l l u s t r a t i o n o f the
figure
a
are
experimental
different
given
zooming
in
figures
i t is s e e n t h a t , t h e K a l m a n
estimation
of
the
trajectory
outcomes
ranges
18-22.
the
and
the
Fron)
the
filtering
is e f f e c t i v e
from
perception
m e a s u r e m e n t . E s t i m a t i o n s are m o r e a c c u r a t e i n t h e s t r a i g h t ahead m o d e
seen i n
figure
22, compared
to
the
bending
m o d e s . T h i s can be e x p l a i n e d b y n o t i n g that i n the s t r a i g h t ahead
mode
m a U i x is
matrix
angular
the
fixed.
the
system
H o w e v e r i n the n o n - l i n e a r m o d e the
angular
system
is a p p r o x i m a t e
velocity.
v e l o c i t y is z e r o
due
I n this
to
the
case t h e
and
estimation
error o f
difference between
the
the
r e f e r e n c e t r a j e c t o r y a n d t h e e s t i m a t e d t r a j e c t o r y is m a r k e d l y
separated
due
to
the
approximation
error
caused
by
the
F i g . 19
Enlarged
Robot
trajectory,
measurement
Kalman
f d t e r i n g in b e n d i n g mode. T h e * sign is f o r measurement, o
T a y l o r ' s series e x p a n s i o n and e n s u i n g l i n e a r i z a f i o n .
sign f o r the reference and • sign is the estimated t r a j e c t o r y .
223
trajectory retlr). E K F [b] and measurements [g]
5. Discussion and Conclusion
P e r c e p t i o n is an i m p o r t a n t c o n c e p t i n h u m a n v i s i o n . I n t h i s
work,
the
detailed
processing
description
perceptual
of
Kalman
infonnation
is
filtering
described
of
for
a
p e r c e p t u a l r o b o t . E s p e c i a l l y , t h e p e r c e p t i o n is c o n s i d e r e d t o
b e a p r o b a b i l i s f i c c o n c e p t a n d d e f i n e d as s u c h [ I I ] s o t h a t
the
probabilistic
consideration
in
perceptual
robot
is
necessary f o r h u m a n - l i k e o p e r a t i o n s . I n t h i s w a y the r o b o t i c
movement
can
be
coupled
with
cognition
of
the
environment by a robot in a relatively more convenient way.
I n this c o n t e x t , B a y e s i a n p e r c e p t i o n a n d c o g n i t i o n sUidies
[ 3 4 ] can be m o r e substantiated
w i t h m o r e accurate priors.
T h i s is e s p e c i a l l y i m p o r t a n t w h e n o n e d e a l s w i t h a c e r t a i n
scene but f r o m a d i f f e r e n t vantage p o i n t .
I n v i s i o n r o b o t i c s a u t o n o m o u s n a v i g a t i o n is a c h a l l e n g i n g
Fig. 20
Enlarged Robot trajectory, measurement Kalman
filtering in bending mode. The * sign is for measurement, o
sign for the reference and • sign is the estimated trajectory.
issue.
Vision
-TObot
perception-based
can
sensory
navigate
autonomously
by
i n f o r m a t i o n so t h a t i t c a n
react
better w i t h respect to the e n v i r o n m e n t a l a n d c i r c u m s t a n t i a l
s i U i a t i o n s . B e t t e r is i n t h e sense o f o b s t a c l e a v o i d a n c e a n d
trajectory ref [rj, E K F [b] and measurements [gj
-6 2
-
way
finding
w i t h o u t crash.
Another
-1-
interesting
navigation
study
measurements
is
feature
the
of
the
observer
percepUial
form
r e f e n i n g t o d i s t a n c e (r),
[35]
a n g l e (9)
robot
of
the
and
the
s c a l a r v e l o c i t y ( v ) . T h e s e m a y p a r t l y be c o m p u t e d f r o m t h e
e s t i m a t e d state v a r i a b l e s , i n t h e c a s e o f u n a v a i l a b l e a c t u a l
measurement
counterparts.
The
work
is
a
clear
d e m o n s t r a t i o n o f t h e w o r k i n g state o b s e r v e r s a n d t h e r e f o r e
an i m p o r t a n t hint f o r the p h y s i c a l r o b o t i c s exercises.
-7.2
I t is
r e m i n d e d t h a t , t h e p r e s e n t w o r k is an a u t o n o m o u s r o b o t i c s
-I.i
s i m u l a t i o n i m p l e m e n t e d i n a v i r t u a l r e a l i t y a n d the results
„T'T"^^
r e p o r t e d are t h e c o m p u t e r e x p e r i m e n t s .
A p p l i c a t i o n s o f v i s i o n r o b o t are c o u n t l e s s
well
documented
i n the
and they
literature. Perception
is a
are
new
d i m e n s i o n i n a v i s i o n r o b o t as s u c h i t s v i s i o n c a n be s h a p e d
f o r application dependent
Fig.21
Enlarged Robot trajectory, measurement Kalman
filtering in bending mode. The * sign is for measurement, o
sign for the reference and • sign is the estimated trajectory.
v i s i o n robot. T h e investigations presented
motivated
by
architecmral
trajectory ref [r], E K F [b} and
.2.1:
-
-2. ISS
I
-
v i s i o n p r o p e r t i e s . I n t h i s sense,
p e r c e p t i o n c a n be seen as a m e a s u r e o f v i s i o n q u a l i t y o f a
measuring
design
from
the
i n this w o r k
perception
aspects
d i f f e r e n t vantage
of
are
an
points
and
a l t o g e t h e r t o o b t a i n a c o m p r o m i s e a m o n g t h e m , as t o a
final
d e c i s i q n - m a k i n g f o r t h a t d e s i g n . S u c h a task i n o n e h a n d is a
-
1 -
routine work
- - J -
f o r a r o b o t i n virUtal r e a l i t y a n d the actual
(real) scene h a v i n g been represented i n the v i r t u a l r e a l i t y the
s c e n e c a n b e p e r c e i v e d i n m a n y d i f f e r e n t w a y s as d e s i r e d .
-2.2!- -
T h e s a m e t a s k f o r a h u m a n is
-2,25
more
-2.31
importantiy subjective
measurement
\
-2.35
inconclusive
-
and
due
firstly
therefore
to
extremely tedious but
i n the
can
probable
sense
be
of
perception
deemed
inconsistencies.
to
be
This
is
because o f the c o m p l e x i t y o f the task due to the n u m b e r o f
.2.Ï'- •2,46
-
- -
-| -
o b j e c t s i n v o l v e d i n t h e scene. H e r e t h e o b j e c t s m a y b e a
n u m b e r o f d w e l l i n g units subject to m a r k e t i n g by a building
\
project
•2,5 i
I
Since
a
number
of
locations
and
t h e t a s k is a m u l t i - o b j e c t i v e o p t i m i z a t i o n s u b j e c t t o these
-2-55 17,2
developer.
a r c h i t e c t u r a l r e q u i r e m e n t s c o n d i t i o n o n p e r c e p t i o n aspects,
17.3
i m p o s i t i o n s as
17,4
constraints.
Also,
i n this situation one
is
interested i n a s o l u t i o n on a Pareto front. Such complexities
Fig. 22
Enlarged Robot trajectoiy, measurement Kalman
filtering in bending mode. The * sign is for measurement, 0
sign for the reference and • sign is the estimated trajectory.
224
are
today
algorithms
effectively
[36-38].
dealt
Some
with
thanks
endeavours
to evolutionary
i n the
context
of
a r c h i t e c t u r e are r e p o r t e d i n [ 3 9 , 4 0 ] . T h e p r e s e n t r e s e a r c h is
another endeavour a l o n g this line.
References:
t'^]
M.
S. B i t t e r m a n n , I . S. S a r i y i l d i z ,
and Ö . C i f t c i o g l u ,
" B l u r in H u m a n V i s i o n and Increased V i s u a l Realism i n
M . Beetz, T . A r b u c k l e , T . B e l k e r , A . B . Cremers, D .
V i r t u a l E n v i r o n m e n t s , " in Lecture
Schuiz, M . B e n n e w i t z , W . B u r g a r d , D . H a h n e l , D . F o x ,
and H . Grosskreutz, "Integrated, plan-based
autonomous
InteUigent
robots
Systems
in
human
control o f
environments,"
IEEE
and
navigation
for
and
Cybernetics
- Part
Trans,
robots
on Systems,
B: Cybernetics,
[19]
in
Man
v o l . 2 8 , pp. 316-
M . W a n g and J. N . K . L i u , " O n l i n e path searching f o r
on Robotics, A u t o m a t i o n and M e c h a t r o n i c s , Singapore,
[21]
Ollero,
Intelligent
Mobile
Robot
[22]
in Mobile
E. A b l e and D . S ö f f k e r , " A concept
approach
to
build
Robotics.
systems,"
D.
Söfflcer,
"A
P.
S.
Maybeck,
Application
Conf
P.
S.
on
Systems,
Man,
D.
and
Söffker,
Cybemetics,
"A
architecture to realize autonomous
M.
Models,
Estimation
Stochastic
Models,
Estimation
Kalman
'-Techniques:
Filtering
and
OlherDigital
learning
State
Estimation.
N e w Jersey: W i l e y
Package.
W.
Sorenson,
Kalman
Filtering:
Theory
T. K a i l a t h , Lectures
on
Wiener
and Kalman
Filtering.
[27]
Y . B . S h a l o m , X . R. L i , and T . K i r u b a r a j a n ,
and
Application
to Tracking
Estimation
and Navigation.
Montreal:
W i l e y , 2006.
S. M a l l a t , A Wavelet
Tour
of Signal
Processing.
S.
G.
Mallat,
"A
theory
for
m u l t i r e s o l u t i o n signal
decomposition:the w a v e l e t representation," IEEE
Taipei, Taiwan,
2006.
Analysis
and
Machine
IntelUgence,
[30]
H o l z a p f e l , P. Steinhaus, and R. D i l l m a n n , " A c o g n i t i v e
architecture f o r a h u m a n o i d robot: A f i r s t approach,"
presented
at
2005
5th
lEEE-RAS
Int.
Conf
[31]
on
H u m a n o i d Robots, T s u k u b a , Japan, 2005.
to n e w concepts
[32]
c o n s t r u c t i n g autonomous
phenomenological
Journal
and Robotic
approach.,"
Systems,
IEEE
Int. C o n f . on Systems,
M a n , and
at
[34]
robots:
and
Inlelligenl
Cybemetics,
Their
a
new
kind
Applications,
Systems],
Control
H.
C.
o f tool,"
IEEE
Intelligent
[see
also
theory
i n Emerging
[36]
JEEE'
Systems
v o l . 1, S. Pennacchio,
Ed.
and
and
filtering
on Aerospace
Vision,
' D . C. K n i l l and W . Richards, Perception
Cambridge:
Cambridege
B . F r i e d l a n d , Control
Syslem
[37]
Intelligence
ftision
Algorithms
K . D e b , Multiobjective
Algorithms:
[38]
Palenno:
Jor
and Inlelligenl
Advanced
Informatics
[39]
area
Solving
Optimization
using
Stochastic
vol. I I ,
of Integrals
M.
at
[40]
N e w Y o r k : A c a d e m i c Press, 1965.
Probability,
Processes.
Evolutiona/y
and
Evolulionaiy
"Controlling
its i m p a c t
on
Notes
Computer
on
Multi-Criterion
the
Optimization.
B i t t e r m a n n , I . S.
Tools
and
Sariyildiz,
and
O.
Ciftcioglu,
Pareto O p t i m a l D e s i g n , " presented
Methods
of
Competitive
Engineering
( T M C E ) , I z m i r , 2008.
I . S. Gradshteyn and I . M . R y z h i k , Table
and Products.
Multiobjective
and K . Tanaka,
o f solutions
"Performance-Based
pp. 688-700, 2007.
Papoulis,
London: McGraw-
H e i d e l b e r g : Springer, 2 0 0 7 .
Computational
(JACIII),
Bayesian
John W i l e y & Sons, 2 0 0 1 .
H . Sato, H . E. A g u i r r e ,
dominance
Sariyildiz,
o f perceptions a p p l i e d to robot
of
as
Boston: K l u w e r A c a d e m i c Publishers, 2003.
Science,
Journal
Pattern
C. A . C. C o e l l o , D . A . V e l d h u i z e n , and G . B . L a m o n t ,
139-153.
navigation,"
Kalman
U n i v e r s i t y Press,
Design.
p e r f o r m a n c e o f M O E A s , " i n Lecture
"Multiresolutional
wavelet
Electronic
C. H . C h e n and P. S.
International Society f o r A d v a n c e d Research, 2007, pp.
Ö . C i f t c i o g l u , M . S. B i t t e r m a n n , and I . S.
using
cmd
A . C. L i , "Wavelet-Based
and Computer
EvoUilionaiy
perceptual
Robotics
Trans,
v o l . 29, p p . I 2 4 4 - I 2 5 I , 1993.
Hsin
Problems.
underlying
Technologies,
for
H i l l , 1986.
v o l . 15, pp. 2 5 - 3 1 , 2000.
perception
navigation,"
"Multiresolutional
Hong,
Methods
C a m b r i d g e U n i v e r s i t y Press, 2 0 0 0 .
1996.
[35]
Ö . C i f t c i o g l u , M . S. B i t t e r m a n n , and 1. S. S a r i y i l d i z ,
"Visual
L.
Inference.
B . A d a m s , C. Breazeal, R. A . B r o o k s , and B . Scassellati,
"Humanoid
Analysis:
P. W a n g , Eds. Singapore: W o r l d S c i e n t i f i c , 2 0 0 6 .
2006
T a i p e i , T a i w a n , 2006.
Systems
Time Series
RecognUion
Sariyildiz,
"Towards computer-based perception b y m o d e l i n g v i s u a l
AppUcations
F i l t e r i n g in Scale Space f o r Image F u s i o n , " in
191-205,2001.
perception: a p r o b a b i l i s t i c theory," presented
for Statistical
B o s t o n : Birkhauser, 1997.
D. B . Percival and A . T . W a l d e n , Wavelet
Systems,
[33]
v o l . 32, pp.
Ö . C i f t c i o g l u , M . S. B i t t e r m a n n , and I . S.
Wavelets
Analysis.
t r a n s f o r m , " IEEE
systems: A
engineering-oriented
of Intelligent
T. Ogden, Essential
and Data
D . S ö f f k e r , "From human-machine-interaction modeling
A.
Trans,
vol. 11,
pp. 674-693, 1989.
C. B u r g h a r t , R. M i k u t , R. Stiefelhagen, T . A s f o u r , H .
Series
New
Y o r k : Associated Press, 1999.
[29]
on Pattern
Cybernetics,
and
N e w Y o r k : I E E E Press, 1985.
on
M a n , and
and
Individual
Theoretical b a c k g r o u n d , " presented at 2 0 0 6 I E E E C o n f .
Systems,
and
New York:' Spnnger-Verlag, 1981.
Taipei,
part I :
Filtering
N e w Y o r k : A c a d e m i c Press, 1982.
Mendel,
Application.
cognitive-oriented
behaviour -
and
Interscience, 2006.
[28]
and
J,
Voill.
H.
to m o b i l e robots," presented at 2006 I E E E
Ahle
Filtering:
N e w York: Wiley,
Processes
Stochastic
Maybeck,
[25]
[26]
Taiwan, 2006.
MATLAB.
Vol I. N e w Y o r k : A c a d e m i c Press, 1979,
D . S i m o n , Optimal
cognitive-oriented
architecture to realize autonomous b e h a v i o u r - part I I :
Using
N e w Y o r k : I E E E , 1987.
for a cognitive-
autonomous
Cybernetics, B i g Island, H a w a i i , 2005.
and
Cambridge,
N e w Y o r k A c a d e m i c Press, 1970.
[24]
presented at 2005 I E E E Int. C o n f o n Systems, M a n and
Ahle
EsUmation.
H . J a z w i n s k i , Stochastic
Estimation
Methods
B e r l i n : Springer, 2 0 0 6 .
oriented
[23]
B e r l i n : Springer, 2005.
U . Nehmzovv, Scientific
E.
A.
Control,
Navigation.
E.
cmd Practice
Control.
2004.
A,
Optimal
2001.
[20]
autonomous robot n a v i g a t i o n , " presented at I E E E C o n f .
and
AppUed
M . S. G r e w a l and A . P. A n d r e w s , Kalman
Theory.
Cuesta
Computer
A . G e l b , J. F. Kasper, R. A . Hash, C. F. Price, and A . A .
Theory
333, 1998.
F.
on
M A : M I T Press, 1974.
autonomous
u n k n o w n environments," IEEE
Notes
Springer V e r t a g , 2007.
Sutherland,
v o l . 16, pp. 56-65, 2 0 0 1 .
G. O r i o l i o , G . U l i v i , and M . V e n d i t t e l l i , "Real-time map
building
Science:
[18]
Random
Variables
O. C i f t c i o g l u and M . S. B i t t e r m a n n , " S o l u t i o n D i v e r s i t y
in
and
N e w Y o r k : M c G r a w - H i l l , 1965,
Multi-Objective
Reality,"
presented
Optimization: A
at
IEEE
study
World
C o m p u t a t i o n a l Intelligence, H o n g K o n g , 2 0 0 8 .
225
in V i r t u a l
Congress
on
Download