Mobile Intelligent Systems 2004

Course Responsibility:
Ola Bengtsson
Course Examination
• 5 Exercises – Matlab exercises with a written report (groups of two students, individual reports)
• 1 Written exam – at the end of the period
Course information
• Lectures
• Exercises (w. 15, 17-20) – will be available from the course web page
• Paper discussions (w. 14-15, 17-21) – questions will be available from the course web page => Course literature
• Web page: http://www.hh.se/staff/boola/MobIntSys2004/
The dream of fully autonomous robots
[Images: R2D2, a question mark, C-3PO and Marvin.]
Autonomous robots of today
• Stationary robots
- Assembly
- Sorting
- Packing
• Mobile robots
- Service robots
- Guarding
- Cleaning
- Hazardous environments
- Humanoid robots
Trilobite ZA1, Image from:
http://www.electrolux.se
A typical mobile system
[Figure: a typical mobile system: laser scanners (bearing + range, bearing only), micro controllers, control systems, emergency stop, panels, motors with encoders, control signals (v, a), and a kinematic model H(r) relating the robot frame (Xr, Yr) to the world frame (Xw, Yw).]
Mobile Int. Systems – Basic questions
1. Where am I? (Self localization)
2. Where am I going? (Navigation)
3. How do I get there? (Decision making / path planning)
4. How do I interact with the environment? (Obstacle avoidance)
5. How do I interact with the user? (Human–Machine interface)
Sensors – How to use them
[Figure: readings from Encoder 1, Encoder 2, Gyro 1, Gyro 2, Compass, GPS, Laser, IR and Ultrasound all feed into a micro controller. How do we use all these sensor values for reaction / decision? => Sensor fusion]
Course contents
• Basic statistics
• Kinematic models, error predictions
• Sensors for mobile robots
• Different methods for sensor fusion: Kalman filter, Bayesian methods and others
• Research issues in mobile robotics
Basic statistics – Statistical representation – Stochastic variable

Battery lasting time: X = 5 hours ± 1 hour. X can take many different values:

Continuous – the variable can take any value within the bounds.
Discrete – the variable can take only specific (discrete) values.

[Figure: probability distributions P over lasting time [hours], one continuous and one discrete.]
Basic statistics – Statistical representation – Stochastic variable

Another way of describing the stochastic variable is by another form of bounds, its probability distribution:

In 68% of cases: x11 < X < x12
In 95% of cases: x21 < X < x22
In 99% of cases: x31 < X < x32
In 100% of cases: $-\infty < X < \infty$

The value to expect is the mean value => Expected value
How much X varies from its expected value => Variance

[Figure: probability distribution P over lasting time [hours] with the nested bounds marked.]
Expected value and Variance
EX  

 x. f
X
( x)dx


V X    X2   ( x  E X ) 2 . f X ( x)dx

EX  
 k. p
k  
(k )
X

V X    X2 
Lasting time for 25 battery of the same type
8
7
6

2


(
k

E
X
)
. p X (k )

K  
5
4
The standard deviation X is the
square root of the variance
3
2
1
0
0
5
10
15
20
25
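As a quick sanity check of the discrete formulas above, here is a minimal Python sketch; the battery-life values and probabilities are made-up illustration data, not from the lecture:

```python
import numpy as np

# Hypothetical discrete battery-life distribution: values k [hours] and p_X(k)
k = np.array([3.0, 4.0, 5.0, 6.0, 7.0])
p = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # probabilities sum to 1

E_X = np.sum(k * p)                  # E[X] = sum over k of k * p_X(k)
V_X = np.sum((k - E_X) ** 2 * p)     # V[X] = sum over k of (k - E[X])^2 * p_X(k)

print(f"E[X] = {E_X:.2f} hours")             # 5.00
print(f"V[X] = {V_X:.2f} hours^2")           # 1.20
print(f"std  = {np.sqrt(V_X):.2f} hours")    # square root of the variance
```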
Gaussian (Normal) distribution
$X \sim N(m_X, \sigma_X)$

By far the most used probability distribution, because of its nice statistical and mathematical properties:

$p_X(x) = \frac{1}{\sigma_X \sqrt{2\pi}}\, e^{-\frac{(x - m_X)^2}{2\sigma_X^2}}$

What does it mean if a specification says that a sensor measures a distance [mm] and has an error that is normally distributed with zero mean and $\sigma$ = 100 mm?

Normal distribution:
~68.3% of the probability mass lies in $[m_X - \sigma_X,\; m_X + \sigma_X]$
~95% lies in $[m_X - 2\sigma_X,\; m_X + 2\sigma_X]$
~99% lies in $[m_X - 3\sigma_X,\; m_X + 3\sigma_X]$
etc.

[Figure: "Gaussian distributions with different variances": probability p(x) vs. the stochastic variable X for N(7,1), N(7,3), N(7,5), N(7,7).]
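To connect the $\sigma$ = 100 mm sensor question with the interval table, the probability mass inside $\pm k\sigma$ can be computed directly. A small sketch; the erf identity is a standard fact, nothing course-specific:

```python
from math import erf, sqrt

# P(m - k*sigma < X < m + k*sigma) for a Gaussian equals erf(k / sqrt(2)),
# independent of the mean m and of sigma itself.
for k in (1, 2, 3):
    prob = erf(k / sqrt(2))
    print(f"within ±{k}σ: {100 * prob:.1f}%")
# -> ~68.3%, ~95.4%, ~99.7%; so a zero-mean error with σ = 100 mm
#    lies within ±100 mm in roughly 68% of the measurements.
```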
Estimate of the expected value and the variance from observations

$\hat{m}_X = \frac{1}{N} \sum_{k=1}^{N} X(k)$

$\hat{\sigma}_X^2 = \frac{1}{N-1} \sum_{k=1}^{N} \left(X(k) - \hat{m}_X\right)^2$

[Figure: "Histogram of measurements 1 .. 100 of battery lasting time": no. of occurrences [N] vs. lasting time [hours], overlaid with the estimated distribution p(x).]
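A minimal sketch of these estimators on simulated data; the N(5, 1) battery model and the seed are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated battery lasting times: assumed true distribution N(5, 1^2)
x = rng.normal(loc=5.0, scale=1.0, size=100)

N = len(x)
m_hat = np.sum(x) / N                           # estimate of the expected value
var_hat = np.sum((x - m_hat) ** 2) / (N - 1)    # unbiased variance estimate (note N-1)

print(f"m_hat = {m_hat:.2f}, var_hat = {var_hat:.2f}")
# Same results as np.mean(x) and np.var(x, ddof=1)
```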
Linear combinations (1)
$X_1 \sim N(m_1, \sigma_1)$ and $X_2 \sim N(m_2, \sigma_2)$ => $Y = X_1 + X_2 \sim N\!\left(m_1 + m_2,\; \sqrt{\sigma_1^2 + \sigma_2^2}\right)$

$E[aX + b] = aE[X] + b$
$V[aX + b] = a^2 V[X]$
$E[X_1 + X_2] = E[X_1] + E[X_2]$
$V[X_1 + X_2] = V[X_1] + V[X_2]$ (when $X_1$ and $X_2$ are uncorrelated)

The fact that Y remains Gaussian when stochastic variables are combined linearly is one of the great properties of the Gaussian distribution!
Linear combinations (2)
We measure a distance with a device that has normally distributed errors:

$\hat{D} \sim N(D, \sigma_D)$

Do we gain anything by making many measurements and using the average value instead?

$Y = \tfrac{1}{5}\hat{D}_1 + \tfrac{1}{5}\hat{D}_2 + \tfrac{1}{5}\hat{D}_3 + \tfrac{1}{5}\hat{D}_4 + \tfrac{1}{5}\hat{D}_5 \;\overset{N=5}{=}\; \frac{1}{N}\sum_{i=1}^{N}\hat{D}_i$

What will the expected value of Y be?
What will the variance (and standard deviation) of Y be?
If you are using a sensor that gives a large error, how would you best use it?
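These questions can be checked numerically: by the rules on the previous slide, $E[Y] = D$ and $V[Y] = \sigma_D^2 / N$. A hedged sketch with made-up numbers (D = 10 m, $\sigma_D$ = 0.5 m):

```python
import numpy as np

rng = np.random.default_rng(1)
D, sigma_D, N = 10.0, 0.5, 5          # assumed true distance and sensor noise

# Many trials of "average N measurements"
trials = rng.normal(D, sigma_D, size=(100_000, N))
Y = trials.mean(axis=1)

print(f"E[Y] ≈ {Y.mean():.3f}   (expect {D})")
print(f"V[Y] ≈ {Y.var():.4f} (expect sigma_D^2 / N = {sigma_D**2 / N:.4f})")
# A noisy sensor is best used by averaging repeated measurements:
# the standard deviation shrinks as sigma_D / sqrt(N).
```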
Linear combinations (3)
[Figure: a path over the ground plane, X [m] vs. Y [m], built from segments d(1), d(2), …, d(i) with heading changes α(1), …, α(i) and heading θ(i).]

$\hat{d}_i = d_i + \Delta d$, where $d_i$ is the mean value and $\Delta d \sim N(0, \sigma_d)$

$\hat{\alpha}_i = \alpha_i + \Delta\alpha$, where $\alpha_i$ is the mean value and $\Delta\alpha \sim N(0, \sigma_\alpha)$

With $\Delta d$ and $\Delta\alpha$ uncorrelated => $V[\Delta d, \Delta\alpha] = 0$ (actually the covariance, which is defined later)
Linear combinations (4)
D = {the total distance} is simply the sum of all d's:

$D = d(1) + d(2) + \dots + d(N) = \sum_{k=1}^{N} d(k)$

The expected value and the variance become:

$E[\hat{D}] = E\left[\sum_{k=1}^{N} \{d(k) + \Delta d\}\right] = \sum_{k=1}^{N} E[d(k)] + \sum_{k=1}^{N} E[\Delta d] = \sum_{k=1}^{N} d(k) + 0$

$V[\hat{D}] = V\left[\sum_{k=1}^{N} \{d(k) + \Delta d\}\right] = \sum_{k=1}^{N} V[d(k)] + \sum_{k=1}^{N} V[\Delta d] = 0 + N\sigma_d^2$
Linear combinations (5)
θ = {the heading angle} is calculated in the same way, as this is only the sum of all α's, i.e. the sum of all changes in heading:

$\theta = \theta(0) + \alpha(0) + \alpha(1) + \dots + \alpha(N-1) = \theta(0) + \sum_{k=0}^{N-1} \alpha(k)$

The expected value and the variance become:

$E[\hat{\theta}] = E\left[(\theta(0) + \Delta\theta(0)) + \sum_{k=0}^{N-1} \{\alpha(k) + \Delta\alpha(k)\}\right] = \theta(0) + \sum_{k=0}^{N-1} \alpha(k)$

$V[\hat{\theta}] = V\left[(\theta(0) + \Delta\theta(0)) + \sum_{k=0}^{N-1} \{\alpha(k) + \Delta\alpha(k)\}\right] = V[\Delta\theta(0)] + \sum_{k=0}^{N-1} V[\Delta\alpha(k)]$

What if we want to predict X (the distance parallel to the ground) or Y (the height above ground) from our measured d's and α's?
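Before moving on to that, the two results so far, $V[\hat{D}] = N\sigma_d^2$ and $V[\hat{\theta}] = V[\Delta\theta(0)] + N\sigma_\alpha^2$, can be confirmed with a quick Monte Carlo sketch; all noise levels here are made-up assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
N, sigma_d, sigma_a = 20, 0.1, np.deg2rad(1.0)   # assumed step noise levels

runs = 50_000
# Each run: N noisy distance increments and N noisy heading changes.
# (theta(0) is assumed perfectly known here, so V[Δθ(0)] = 0.)
d_err = rng.normal(0.0, sigma_d, size=(runs, N)).sum(axis=1)   # total distance error
a_err = rng.normal(0.0, sigma_a, size=(runs, N)).sum(axis=1)   # total heading error

print(f"V[D]     ≈ {d_err.var():.4f}  vs N*sigma_d^2 = {N * sigma_d**2:.4f}")
print(f"V[theta] ≈ {a_err.var():.6f} vs N*sigma_a^2 = {N * sigma_a**2:.6f}")
```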
Non-linear combinations (1)
X(N) is the previous value of X plus the latest movement (in the X direction):

$X_N = X_{N-1} + d_{N-1} \cos(\theta_{N-1} + \alpha_{N-1})$

The estimate of X(N) becomes:

$\hat{X}_N = \hat{X}_{N-1} + \hat{d}_{N-1} \cos(\hat{\theta}_{N-1} + \hat{\alpha}_{N-1}) = (X_{N-1} + \Delta X_{N-1}) + (d_{N-1} + \Delta d) \cos(\theta_{N-1} + \alpha_{N-1} + \Delta\theta_{N-1} + \Delta\alpha)$

This equation is non-linear, as it contains the term $\cos(\theta_{N-1} + \alpha_{N-1} + \Delta\theta_{N-1} + \Delta\alpha)$, and for X(N) to become Gaussian distributed this term must be replaced by a linear approximation around $\theta_{N-1} + \alpha_{N-1}$. To do this we can use a first-order Taylor expansion. By this approximation we also assume that the error is rather small!

With perfectly known $\theta_{N-1}$ and $\alpha_{N-1}$ the equation would have been linear!
Non-linear combinations (2)
Use a first-order Taylor expansion and linearize X(N) around $\theta_{N-1} + \alpha_{N-1}$:

$\hat{X}_N = (X_{N-1} + \Delta X_{N-1}) + (d_{N-1} + \Delta d) \cos(\theta_{N-1} + \alpha_{N-1} + \Delta\theta_{N-1} + \Delta\alpha) \approx (X_{N-1} + \Delta X_{N-1}) + (d_{N-1} + \Delta d) \left[\cos(\theta_{N-1} + \alpha_{N-1}) - (\Delta\theta_{N-1} + \Delta\alpha) \sin(\theta_{N-1} + \alpha_{N-1})\right]$

This equation is linear, as all error terms are multiplied by constants, and we can calculate the expected value and the variance as we did before:

$E[\hat{X}_N] = E\left[(X_{N-1} + \Delta X_{N-1}) + (d_{N-1} + \Delta d)\left\{\cos(\theta_{N-1} + \alpha_{N-1}) - (\Delta\theta_{N-1} + \Delta\alpha)\sin(\theta_{N-1} + \alpha_{N-1})\right\}\right] = E[X_{N-1}] + E[d_{N-1}\cos(\theta_{N-1} + \alpha_{N-1})] = X_{N-1} + d_{N-1}\cos(\theta_{N-1} + \alpha_{N-1})$
Non-linear combinations (3)
The variance becomes (calculated exactly as before, writing $\cos(\cdot)$ and $\sin(\cdot)$ for $\cos(\theta_{N-1} + \alpha_{N-1})$ and $\sin(\theta_{N-1} + \alpha_{N-1})$):

$V[\hat{X}_N] = V\left[(X_{N-1} + \Delta X_{N-1}) + (d_{N-1} + \Delta d)\cos(\cdot) - (\Delta\theta_{N-1} + \Delta\alpha)\, d_{N-1}\sin(\cdot)\right]$
$= V[X_{N-1}] + V[\Delta X_{N-1}] + V\left[(d_{N-1} + \Delta d)\cos(\cdot) - (\Delta\theta_{N-1} + \Delta\alpha)\, d_{N-1}\sin(\cdot)\right]$
$= \sigma_{X_{N-1}}^2 + \left(\cos(\cdot)\right)^2 \sigma_d^2 + \left(d_{N-1}\sin(\cdot)\right)^2 \sigma_\alpha^2 + \left(d_{N-1}\sin(\cdot)\right)^2 \sigma_{\theta_{N-1}}^2$

Two really important things should be noticed. First, the linearization only affects the calculation of the variance. Second (which is even more important), the above expression consists of the partial derivatives of

$\hat{X}_N = \hat{X}_{N-1} + \hat{d}_{N-1} \cos(\hat{\theta}_{N-1} + \hat{\alpha}_{N-1})$

with respect to our uncertain parameters, squared and multiplied by their variances!
Non-linear combinations (4)
This result is very good => an easy way of calculating the variance => the law of error propagation:

$V[\hat{X}_N] = \left(\frac{\partial \hat{X}_N}{\partial \hat{X}_{N-1}}\right)^2 \sigma_{X_{N-1}}^2 + \left(\frac{\partial \hat{X}_N}{\partial \hat{d}_{N-1}}\right)^2 \sigma_{d_{N-1}}^2 + \left(\frac{\partial \hat{X}_N}{\partial \hat{\theta}_{N-1}}\right)^2 \sigma_{\theta_{N-1}}^2 + \left(\frac{\partial \hat{X}_N}{\partial \hat{\alpha}_{N-1}}\right)^2 \sigma_{\alpha_{N-1}}^2$

The partial derivatives of $\hat{X}_N = \hat{X}_{N-1} + \hat{d}_{N-1} \cos(\hat{\theta}_{N-1} + \hat{\alpha}_{N-1})$ become:

$\frac{\partial \hat{X}_N}{\partial \hat{X}_{N-1}} = 1 \qquad \frac{\partial \hat{X}_N}{\partial \hat{d}_{N-1}} = \cos(\hat{\theta}_{N-1} + \hat{\alpha}_{N-1})$

$\frac{\partial \hat{X}_N}{\partial \hat{\theta}_{N-1}} = -\hat{d}_{N-1} \sin(\hat{\theta}_{N-1} + \hat{\alpha}_{N-1}) \qquad \frac{\partial \hat{X}_N}{\partial \hat{\alpha}_{N-1}} = -\hat{d}_{N-1} \sin(\hat{\theta}_{N-1} + \hat{\alpha}_{N-1})$
Non-linear combinations (5)
The plot shows the variance of X for time steps 1, …, 20; as can be seen, the variance (and thus the standard deviation) is constantly increasing.

[Figure: "Estimated X and its variance": P(X) vs. X [km] for time steps 1 .. 20, with $\sigma_X^2 = 0.5$, $\sigma_d = 1/10$ and $\sigma_\alpha = 5/360$.]
Multidimensional Gaussian distributions (1)

The Gaussian distribution can easily be extended to several dimensions by replacing the variance ($\sigma^2$) with a covariance matrix ($\Sigma$) and the scalars (x and $m_X$) with column vectors:

$p_X(x) = \frac{1}{(2\pi)^{N/2}\, |\Sigma_X|^{1/2}}\, e^{-\frac{1}{2} (x - m_X)^T \Sigma_X^{-1} (x - m_X)}$

The covariance matrix (CVM) describes (consists of):
1) the variances of the individual dimensions => diagonal elements
2) the covariances between the different dimensions => off-diagonal elements

$\Sigma_X = \begin{pmatrix} \sigma_{x_1}^2 & C[x_1, x_2] & C[x_1, x_3] \\ C[x_2, x_1] & \sigma_{x_2}^2 & C[x_2, x_3] \\ C[x_3, x_1] & C[x_3, x_2] & \sigma_{x_3}^2 \end{pmatrix}$

! Symmetric
! Positive definite
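For reference, the density formula above can be evaluated directly. A minimal sketch; the 2-D mean, covariance and test point are arbitrary illustration values:

```python
import numpy as np

def mvn_pdf(x, m, S):
    """N-dimensional Gaussian density with mean m and covariance S."""
    N = len(m)
    diff = x - m
    norm = 1.0 / np.sqrt((2 * np.pi) ** N * np.linalg.det(S))
    return norm * np.exp(-0.5 * diff @ np.linalg.inv(S) @ diff)

m = np.array([0.0, 0.0])
S = np.array([[100.0, 0.0],
              [0.0, 2500.0]])       # un-correlated example, as on the next slide
print(mvn_pdf(np.array([10.0, 50.0]), m, S))
# Matches scipy.stats.multivariate_normal(m, S).pdf([10, 50])
```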
MGD (2)
Un-correlated s.v. (left panel) vs. correlated s.v. (right panel):

$\Sigma(\text{green, un-correlated}) = \begin{pmatrix} 100 & 0 \\ 0 & 2500 \end{pmatrix} \qquad \Sigma(\text{green, correlated}) = \begin{pmatrix} 700 & -1039 \\ -1039 & 1900 \end{pmatrix}$

$\Sigma(\text{red, un-correlated}) = \begin{pmatrix} 10000 & 0 \\ 0 & 100 \end{pmatrix} \qquad \Sigma(\text{red, correlated}) = \begin{pmatrix} 7525 & 4287 \\ 4287 & 2575 \end{pmatrix}$

[Figure: two scatter plots of (X, Y) samples with covariance ellipses, axes from -250 to 250. In each panel: for the red s.v. the STD of X is 10 times bigger than the STD of Y; for the green s.v. the STD of Y is 5 times bigger than the STD of X.]

Eigenvalues => the variances along the principal axes (their square roots are the standard deviations)
Eigenvectors => rotation of the ellipses
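The eigenvalue/eigenvector statement can be checked numerically; a sketch using the green correlated matrix from above:

```python
import numpy as np

S = np.array([[700.0, -1039.0],
              [-1039.0, 1900.0]])   # correlated covariance (green ellipse)

eigvals, eigvecs = np.linalg.eigh(S)        # eigh: S is symmetric
stds = np.sqrt(eigvals)                     # std devs along the principal axes
angle = np.degrees(np.arctan2(eigvecs[1, -1], eigvecs[0, -1]))

print(f"principal std devs ≈ {stds.round(1)}")   # ≈ [10., 50.]
print(f"ellipse rotation   ≈ {angle:.1f} deg")   # orientation of the major axis
```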
MGD (3)
The covariance between two stochastic variables is calculated as:

$C[X, Y] = E[(X - m_X)(Y - m_Y)]$

which for discrete variables becomes:

$C[X, Y] = \sum_j \sum_k (j - m_X)(k - m_Y)\, p_{X,Y}(j, k)$

and for continuous variables becomes:

$C[X, Y] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (x - m_X)(y - m_Y)\, f_{X,Y}(x, y)\, dx\, dy$
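In practice the covariance is estimated from samples, analogously to the earlier mean and variance estimates. A short sketch; the linear model generating the correlated data is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(3)
# Correlated samples: Y depends on X plus independent noise
x = rng.normal(0.0, 10.0, size=5000)
y = -1.5 * x + rng.normal(0.0, 20.0, size=5000)

# Sample covariance: C[X,Y] ≈ mean of (X - m_X)(Y - m_Y)
C_xy = np.mean((x - x.mean()) * (y - y.mean()))
print(f"C[X,Y] ≈ {C_xy:.1f}")        # ≈ -1.5 * 100 = -150
print(np.cov(x, y))                  # full 2x2 covariance matrix estimate
```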
MGD (4) - Non-linear combinations

[Figure: the path over the ground plane, X [m] vs. Y [m], with segments d(1), d(2), …, d(i), heading changes α(1), …, α(i) and heading θ(i), as before.]

The state variables (x, y, θ) at time k+1 become:

$Z(k+1) = \begin{pmatrix} x(k+1) \\ y(k+1) \\ \theta(k+1) \end{pmatrix} = f(Z(k), U(k)) = \begin{pmatrix} x(k) + d(k)\cos(\theta(k) + \alpha(k)) \\ y(k) + d(k)\sin(\theta(k) + \alpha(k)) \\ \theta(k) + \alpha(k) \end{pmatrix}$
MGD (5) - Non-linear combinations

$Z(k+1) = \begin{pmatrix} x(k+1) \\ y(k+1) \\ \theta(k+1) \end{pmatrix} = f(Z(k), U(k)) = \begin{pmatrix} x(k) + d(k)\cos(\theta(k) + \alpha(k)) \\ y(k) + d(k)\sin(\theta(k) + \alpha(k)) \\ \theta(k) + \alpha(k) \end{pmatrix}$

We know that to calculate the variance (or covariance) at time step k+1 we must linearize Z(k+1), e.g. by a Taylor expansion - but we also know that this is done by the law of error propagation, which for matrices becomes:

$\Sigma(k+1 \mid k) = \nabla f_X\, \Sigma(k \mid k)\, \nabla f_X^T + \nabla f_U\, \Sigma_U(k+1)\, \nabla f_U^T$

where $\nabla f_X$ and $\nabla f_U$ are the Jacobian matrices (w.r.t. our uncertain variables) of the state transition function f:

$\nabla f_X = \begin{pmatrix} 1 & 0 & -d(k)\sin(\theta(k) + \alpha(k)) \\ 0 & 1 & d(k)\cos(\theta(k) + \alpha(k)) \\ 0 & 0 & 1 \end{pmatrix} \qquad \nabla f_U = \begin{pmatrix} \cos(\theta(k) + \alpha(k)) & -d(k)\sin(\theta(k) + \alpha(k)) \\ \sin(\theta(k) + \alpha(k)) & d(k)\cos(\theta(k) + \alpha(k)) \\ 0 & 1 \end{pmatrix}$
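A sketch of this matrix form of the law of error propagation for the (x, y, θ) model; the input noise levels and step sizes are assumed for illustration:

```python
import numpy as np

def predict_cov(P, theta, alpha, d, U):
    """Sigma(k+1|k) = Fx P Fx^T + Fu U Fu^T for the (x, y, theta) model."""
    c, s = np.cos(theta + alpha), np.sin(theta + alpha)
    Fx = np.array([[1.0, 0.0, -d * s],
                   [0.0, 1.0,  d * c],
                   [0.0, 0.0,  1.0]])          # Jacobian w.r.t. the state
    Fu = np.array([[c, -d * s],
                   [s,  d * c],
                   [0.0, 1.0]])                # Jacobian w.r.t. the input (d, alpha)
    return Fx @ P @ Fx.T + Fu @ U @ Fu.T

P = np.zeros((3, 3))                           # perfectly known start pose
U = np.diag([0.1**2, np.deg2rad(1.0)**2])      # assumed input noise on (d, alpha)
theta = 0.0
for _ in range(20):                            # 20 time steps straight ahead
    P = predict_cov(P, theta, alpha=0.0, d=1.0, U=U)
print(np.sqrt(np.diag(P)))                     # growing std devs of (x, y, theta)
```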
MGD (6) - Non-linear combinations

The uncertainty ellipses for X and Y (for time steps 1 .. 20) are shown in the figure.

[Figure: "Airplane taking off, relative displacement in each time step = (100km, 2deg)": uncertainty ellipses growing along the flight path, X [km] from 0 to 1600 vs. Y [km] from 0 to 1000.]