Assisted Teleoperation of Quadcopters Using Obstacle Avoidance

João Pedro Baptista Mendes
Dissertation submitted for obtaining the degree of
Master in Electrical and Computer Engineering
Jury Committee
President:
Doutor Carlos Filipe Gomes Bispo
Advisor:
Doutor Rodrigo Martins de Matos Ventura
Member:
Doutor Eduardo Alexandre Pereira da Silva
July 2012
Abstract
Manually flying a quadcopter is considered to be a complex task, requiring extensive training and experience in order to guarantee a safe flight, as incautious users tend to crash the vehicle on their first flight.
This thesis proposes a SLAM-based assisting method for a quadcopter pilot to avoid collisions
with obstacles. We assume we have a teleoperated quadcopter equipped with sensory devices consisting
of an Inertial Measuring Unit, an altimeter and six sonars, one above each of the four propellers, one
above and one below the main body of the quadcopter. The vehicle is considered to operate in a GPS
denied, unknown, and dynamic environment.
To solve the localization and mapping problem, a 3D FastSLAM algorithm using occupancy grid mapping is employed. Since an accurate, high-resolution map is not our objective — for reasons of scope and computational efficiency — a coarse, rough map is generated. In this approach, map accuracy is traded for computational efficiency, as collision avoidance does not require precise knowledge of the obstacle shape, only of its presence. Based on a collision risk assessment classification, the user's inputs are overridden in case of an imminent collision. The proposed approach was first evaluated in simulation. The simulator used for the quadcopter and sensors is USARSim, a simulator based on the Unreal Tournament engine. Further testing was conducted on a real quadcopter, allowing the evaluation of most of the proposed methods.
Keywords: Collision avoidance, 3D FastSLAM, Quadcopter, Occupancy Grid Mapping
Resumo
Manually piloting a quadcopter is a complex task, requiring extensive training and experience to guarantee a safe flight, since new users usually crash the vehicle against obstacles on their first flight.
This thesis proposes SLAM-based teleoperation assistance methods so that a quadcopter pilot can avoid collisions with obstacles. It is assumed that we have a remotely operated quadcopter equipped with several sensors, namely an inertial measurement unit, an altimeter and six sonars, one above each of the four motors, one above and one below the quadcopter. The vehicle is also considered to operate in a GPS-denied, unknown and dynamic environment.
In order to solve the localization and mapping problem, a 3D FastSLAM with occupancy grid mapping is applied. Since, for reasons of scope and computational efficiency, our objective is not to build an exact or high-resolution map, the applied solution performs only a coarse mapping. In this approach, map accuracy is traded for computational efficiency, since the collision avoidance methods do not require precise knowledge of the obstacle shape, only of its presence. Based on a classifier, the commands given by the user may be overridden in case of imminent danger. The presented solution was first tested in simulation. The simulator used, both for the quadcopter and for the sensors, was USARSim, based on the Unreal Tournament game engine. Tests were also conducted on a real quadcopter, allowing the evaluation of most of the proposed methods.
Keywords: Collision avoidance, 3D FastSLAM, Quadcopter, Mapping
Contents
Abstract
Resumo
List of Figures
List of Tables
List of Algorithms
1 Introduction
   1.1 Motivation
   1.2 Related Work
      1.2.1 SLAM
      1.2.2 Obstacle Avoidance
      1.2.3 Assisting Teleoperation
   1.3 Objectives
   1.4 Approach
   1.5 Outline
   1.6 Contributions
2 FastSLAM
   2.1 Problem Statement
   2.2 Occupancy Grid Mapping
      2.2.1 Introduction
      2.2.2 Proposed Method
      2.2.3 Experimental Results
   2.3 Particle Filter
      2.3.1 Introduction
      2.3.2 Proposed Method
      2.3.3 Experimental Results
   2.4 FastSLAM
   2.5 Experimental Results
3 Decision Block
   3.1 Problem Statement
   3.2 Proposed Method
   3.3 Experimental Results
4 Experimental Results
5 Real World Experimental Results
   5.1 Occupancy grid mapping
   5.2 Particle Filter
   5.3 FastSLAM
   5.4 Decision Block
   5.5 Full Architecture
6 Conclusions
Bibliography
A USARSim
   A.1 Introduction
   A.2 Transformations
      A.2.1 Sonar Pose
      A.2.2 Velocities
   A.3 Karma Engine
   A.4 USARSim testing
List of Figures
1.1 Example of a quadcopter.
1.2 Full architecture of the approach.
2.1 Adopted sonar model.
2.2 xy cut of the graphical interpretation of index to check.
2.3 Graphical interpretation of the inverse sensor model.
2.4 USARSim's environment print screen.
2.5 Log-odd representation of a 2D cut of the 3D output map.
2.6 Binarized version of Figure 2.5.
2.7 3D plot of the center of mass of the free and occupied cells.
2.8 USARSim's environment print screen.
2.9 Log-odd representation of a 2D cut of the 3D output map.
2.10 Binarized version of Figure 2.9.
2.11 3D plot of the free and occupied cells.
2.12 USARSim's environment print screen.
2.13 USARSim's environment print screens.
2.14 Log-odd representation of a 2D cut of the 3D output map.
2.15 Binarized version of Figure 2.14.
2.16 Log-odd representation of a 2D cut of the 3D output map using a smaller grid cell size.
2.17 Binarized version of Figure 2.16.
2.18 Real position and the prediction step output given a certain trajectory.
2.19 Error (2.17) obtained during Figure 2.18 trajectory.
2.20 Real position and the prediction step output given a certain trajectory.
2.21 Error (2.17) obtained during Figure 2.20 trajectory.
2.22 Real position and the prediction step output given a certain trajectory.
2.23 Error (2.17) obtained during Figure 2.22 trajectory.
2.24 3D representation of the real position and PF's output.
2.25 xy cut of Figure 2.24.
2.26 xz cut of Figure 2.24.
2.27 yz cut of Figure 2.24.
2.28 Detailed error for x, y and z axis.
2.29 Error between real position and the PF's estimation.
2.30 Real position and the prediction step output given a certain trajectory.
2.31 Error (2.17) obtained during Figure 2.30 trajectory.
2.32 xy cut of the real position and PF's output.
2.33 Error between real position and the PF's estimation.
2.34 Detailed error for x, y and z axis.
2.35 USARSim's environment print screens.
2.36 Map used in the localization algorithm.
2.37 xy cut of the real position and PF's output.
2.38 Detailed error for x, y and z axis.
2.39 Error between real position and the PF's estimation.
2.40 Real position and the prediction step output given a certain trajectory.
2.41 3D representation of the real position and PF's output.
2.42 Error between real position and the PF's estimation.
2.43 yz cut of Figure 2.41.
2.44 PF's map output.
2.45 Detailed error for x, y and z axis.
2.46 xz cut of Figure 2.41.
2.47 xy cut of the real position and PF's output.
2.48 xy cut of the map output.
2.49 xy cut of the real position and PF's output.
3.1 Flow chart of the Decision Block.
3.2 Volume to check along the desired velocity.
3.3 TTC ilustration (3.1).
3.4 Levels of threat.
3.5 Relation between the attitude of the vehicle and the area covered by the sonar.
3.6 Comparison between user's inputs and current velocity.
3.7 Vehicle's position according to the obstacle.
3.8 Comparison between user's inputs and current velocity.
3.9 Vehicle's position according to the obstacle.
3.10 Comparison between user's inputs and current velocity.
3.11 Vehicle's position according to the obstacle.
3.12 Comparison between user's inputs and current velocity.
4.1 Distance between the best particle to the first object found in its own map.
4.2 Estimated velocity of the quadrotor.
4.3 xy cut of the real position and PF's output.
4.4 xy cut of the output map.
4.5 Distance between the best particle to the first object found in its own map.
4.6 Estimated velocity of the quadrotor.
4.7 xy cut of the real position and PF's output.
4.8 xy cut of the 3D map of the best particle at the end of the movement.
5.1 Lay out of the sonars' recording positions.
5.2 Log-odd representation of a 2D cut of the 3D output map.
5.3 Lay out of the sonars' recording positions.
5.4 Log-odd representation of a 2D cut of the 3D output map.
5.5 Error obtained in each iteration for 10 particles.
5.6 Error obtained in each iteration for 100 particles.
5.7 xy cut of the Map obtained with GT localization.
5.8 xy cut of the best particle's map at the end of the test.
5.9 Error obtained at the end of the 10th iteration in each position.
5.10 Position and attitude in each of the 5 phases.
5.11 yz cut of the map obtained in position 1.
5.12 yz cut of the map obtained in position 2.
5.13 xy cut of the map obtained in position 5.
5.14 yz cut of the map obtained in position 5.
5.15 xy cut of the map obtained in position 5 for the best particle.
A.1 Printscreen of the USARSim's AirRobot.
A.2 Communication between the user, the algorithm and the simulator.
A.3 Pointing direction of the sonars along the propellers of the quadcopter.
A.4 Pointing direction of the sonars above and below the quadcopter.
A.5 Graphical definition of the angles used (extracted from [1]).
A.6 Sonar invariance given the rotation along the presented axis.
A.7 Representation of the heading direction of the sonars.
A.8 Representation of the referential of each velocity concerning the vehicle.
A.9 Representation of the referential of each velocity.
A.10 Example where F is applied in two different situations.
A.11 Diagram of the relation of input/output of Karma.
A.12 Diagram of the relation of input/output of our motion model.
List of Tables
2.1 Comparison of time consumption, in seconds, using the Index to Check Algorithm.
2.2 Algorithm results with and without the altimeter observation.
3.1 Comparison of distance traveled.
4.1 Time consumption on the two main blocks of the presented architecture.
5.1 Comparison of the distance to the closest obstacle.
5.2 Distance to the closest obstacle and classifier decision.
A.1 Distance traveled by the vehicle given the same input.
List of Algorithms
2.1 OccGrid algorithm (extracted from [1]).
2.2 Inverse sensor model Algorithm.
2.3 OccGrid algorithm applied.
2.4 Particle Filter Algorithm.
2.5 Select with replacement algorithm.
2.6 FastSLAM algorithm.
3.1 DB algorithm.
4.1 Full architecture's algorithm.
Chapter 1
Introduction
1.1 Motivation
A quadcopter, presented in Figure 1.1 and also known as a quadrotor, is an aircraft propelled by four rotors. This type of vehicle fits in the Vertical Take-Off and Landing (VTOL) category, as it can successfully perform vertical take-offs, landings and hovering. The advantages of VTOLs over other flying mechanisms are well known, as shown in [2]. Their high agility, small size and VTOL capability make quadcopters a powerful tool, not only for situations where hovering is crucial but also for indoor usage.
Despite what may appear at first glance, flying a quadrotor is not a simple task: for a pilot to have a safe flight and control the vehicle as desired, extensive training and experience are required. Training, in particular, is also a hard task to accomplish, as one can put either the quadcopter or oneself in danger. Furthermore, even experienced pilots may face situations where it is difficult to guarantee a safe flight due to, for example, loss of visual contact with the vehicle.
As for real-life applications — taking a natural disaster as an example — it is easy to assess the potential of a quadrotor in finding victims and in helping ground forces discover safe paths to travel. However, if one is flying through a non-static environment or, by any chance, a direct line of sight with the vehicle cannot be kept, its safety may be compromised. Nowadays, bridge infrastructure inspections — yet another example — are performed by a climbing officer. If the human climber is replaced by an inspection performed with a camera mounted on a quadcopter, the difficulty of controlling the vehicle at long range, combined with the need to fly close to small technical details that the camera focus cannot otherwise resolve, increases the danger faced by the vehicle. A method to avoid collisions is, therefore, a major necessity.
The solution proposed in this thesis consists of an assisting algorithm for the teleoperation of a quadrotor to avoid collisions, based on FastSLAM [3]. Three-Dimensional (3D) Localization and Mapping in a Global Positioning System (GPS) denied environment is still an active research topic, as there is no definitive solution yet. Since the purpose of this thesis is neither a detailed map of
the environment nor a precise measurement of object positions, the collision avoidance problem can be efficiently addressed by knowing our position relative to the objects, as opposed to a global localization approach.
After the position and map are known, a decision block based on a classifier is applied. This classifier overrides the user's inputs if they compromise the quadcopter's safety in the short term. Overriding may range from a simple velocity reduction to, in extreme cases, an evasive maneuver. As our aim is to ensure the quadcopter's safety at all times, the algorithm considers unknown areas as occupied until proven to be free.
In robotics, obstacle avoidance is usually implemented as a simple reactive system, only able to make decisions based on current-time perception. The ability to build and keep a map in memory enables the system to know the world and make decisions according to it. By creating and updating a map of the surrounding environment, the vehicle is able to cope with obstacles in, e.g., blind spots of the sonars, a situation unfeasible with a purely reactive methodology.
Due to the risk associated with testing this type of algorithm on a real robot, this thesis was initially developed on a simulator and later tested on a real vehicle. The simulation environment was carefully chosen in order to reduce the effort of porting the algorithms to the real world. We consider as a test bed a quadrotor equipped with an Inertial Measuring Unit (IMU), an altimeter, and six sonars, one above each of the propellers, and one above and one below the main body of the quadcopter. The position and orientation of the sonars are denoted by the arrows in Figure 1.1. With this approach we target all quadrotors with a small payload, equipped with low-consumption, low-cost hardware.
Figure 1.1: Example of a quadcopter. Presented arrows correspond to the orientation of the mounted
sonars. The mounting position is denoted in red.
1.2 Related Work
1.2.1 SLAM
The robotics community has performed thorough research on SLAM techniques. Different methods for solving this problem have been developed, such as Particle Filters or Extended Kalman Filters (EKF) [4] for localization, and occupancy grids or Kalman Filtering [5] for mapping. 3D Simultaneous Localization and Mapping (SLAM) on Unmanned Aerial Vehicles (UAVs) using lasers has been studied, but typically includes techniques such as loop closure algorithms [6] or laser scan matching [7] to enhance the localization of the vehicle. Since our study is limited to the use of sonars, the application of these algorithms is not possible. 6D SLAM on UAVs resorting to vision is yet another approach widely used by the community [8], but once again infeasible since the quadrotor used is not equipped with a camera. Other SLAM approaches, resorting to particle filters with an EKF for mapping [9], can be found in the literature, but applied to wheeled robots.
This thesis differs from most SLAM studies in that building an accurate map is not our objective. Instead, we aim at a rough, low-complexity map, which is thus more time efficient.
1.2.2 Obstacle Avoidance
As for obstacle avoidance methodologies for UAVs, the literature mostly addresses path re-planning topics [10] or vision-based solutions [11], [12]. [10] addresses obstacle avoidance by estimating the motion of the vehicle and building a map of the objects in the scene. However, that work targets control issues for navigating the UAV freely in the environment and is vision based.
Researchers also address the problem of collision avoidance for multiple robot systems, as in [13]. In such works, the scope is avoiding collisions inside a formation or a swarm of robots; the effort concentrates mostly on communication and localization issues, and collisions with obstacles are not considered.
1.2.3 Assisting Teleoperation
Some publications focusing on methods to aid the teleoperation of quadcopters were analyzed; from this investigation, the community appears to consider this problem as a step towards autonomy rather than as a research topic in itself. Some works, such as [10], use vision-based techniques to detect obstacles and aim at modifying waypoints for path replanning. As said before, the scope of this work is neither path replanning nor finding alternative routes when facing an object. On the other hand, [14] compares methods for danger classification, such as the Time to Collision used in this thesis. It also presents results on effective ways of communicating to the user that the vehicle is in danger through force-feedback controllers. That work aspires to solve the same problem but, unlike our solution, considers a purely reactive methodology to avoid collisions, meaning that no map is built and no danger classification is performed.
The sensor choice in most studies relies on laser range finders or vision, the only exception being the aforementioned [14]. While in [15] the choice falls on optical flow analysis, in [16] a hybrid method combining optical flow with telemetric measurements is studied. Neither mapping nor localization is addressed in these works.
1.3 Objectives
The main objective of this thesis is to develop an assisted teleoperation architecture for a quadrotor, guaranteeing the vehicle's safety from collisions at all times. Due to the possible indoor usage of the vehicle, this work targets GPS-denied environments. As for onboard sensors, an IMU and six sonars were considered. In order to allow the proposed methods to be used in real-world scenarios, an online, user-friendly and easily adjustable algorithm is targeted.
1.4 Approach
To carry out the main objective, a full online architecture integrating all modules is implemented, as shown in Figure 1.2. As stated before, the vehicle is equipped with an IMU, responsible for giving the 3D orientation in Euler angles — roll, pitch, yaw — and six sonars, considered to be triggered simultaneously. Along with the user's inputs — given by a 6-axis joystick — these sensor measurements are the only inputs considered in the presented work. The SLAM module, a combination of the mapping and localization algorithms, takes as inputs the 3D attitude and the sonar readings and outputs a binarized version of the map, the predicted velocity and the vehicle position in 3D (x, y and z). The decision block is responsible for the danger evaluation and for feeding commands to the vehicle. The inputs for this block are the binarized map, the position, and the current velocity. The output is the set of commands sent to the vehicle.
Figure 1.2: Full architecture of the approach.
1.5 Outline
This thesis is divided into two main parts: 3D SLAM and decision block. For each section,
corresponding theory, work developments, and experimental results are presented.
Chapter 2 focuses on FastSLAM. The approaches adopted for the individual problems — mapping and localization — are stated and tested, first separately and then as a whole. Decision and classifier definition are addressed in Chapter 3. A description of the types of danger that the quadrotor may face, along with the associated actions to perform, is carefully detailed throughout that chapter. In Chapter 4 the simulation results for the full architecture are provided and discussed, while in Chapter 5 preliminary results on a real quadcopter are presented. Final conclusions and future work are presented in Chapter 6. Appendix A covers details concerning the simulator used.
1.6 Contributions
The contribution of this thesis is an online architecture for the assisted teleoperation of a quadrotor to avoid obstacles, by creating a map of, and self-localizing in, the surrounding environment. The proposed methods are fully demonstrated in simulation, where the vehicle is able to successfully avoid collisions. Real-world results for separate parts of the algorithm are also presented. This thesis also resulted in the submission of a paper to the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2012).
Chapter 2
FastSLAM
2.1 Problem Statement
Mapping and localization have been widely studied and developed by the robotics community. SLAM (Simultaneous Localization and Mapping) addresses the problem of estimating the pose and the map of a robot at the same time (2.1). Consider 1:t to be the temporal range between 1 and t, x_{1:t} the set of poses of the robot at all referred time instants — a pose denotes three degrees of freedom for attitude and three more for position — m the map, z_{1:t} all measurements, and u_{1:t} all control inputs. To solve the SLAM problem we use the FastSLAM approach proposed by Montemerlo et al. [9]. The conditional independence property of the SLAM problem [1] allows us to perform a factorization that decomposes SLAM into a path estimation problem and a mapping problem (2.2).

p(x_{1:t}, m \mid z_{1:t}, u_{1:t})    (2.1)

p(x_{1:t} \mid z_{1:t}, u_{1:t}) \, p(m \mid z_{1:t}, x_{1:t})    (2.2)
For position estimation FastSLAM uses a Particle Filter (PF). By analysis of the expression
it is possible to check that the estimation of the posterior is based on control commands and sonar
observations. For the map posterior estimation an occupancy grid algorithm is applied. Note that this
posterior is conditioned by the vehicle’s position.
As localization and mapping depend directly on the sensors, the sonar poses have to be computed over time. Given a certain rotation and translation, a transformation matrix, as detailed in Appendix A, can be applied to compute the actual position and orientation of each sensor. With this transformation, together with the fact that the map posterior is conditioned on the vehicle's position, we can consider each sonar as an independent reading with a given position and orientation.
Since nowadays IMUs are able to give very accurate attitude estimations, the correct attitude
is considered to be available at all times. By considering the attitude as sufficiently accurate for our
purposes, we can reduce a 6D problem (position and orientation) to a 3D problem (position only).
2.2 Occupancy Grid Mapping
2.2.1 Introduction
OccGrid [17] addresses the mapping problem with uncertain measurement data under the assumption that the robot's pose is known. By factorizing SLAM (2.2) it is possible to solve the mapping problem conditioned on the pose of the robot. The main objective of occupancy grid mapping is to calculate the posterior probability over the map, given the robot path and all sonar readings. To do so, OccGrid represents the map as a field of random variables. Each random variable corresponds to the binary occupancy of the cell it covers, where the cells — each one denoted by m_i — correspond to an evenly spaced discretization of the map (2.3).

m = \bigcup_i m_i    (2.3)
During consecutive iterations of the algorithm, these random variables will be updated according to the probability of the covered area being occupied, where p(m_i) = 0 stands for a free cell and p(m_i) = 1 refers to an occupied cell. With this approach, the standard problem breaks down into several separate single-cell mapping problems (2.4). By doing so, the posterior over the map is approximated as the product of its marginals (2.5).

p(m_i \mid z_{1:t}, x_{1:t})    (2.4)

p(m \mid z_{1:t}, x_{1:t}) = \prod_i p(m_i \mid z_{1:t}, x_{1:t})    (2.5)
Although we can assume this decomposition, problems may arise from it, since dependencies between neighboring cells cannot be modeled [1]. An implementation of the OccGrid algorithm is described in Algorithm 2.1. The occupancy of a cell is represented using log-odds. This representation has the advantage of avoiding numerical problems near probability values of zero or one. The transformations between probability and log-odds are presented in (2.6) and (2.7).

l_{t,i} = \log \frac{p(m_i \mid z_{1:t}, x_{1:t})}{1 - p(m_i \mid z_{1:t}, x_{1:t})}    (2.6)

p(m_i \mid z_{1:t}, x_{1:t}) = 1 - \frac{1}{1 + e^{l_{t,i}}}    (2.7)
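For reference, these two transformations are straightforward to implement; the following is a minimal Python/NumPy sketch (function names are ours, not part of the thesis implementation):

import numpy as np

def prob_to_logodds(p):
    # (2.6): l = log(p / (1 - p))
    return np.log(p / (1.0 - p))

def logodds_to_prob(l):
    # (2.7): p = 1 - 1 / (1 + exp(l))
    return 1.0 - 1.0 / (1.0 + np.exp(l))

# example: a prior occupancy probability of 0.5 corresponds to l0 = 0
assert abs(prob_to_logodds(0.5)) < 1e-12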
The algorithm inputs are the previous map — hereafter represented as l_{t-1} — the current position x_t and the sonars' last measurements z_t. Throughout this thesis the chosen value for the prior of the map is l_0 = 0 (corresponding to an occupancy probability of 0.5); nevertheless it can be changed
Algorithm 2.1 OccGrid algorithm (extracted from [1]).
1: Occupancy_Grid_Mapping(l_{t-1}, x_t, z_t)
2:   for all cells m_i
3:     if m_i is in the sonar FoV
4:       l_{t,i} = l_{t-1,i} + inverse_sensor_model(m_i, x_t, z_t) - l_0
5:     else
6:       l_{t,i} = l_{t-1,i}
7:     end
8:   end
9: return l_t
to best suit our needs. The algorithm loops through all the grid cells and updates the values of the ones inside the Field of View (FoV) of our sensor. This update, line 4, is carried out by an inverse sensor model that will be detailed in the following section. Note that the presented update is purely based on sonar measurements.
In addition, the position of the robot is also used as a viable source of information, as we can state that its current position is, of course, obstacle free. That assumption is extended to a volume of pre-defined size around the vehicle. In each OccGrid iteration, the probability of every cell inside that volume is lowered by a constant value. By making this constant dependent on the triggering frequency of the sonars, it is possible to guarantee that the cell's probability is not set to a value too low to be changed if necessary.
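For illustration only, a minimal Python/NumPy sketch of this update loop is given below; in_fov, inverse_sensor_model and cells_to_check are placeholders for the models described in the next section and are our own assumptions, not the thesis code:

import numpy as np

def occ_grid_update(l_prev, x_t, z_t, cells_to_check, in_fov, inverse_sensor_model, l0=0.0):
    # One OccGrid iteration over a 3D log-odds grid (Algorithm 2.1), restricted
    # to the cells returned by index_to_check.
    l_t = l_prev.copy()
    for i in cells_to_check:
        if in_fov(i, x_t, z_t):
            # log-odds update of Algorithm 2.1, line 4
            l_t[i] = l_prev[i] + inverse_sensor_model(i, x_t, z_t) - l0
    return l_t

def clear_vehicle_volume(l_t, cells_around_vehicle, delta):
    # Lower the log-odds of the cells inside a pre-defined volume around the
    # vehicle by a constant, exploiting the fact that its current position is free.
    for i in cells_around_vehicle:
        l_t[i] -= delta
    return l_t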
2.2.2 Proposed Method
The proposed method is a slightly different implementation of the occupancy grid algorithm described in Algorithm 2.1. We first present the considered sonar model and, at the end of this section, the applied inverse sensor model. A more computationally efficient method for searching the cells inside the sonar's FoV is also derived.
Sonar Model
Typically, sonars are modeled by a cone, whereas in this thesis they are modeled by a quadrangular pyramid (Figure 2.1). This assumption does not compromise our results and is applied only for a simpler and faster computation of the sonar's perception.
Since Algorithm 2.1 only refers to one observation, changes have to be made as our quadcopter
is equipped with six sonars, distributed as shown in Appendix A. We are considering each one of them
as a separate sensor and consequently considering six observations every time OccGrid is applied.
Figure 2.1: Adopted sonar model.
Sonars are considered to be triggered simultaneously. This situation is not entirely realistic but in practice has very little impact, since the update step — detailed further ahead — can be performed at different time instants for each sonar independently.
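As an illustration of this pyramidal model, a point-in-FoV test could be written as below (a Python sketch under our own assumptions about the sonar frame; the half-angles and maximum range are illustrative parameters, not values from the thesis):

import numpy as np

def in_pyramid_fov(point, sonar_pos, R_world_to_sonar, max_range,
                   half_angle_h, half_angle_v):
    # True if a world-frame point lies inside a quadrangular-pyramid FoV whose
    # apex is at sonar_pos and whose axis is the sonar x-axis.
    q = R_world_to_sonar @ (np.asarray(point) - np.asarray(sonar_pos))
    if q[0] <= 0.0 or q[0] > max_range:
        return False
    return (abs(q[1]) <= q[0] * np.tan(half_angle_h) and
            abs(q[2]) <= q[0] * np.tan(half_angle_v))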
Index to check
OccGrid tends to be computationally expensive, especially in 3D applications. If we consider a constant cell size, the bigger the map, the greater the time to compute the probability for all the cells. Taking as an example a cubic map with side equal to N meters and cubic cells with side equal to M meters, the number of cells is given by N^3/M^3, from which it is possible to check that the number of cells grows cubically with N. If we directly apply the standard OccGrid algorithm of Algorithm 2.1, a loop with a number of iterations equal to the number of cells is performed and, of course, as the number of cells increases so will the computational time. Our approach avoids this growth by keeping the computational time constant, independently of the map size.
The function index_to_check receives as input the vehicle's position x_t and outputs a vector containing the indexes of the cells that are close to the robot and therefore worth being analyzed by the OccGrid. The algorithm computes a volume — of pre-defined size, proportional to the maximum range of the sonars — around the robot and returns the indexes of all the cells inside that volume. The FoV of the sonar is not taken into account at this stage.
Figure 2.2 illustrates the above. The robot's position is the darker cell, the cells worth being analyzed are represented in white and the cells that will not be analyzed by our algorithm are presented in grey. Naturally, the variable l can be changed, as it is proportional to the maximum range of the sonars used.
Figure 2.2: xy cut of the graphical interpretation of index to check.
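A possible Python sketch of index_to_check is shown below; the grid-geometry arguments (cell size, grid shape) are assumptions of ours, since the thesis does not prescribe a particular data layout:

import numpy as np

def index_to_check(x_t, cell_size, grid_shape, l):
    # Return the indices of all cells inside a cube of half-side l centred on
    # the vehicle position x_t, clipped to the grid limits.
    center = np.floor(np.asarray(x_t, dtype=float) / cell_size).astype(int)
    r = int(np.ceil(l / cell_size))
    lo = np.maximum(center - r, 0)
    hi = np.minimum(center + r + 1, np.asarray(grid_shape))
    ii, jj, kk = np.meshgrid(np.arange(lo[0], hi[0]),
                             np.arange(lo[1], hi[1]),
                             np.arange(lo[2], hi[2]), indexing='ij')
    return list(zip(ii.ravel(), jj.ravel(), kk.ravel()))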
Inverse Sensor Model
The inverse sensor model implements equation (2.4) in its log-odds form:
inverse_sensor_model(m_i, x_t, z_t) = p(m_i \mid z_{1:t}, x_{1:t})    (2.8)
The inverse sensor model applied is an adaptation of the one proposed in [1]. The pseudo-code for the function is presented in Algorithm 2.2. The model receives as inputs the vehicle's current position x_t, the index i of the cell under analysis, and the sonar reading z_t. The algorithm computes whether the distance from the robot to cell i is equal to, smaller than, or bigger than z_t and outputs a value depending on this classification. Figure 2.3 is a graphical explanation of the inverse sensor model. A positive value is returned if dist = z_t — corresponding to the green cell — and a negative one if dist < z_t — blue cell. A dist > z_t corresponds to a cell behind the object; logically, it is not possible to state whether that cell is occupied or not and, therefore, no update is performed on it — red cell. As a trivial conclusion, the higher the log-odds value of a cell, the higher the chance that the cell is occupied.
Figure 2.3: Graphical interpretation of the inverse sensor model.
Algorithm 2.2 Inverse sensor model Algorithm.
1: Inverse_sensor_model(i, x_t, z_t)
2:   r = sqrt((x_i - x)^2 + (y_i - y)^2 + (z_i - z)^2)
3:   k_d = -0.2 · r + 1
4:   if r < z_t
5:     return l_free · k_d
6:   else if r = z_t
7:     return l_occ · k_d
8:   else
9:     return 0
10: end
In addition, a measurement confidence system was also implemented, in which a distance factor multiplies the log-odds assigned to a cell — lines 5 and 7 of Algorithm 2.2. This factor depends on the distance between the i-th cell and the robot. By doing so, the algorithm places greater confidence in nearby cells, and this confidence decreases linearly with distance. The linear function used was experimentally derived and is sonar-type dependent. This change makes the algorithm less prone to outliers near the sonar's maximum range — proven useful with small cells, as shown in Chapter 5.
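Putting Algorithm 2.2 and this confidence factor together, a direct Python transcription might look as follows (l_free, l_occ and the tolerance used to test r ≈ z_t are parameters we assume; the thesis does not state their values at this point):

import numpy as np

def inverse_sensor_model(cell_center, sonar_pos, z_t,
                         l_free=-0.65, l_occ=0.65, tol=0.5):
    # r: distance from the sonar to the centre of cell i (Algorithm 2.2, line 2)
    r = np.linalg.norm(np.asarray(cell_center) - np.asarray(sonar_pos))
    k_d = -0.2 * r + 1.0              # distance-dependent confidence factor (line 3)
    if abs(r - z_t) <= tol:
        return l_occ * k_d            # cell at the measured range: likely occupied
    elif r < z_t:
        return l_free * k_d           # cell in front of the measured range: likely free
    else:
        return 0.0                    # cell behind the obstacle: no information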
2.2.3 Experimental Results
Throughout the following experiments the vehicle's position is considered to be the ground truth at all times. Unfortunately, UT2004 does not provide the real map, so a quantitative comparison is not possible. One might think that by using ground truth localization a real map is built, but this is not completely true. As discussed in Appendix A, USARSim's output values are not constant and may change slightly between experiments. Therefore a qualitative comparison, by visual confirmation or consecutive trials, is the methodology adopted to evaluate the OccGrid results. For a better and easier comprehension of the tests, print screens of the environment where each test is performed are shown.
The OccGrid — Algorithm 2.3 — results for three different situations are shown and discussed. For each situation, a 2D slice of the world with a representation of the log-odds, and a binarized version of that figure given a certain threshold, are presented. 3D representations of the map, along with some notes for better understanding, are also provided. Cells are considered to be cubic with a one-meter side. The results shown are obtained exclusively in online mode.
Algorithm 2.3 OccGrid algorithm applied.
1: OccGrid(x_t, z_t, l_{t-1})
2:   indexes = index_to_check(x_t)
3:   for all sonars
4:     l_t = occupancy_grid_mapping(l_{t-1}, x_t, z_t, indexes)
5:   end
6: return(l_t)
Wall Border
In this experiment a movement around a corner was performed. The screenshot taken from UT2004 is presented in Figure 2.4. The vehicle is indicated by the red arrow and the trajectory is denoted by the black arrow.
Figure 2.4: USARSim's environment print screen.

From Figure 2.5 — where the darker the cell, the higher the probability of it being occupied —
it is possible to verify that the algorithm is able to recognize the wall and successfully map it. Since for the Decision Block (DB) it is only necessary to know whether a cell is occupied — and not its probability — a binarized map is also computed and presented in Figure 2.6. This binarization is performed considering a certain threshold. In the presented case the value chosen was l_t = 0, meaning that all cells with a log-odds value higher than 0 are considered occupied — p(m_i) > 0.5 — and the others, with log-odds lower than or equal to 0, are considered to be free — p(m_i) ≤ 0.5. By analysis of this figure it is possible to notice some situations where the algorithm outputted false positives or false negatives. These inconsistencies, where the algorithm did not manage to fully characterize the obstacle, can be solved if the vehicle acquires more data near the obstacle. Nevertheless, the number of cells with false negatives — situations where it failed to detect an object — is not enough to cause problems to our classifier, as explained in Chapter 3. In both figures the trajectory performed by the vehicle is presented in red.
Figure 2.5: Log-odd representation of a 2D cut of the 3D output map.
From Figure 2.7 it is possible to see that, despite the mentioned problems, we successfully manage to fulfill our objective of making a rough 3D map of the environment. As is noticeable, and due to the small number of sonars used combined with their small FoV and maximum range, the amount of blind spots is significant. Since it is not possible to initially state whether the non-observed cells are free or
Figure 2.6: Binarized version of Figure 2.5.
occupied, it is not safe to fly towards them, as we may be flying into an obstacle. This lack of knowledge is the main motivation for the choice of the binarization threshold used in the Decision Block.
Figure 2.7: 3D plot of the center of mass of the free and occupied cells. Never observed cells are not
represented.
Corner
The area to map in this example is presented in Figure 2.8. During the teleoperation, the
quadcopter was always kept near the walls and at the same height. The vehicle is pointed out by the
red arrow while the trajectory is presented in white.
Figure 2.8: USARSim's environment print screen.

From Figure 2.9 and Figure 2.10 it is possible to notice the false negative and false positive
problems mentioned before. In this test the algorithm was, once again, only allowed to perform a few iterations, so that we could evaluate its performance in low-information scenarios, which therefore limits the OccGrid results.
Figure 2.9: Log-odd representation of a 2D cut of the 3D output map.
During this test, and in order to prove the ability of the algorithm to map with the sonar placed under the vehicle, the quadcopter was held close to the ground. Unfortunately, by doing so, the 3D plot becomes more difficult to interpret. The results are shown in Figure 2.11, and some notes were added to ease the interpretation. In this particular case, more clearly than in the previous example, it is possible to notice the ability of the algorithm to correctly map a 3D world. While maintaining a certain height, the bottom sonar could see the ground and successfully identify it.
Figure 2.10: Binarized version of Figure 2.9.
Figure 2.11: 3D plot of the free and occupied cells. Non observed cells are not represented.
Indoor
In this experiment the mapping of an indoor environment was performed. The trajectory is represented in Figure 2.12 and Figure 2.13 by the black and white arrows. The quadcopter movement started inside the room and went through corridor 1 into corridor 2. Once again, the height was maintained during the entire trajectory.
Figure 2.12: USARSim’s environment print screen.
Figure 2.13: USARSim’s environment print screens.
From Figure 2.14 and Figure 2.15 it is possible to see that the mapping errors are higher than in the previous cases, and we can clearly state that the classifier will not be able to cope with them. In this particular case, roaming around the room and corridor would not solve the problem, since there are too many inconsistencies to correct and the nature of the problem is not the same as in the previous examples. Here the algorithm faces a different limitation: during the presented tests, the cell volume was considered to be one cubic meter. While outdoors the vehicle is able to build a coarse map of the environment with that cell size, in an indoor application it is not. Even for building a rough representation of the environment, a smaller cell size is needed for indoor applications since, for example, the length of the corridor is around 3 meters, which is equivalent to only 3 grid cells.
Figure 2.14: Log-odd representation of a 2D cut of the 3D output map.
Figure 2.15: Binarized version of Figure 2.14.
To prove the ability of the algorithm to successfully map the indoor scenario, the grid cell length was lowered to 10 cm. The obtained results are presented in Figure 2.16 and, after binarization, in Figure 2.17. By comparing the results obtained using different cell sizes, it is possible to see that the algorithm is able to correctly map the indoor environment when using a smaller cell size. However, the cell size is a parameter that has to be carefully chosen, as a smaller value means higher memory usage and computational effort. From both figures it is also noticeable that the algorithm, in some situations, fails to map the doors of the room. This error is inherent to the inverse sensor model used, as the door's open space is replaced by an occupied area due to readings on the door edges. As before, if the quadcopter faced a door directly, the map would be corrected.
Figure 2.16: Log-odd representation of a 2D cut of the 3D output map using a smaller grid cell size.
Figure 2.17: Binarized version of Figure 2.16.
Computational Effort
Table 2.1 presents the computational time comparison¹ between using and not using the Index to Check algorithm. In this test, 100 calls to the OccGrid algorithm were performed with and without the aid of the Index to Check. As can be seen, the usage of this algorithm makes the time spent on mapping independent of the grid size. Although these values may vary, depending on the number of calls to the inverse sensor model, it is possible to infer that the average time consumption for 100 calls of Algorithm 2.4 is 63 ms. From the presented table, it is also possible to check that the improvement brought by this algorithm may reach 98% in the case of the bigger map.

¹ Running in MATLAB 2010a on an Intel Core 2 Duo CPU at 2.53 GHz with 3 GB of RAM.
Table 2.1: Comparison of time consumption, in seconds, using the Index to Check Algorithm. The presented values are means and standard deviations.

                            50x50x50 Grid                        100x100x100 Grid
                            Without [s]     With [s]             Without [s]      With [s]
Time Consumption            0.508 (0.004)   0.063 (0.003)        3.5903 (0.023)   0.0632 (0.004)
Percentual Improvement      —               87.6%                —                98.2%
2.3 Particle Filter
2.3.1 Introduction
The Particle Filter (PF) is a Bayesian state estimation method which approximates the posterior bel(x_t) by a set of weighted particles S_t (2.9). Each particle s_t^{[n]} is a different hypothesis, x_t^{[n]} = [X^{[n]} Y^{[n]} Z^{[n]}]^T, of the state to estimate, x_t = [X_r Y_r Z_r]^T. Multiple particles, considering a total number of N, are used, and to each one a weight w_t is associated, representing the importance of that specific hypothesis.

S_t = \{s_t^{[1]}, s_t^{[2]}, \ldots, s_t^{[N]}\} = \{[x_t^{[1]} \, w_t^{[1]}], [x_t^{[2]} \, w_t^{[2]}], \ldots, [x_t^{[N]} \, w_t^{[N]}]\}    (2.9)
The Particle Filter, similarly to other Bayes filter algorithms, is recursive and computes the belief bel(x_t) from bel(x_{t-1}). In each iteration, a predict and an update step are performed. The predict step models the effects of the control inputs u(t) on each particle of S_{t-1} by sampling from the motion model distribution, thus spreading the particles according to the noise of the motion model. The referenced motion model is identical to the dynamic model applied by USARSim. The second step acts as an evaluation of each one of the N particles. During this phase, a measurement model is used to re-evaluate the weight of each particle based on sensor information. This weight, which defines the contribution of the particle to the estimation of the real position, is updated by the probability of the measurement z_t given the prediction x̄_t. As the weight is proportional to the posterior, the particle with the greatest importance is the one with the highest likelihood and is expectedly the closest to the real position of the robot.
A naive and straightforward application of the PF would perform poorly. After consecutive iterations of the PF algorithm, some particles will drift away, due to the addition of white noise, and will therefore become less important. In such an infeasible approach, computational effort is spent on particles with low weight instead of focusing only on the highly weighted ones. To avoid these situations, where a starvation of high-importance particles may occur, a resampling method is applied. This resampling method duplicates high-weight particles and eliminates low-weight ones, whilst keeping the number of particles constant. The probability of a particle being replicated is, therefore, proportional to its weight. The motion model applied during the predict step guarantees that, in the subsequent iteration of the algorithm, the particles will be spread apart from each other but kept in the same region of space.
The applied 3D Particle Filter is presented in Algorithm 2.4. Since our objective is to know our position relative to the obstacles, as opposed to an absolute localization, all particles are initialized at the same position, i.e., x_0^{[1]} = x_0^{[2]} = \ldots = x_0^{[N]}. The PF's inputs are the current map, the user's inputs, and the last sensor readings. Each particular step is explained further ahead in the Proposed Method section.
Algorithm 2.4 Particle Filter Algorithm.
1: Particle_filter(m, u_t, z_t, h_t)
2:   ∆x_t = motion_model(u_t, ∆x_{t-1})
3:   for all particles N
4:     x̄_t^{[n]} = x_{t-1}^{[n]} + ∆x_t + ω
5:     w_t^{[n]} ∝ p(z_t | x̄_t^{[n]})
6:   end
7:   w_t^{[n]} ← w_t^{[n]} / Σ_n w_t^{[n]}    % weight normalization
8:   S_t ← Select_with_Replacement(w_t)
9:   n* = arg max_{i ∈ {1,...,N}} w_t^{[i]}
10: return(n*)
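An illustrative Python rendering of this loop is given below; motion_model, measurement_likelihood and select_with_replacement abstract the models detailed in the next sections, and all names and the simplified argument list are our own assumptions, not the thesis implementation:

import numpy as np

def particle_filter_step(particles, m, u_t, z_t, h_t, Q,
                         motion_model, measurement_likelihood,
                         select_with_replacement):
    # One iteration of Algorithm 2.4 over an N x 3 array of position hypotheses.
    dx = motion_model(u_t)                                      # predict step (line 2)
    noise = np.random.multivariate_normal(np.zeros(3), Q, size=len(particles))
    particles = particles + dx + noise                          # line 4
    w = np.array([measurement_likelihood(z_t, h_t, m, p) for p in particles])  # line 5
    w = w / w.sum()                                             # normalization (line 7)
    best = particles[np.argmax(w)]                              # most likely hypothesis (line 9)
    particles = select_with_replacement(particles, w)           # resampling (line 8)
    return particles, best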
2.3.2 Proposed Method
Motion Model
As stated before, the predict step is based on a motion model. This model makes a first prediction of the vehicle's position based on its last position x̃_{t-1}^{[n]} and the control inputs u_t. In simulation, the control input used is the velocity command v_command imposed on the vehicle. In a real quadcopter, the approach to build a reliable motion model needs to be based on sensor information, such as a Revolutions Per Minute (RPM) measurement installed in each motor [2]. In (2.10), the control inputs are represented by ∆x, denoting the variation in position caused by the inputs, and ω is assumed to be Gaussian white noise with zero mean and covariance matrix Q_t (2.11).

x̄_t^{[n]} = x̃_{t-1}^{[n]} + ∆x_t + ω    (2.10)

\omega = \mathcal{N}(0, Q_t), \qquad Q_t = \begin{bmatrix} \sigma_x^2 & 0 & 0 \\ 0 & \sigma_y^2 & 0 \\ 0 & 0 & \sigma_z^2 \end{bmatrix}    (2.11)
Before diving into the motion model details, some considerations must be made concerning the simulator. USARSim is, as said before and explored in Appendix A, a simulator based on Unreal Tournament 2004. This video game's physics engine is KARMA, which is a registered product and, consequently, no source code nor development information is provided to the community. Reverse engineering was performed on KARMA to build a realistic physics estimator for this thesis.
Since simulation was used first while developing the algorithm, an approach mimicking the one adopted by USARSim, shown in Figure A.11 of Appendix A, was derived. As discussed in Chapter 6, the need for a different and more realistic model arises in a real quadcopter implementation [2].
To do so, some considerations based on known characteristics of USARSim were made. The most important assumption, used in the following analysis, is that the robot's position will only be computed for the center of mass of the quadcopter, which is assumed to have mass m. Resorting to [18], a method to simulate the dynamics of a rigid body was implemented. Considering x(t) the position of the center of mass at time t and v(t) its velocity at time t, we can define a state vector Y(t) for our system:

Y(t) = \begin{bmatrix} x(t) \\ v(t) \end{bmatrix}    (2.12)
If we consider F(t) to be the sum of all forces acting on the center of mass at time t, the change of Y(t) over time is represented by (2.13). By applying a discretization, our problem becomes the estimation of the next state Y^{t+1} (2.14).

\frac{d}{dt} Y(t) = \frac{d}{dt} \begin{bmatrix} x(t) \\ v(t) \end{bmatrix} = \begin{bmatrix} v(t) \\ F(t)/m \end{bmatrix}    (2.13)

Y^{t+1} = Y^{t} + \Delta t \cdot \begin{bmatrix} v^{t} \\ F/m \end{bmatrix}    (2.14)
Using (2.14) we can compute an estimate of the robot position given a certain F. From experience and USARSim analysis, it was concluded that KARMA applies a drag force to objects. This force was experimentally approximated by the linear expression (2.15).

F_{drag} = -K_{drag} \cdot v(t)    (2.15)
Experiments were conducted to find the K_{drag} which best approximated the USARSim values. Although this approximation fits most experiments, some deviation from the USARSim values remains. This error probably arises from an unaccounted-for or non-measurable parameter inside KARMA. By feeding the sum of all applied forces (2.16) into (2.14), a prediction for the robot's position is obtained.

F = F_{drag} + F_{applied}    (2.16)
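A compact Python sketch of this forward-Euler propagation with the linear drag term follows; the mass, K_drag and time step used in the example are illustrative numbers, not the values identified for USARSim:

import numpy as np

def predict_state(x, v, f_applied, dt, mass, k_drag):
    # One Euler step of (2.14) with F = F_applied + F_drag and F_drag = -K_drag * v, (2.15)-(2.16).
    f = np.asarray(f_applied, dtype=float) - k_drag * np.asarray(v, dtype=float)
    x_next = np.asarray(x, dtype=float) + dt * np.asarray(v, dtype=float)  # position row of (2.14)
    v_next = np.asarray(v, dtype=float) + dt * f / mass                    # velocity row of (2.14)
    return x_next, v_next

# example: constant applied force starting from rest (illustrative values)
x, v = np.zeros(3), np.zeros(3)
for _ in range(10):
    x, v = predict_state(x, v, f_applied=[1.0, 0.0, 0.0], dt=0.1, mass=1.5, k_drag=0.5)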
A comparison between the predict step and the real USARSim values was performed to evaluate the quality of our model. In Figure 2.18 it is visible that the motion model's output is very similar to the real USARSim values. A closer look at Figure 2.19 shows that the prediction follows the performed trajectory at all times, with a small error between its position and the ground truth values.
Figure 2.18: Real position and the prediction step output given a certain trajectory.
Figure 2.19: Error (2.17) obtained during Figure 2.18 trajectory.
If faster movements combined with direction changes (such as a "slalom" motion) are performed, the motion model does not give such good results, as presented in Figure 2.20. Figure 2.21 shows that the motion model deviates by up to eight meters from the real position. However, these results are obtained using only the predict step and will be improved with the use of the complete PF algorithm. The error shown in Figure 2.19 and Figure 2.21 is the Euclidean distance between the real position, given by USARSim, and our estimate given by the motion model (2.17).
error = \sqrt{(\bar{X}_t - X_r)^2 + (\bar{Y}_t - Y_r)^2 + (\bar{Z}_t - Z_r)^2} \qquad (2.17)
Figure 2.20: Real position and the prediction step output given a certain trajectory.
Figure 2.21: Error (2.17) obtained during Figure 2.20 trajectory.
Observation Model
The particle weights are computed in the second step, the update. In this phase, an observation model is used to evaluate the weight of each particle based on sensor information. This weight is updated by the likelihood of the sensor measurements z_t given the prediction x̄_t and the map m (2.18).
p(z_t \mid m, \bar{x}_t) \qquad (2.18)
The weight of each particle results from the joint likelihood of all measurements, given the
map and the path. These measurements are given by (1) sonars and (2) the altimeter. The sonar
readings are modeled with a Gaussian distribution given by (2.19).
P(z_t^i \mid m^{[n]}, \bar{x}_t^{[n]}) = \frac{1}{\sigma_{dist}\sqrt{2\pi}} \, e^{-\left(z_t^i - \bar{d}_t^{[n]}\right)^2 / 2\sigma_{dist}^2} \qquad (2.19)
[n]
where zti stands for measurement of sonar i, m for the map and d¯t is the Euclidean distance between
[n]
position hypothesis x̄t
and the first occupied cell in the map of the n-th particle. Note that equation
(2.19) is applied for each sonar.
The altimeter measurements are modeled with another Gaussian (2.20), where h_t is the altimeter measurement. By using the altimeter readings it is possible to significantly reduce the uncertainty along the vertical axis. The final weight w_t^[n] of each particle is the product of all involved likelihoods, assuming conditional independence given the robot position and map (2.21).
P(h_t \mid m^{[n]}, \bar{x}_t^{[n]}) = \frac{1}{\sigma_{alt}\sqrt{2\pi}} \, e^{-\left(h_t - \bar{Z}_t^{[n]}\right)^2 / 2\sigma_{alt}^2} \qquad (2.20)

w_t^{[n]} = P(h_t \mid m^{[n]}, \bar{x}_t^{[n]}) \cdot \prod_i P(z_t^i \mid m^{[n]}, \bar{x}_t^{[n]}) \qquad (2.21)
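As an illustration of (2.19) to (2.21), the sketch below computes the weight of a single particle from its sonar and altimeter likelihoods. It assumes that the expected distances d̄_t to the first occupied cell of the particle's own map have already been obtained by ray casting, which is not shown; the numeric readings in the example are made up.

import numpy as np

def gaussian_likelihood(measured, expected, sigma):
    # Gaussian likelihood used in both (2.19) and (2.20).
    return np.exp(-(measured - expected) ** 2 / (2.0 * sigma ** 2)) / (sigma * np.sqrt(2.0 * np.pi))

def particle_weight(sonar_readings, expected_distances, altimeter_reading, particle_z,
                    sigma_dist=2.0, sigma_alt=2.0):
    # Weight of one particle as the product of all likelihoods, eq. (2.21).
    # expected_distances[i] is the ray-cast distance d_bar to the first occupied cell
    # of this particle's own map along sonar i (the ray casting itself is not shown).
    weight = gaussian_likelihood(altimeter_reading, particle_z, sigma_alt)        # (2.20)
    for z_i, d_bar in zip(sonar_readings, expected_distances):
        weight *= gaussian_likelihood(z_i, d_bar, sigma_dist)                     # (2.19)
    return weight

# Example with made-up readings for a single particle.
print(particle_weight(sonar_readings=[1.8, 4.9, 3.2],
                      expected_distances=[2.0, 5.0, 3.0],
                      altimeter_reading=10.2, particle_z=10.0))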
The importance of the altimeter readings in the observation model is stressed in Table 2.2. The input values, and as a consequence the trajectory, are equal in all tests. The improvement brought to the mean error is statistically significant (one-sided t-test, p < 0.05). The error presented is, once again, the Euclidean distance between the particle with maximum weight x̃_t (2.22) and the ground truth position x_t (2.23). Tests were performed using a pre-mapped area for careful analysis of this particular situation.
\tilde{x}_t = \bar{x}_t^{[n^*]}, \quad \text{where} \quad n^* = \arg\max_{i \in \{1,\dots,N\}} w_t^{[i]} \qquad (2.22)

error = \sqrt{(\tilde{X}_t - X_r)^2 + (\tilde{Y}_t - Y_r)^2 + (\tilde{Z}_t - Z_r)^2} \qquad (2.23)
Table 2.2: Algorithm results with and without the altimeter observation.

                          Without Altimeter                With Altimeter
                          Mean [m]         Max [m]         Mean [m]          Max [m]
Error obtained            1.996 (0.2565)   3.966 (0.3912)  1.5728 (0.3881)   3.036 (1.2571)
Percentage improvement    —                —               19.9%             23.4%
Resampling
Different resample algorithms have been developed by the community [19]. In the present work,
the implemented method was the Select with Replacement shown on Algorithm 2.5, as described in
[20]. The algorithm computes a cumulative sum of the particle weights and T ∈ [0, 1] numbers are
sorted. Afterwards, the number of sorted numbers that fall in each interval of the cumulative sum
represents the number of times this particle is going to be replicated. The background concept is
that if a particle has small weight, the equivalent cumulative sum is small and the odds of any random
number appearing in it are, therefore, very small. On the other hand, the higher the weight, the higher
the equivalent cumulative sum, the higher the chance that many random numbers are found in it. The
function’s output is a vector containing the particles which will be replicated.
Algorithm 2.5 Select with replacement algorithm.
1: Select with Replacement(w_t)
2:   Q = cumulative_sum(w_t)
3:   T = sort(rand(N))
4:   While i ≤ N
5:     if T[i] < Q[j]
6:       Index[i] = j
7:       i = i + 1
8:     else
9:       j = j + 1
10:    end
11:  end
12: return(Index)
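A direct Python translation of Algorithm 2.5 is sketched below; the indices i and j start at the first particle, and the last cumulative value is clamped to 1 to guard against floating point rounding.

import numpy as np

def select_with_replacement(weights, rng=None):
    # Algorithm 2.5: returns the indices of the particles to be replicated.
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalized weights
    n = len(w)
    cumulative = np.cumsum(w)            # Q = cumulative_sum(w_t)
    cumulative[-1] = 1.0                 # guard against floating point rounding
    thresholds = np.sort(rng.random(n))  # T = sort(rand(N))
    indices = np.empty(n, dtype=int)
    i = j = 0
    while i < n:                         # while i <= N (1-based in the algorithm)
        if thresholds[i] < cumulative[j]:
            indices[i] = j               # particle j captured one more random number
            i += 1
        else:
            j += 1
    return indices

# Example: the dominant second particle is replicated most of the time.
print(select_with_replacement([0.05, 0.7, 0.05, 0.2]))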
Regardless of the resampling method adopted, an effective criterion to decide when resampling should be performed is required. Liu [21] introduced the effective number of particles N_eff (2.24).

N_{eff} = \frac{1}{\sum_{n=1}^{N} \left(w_t^{[n]}\right)^2} \qquad (2.24)
This metric evaluates how the weights of the particles evolve over time. A constant value of N_eff means that the new information does not help to identify unlikely hypotheses among the particles, whereas a decrease in N_eff indicates that some particles are less likely than others. If, in an iteration, N_eff falls below a given threshold, set as a percentage of N, a resampling is applied.
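A minimal sketch of this criterion, with an assumed threshold of 50% of N:

import numpy as np

def effective_number_of_particles(weights):
    # N_eff from (2.24): the inverse of the sum of the squared (normalized) weights.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)

# Resample only when N_eff drops below a fraction of N; the 50% threshold is an assumed value.
weights = [0.01, 0.90, 0.04, 0.05]
if effective_number_of_particles(weights) < 0.5 * len(weights):
    print("resampling triggered")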
Estimation
Since we are considering several different hypotheses for our position, a method to provide a single position that, at each moment, is considered the best estimate of the real position of the vehicle is mandatory. Several ways to compute this estimate exist but, in our particular case, most of them are not feasible. Straightforward methods such as the mean or the weighted mean of the particles cannot be used because, along with the hypothesis for a position at time t, each particle also carries its own map. The i-th particle's map was built according to that particle's own trajectory, and merging maps is impracticable as erroneous values would appear in the result.
Given these limitations, the best way to estimate the robot's position is to choose the particle with the maximum likelihood in the particle set (2.25). This approach causes the estimated pose to oscillate, as the best particle at time t may not be the best particle at time t + 1.
In a different localization problem, better estimators could be applied, such as a selective weighted mean, where oscillations are smoothed and problems with multi-modal distributions can be avoided.
\hat{x}_t = \bar{x}_t^{[n^*]}, \quad \text{where} \quad n^* = \arg\max_{i \in \{1,\dots,N\}} w_t^{[i]} \qquad (2.25)

2.3.3 Experimental Results
In this section an evaluation of our implementation of the Particle Filter is performed. In order to evaluate only the localization performance, the algorithm receives the map as an input. As stated before, no ground truth values of the map are available, so we previously mapped the area using the real position and then fed the resulting map to the PF algorithm. Therefore, throughout these tests, mapping is not performed.
As before, results are obtained using an online version of the algorithm. The total number of particles is N = 100 and the covariance matrix for the error is (2.26). The observation model uses σ_dist = 2 and σ_alt = 2. Once again, the sonars used have a FoV = 30° and a maximum range of 5 meters.
Q_t = \begin{pmatrix} 0.1 & 0 & 0 \\ 0 & 0.1 & 0 \\ 0 & 0 & 0.1 \end{pmatrix} \qquad (2.26)
Wall border
In this test the map achieved in Figure 2.7, page 14, was extended and a trajectory was performed around it. The USARSim environment screenshot is the one presented in Figure 2.4, page 13. The detailed results obtained from the PF's localization are presented below. From Figure 2.22 it is evident that the predict step performed well in this trajectory. The error of the predict step during the flight is plotted in Figure 2.23.
Figure 2.22: Real position and the prediction step output given a certain trajectory.
Figure 2.23: Error (2.17) obtained during the Figure 2.22 trajectory.
Figure 2.24 represents the best particle (2.25) at each iteration. From Figures 2.25, 2.26, and 2.27 it is clear that the PF managed to estimate the position of the robot. Since only the best particle at each iteration is presented, the expected oscillations in the estimation are noticed. It is also clear from the figures that the algorithm performs poorly in some situations. It is known that PF results tend to degrade in situations lacking information, such as corridors [1]. Unfortunately, we face many cases where only one axis is limited by an obstacle. If we consider the initial iterations of the algorithm as an example, we can see that the quadcopter has a good estimate along its x axis but has no information to improve its estimate along y or z. When the vehicle reaches situations where that ambiguity is resolved, such as passing by corners, an improvement in the localization is noticed.
Figure 2.24: 3D representation of the real position and PF’s output.
Figure 2.25: xy cut of Figure 2.24.
Figure 2.26: xz cut of Figure 2.24.
Figure 2.27: yz cut of Figure 2.24.
In Figure 2.28 the error is split into x, y and z. We can verify that at the beginning of the movement the error in the z axis is the greatest. Although the observation model uses the height, the σ_alt value cannot be too strict: if that value is lowered too much, the z estimate will improve but, since we are using a limited number of particles, a starvation problem may occur and the total error will increase. The deviation along the x and y axes arises from the already explained PF limitation. The error (2.23) obtained is presented in Figure 2.29.
error_x = \hat{X}_t - X_r \qquad (2.27)

error_y = \hat{Y}_t - Y_r \qquad (2.28)

error_z = \hat{Z}_t - Z_r \qquad (2.29)
Figure 2.28: Detailed error for x, y and z axis.
Figure 2.29: Error between real position and the PF’s estimation.
Low Corner
For this test the map obtained from Figure 2.11, on page 16, was fed to the PF. The set-up defined for this test is equal to the previous one. Once again, and as in the last example, the predict step results were accurate, Figure 2.30, with a low error relative to the ground truth values, Figure 2.31.
Figure 2.30: Real position and the prediction step output given a certain trajectory.
Figure 2.31: Error (2.17) obtained during Figure 2.30 trajectory.
In this case the PF results were not as good as the previous ones. Nevertheless, it is possible to observe in Figure 2.32 a particular situation in which, despite being lost at a certain time, the algorithm managed to recover its position. The highlighted situation, which happened twice, is pointed out by two black arrows. Despite the error inherent to being lost, these particular results show that the algorithm is performing well and can recover from adverse situations. The error is plotted in Figure 2.33.
Another important detail to notice is the error in the z axis plotted in Figure 2.34. By comparing with the previous example it is possible to notice that the error along z is much smaller. This results from the vehicle flying close to the ground: the observation model can then evaluate particles using sonar readings from below the robot. Unfortunately the same does not happen with the y and x axes, as little information is available at the beginning of the movement. When the vehicle starts moving closer to walls, an improvement in the position estimate is noticed. The lack of information during the flight is, once again, intentional in order to evaluate the performance of the algorithms in harsh situations.
Figure 2.32: xy cut of the real position and PF’s output.
Figure 2.33: Error between real position and the PF’s estimation.
Figure 2.34: Detailed error for x, y and z axis.
Indoor
This test was performed in an indoor environment, inside the room shown in Figure 2.35. The map used is presented in Figure 2.36. It is clear that this map does not have the same level of detail as the original since, in our case, doors are not considered. Unfortunately all attempts to map this area correctly with a higher level of detail were unsuccessful, due to the difficulty in mapping doors, as previously discussed. The set-up adopted is equal to the one in the previous examples.
Figure 2.35: USARSim environment screenshots.
Figure 2.36: Map used in the localization algorithm.
The results obtained are presented in Figure 2.37. It is possible to check that the algorithm was able to keep track of the true position of the robot. From Figure 2.38 it is also clear that the error along z is the highest among the three axes. Since we are in an indoor scenario, the sonars below and above the quadcopter should be able to help in localization. The error obtained is nearly constant throughout the flight, so the most probable cause is the starting map. Nevertheless, the algorithm managed to localize the robot with considerable accuracy, Figure 2.39.
Figure 2.37: xy cut of the real position and PF’s output.
Figure 2.38: Detailed error for x, y and z axis.
Figure 2.39: Error between real position and the PF’s estimation.
2.4 FastSLAM
The applied algorithm is presented in Algorithm 2.6. The FastSLAM's inputs are all sensor readings (IMU, sonars, altimeter) and the user's commands. The outputs are the position and map of the best particle s_t^[n*] of the set S_t.
Algorithm 2.6 FastSLAM algorithm
1: FastSLAM(x_{t-1}, u_t, z_t, h_t)
2:   ∆x_t = motion_model(u_t, ∆x_{t-1})
3:   for all particles N
4:     x̄_t^[n] = x_{t-1}^[n] + ∆x_t + ω
5:     w_t^[n] ∝ p(z_t | x̄_t^[n]) · p(h_t | x̄_t^[n])
6:   end
7:   w_t^[n] ← w_t^[n] / Σ_n w_t^[n]        % weight normalization
8:   r = Select with Replacement(w_t)
9:   for j = 1 : length(r)
10:    OccGrid(x_t^[j], z_t, l_{t-1}^j)
11:  end
12:  S_t = S_t(r)
13:  n* = arg max_{i∈{1,...,N}} w_t^[i]
14: return s_t^[n*]
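The sketch below strings the previous pieces together into one FastSLAM iteration in Python. The observation model and the OccGrid update are replaced by trivial stubs, and numpy's weighted sampling stands in for Algorithm 2.5, so this only illustrates the control flow of Algorithm 2.6, not the actual implementation.

import numpy as np

rng = np.random.default_rng(0)
N = 10                                   # number of particles used in the experiments
Q_DIAG = np.array([0.1, 0.1, 0.1])       # diagonal of Q_t, as in (2.26)

# Trivial stubs: the real observation model (2.19)-(2.21) and the OccGrid update
# are described elsewhere; these placeholders only keep the sketch runnable.
def observation_likelihood(sonar_z, altimeter_h, position, grid):
    return float(np.exp(-np.linalg.norm(position) ** 2))

def update_occupancy_grid(grid, position, sonar_z):
    return grid

def fastslam_step(particles, maps, delta_x, sonar_z, altimeter_h):
    # Predict: shared displacement from the motion model plus per-particle noise (line 4).
    noise = rng.normal(0.0, np.sqrt(Q_DIAG), size=particles.shape)
    particles = particles + delta_x + noise
    # Update: weight each particle with its own map (line 5) and normalize (line 7).
    weights = np.array([observation_likelihood(sonar_z, altimeter_h, p, m)
                        for p, m in zip(particles, maps)])
    weights /= weights.sum()
    # Resample (line 8): numpy's weighted sampling stands in for Algorithm 2.5.
    idx = rng.choice(len(weights), size=len(weights), p=weights)
    particles = particles[idx]
    # Map update for every surviving particle (lines 9-11).
    maps = [update_occupancy_grid(maps[i], particles[k], sonar_z) for k, i in enumerate(idx)]
    # Output the best particle of the set (lines 13-14).
    best = int(np.argmax(weights[idx]))
    return particles, maps, particles[best], maps[best]

particles, maps = np.zeros((N, 3)), [None] * N
particles, maps, best_pose, best_map = fastslam_step(
    particles, maps, delta_x=np.array([0.1, 0.0, 0.0]), sonar_z=[2.0] * 6, altimeter_h=1.0)
print(best_pose)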
2.5 Experimental Results
During the following experiments, N = 10 particles, an error covariance matrix equal to (2.26), σ_dist = 2, and σ_alt = 2 were considered. The user's commands and ground truth attitude values are the only inputs for this part of the work. Neither aid in localization nor any prior knowledge of the map is fed to the algorithm. As stated earlier, our goal is to have a version of the algorithm that can perform in real time, therefore all results were obtained using an online FastSLAM. Sonars are, once again, limited to a maximum range of 5 meters and a FoV = 30°. This set-up is maintained throughout all the experiments.
Wall border
For a first validation, the wall edge of Figure 2.4, on page 13, was used once again. A trajectory was performed with careful attention to keeping the vehicle close to walls, so that information for the OccGrid and for the observation model could be provided by the sonars.
Predict model results are presented in Figure 2.40. As expected from former experiments, the predict step had a very good performance, accurately tracking the position of the robot by accounting only for the user's inputs. The particle with the highest weight at each iteration (2.25) is plotted in Figure 2.41. It is possible to notice some deviation from the real position. The main question to be discussed is whether the observed deviation can compromise the performance of our Decision Block.
Figure 2.40: Real position and the prediction step output given a certain trajectory.
Figure 2.41: 3D representation of the real position and PF’s output.
At first glance, the error between our localization estimate and the real position, Figure 2.42, seems difficult to overcome. But by comparing the estimated position with the obtained map, Figure 2.43 and Figure 2.44, we can check that the obtained map drifts consistently with the estimated position. Since the objective of this thesis is not to build a precise map but to be able to recognize obstacles and perform localization relative to them, it is possible to state that the results obtained fit our needs.
From Figure 2.45 and Figure 2.46 it is possible to notice the limitations of the PF in situations with ambiguous information. In the situations pointed out by the black arrows, the robot does not have
Figure 2.42: Error between the real position and the PF's estimation.
Figure 2.43: yz cut of Figure 2.41.
Figure 2.44: PF’s map output.
any source of information that allows it to localize correctly along the x axis and, in this type of situation, large errors may appear. These ambiguities are resolved and the results improve as soon as the algorithm receives some information concerning that axis.
Figure 2.45: Detailed error for x, y and z axis.
Figure 2.46: xz cut of Figure 2.41.
The results obtained are satisfactory and precise enough for our application. An approach that would substantially improve the results would be to increase the total number of particles. Unfortunately, by doing so, the associated computational effort also increases. As discussed in Chapter 6, the high computational cost is one of the main drawbacks of our approach and, therefore, to allow the algorithm to perform in real time, a lower number of particles was chosen. Further discussion on possible improvements to the results and computation is addressed in Chapter 6.
Corner
With this test we wanted to expose our algorithm to a more challenging situation. The environment chosen was the corner, already used for the OccGrid tests, represented in Figure 2.11 on page 16. As in the previous example, the algorithm is performing an online FastSLAM. Figure 2.47 presents an xy cut of our position estimate, while Figure 2.48 presents the obtained map. Once again, the deviation of the position from the real values is reflected in the mapping performed. Despite the error obtained, Figure 2.49, and keeping in mind the limitations previously discussed, it is possible to state that the FastSLAM is outputting satisfactory results.
Figure 2.47: xy cut of the real position and PF’s output.
Figure 2.48: xy cut of the map output.
Although the error obtained during this experiment is large, with the major drawback of being cumulative, it does not affect the results of our final solution, as is shown in the last sections of this thesis. As said before, a correct map of the environment is not our primary goal. The ability to avoid crashing the vehicle only depends on our position in relation to the objects and, therefore, as long as our position estimate is consistent with the map, our algorithm manages to deal with the inconsistencies.
Figure 2.49: Error between the real position and the PF's estimation.
The reader might imagine a situation where an obstacle is mapped at the beginning of the flight and, after flying away from the obstacle for some time, the pilot orders the quadcopter to fly closer to the previously spotted object. From the experience acquired, and from the examples shown before, it is possible to state that the localization error at this stage of the flight is higher than at the beginning, so the quadcopter will perceive the obstacle in a different place. Two different situations may arise from this case. If the error is manageable by the PF, the particles will change weights and the algorithm will correct its localization. If the error is too high to be corrected by the PF, the OccGrid will correct the maps and the object will be treated as a new one. Therefore, the full architecture is able to cope with this kind of situation.
Chapter 3
Decision Block

3.1 Problem Statement
This chapter focuses on the definition of a classifier based on a previously estimated binarized map m, position x̂_t and current velocity v̂_t. The binarization considers a cell free only if p(m^i_occupied) < 0.5, so that non-observed cells, initialized with a prior of p(m^i_occupied) = 0.5, are treated as occupied. Note that the decision is solely based on the map and no direct readings from sonars are used. By analysis of the map, it is possible to predict the distance to the object and, by considering that the vehicle maintains the current speed, it is also possible to compute the Time To Collision (TTC), the time needed for the vehicle to reach the object if the velocity is kept unchanged.
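A minimal sketch of this conservative binarization, where unobserved cells keep their 0.5 prior and are therefore treated as occupied:

import numpy as np

def binarize_map(occupancy_probabilities):
    # A cell is treated as free only if p(occupied) < 0.5, so unobserved cells,
    # which keep their 0.5 prior, are conservatively treated as occupied.
    return np.asarray(occupancy_probabilities) >= 0.5    # True means occupied

# Example: an observed-free cell, an unobserved cell and an occupied cell.
print(binarize_map([0.2, 0.5, 0.9]))    # [False  True  True]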
Once the TTC is known a multiplexer decides an action according to it. This action can be:
• The vehicle is not in danger, keeps the user with full control;
• The vehicle is in danger, overrides the user’s inputs to avoid colliding;
• The vehicle is in an imminent collision situation, applies an evasive maneuver.
Keeping the robot safe at all times causes the algorithm to be strict in many cases. Nevertheless, it is possible to lower the algorithm's stringency. The existence of profiles, where pilots can tune the algorithm to best suit their needs, was considered from the start. Our final solution aims to be smooth, user friendly and to ensure the quadrotor's safety without removing the ability to operate the vehicle close to objects. The algorithm applied is represented in Figure 3.1 and described in Algorithm 3.1. A careful analysis of implementation details is presented in the section below.
Figure 3.1: Flow chart of the Decision Block.
Algorithm 3.1 DB algorithm
1: Decision Block(x_t, v_t, m)
2:   Azimuth and elevation computation
3:   i = volume_check(x_t, azimuth, elevation, m)
4:   dist = sqrt((x − x_cell(i))² + (y − y_cell(i))² + (z − z_cell(i))²)
5:   TTC = dist / v̂_t
6:   v_command = Classifier(TTC)
7:   Apply v_command
8: end
3.2 Proposed Method
To successfully avoid crashes, the position of obstacles needs to be known, as well as the direction in which the robot is flying. Thus, the need for a volume check (VC) in the direction of the movement arises. This volume check corresponds to the extrusion of a square centered on the quadcopter's position along the velocity vector, as illustrated in Figure 3.2. The size of this square, b, encompasses the quadrotor volume, while a is a visibility bound. Defining this volume enables the algorithm to know which grid cells are in the future path of the vehicle and to find the position of the closest occupied cell.
Figure 3.2: Volume to check along the desired velocity.
Time to Collision
Time to collision is considered to be the time (3.1) remaining for the vehicle to collide with
an object at distance d if velocity v is maintained, as presented in Figure 3.3.
T_{collision} = \frac{d}{v} \qquad (3.1)
Figure 3.3: TTC illustration (3.1).
Given the robot's actual position and the current map, an efficient search for obstacles in the movement direction needs to be performed. Since the current velocity is a state variable, we consider it to be available at all times. With the velocity estimate and the distance to the closest cell, a direct use of (3.1) computes the TTC. On the other hand, finding the closest occupied cell is not as easy.
Although the concept of VC has already been introduced, the direction of the velocity is a
problem not yet addressed. After some transformations, and considering the type of velocities applied
to the system as described in Appendix A, the direction of the velocity can be fully defined using
azimuth α (3.2) and elevation β (3.3).
\alpha = \tan^{-1}\left(\frac{v_{lateral}}{v_{linear}}\right) \qquad (3.2)

\beta = \tan^{-1}\left(\frac{v_{altitude}}{\sqrt{v_{linear}^2 + v_{lateral}^2}}\right) \qquad (3.3)
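The direction and the TTC can be computed directly from these expressions; the sketch below uses arctan2 instead of a plain arctangent to preserve the quadrant information, and the example values are arbitrary.

import numpy as np

def movement_direction(v_linear, v_lateral, v_altitude):
    # Azimuth (3.2) and elevation (3.3) of the velocity vector, in radians.
    azimuth = np.arctan2(v_lateral, v_linear)
    elevation = np.arctan2(v_altitude, np.hypot(v_linear, v_lateral))
    return azimuth, elevation

def time_to_collision(distance, speed):
    # TTC from (3.1); an empty volume check maps to an infinite TTC.
    if distance is None or speed <= 0.0:
        return np.inf
    return distance / speed

# Example: flying mostly forward with a slight climb, obstacle 6 m ahead at 2 m/s.
azimuth, elevation = movement_direction(v_linear=2.0, v_lateral=0.3, v_altitude=0.5)
print(np.degrees(azimuth), np.degrees(elevation), time_to_collision(6.0, 2.0))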
After knowing the direction of the movement, the algorithm computes whether a cell is inside the VC, Figure 3.2, and saves the index of the closest occupied cell. To do so, the algorithm resorts to planar equations to define the desired prism, followed by a cycle through the cells, previously selected by index, computing which one is closest to the robot. With the index and the respective distance to the robot, the algorithm is able to compute the TTC.
Classifier
At this phase the TTC has been computed, allowing us to classify the danger and choose the appropriate action. Figure 3.4 represents the 4 levels of threat considered and the corresponding actions. Experimental tests were conducted to find the best values for an efficient and accurate action concerning the position and velocity of the robot.
Figure 3.4: Levels of threat.
The characteristics of each level can be summarised as:
• NA - No Action: For the given TTC the algorithm considers that no threat exists and no action is performed on the inputs, meaning the vehicle is fully controlled by the user. Two relevant cases are contemplated in this level: (1) there are no objects inside the VC, corresponding to an infinite TTC; (2) the TTC is still high enough for the algorithm to consider that the robot is not in danger.
• SLOW - Action: Velocity is lowered: If the TTC falls into the given interval, the quadcopter is considered to be in medium danger and the user's inputs will be limited. The vehicle's velocity is immediately lowered according to (3.4)

v_{command} = k(TTC) \cdot v_{current} \qquad (3.4)

where v_command stands for the next velocity to be ordered to the vehicle and k(TTC) is a function. The reason for k to be a function of the TTC, instead of a constant, is the ability to output a different v_command depending on the situation faced. As an example, if the TTC is high, the algorithm will first apply a small decrease in the velocity; if the TTC is still inside the SLOW level but has a very small value, the decrease in velocity has to be larger. The existence of different responses inside the SLOW danger level is crucial to successfully avoid collisions. Since k < 1, it slows down the vehicle until the TTC reaches a safe value, thus causing the decision block to leave the SLOW threat level. A sketch of such a velocity-scaling classifier is given after Table 3.1.
• STOP - Action: The vehicle is stopped: At this level, the threat is considered to be high and a velocity equal to zero is immediately imposed on the robot, making it stop moving in that direction and hover.
The thresholds defined for this level are based on an experimental test of the Time To Stop (TTS). Experiments were conducted to find the distance traveled by the robot after the stopping order. Results are presented in Table 3.1. From the results we can acknowledge that the progression of D/V is linear. This linear behavior of USARSim is the justification for the threshold values having no velocity dependency, i.e., the thresholds presented in Figure 3.4 do not depend on velocity.
STOP thresholds are set in a way that always guarantees that, given TTC < TH_STOP, the vehicle will stop before hitting the object.
• EM - Action: Evasion Maneuver: At this level the threat is considered to be extremely high. If the TTC falls inside the EM interval, it means that simply ordering the quadcopter to hover, as in the STOP level, is not enough: the distance traveled by the vehicle before stopping is greater than the distance to the object, and crashing is, in this case, inevitable. The solution to avoid a collision in this extreme situation is to impose a velocity in the opposite direction of the actual movement. By doing so, the distance traveled during deceleration is lower than in the STOP stage, as observed in Table 3.1. Since EM is supposed to be applied as a last resort, only one velocity specification is set. The velocity applied has a value v_command = v_max and a direction opposite to the current movement. This EM is applied for a short duration, just to guarantee that the vehicle does not hit the object, and a hovering order is commanded next.
Table 3.1: Comparison between the maximum, minimum and mean value of the distance traveled concerning a set of twenty trials executed at full speed.

Distance [m]    Min     Average     Max
STOP            8.25    8.43        8.71
EM              4.02    4.65        4.98
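The sketch below illustrates a possible Classifier covering the four levels, including the velocity scaling of (3.4). The thresholds and the particular shape of k(TTC) are placeholder choices; the thesis tuned these values experimentally (Figure 3.4) and they are not reproduced here.

import numpy as np

# Threat-level thresholds in seconds: placeholder values, tuned experimentally in the thesis.
TH_SLOW, TH_STOP, TH_EM = 6.0, 3.0, 1.5
V_MAX = 5.0     # maximum commanded speed [m/s]

def k(ttc):
    # Velocity-scaling factor of (3.4): close to 1 near TH_SLOW, close to 0 near TH_STOP.
    return 0.9 * np.clip((ttc - TH_STOP) / (TH_SLOW - TH_STOP), 0.0, 1.0)

def classify(ttc, v_current):
    # Map the TTC to a threat level and to the velocity override v_command.
    if ttc >= TH_SLOW:
        return "NA", v_current               # no action: the user keeps full control
    if ttc >= TH_STOP:
        return "SLOW", k(ttc) * v_current    # (3.4), k < 1 slows the vehicle down
    if ttc >= TH_EM:
        return "STOP", 0.0                   # hover immediately
    return "EM", -V_MAX                      # brief evasion opposite to the current movement

for ttc in (np.inf, 8.0, 4.0, 2.0, 1.0):
    print(ttc, classify(ttc, v_current=5.0))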
One might question the need for the STOP and SLOW stages, since we can efficiently avoid obstacles at smaller distances. The reason for these stages is the user's experience during the flight. Our algorithm aims to be an aid for the pilot and not to behave in an unexpected way. If EM were set as the standard collision avoidance methodology, the user might feel uncomfortable and would possibly not fully understand the movement the quadcopter was performing. Another reason is safety. Our approach states that the vehicle's safety should be kept at all times and, of course, STOP and SLOW are safer methodologies than EM. It is always expected that the vehicle stops before the need for EM arises but, once again, safety has the highest weight.
Unknown Areas
Consider as an example that the vehicle is ordered to move towards an unknown part of the map. Since no information regarding those cells is yet known, the algorithm will consider them occupied for safety reasons and will perform the collision avoidance routine. Therefore, the above mentioned levels are equally applied, although in this particular case the program needs to rely on direct information from the sonars. A problem occurs whenever the pitch or roll of the vehicle is higher than half the field of view of the sonar, i.e., the sonar will not be able to check whether the volume in the desired direction is occupied or not, as it is not pointing there. This situation is presented in
Figure 3.5 in the first and last sub-figure. The presented figure is a simplification of the simulation as
only one sonar is represented. A linear relation exists between the velocity and the attitude angles.
Figure 3.5: Relation between the attitude of the vehicle and the area covered by the sonar.
If the user orders a movement towards an unknown area of the map and no sonar is pointing directly in that direction, an active perception maneuver is applied. The algorithm autonomously points the nearest sonar in that direction. This way, the area is scanned before the movement is performed, guaranteeing the safety of the vehicle.
By direct application of the computation and selection of the danger level, it is possible to conclude that a SLOW order is given and the velocity will decrease. The sonar will then point back towards the direction of the movement, due to the velocity reduction, and the distance to the object is updated. The algorithm will cause the vehicle to alternate between acceleration and deceleration in order to maintain knowledge of the occupancy of the cells towards which it is heading. The emergence of this behavior can be seen in our experiments described in Chapter 4.
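A minimal sketch of the trigger for this active perception maneuver, checking whether the desired movement direction falls inside the field-of-view cone of any sonar; the sonar axes in the example and the 30° FoV are the only assumptions.

import numpy as np

FOV_HALF_ANGLE = np.radians(15.0)    # half of the 30 degree sonar field of view

def sonar_covers_direction(sonar_axis, desired_direction, half_angle=FOV_HALF_ANGLE):
    # True if the desired movement direction lies inside the sonar's FoV cone.
    a = np.asarray(sonar_axis, dtype=float)
    d = np.asarray(desired_direction, dtype=float)
    cos_angle = np.dot(a, d) / (np.linalg.norm(a) * np.linalg.norm(d))
    return np.arccos(np.clip(cos_angle, -1.0, 1.0)) <= half_angle

def needs_active_perception(sonar_axes, desired_direction):
    # Trigger the active perception maneuver when no sonar covers the desired direction.
    return not any(sonar_covers_direction(a, desired_direction) for a in sonar_axes)

# Example: a sonar tilted with the vehicle's pitch no longer covers a horizontal movement.
print(needs_active_perception(sonar_axes=[[0.6, 0.0, 0.8]], desired_direction=[1.0, 0.0, 0.0]))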
Validate
This block performs a first evaluation of the user's inputs. With the same approach as for the TTC computation, the distance to the first occupied cell is computed. The main difference between this step and the normal TTC computation is the input: while in the TTC the input is the current velocity of the vehicle, in this case it is the desired velocity inputted directly by the user.
The objective of this first filtering is to avoid movements that compromise the vehicle's safety in the very near future. The vehicle only performs the movement if the distance to the first obstacle is higher than a given threshold. Not having this first limitation imposed on the user might cause difficult situations, since a movement towards an object could be ordered and, if the distance were too short, the algorithm would not have time to avoid the crash. As stated before, the threshold may be tuned to best fit the user's needs.
3.3 Experimental Results
Throughout this section, results obtained for the decision block are presented. The results were obtained using the ground truth localization and map. Since the ground truth map is used, some routines are not shown in these results, as the map does not have any unknown area. For a careful analysis of the DB's performance, velocities (desired and the current estimate) and positions are compared for each different situation. All tests were performed at the vehicle's maximum velocity and the examples considered were all executed on the same obstacle with the same starting position. The key aspect to focus on is the action the algorithm performs under a certain level of danger.
SLOW
The v_command is presented in Figure 3.6 and corresponds to the movement shown in Figure 3.7. The user inputs a constant value of 5 m/s to the vehicle for the entire duration of the flight. In Figure 3.7 the obstacle is represented in red, blue represents the positions of the quadcopter under the user's commands, and green denotes the positions of the quadcopter during the override.
Figure 3.6: Comparison between user's inputs and current velocity. The user's inputs remain equal to 5 m/s for the entire movement.
Figure 3.7: Vehicle’s position according to the obstacle. Movement performed is from left to right.
By analysis of Figure 3.6, it is clear that, starting from the fifth second, which corresponds to the first green point in Figure 3.7, a different velocity is imposed on the vehicle while the user's inputs remain the same. We can also check that the algorithm takes around 7 seconds to reduce the velocity to zero. From that moment on the vehicle will not perform any movement until the user's inputs change. From Figure 3.7, and as a future reference, it is possible to acknowledge that the vehicle stopped at approximately 7 meters from the wall.
STOP
In this experiment the same set-up as in the last example was applied but this time, instead of the SLOW stage, we tested the STOP level action. As in the previous example, it is possible to notice, from Figure 3.8, the decrease in the current vehicle's velocity despite the user's inputs. The difference between both levels is the time to stop, and the consequent distance traveled, Figure 3.9. In this particular case, the time spent for the robot to assume the hover position was three seconds and at the end of the movement the distance to the obstacle was 10 meters. This level is, as explained in the proposed method, harsher than SLOW and will only be used in near-collision situations.
Figure 3.8: Comparison between user's inputs and current velocity. The user's inputs remain equal to 5 m/s for the entire movement.
Figure 3.9: Vehicle’s position according to the obstacle. Movement performed is from left to right.
EM
The approach used for this example is slightly different from the others. While in the previous ones the user maintained the inputs throughout the experiment, in this case the user will change them, always maintaining the direction. By observation of Figure 3.10 it is possible to see that the first 12 seconds are equal to the SLOW example, but then, after v_command is set to 0, the user re-inputs the desired velocity. As shown, the vehicle starts moving but realizes it is compromising its safety and performs first a SLOW action and, after understanding it will not be able to avoid the crash, performs an evasion maneuver. This time the vehicle's minimum distance to the wall was 2.5 meters, Figure 3.11.
One might question what would happen if, at the end of this example, the user re-introduced a velocity with the same direction to the vehicle. This particular situation is covered by the next example.
Once again, and as with the STOP level, EM is only used in extreme cases and just to guarantee safety in a limit case.
Figure 3.10: Comparison between user’s inputs and current velocity.
Figure 3.11: Vehicle’s position according to the obstacle. Movement performed is from left to right.
A different result, given exactly the same approach, is the one shown in Figure 3.12. As stated, the same situation was intended but, this time, the third time the user inputs a velocity the vehicle is already inside the threshold defined by the Validate block. Therefore, the algorithm overrides the velocity and the movement is not even initialized. This example proves the utility of the Validate block in our architecture.
Figure 3.12: Comparison between user’s inputs and current velocity.
Chapter 4
Experimental Results
Throughout the previous chapters of this thesis, experimental results have been presented showing the results of a specific module. In those experiments, some inputs were considered to be ground truth and some simplifications were made. Throughout this chapter, the full architecture, as presented in Figure 1.2 on page 4 and described in Algorithm 4.1, is implemented, tested and evaluated in the USARSim environment. A total number of particles of 10, together with σ_dist = 2, σ_alt = 2, FoV = 30°, a maximum sonar range equal to 5 meters, and a cell with a side length of 1 meter was considered. The examples' inputs are the user's commands, attitude, and sonar readings. Neither the map nor the position is provided by the simulator. Note that the FastSLAM and the decision block are running online during real time simulation on USARSim.
Algorithm 4.1 Full architecture's algorithm.
1: Collision Avoidance Algorithm()
2:   While (1)
3:     Get Sensors and u_t
4:     if validate(u_t)
5:       Apply u_t
6:       FastSLAM(u_{t-1}, u_t, h_t)
7:       Decision Block(x_t, m, v_t)
8:     end
9:   end
10: end
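A compact Python rendering of this loop is sketched below; the callables are assumed to be provided by the modules described in the previous chapters, so only the control flow of Algorithm 4.1 is shown.

def collision_avoidance_loop(get_sensors_and_command, validate, apply_velocity,
                             fastslam, decision_block):
    # Main loop of Algorithm 4.1 (sketch). Following the algorithm, FastSLAM and
    # the Decision Block run inside the same branch as the validated command.
    u_prev = None
    while True:
        sensors, u_t = get_sensors_and_command()
        if validate(u_t):                              # first filtering by the Validate block
            apply_velocity(u_t)                        # the user's command is applied ...
            x_t, v_t, m = fastslam(u_prev, u_t, sensors)   # ... then localization and mapping
            decision_block(x_t, v_t, m)                # ... which may override the command
        u_prev = u_t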
Movement towards unknown area
In the first example, the robot was ordered to perform a full speed movement towards a wall outside its visibility range. As it is possible to notice from Figure 4.1, the vehicle is, up until around the 7th second, consecutively lowering and increasing its distance to the obstacle. Since this distance keeps
oscillating, it means that no real obstacle is there but, as it is the first time the area is mapped, the algorithm is updating the cells to free while moving. After 7 seconds the distance starts to decrease, meaning a real obstacle was found.
In Figure 4.2, it is possible to observe the estimated velocity of the vehicle over time. Note that the velocity, as well as the distance, is also oscillating. The user is ordering a full speed movement of 5 m/s but the classifier applies a SLOW override to the vehicle. As soon as the obstacle is considered to be further away from the robot, the inputs are once again given to the user. When the algorithm perceives that the vehicle is moving towards a real obstacle, once again after 7 seconds, an override is imposed and the velocity is fully limited by the classifier and lowered to zero. By fusing the information from both figures, it is possible to see that the vehicle stopped its movement at approximately 2.2 meters from the wall, despite being constantly ordered by the user to move in that direction. The distance at which the robot stops can be tuned in the decision block by changing the danger levels' thresholds.
Figure 4.1: Distance between the best particle to the first object found in its own map.
Figure 4.2: Estimated velocity of the quadrotor. User inputs have a constant value of 5m/s.
Figure 4.3 presents the PF's position estimate. Once again, only the particle with the highest weight is plotted, causing the already discussed oscillations. The algorithm is not able to localize itself accurately along the y axis due to lack of information. Nevertheless, the localization along the x axis, the important axis for avoiding the collision during this movement, keeps a low error. Despite the error in the localization, the map was successfully built, as shown in Figure 4.4.
Figure 4.3: xy cut of the real position and the PF's output.
Figure 4.4: xy cut of the output map.
From the presented example it is possible to conclude that the algorithm is able to avoid collisions even if the obstacle has not been previously identified. This example establishes the necessity and improvement brought by the decision block in unknown environments. When operating in an unknown area, the algorithm checks whether the area towards which the vehicle is moving is free and, therefore, never flies towards non-mapped areas.
Movement in a known area
The previous example showed the algorithm's action when moving in an unknown area; the next experiment shows the results while moving in an already scanned area. The objective is to illustrate how the knowledge of the map changes the vehicle's motion in areas known to be free. This test was performed right after the first one without resetting the map, i.e., the built map is maintained, as well as the error acquired in the localization. The vehicle was flown back to the start and towards the wall once again. The plots shown start when the vehicle is given this last order.
From Figure 4.5 it is possible to see that the vehicle is, at second 13, 16 meters away from the nearest obstacle in the direction it is flying. By simultaneously interpreting Figure 4.6 it is possible to check that, as the velocity increases, the distance to the object decreases. It is important to stress that the orders of the user remain the same and the vehicle was continuously ordered to move at full speed.
Figure 4.5: Distance between the best particle to the first object found in its own map.
Figure 4.6: Estimated velocity of the quadrotor. User inputs have a constant value of 5m/s.
As it is possible to see, at around the 15th second, the quadrotor started to slow down. At this particular moment the TTC crossed the threshold delimiting the SLOW stage and the speed started to be reduced. The TTC remained inside these bounds as the vehicle kept slowing down, until it stopped at around second 22 and at roughly 2 meters from the obstacle.
Wall Re-visited
In this experiment the ability to build a map is once again proven useful. The beginning of this test is equal to the first one. After the quadcopter approached the obstacle, a movement along the wall was performed. Then, it was ordered to fly away from the wall and towards it again. We want to test whether the algorithm is able to localize according to the map built and whether it is able to recognize a danger situation and act faster to prevent a crash.
The localization performed by the PF diverges from the real position, Figure 4.7, but since our work does not aim at a global localization method, a careful analysis has to be performed. It is possible to notice that, despite having drifted away from the real position, the algorithm managed to build a map according to the belief of its position. By comparing the obtained map, Figure 4.8, with the line denoting the real position of the obstacle, it is possible to verify that the built map is shifted from the ground truth one. By performing the same comparison for the robot path, we can state that the localization is also shifted by the same amount. Since, as stated before, our goal is to localize the vehicle relative to its own map, it is possible to conclude that, despite the error with respect to the real world, the algorithm is performing satisfactorily.
Figure 4.7: xy cut of the real position and the PF's output. The arrows represent the limits of the ground truth wall (Real) and the obtained map (FastSLAM).
As presented in Figure 4.7, the algorithm, after performing the trajectory along the wall, successfully managed to recognize the danger situation and avoided it further away from the obstacle than the first time the robot sensed it.
Although the algorithm managed to successfully fulfill its objective, some issues need to be stressed. The low number of particles used strongly limits the results obtained. The oscillations it causes, which were not too restrictive up to now, make errors start to accumulate and the localization start to perform poorly. Nevertheless, our architecture is being exposed to many harsh situations.
Figure 4.8: xy cut of the 3D map of the best particle at the end of the movement. The arrow labeled Real indicates the limit of the ground truth wall location.
During a significant part of the movement the sonars do not get any readings and, therefore, the PF's observation model is limited in its performance. Possible approaches for improving the results are presented in the subsequent chapter.
Computational Effort
Table 4.1 presents the computational time¹ spent on the two main blocks of the presented solution. The results represent the mean over 20 trial runs of the full architecture. The Decision Block runs at 10 Hz while the FastSLAM performs at 5 Hz. It is important to stress that, alongside the algorithm, the program is also responsible for tele-operating the vehicle and for all communication with USARSim. By increasing the computational effort, for instance by adding a larger number of particles, the vehicle would not have such a fast response to the user's commands. This is the reason why a low number of particles was used during the experimental results. It is also worth mentioning that the Decision Block time is independent of the number of particles, as the size of the inputs remains the same regardless of the number of particles.
Table 4.1: Time consumption on the two main blocks of the presented architecture. Mean and standard deviation are presented.

            FastSLAM        Decision Block     Total
Mean [s]    0.096 (0.002)   0.0062 (0.0001)    0.1036 (0.0016)

¹ Running on MATLAB2010a on an Intel Core 2 Duo at 2.53GHz CPU with 3GB of RAM memory.
Chapter 5
Real World Experimental Results
Throughout this chapter, results obtained in several real world experiments are presented. As previously, the different parts of the full architecture are initially tested individually for a better evaluation of their performance. Although each section has its specific test, the quadrotor/sonar set-up and the following specifications are maintained.
The available quadrotor, kindly lent by UAVision¹, is equipped with an IMU capable of providing filtered values of yaw, pitch and roll. The communication between the quadrotor and the computer, for all telemetry parameters, is supported by an Xbee module. The vehicle also has an onboard altimeter sensor. Unfortunately, its sensitivity to innumerable and unavoidable factors, such as people around it and small wind drafts, causes it to be unusable.
Unlike in simulation, only 4 sonars² were used, placed above each of the propellers. The lobe was modeled by a 30° FoV cone and the maximum range considered was three meters. An Arduino³ was used to perform the readings and all necessary conversions to meters.
All readings were recorded with the quadrotor's motors off, for safety reasons, and all experiments were performed in offline mode.
5.1 Occupancy grid mapping
In order to demonstrate the mapping capabilities of the Occupancy Grid Mapping, two different tests are presented. In the first one, 4 points inside a room were defined and the quadrotor was successively placed at them, as shown in Figure 5.1. The localization is, for now, ground truth. At each position, the sonar values were recorded while manually spinning the vehicle 360°. By doing so, it is possible to guarantee that the algorithm is able to map the entire room area.
The built map has a 10 cm cell size. It is important to mention that the algorithm is performing
¹ http://www.uavision.com/ in July 2012
² http://www.superdroidrobots.com/product_info/SRF04.aspx in July 2012
³ http://www.arduino.cc/ in July 2012
Figure 5.1: Layout of the sonars' recording positions.
a 3D mapping and the plots presented are in 2D just to ease the interpretation. The obtained result
is presented in Figure 5.2. As before, the darker the cell the higher the probability of being occupied.
Figure 5.2: Log-odd representation of a 2D cut of the 3D output map. The 4 predefined positions are
presented in red.
As it is possible to observe, the algorithm managed to roughly map the room. Although the 360° rotation allowed us to map the entire scenario with just 4 positions, it does bring some inconsistencies to the map due to the sonar model used. These inconsistencies are mainly caused by objects inside the room, impossible to remove, and by sharp edges. A possible improvement would be to perform more readings in more and different positions. In fact, in a real situation the sonars would be constantly performing readings. By doing so, not only would the general result improve but also some of the inconsistencies would be solved.
Despite any possible improvements, the algorithm was tested with this set-up to prove its ability to cope with low information scenarios. Although the map lacks detail, the reader should keep in mind that the proposed goal is to build a coarse map of the environment in order to detect obstacles. With that assumption in mind, it is possible to state that the algorithm is fulfilling its objective.
The second evaluation of the occupancy grid mapping was performed in a corridor. In this experiment, sonar values were recorded at 50 cm intervals, as represented in Figure 5.3. Only the first positions are represented in the figure, but the recording followed the same pattern until the end. The mapping result is shown in Figure 5.4.
Figure 5.3: Layout of the sonars' recording positions.
Figure 5.4: Log-odd representation of a 2D cut of the 3D output map.
As it is possible to see, the algorithm managed to map the corridor. In this example, all the problems mentioned for the last test are highly noticeable. Another constraint faced is inherent to sonar use: even when the sonar was carefully pointed along the corridor, where it should not detect any obstacle ahead, it was concluded that the sonar sound was getting reflected by unaccounted-for and unmovable objects such as door edges. This particular situation causes the algorithm to sometimes consider the corridor as an occupied area. Nevertheless, if the vehicle approached one of the falsely occupied cells, the occupancy grid mapping would correctly update the map.
5.2 Particle Filter
The particle filter was tested using the map obtained in the previous section. Two tests were performed, one with 10 particles and another with 100 particles, in both cases randomly initialized throughout the map. Sensor readings, sonars and IMU, were fed to the algorithm along with the map. Figure 5.5 and Figure 5.6 show the Euclidean distance between the best particle of the set and the real position where the sonars' readings were recorded. An iteration is considered to be one full run of the particle filter algorithm, including the predict, update and resample steps.
Figure 5.5: Error obtained in each iteration for 10 particles.
From Figure 5.5 it is possible to state that the error with respect to the ground truth position is being lowered and the algorithm is localizing itself. It is also possible to notice that, despite the number of iterations, the PF is converging to an error of approximately 1 meter. By simultaneous analysis of Figure 5.6 it is possible to see that the error at the end of 20 iterations is also around 1 meter. Considering that in both tests the error experienced and the convergence point are the same, it is possible to conclude that the map used, acquired from the previous example, is slightly inaccurate. In a SLAM architecture this error would be corrected.
Figure 5.6: Error obtained in each iteration for 100 particles.
5.3 FastSLAM
In this experiment the FastSLAM, combining localization and mapping, was applied. The environment used is the same room as before. In this test, the particles are all initialized at the same point, since this does not compromise the results, as stated in Chapter 2. No initial map is provided nor any information concerning the localization. The total number of particles used was 10. No knowledge about the motion model has yet been experimentally acquired and, therefore, the predict step of the particle filter becomes (5.1), where ω and Q_t are defined in (5.2) with σ_x = σ_y = 0.3 and σ_z = 0.
\bar{x}_t^{[n]} = \tilde{x}_{t-1}^{[n]} + \omega \qquad (5.1)

\omega = \mathcal{N}(0, Q_t), \qquad Q_t = \begin{pmatrix} \sigma_x^2 & 0 & 0 \\ 0 & \sigma_y^2 & 0 \\ 0 & 0 & \sigma_z^2 \end{pmatrix} \qquad (5.2)
The z axis is limited since no information is given about the height of the vehicle (no sonars pointing up or down nor altimeter readings) and, as a consequence, the weights of the particles in the observation model become dependent only on the sonar readings (5.3).
w_t^{[n]} = \prod_i P(z_t^i \mid m^{[n]}, \bar{x}_t^{[n]}) \qquad (5.3)
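In Python, this motion-model-free predict step reduces to spreading the particles with the noise of (5.2), as sketched below.

import numpy as np

rng = np.random.default_rng()
SIGMA = np.array([0.3, 0.3, 0.0])    # sigma_x = sigma_y = 0.3, sigma_z = 0, as in (5.2)

def predict_without_motion_model(particles):
    # Random-walk predict step of (5.1): only N(0, Q_t) noise spreads the particles.
    return particles + rng.normal(0.0, SIGMA, size=particles.shape)

print(predict_without_motion_model(np.zeros((10, 3)))[:2])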
For a better comparison, Figure 5.7 presents the map obtained with ground truth positions. The sonar values used were the same in both situations but, in the FastSLAM case, no localization was provided. The points where sonar values were recorded are presented in red. 10 sensor readings were performed at each position, with a 30 cm distance between positions. The trajectory was performed from the right side of the image to the left. Figure 5.8 presents the best particle's map at the end of the movement.
Figure 5.7: xy cut of the Map obtained with GT localization.
Figure 5.8: xy cut of the best particle’s map at the end of the test.
The algorithm is able to successfully map the environment while localizing itself. Although the obtained map diverges from the ground truth, it is possible to check that the dimensions of the room were roughly preserved and, as before, a coarse object identification was performed. Figure 5.9 plots the Euclidean error (2.23) between the best particle and the real position at each position.
Although the error experience in localization is substantially high — considering that the
vehicle is operating indoor — no motion model is being used for now. The inclusion of one would
significantly improve the particle filter performance and hence the map obtained. However, and as
mentioned in Chapter 2, error to the real position is admissible as a localization in relation to the
obstacles is intended in spite of a global localization.
The goal of the algorithm is being met. However, some changes could be made to improve the results. One is the small number of particles used, as results would benefit from a larger particle set. The main aspect compromising the results is the lack of a motion model.
[Plot: localization error (m) versus position set number, 1 to 11.]
Figure 5.9: Error obtained at the end of the 10th iteration in each position. Iteration corresponds to
one run of the full FastSLAM algorithm.
With a better predict step, the particles would be spread more efficiently and the results would improve. Nevertheless, this experiment proves the capacity of the algorithm to work even without a motion model.
5.4 Decision Block
In this section the difference between a reactive collision avoidance method and the proposed solution is shown. Once again, the vehicle was manually placed in the different poses with all motors shut down. The presented test considers 5 different stages, illustrated in Figure 5.10. With this setup, we aim at a plausible and common movement in which a purely reactive methodology would not manage to detect the obstacle and would consequently lose its purpose.
[Diagram: the five vehicle poses (1 to 5) along the y axis, approaching a wall; axes y [m] and z.]
Figure 5.10: Position and attitude in each of the 5 phases.
The obtained maps, using the ground truth position, are presented in the figures below. Figure 5.11 shows a yz cut of the map. In this case the vehicle has pitch = 0° and is able to map the area where it is going to be in position 2. Figure 5.12, with pitch ≈ 35°, plots the updated map and, in red, the position of the vehicle. As shown, the vehicle is in the previously mapped area known to be free. Since the distance to the nearest obstacle — in this case an unknown area — is still high enough, as will be seen in Table 5.1, the algorithm allows the user to maintain control of the quadcopter.
Figure 5.11: yz cut of the map obtained in position 1.
Figure 5.12: yz cut of the map obtained in position 2.
The map obtained at the end of the 5 positions described in Figure 5.10 is presented in Figure 5.13 and Figure 5.14. Here, it is shown that these pitch changes allow the mapping algorithm to keep track of the area ahead of the quadcopter along its heading.
[Map plot: x [dm] versus y [dm], 0 to 90.]
Figure 5.13: xy cut of the map obtained in position 5.
Figure 5.14: yz cut of the map obtained in position 5.
Recalling Figure 3.2 — Chapter 3, page 44 — we define b = 40 cm and a = 5 m. Applying the algorithm to compute the distance to the closest cell, we obtain the values shown in Table 5.1 (a brute-force sketch of this computation is given below). Since the reactive method does not keep track of the map, the distance between the vehicle and the obstacle — computed by an analogous method — is unknown for positions 2, 3 and 5 of Figure 5.10, due to the pitch ≈ 35°. Table 5.1 presents a comparison between the reactive method and the proposed solution, using the distance to the nearest object as a reference. The same table also presents the action chosen by the classifier given that distance and a constant velocity of v = 0.6 m/s.
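The grid convention (0 for free, any other value for occupied or unknown), the 1 dm cell size and the function name in the sketch below are assumptions made for illustration, with a = 5 m used as the search radius.

```python
import numpy as np

def distance_to_closest_non_free(grid, pose, cell_size=0.1, a=5.0):
    """Return the distance (m) from the vehicle position to the nearest cell
    that is not known to be free, capped at the search radius a."""
    cx, cy, cz = (int(round(p / cell_size)) for p in pose)
    r = int(a / cell_size)
    best = a
    for ix in range(max(cx - r, 0), min(cx + r + 1, grid.shape[0])):
        for iy in range(max(cy - r, 0), min(cy + r + 1, grid.shape[1])):
            for iz in range(max(cz - r, 0), min(cz + r + 1, grid.shape[2])):
                if grid[ix, iy, iz] != 0:  # occupied or still unknown
                    d = cell_size * np.linalg.norm([ix - cx, iy - cy, iz - cz])
                    best = min(best, d)
    return best
```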
Table 5.1: Comparison between the distance from the center of mass of the vehicle to the closest obstacle in each position of Figure 5.10 for both methods and consequent classifier decision.

Position               1      2      3         4      5
d_reactive [m]         3      –      –         2      –
d_mapping [m]          3.25   2.25   1.25      2.3    1.05
Classifier decision    User   User   Override  User   Override
From the analysis of Table 5.1 it is possible to conclude that whenever the vehicle approaches an obstacle — regardless of whether it is an occupied or an unknown area — the classifier acts as intended and imposes an override on the vehicle. As explained in Chapter 3, this architecture only allows the vehicle to fly in areas that are known to be free.
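As an illustration only, a stopping-distance style rule can reproduce the decisions of Table 5.1 and Table 5.2 at v = 0.6 m/s. The reaction time and deceleration below are invented constants tuned to match the tables; the actual classifier is the one defined in Chapter 3.

```python
def classifier_decision(distance, velocity, b=0.4, t_react=1.0, decel=0.2):
    """Override the pilot when the margin b plus the predicted stopping
    distance at the current velocity reaches the nearest non-free cell.
    t_react and decel are illustrative values, not thesis parameters."""
    stopping = velocity * t_react + velocity ** 2 / (2.0 * decel)
    return "Override" if distance <= b + stopping else "User"

# At v = 0.6 m/s the threshold is 0.4 + 0.6 + 0.9 = 1.9 m, so 1.25 m and
# 1.05 m give "Override" while 2.25 m, 2.3 m and 3.25 m give "User".
```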
5.5 Full Architecture
In this last section, tests of an architecture combining the FastSLAM and the decision block are presented. This test uses the setup of Figure 5.10. Due to the distance between positions, around 1 meter, a motion model was introduced. The difference between consecutive real positions is, therefore, being
fed to the predict step of the particle filter. The xy cut of the best particle's map — at the end of the movement — is presented in Figure 5.15.
[Map plot: x [dm] versus y [dm], 0 to 90.]
Figure 5.15: xy cut of the map obtained in position 5 for the best particle.
As can be seen, the obtained map is similar to the one obtained with the ground truth position. By comparing Table 5.1 and Table 5.2 it is noticeable that the classifier decision, again for v = 0.6 m/s, is the same even though the distance to the obstacle changes slightly.
Table 5.2: Distance to the closest obstacle and classifier decision.

Position               1      2      3         4      5
d_mapping [m]          3.25   2.34   1.38      2.28   1.11
Classifier decision    User   User   Override  User   Override
In all of the presented examples the obtained real-world results are satisfactory and the algorithm behaves as expected. Despite the simplified experimental setup adopted, the presented tests represent a proof of concept and a considerable step towards a full real-world implementation.
Chapter 6
Conclusions
The main objective of this work was to develop an assisted teleoperation algorithm for quadcopters with the purpose of avoiding collisions with obstacles. Whenever confronted with an unknown area, the algorithm overrides the user's inputs in order to reduce the uncertainty of the area towards which the movement is intended. The main objective was fully achieved in simulation using a FastSLAM approach for simultaneous localization and mapping, combined with a danger classification methodology to classify and act accordingly in any situation. Initial tests were conducted on a real quadcopter in order to prove the algorithm's ability to function in a real-world scenario.
During the development of this work several choices were made, and among the best of them was the choice of USARSim as the simulator. Although it does not have very realistic models for either the sensors or the vehicle dynamics, it is an "all in one" solution that can provide fast preliminary results. Unfortunately it also has some negative aspects, such as its inflexibility regarding some models, difficult debugging and the lack of published information about the KARMA engine.
The choice of sonars as distance sensors is the most questionable one, as it may not be the appropriate solution to tackle our problem. Even though their use does not compromise the results, a hybrid approach using a Laser Range Finder (LRF) and sonars may be a good way to improve them. By doing so, the strengths of each sensor — accuracy and precision for the LRF; a small amount of data and the ability to obtain 3D information for the sonars — could be combined, and the results would benefit from this fusion.
As a final statement it is possible to conclude that the main objectives of this thesis were achieved, since the algorithm managed to perform collision avoidance online. The positive results obtained in simulation, the acquired knowledge of and sensitivity to the problem, along with the good initial real-world results, suggest promising results in a full real-world implementation.
Future work
This work represents the first step towards a real-world application of a collision avoidance system. Results obtained in simulation and on a real vehicle prove the feasibility of our approach. Nevertheless, some assumptions and simplifications were made that are not viable in a full real implementation. Transposing the proposed algorithm to the real system will force some changes, and unaccounted-for constraints will probably be found. The major bottlenecks are stated below:
• Motion model: Throughout Chapter 2 a motion model based on USARSim was considered. The adopted model is highly unrealistic and, despite performing very well in simulation, it is one of the main constraints and limitations in transposing the algorithm to the real world. The implementation of a dynamic model [2] that takes into account the quadcopter's real-world characteristics is mandatory.
• Sensors: In simulation, sensor malfunctions or errors are not considered. The only error that can be simulated is additive white noise on the readings. Unfortunately, real sensors tend to produce erroneous outputs that were not considered in this thesis. As a consequence, a pre-processing step — in order to reject, for example, outliers — and a full sensor characterization are advisable.
• Stability and control: This study sets aside control and stability issues. In Chapter 3, the override of inputs is applied and assumed to be stable but, since the flight physics of a real quadcopter is totally different from the simulated one, an adaptation of the flying inputs is needed.
• Computation: One of the limitations of our approach is, undoubtedly, the computational effort. Efficient use of computation through programming strategies is needed. Since processing power is nowadays moving towards multi-core solutions, a parallel implementation is a possible and viable approach. Another possibility is the use of an aiding localization system accurate enough to allow reducing the total number of particles and therefore the computation.
• Memory: Our algorithm initializes the map with a fixed size that remains unchanged. This approach is limiting, as the pilot may want to fly beyond the map limits. An approach for online extension of the map is a necessity for unconstrained flight in the real world.
Bibliography
[1] Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic Robotics (Intelligent Robotics
and Autonomous Agents). The MIT Press, 2005.
[2] Samir Bouabdallah. Design and control of an indoor micro quadrotor. In Proc. of Int. Conf. on Robotics and Automation, 2004.
[3] Anthony Stentz, Dieter Fox, Michael Montemerlo, and Michael Montemerlo. FastSLAM: A factored solution to the simultaneous localization and mapping problem with unknown data association. In Proceedings of the AAAI National Conference on Artificial Intelligence, pages 593–598.
AAAI, 2003.
[4] Simon J Julier and Jeffrey K Uhlmann. New extension of the Kalman filter to nonlinear systems.
Proceedings of SPIE, 3(1):182–193, 1997.
[5] M. W. M. Gamini Dissanayake, Paul Newman, Steven Clark, Hugh F. Durrant-whyte, and
M. Csorba. A solution to the simultaneous localization and map building (SLAM) problem.
IEEE Transactions on Robotics and Automation, 17:229–241, 2001.
[6] Cyrill Stachniss, Dirk Hähnel, Wolfram Burgard, and Giorgio Grisetti. On actively closing loops in grid-based FastSLAM. Advanced Robotics, 19:2005, 2005.
[7] Albert Diosi and Lindsay Kleeman. Fast laser scan matching using polar coordinates. I. J. Robotic
Res., 26(10):1125–1153, 2007.
[8] Jorge Artieda, José M. Sebastian, Pascual Campoy, Juan F. Correa, Iván F. Mondragón, Carol Martínez, and Miguel Olivares. Visual 3-D SLAM from UAVs. J. Intell. Robotics Syst., 55(4-5):299–321, August 2009.
[9] Michael Montemerlo, Sebastian Thrun, Daphne Koller, and Ben Wegbreit. FastSLAM: A factored
solution to the simultaneous localization and mapping problem. In Proceedings of the AAAI
National Conference on Artificial Intelligence, pages 593–598. AAAI, 2002.
[10] Zhihai He, Ram Venkataraman Iyer, and Phillip R Chandler. Vision-based UAV flight control
and obstacle avoidance. 2006 American Control Conference, pages 2166–2170, 2006.
[11] Geoffrey L. Barrows, Javaan S. Chahl, and Mandyam V. Srinivasan. Biomimetic visual sensing and flight control. In Proc. Bristol UAV Conf., pages 159–168, 2002.
[12] Brandon Call, Randy Beard, Clark Taylor, and Blake Barber. Obstacle avoidance for unmanned air vehicles using image feature tracking. In AIAA Guidance, Navigation, and Control Conference, 2006.
[13] Nora Ayanian and Vijay Kumar. Abstractions and controllers for groups of robots in environments
with obstacles. In IEEE International Conference on Robotics and Automation, pages 3537 – 3542,
Anchorage, AK, May 2010.
[14] Adam M. Brandt and Mark B. Colton. Haptic collision avoidance for a remotely operated quadrotor UAV in indoor environments. In Proceedings of the IEEE International Conference on Systems,
Man and Cybernetics, Istanbul, Turkey, 10-13 October 2010, pages 2724–2731. IEEE, 2010.
[15] Robert Mahony, Felix Schill, Peter Corke, and Yoong Siang Oh. A new framework for force
feedback teleoperation of robotic vehicles based on optical flow. In Proceedings of the 2009 IEEE
international conference on Robotics and Automation, ICRA’09, pages 1719–1725, Piscataway,
NJ, USA, 2009. IEEE Press.
[16] Minh-Duc Hua, Tarek Hamel, Hala Rifai, Pascal Morin, and Claude Samson. Automatic Collision Avoidance for Teleoperated Underactuated Aerial Vehicles using Telemetric Measurements.
Technical report, 2010.
[17] Alberto Elfes. Occupancy grids: A probabilistic framework for robot perception and navigation.
PhD thesis, Carnegie Mellon University, 1989.
[18] David Baraff. An introduction to physically based modeling: Rigid body simulation I - unconstrained rigid body dynamics. In An Introduction to Physically Based Modelling, SIGGRAPH '97 Course Notes, 1997.
[19] M. Sanjeev Arulampalam, Simon Maskell, and Neil Gordon. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Transactions on Signal Processing, 50:174–188,
2002.
[20] Ioannis M Rekleitis. A particle filter tutorial for mobile robot localization. Technical Report
TRCIM0402, 2004.
[21] Jun S. Liu. Metropolized independent sampling with comparisons to rejection sampling and
importance sampling, 1996.
[22] Yoko Watanabe, Anthony J. Calise, and Eric N. Johnson. Vision-based obstacle avoidance for UAVs, 2007.
[23] S. Carpin, M. Lewis, Jijun Wang, S. Balakirsky, and C. Scrapper. USARSim: a robot simulator
for research and education. In Robotics and Automation, 2007 IEEE International Conference
on, pages 1400–1405, 2007.
[24] Stephen Balakirsky and Elena Messina. MOAST and USARSim - a combined framework for the development and testing of autonomous systems. In Proceedings of the 2006 SPIE Defense and Security Symposium, 2006.
Appendix A
USARSim
A.1 Introduction
In robotics research, simulation plays a crucial role, as it would be infeasible to test any approach while combining the development of an algorithm with real-world difficulties. Moreover, the problem studied throughout this thesis may compromise the vehicle's and even the pilot's safety; therefore, a simulator based on the real robot's behavior and real-world constraints is fundamental. The use of a simulator that provides these initial tools allows the core work of the project to focus on high-level programming and debugging. By doing so, when the project evolves to a real robot implementation, only small details concerning well-identified limitations need to be fixed.
To best cope with the proposed objectives, and to smooth out some difficulties of a future quadcopter implementation, the simulator should contain a 3D physics engine, environments suited for 3D flight, and pre-made, easily changeable sensor and robot models. The choice made was the Urban Search And Rescue Simulator (USARSim)¹. USARSim is a robotic simulator based on the computer game Unreal Tournament [23]. By choosing this simulator, aspects such as modeling, animation, graphical detail and physics simulation are already solved. Since USARSim has been widely used by the community, even in international competitions such as RoboCup [24], many maps made specifically for Urban Search and Rescue applications — and suited for our needs — have been previously built. Along with the environments, different types of robots are defined and built in accordance with real-world models, including a quadcopter model, the AirRobot (Figure A.1). The tool Unreal Editor, provided with the Unreal Tournament 2004 installation, also allows changes to the maps, while the definitions of new objects, including robots, can be changed or entirely built by programming a set of Unreal Script scripts that communicate with the UT physics engine, KARMA². TCP/IP communication with USARSim is supported and easy to use, allowing the user to choose any desired programming language.
¹ http://usarsim.sourceforge.net/ (in July 2012)
² http://wiki.beyondunreal.com/wiki/Karma (in July 2012)
Figure A.1: Screenshot of the USARSim AirRobot.
In addition to the advantages of USARSim already mentioned, others can be enumerated, the most important being:
• The flexibility of the simulator regarding the sensors that can be assembled on our robot, including the definition of which sensors are used and where and how they are placed on the vehicle — specified in an initialization file;
• Previous work at ISR/IST that allowed a head start on the simulator's particularities;
• The previous implementation of the quadcopter dynamics and of the sonar cone simulation — described in the USARSim manual available on the website.
The combination of these factors allowed fast first results and, as intended, a concentration of the effort on the development of high-level algorithms.
Although the communication can easily be implemented through a socket — with the IP set to the machine where USARSim is running and a fixed port — problems appeared when information was requested, since messages were inconsistent and sometimes contained missing or erroneous information. To allow more consistent communication, a Matlab toolbox was downloaded³ and adapted to our needs. By doing so, not only did the communication become faster, but the output problems were also solved.
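For illustration only (the thesis used a Matlab toolbox for this), a raw TCP connection to USARSim could be sketched as below. The default port 3000 and the INIT message format follow the USARSim manual; the robot name and spawn location are arbitrary.

```python
import socket

def connect_usarsim(host="127.0.0.1", port=3000):
    """Open a TCP connection to USARSim and spawn the AirRobot model."""
    sock = socket.create_connection((host, port))
    sock.sendall(b"INIT {ClassName USARBot.AirRobot} {Name Quad} {Location 0,0,-1}\r\n")
    return sock

def read_messages(sock):
    """Yield newline-terminated USARSim messages (status and sensor lines)."""
    buffer = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            return
        buffer += chunk
        while b"\r\n" in buffer:
            line, buffer = buffer.split(b"\r\n", 1)
            yield line.decode(errors="replace")
```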
The final diagram of the communications is represented in Figure A.2, where u′_t represents the user's input values, u_t the algorithm's output values and z_t all the sensor readings. The joystick used was a common 6-axis joypad. For the sake of simplicity, USARSim and UT are coupled together in this figure; nevertheless, they are separate blocks that will be split in the following section.
Figure A.2: Communication between the user, the algorithm and the simulator.
³ http://robotics.mem.drexel.edu/USAR/
A.2 Transformations
Throughout this work no reference was made to the coordinate transformations concerning the attitude angles of the vehicle. All details concerning pose and velocity transformations are addressed here.
A.2.1 Sonar Pose
The localization and mapping performed by the algorithm are always based on a world reference. That reference is changeable but, for testing purposes, it is kept at the middle of the map, i.e., every time the robot is initialized the algorithm automatically assumes its position to be the center of the map and its attitude to be the one defined in the initialization file. The layout of the sonars is presented in the next figures. As mentioned before, there is a sonar below each of the propellers (Figure A.3) and two more, one below and one above the body of the vehicle (Figure A.4).
Figure A.3: Pointing direction of the sonars along the propellers of the quadcopter.
Figure A.4: Pointing direction of the sonars above and below the quadcopter.
As the vehicle moves, the pose of the sonars changes. The position of the sonars with reference to the world frame can be efficiently calculated using (A.1), where ^wP is the position of a sonar in the world frame, ^vP is its position in the vehicle frame and ^w_vT is the transformation from frame v to frame w. ^vP is constant, as the position of the sonars with reference to the vehicle never changes. ^w_vT, defined in (A.2), depends on the vehicle's rotation ^w_vR and on the position of its center of mass ^wP_v. The rotation matrix depends on the attitude of the vehicle (A.3), where α = yaw, β = roll and γ = pitch are defined in Figure A.5. Still in (A.3), c stands for the trigonometric function cos and s for sin. The robot's position is considered to be given by the PF's estimate and the attitude directly by the IMU; therefore, by direct application of the following equations it is possible to obtain a sonar position in world frame coordinates.

\begin{pmatrix} {}^{w}P \\ 1 \end{pmatrix} = {}^{w}_{v}T \cdot \begin{pmatrix} {}^{v}P \\ 1 \end{pmatrix}    (A.1)

{}^{w}_{v}T = \begin{pmatrix} {}^{w}_{v}R & {}^{w}P_{v} \\ 0 & 1 \end{pmatrix}    (A.2)

{}^{w}_{v}R = \begin{pmatrix} c_{\alpha} c_{\beta} & c_{\alpha} s_{\beta} s_{\gamma} - s_{\alpha} c_{\gamma} & c_{\alpha} s_{\beta} c_{\gamma} + s_{\alpha} s_{\gamma} \\ s_{\alpha} c_{\beta} & s_{\alpha} s_{\beta} s_{\gamma} + c_{\alpha} c_{\gamma} & s_{\alpha} s_{\beta} c_{\gamma} - c_{\alpha} s_{\gamma} \\ -s_{\beta} & c_{\beta} s_{\gamma} & c_{\beta} c_{\gamma} \end{pmatrix}    (A.3)
Figure A.5: Graphical definition of the angles used (extracted from [1]).
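Equations (A.1) to (A.3) can be implemented directly; the sketch below assumes angles in radians and uses illustrative function names.

```python
import numpy as np

def rotation_world_from_vehicle(yaw, roll, pitch):
    """Rotation matrix of (A.3) with alpha = yaw, beta = roll, gamma = pitch."""
    ca, sa = np.cos(yaw), np.sin(yaw)
    cb, sb = np.cos(roll), np.sin(roll)
    cg, sg = np.cos(pitch), np.sin(pitch)
    return np.array([
        [ca * cb, ca * sb * sg - sa * cg, ca * sb * cg + sa * sg],
        [sa * cb, sa * sb * sg + ca * cg, sa * sb * cg - ca * sg],
        [-sb,     cb * sg,                cb * cg],
    ])

def sonar_position_world(p_sonar_vehicle, p_vehicle_world, yaw, roll, pitch):
    """Expand (A.1)-(A.2): world position = R * vehicle-frame position + center of mass."""
    R = rotation_world_from_vehicle(yaw, roll, pitch)
    return R @ np.asarray(p_sonar_vehicle) + np.asarray(p_vehicle_world)
```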
Azimuth and elevation were the angles used to define the attitude of the sonars given a certain attitude of the vehicle. The reader might question this choice, since the vehicle's attitude is defined by three angles, but one should remember that the sonars are modeled by a cone which, by definition, is invariant to one of the rotations, as exemplified in Figure A.6. Therefore, it is possible to fully define the heading direction of each sonar using only two angles. The graphical definition of the angles used is presented in Figure A.7.
Figure A.6: Sonar invariance given the rotation along the presented axis.
By computing the pose of each cone it is possible to treat each sonar independently of the others, as done in Section 2.1.
Figure A.7: Representation of the heading direction of the sonars.
A.2.2 Velocities
In this section, details of the USARSim frame of reference for velocities and of the velocity transformations to the world frame are explained. In the presented figures the quadcopter is represented as a rectangle pointing in the direction of the darker side. USARSim's reference for velocity is presented in Figure A.8. These velocities are attitude independent since, as seen in Figure A.9, they never change despite the — in this case — yaw of the vehicle. For an easier understanding, consider the example of Figure A.10: in both images the same F is applied and the output, the distance traveled, is equal in the two cases. The box surrounding the vehicle will be explained later.
Figure A.8: Representation of the referential of each velocity concerning the vehicle.
Figure A.9: Representation of the referential of each velocity concerning the vehicle given a certain
yaw.
Figure A.10: Example where F is applied in two different situations.
As shown in the previous figures, only v_linear and v_lateral change with the attitude, and they depend only on the yaw angle; therefore, the rotation matrix used to transform the vehicle velocity into the world velocity is given by (A.4). By applying this transformation it is possible to obtain the velocities in the world frame.

\begin{pmatrix} v_x \\ v_y \\ v_z \end{pmatrix} = \begin{pmatrix} \cos(yaw) & \sin(yaw) & 0 \\ \sin(yaw) & -\cos(yaw) & 0 \\ 0 & 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} v_{linear} \\ v_{lateral} \\ v_{altitude} \end{pmatrix}    (A.4)
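A direct transcription of (A.4) into code, kept verbatim including the -cos(yaw) entry exactly as written above, could look as follows (illustrative function name).

```python
import numpy as np

def body_to_world_velocity(v_linear, v_lateral, v_altitude, yaw):
    """Rotate the attitude-independent body velocities into the world frame (A.4)."""
    R = np.array([
        [np.cos(yaw),  np.sin(yaw), 0.0],
        [np.sin(yaw), -np.cos(yaw), 0.0],
        [0.0,          0.0,         1.0],
    ])
    return R @ np.array([v_linear, v_lateral, v_altitude])
```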
A.3 Karma Engine
USARSim acts as a bridge between the user and KARMA by transforming velocities (the user's inputs) into forces and torques (KARMA's inputs), as represented in Figure A.11. Since, as stated before, KARMA is not an open-source product, reverse engineering was applied to discover and best fit the motion model results to the UT2004 outputs. The diagram of the final result is presented in Figure A.12, where the block "velocity to force" mimics the math inside USARSim and the block "physics laws" represents the rigid body dynamics described in Chapter 2. We denote by F′ an approximation of the F in Figure A.11 and by x̃ our estimate of x. Through experience with the simulator, along with research on the quadcopter model, it was concluded that the dynamics of the vehicle body resembles the motion of a particle. It is possible to assume this similarity since we are considering that the attitude is available at all times and, therefore, we can estimate the position of the vehicle without the use of angular velocities [18]. This simplification proves useful in Chapter 2, during the predict step of the PF, where only the force needs to be accounted for in the position and velocity estimation.
Figure A.11: Diagram of the relation of input/output of Karma.
Figure A.12: Diagram of the relation of input/output of our motion model.
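Since the body is treated as a particle, the "physics laws" block of Figure A.12 reduces to a force-driven point-mass integration. The sketch below uses semi-implicit Euler with illustrative mass and time-step values, which are not taken from the thesis.

```python
def integrate_particle(position, velocity, force, mass=1.5, dt=0.05):
    """Propagate position and velocity from the applied force F'; the attitude
    is assumed known, so no angular velocities are integrated."""
    acceleration = [f / mass for f in force]
    velocity = [v + a * dt for v, a in zip(velocity, acceleration)]
    position = [p + v * dt for p, v in zip(position, velocity)]
    return position, velocity
```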
A.4 USARSim testing
Throughout this thesis it was stated that the simulator shows non-constant behavior across different tests. The following table supports that statement. The input is the same for each of the experiments and, as can be seen from Table A.1, the outputs deviate from the mean by up to 0.17 meters. Factors such as rounding, the limited accuracy of the sensors, small misplacements of the sensors, or small, unaccountable and unavoidable divergences in the initialization of the robot might be at the core of this issue. Nevertheless, this issue does not compromise the global operation of the algorithm, but it does justify some of the difficulty of SLAM since, for example, it is not possible to guarantee that the robot will travel the exact same distance in different tests.
Table A.1: Distance traveled by the vehicle given the same input.

Test             Distance [m]   (Distance - mean) [m]
1                42.77           0.07
2                42.63          -0.07
3                42.87           0.17
4                42.55          -0.15
5                42.68          -0.02
mean(dist) [m]   42.7            —