Reactive Trajectory Adjustment for Motion
Execution Using Chekhov
by
Sean E. Burke
Submitted to the Department of Electrical Engineering and Computer
Science
in partial fulfillment of the requirements for the degree of
Master of Engineering in Electrical Engineering and Computer Science
at the
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
June 2014
© Massachusetts Institute of Technology 2014. All rights reserved.
Author: Signature redacted
Department of Electrical Engineering and Computer Science
May 21, 2014

Certified by: Signature redacted
Andreas Hofmann
MIT Research Scientist
Thesis Supervisor

Certified by: Signature redacted
Brian C. Williams
Professor of Aeronautics and Astronautics
Thesis Supervisor

Accepted by: Signature redacted
Prof. Albert R. Meyer
Chairman, Masters of Engineering Thesis Committee
Reactive Trajectory Adjustment for Motion Execution Using
Chekhov
by
Sean E. Burke
Submitted to the Department of Electrical Engineering and Computer Science
on May 21, 2014, in partial fulfillment of the
requirements for the degree of
Master of Engineering in Electrical Engineering and Computer Science
Abstract
Robots are becoming more and more prominent in the world of manufacturing for
assembling products. Currently, most of these robots, such as the ones used in automobile manufacturing, have specific pre-programmed tasks and motions, and no sense
of their surrounding environment. In many of today's applications, this method will
not be sufficient, as many real world environments are unstructured and could cause
disturbances to the robots requiring the motion and task plans to be modified. If a
robot has a task to complete, a planner, such as Bidirectional RRT [5], will generate a
motion plan to complete the task. If that motion plan becomes infeasible, because the
goal has changed, or an obstacle has moved into the robot's path, the robot will need
to make an adjustment. One method is to generate a new plan. This can be quite
time-consuming, especially since the time required is not proportional to the size of the change,
making re-planning excessive for small adjustments. The problem we would like to
solve is adjusting to minor disturbances much faster than re-planning. Re-planning
can often take a few seconds, whereas we would like to make adjusted plans in less than
a second. In this thesis, we present a method for solving this problem. We use an
incremental adjustment approach that can make minor adjustments in response to
collisions or goal changes where the time taken to make adjustments is proportional
to the extent of the changes made. To make the adjustments to the plan, we have
developed a quadratic program that will make near-optimal adjustments to each robot
joint pose in a robot's motion plan based on the goal region and a reaction vector.
The goal region is the region the robot manipulator needs to be in to accomplish its
task. The reaction vector is a vector that specifies the direction the robot would need
to move in order to remove itself from a collision if there is a collision. Along with this
quadratic program, we give a method for computing these reaction vectors. These
two pieces are the major components of our algorithm and the key innovations made
in this thesis. The algorithm allows the robot to make minor adjustments to its plan
in an unstructured environment in about a quarter of a second. The adjustments
are near optimal, in that they only deviate slightly from the original plan, and are
made much faster than traditional planning algorithms. The overall goal is to build a
complete robust execution system, and the reactive trajectory adjustment algorithm
presented in this thesis is an important piece of the overall system.
Thesis Supervisor: Andreas Hofmann
Title: MIT Research Scientist
Thesis Supervisor: Brian C. Williams
Title: Professor of Aeronautics and Astronautics
Acknowledgments
First and foremost, I would like to thank my Thesis supervisor, Dr. Andreas Hofmann.
Andreas has been the most important part of my success, and has been with me
every step of the way for my thesis. He provided me with the initial direction of our
problem and how to solve it. He also made several of my struggles throughout the
year disappear. When I couldn't figure something out, Andreas was the first and
last person I would go to. Providing an excellent balance of freedom and direction,
the process of completing this thesis went by very smoothly. I can't thank Andreas
enough and I hope that I have the opportunity to work with him in the future. I will
be lucky if I ever have a mentor as helpful as Andreas.
The completion of this thesis would not have been possible without the help and
support from several people and organizations. First I would like to thank Professor
Brian Williams for giving me the opportunity to work in his lab, the Model-Based
Embedded Robotic Systems (MERS) lab, on this project.
I was lucky enough to
stumble into Professor Williams' Cognitive Robotics class during my senior year. I
loved the material, which consisted mostly of topics related to work done in the MERS
lab, and I knew I wanted to be a part of it. Luckily, there was a position open in the
lab for a Masters of Engineering student. I knew that I would only enjoy this year
if I had an interesting thesis topic, and Professor Williams was able to give me that.
Sadly, he spent most of my time here on sabbatical in Australia, so I didn't get to
interact with him as much as I would have liked to.
Under Professor Williams is the MERS lab, which helped me with everything I
needed. Coming into a new group is never easy, but everyone in the MERS community
is incredibly helpful and always willing to help you tackle challenges. The amount
I learned this year from lab exercises and just interacting with everyone has been
incredible. The one thing I will miss most about this year is the atmosphere and
culture of the MERS lab.
Another person who has been crucial to my success at MIT has been my academic
advisor, Professor Jim Glass. He definitely went above and beyond for the advisor
role, providing me a recommendation letter when I needed one, and making sure I
was taking the right classes so I could graduate on time while getting a tremendous
education. Along with Professor Glass is the rest of the Electrical Engineering and
Computer Science administration who have always been there, and perform their jobs
flawlessly. This is one thing that often goes unnoticed, because I have rarely had any
problems in regards to this. After seeing some of the administrative nightmares that
my brother had at Boston College, I am truly thankful for our administration.
Next, I would like to thank my sources of funding. I would not have been able to
complete this thesis without my two sources of funding. First, is the Boeing Company.
I am privileged to work under such a prestigious company, and explore application
of my research in such an interesting field. Not only the funding but the Boeing
project in our lab, headed by Scott Smith, has provided me with several ideas used
in this thesis. Second, I would like to thank the Skolkovo Institute of Science and
Technology for providing additional funding, and an exciting field. Working under
the Space Strategic Development Project has been extremely interesting, particularly,
exploring the application of robotics in space. I am grateful for both of these entities
for trusting me with providing meaningful research over this past year.
Outside of school, there were three major groups I could not have made it through
college without. First, is my fraternity Sigma Chi, where I have made most of my
closest friends over the last several years. I don't know where I would be today if I had
not pledged freshman year. Secondly, is the MIT lacrosse team. I didn't initially plan
on playing lacrosse at MIT, which would have been a huge mistake. Bonding with
the players and coach Walter Alessi while playing a sport I love was an awesome way
to take my mind off things and keep my sanity. Lastly, is the MIT hockey team. I was
extremely disappointed to hear about the cut of MIT hockey as a recently accepted
pre-frosh, but thankfully the dedication of the players of the team allowed hockey to
continue at MIT. Hockey has always been a huge part of my life, and continuing it at
MIT made a huge difference in my happiness. The members of the team are some of
my best friends, and coach Dave Hunter has become a phenomenal mentor over the
past four years. The hockey team has been important to my years here, and I hope
to remain a part of the program for years to come.
I am extremely grateful to have attended MIT for the past 5 years, and to have completed
both my Bachelor's and Master's degrees here.
This Thesis was graciously funded as part of the Space Exploration Strategic
Development Project funded by the MIT Skoltech Initiative. Additional funding was
given by the Boeing MIT-BA-GTA-1 grant.
Contents
1 Introduction 15
  1.1 Motivation 16
  1.2 Challenges 20
  1.3 Approach 22
      1.3.1 Key Innovations 23
2 Background 25
  2.1 Potential Field Method 25
  2.2 BiRRT 26
  2.3 Autonomous Aerobatic Helicopter Flight 27
3 Problem Statement 29
  3.1 Overview 29
  3.2 Pose Adjustment 31
      3.2.1 Collision Pose Adjustment 31
      3.2.2 Goal Pose Adjustment 32
      3.2.3 Formal Problem Formulation 33
  3.3 Trajectory Adjustment 34
      3.3.1 Collision Trajectory Adjustment 35
      3.3.2 Goal Trajectory Adjustment 36
      3.3.3 Formal Problem Formulation 36
4 Methods 39
  4.1 Pose Adjustment 39
      4.1.1 Overview 39
      4.1.2 Collision Pose Adjustment 40
      4.1.3 Goal Pose Adjustment 45
  4.2 Trajectory Adjustment 48
      4.2.1 Overview 48
      4.2.2 Run Trajectory Algorithm 49
      4.2.3 End-effector Collision Trajectory Adjustment 51
      4.2.4 Non-End-effector Collision Trajectory Adjustment 57
5 Results 63
  5.1 MATLAB 64
      5.1.1 Pose Adjustment 64
      5.1.2 Trajectory Adjustment 67
  5.2 OpenRAVE 77
      5.2.1 Pose Adjustment 77
      5.2.2 Trajectory Adjustment 79
6 Discussion 91
  6.1 Analysis of Results 91
  6.2 Contributions 93
  6.3 Future Work 94
List of Figures
1-1 Scenario 2: The original planned trajectory 18
1-2 Scenario 2: The original planned trajectory with an obstacle 18
1-3 Scenario 2: The original planned trajectory with an adjustment 19
1-4 Automobile Manufacturing Factory 20
2-1 Potential Field Method 26
3-1 Collision Pose Adjustment Problem 32
4-1 Computing the Middle Reaction Vector Plane 52
4-2 Computing the Reaction Vectors 54
4-3 Using the Same Reaction Vector 55
4-4 Computing the Reaction Vector 59
4-5 Using the Fan Approach for Non-End-Effector 60
5-1 Pose Adjustment Scenarios 65
5-2 Successful Pose Adjustments 66
5-3 Scenario 1: Trajectory Adjustment Result 68
5-4 Scenario 2: Trajectory Adjustment Result 69
5-5 Scenario 3: Trajectory Adjustment Result 70
5-6 Scenario 3: Sequential Pose Jump 71
5-7 Scenario 4: Trajectory Adjustment Result 72
5-8 Scenario 4: Sequential Pose Jump 73
5-9 Scenario 5: Trajectory Adjustment Result 74
5-10 Scenario 6: Trajectory Adjustment Result 76
5-11 OpenRAVE: Pose Adjustment Result 78
5-12 OpenRAVE: Goal Adjustment Result 1 80
5-13 OpenRAVE: Goal Adjustment Result 2 82
5-14 OpenRAVE: End-Effector Collision Adjust 1 84
5-15 OpenRAVE: End-Effector Collision Adjust 2 85
5-16 OpenRAVE: Non-End-Effector Collision Adjust 1 86
5-17 OpenRAVE: Non-End-Effector Collision Adjust 2 87
6-1 Chekhov System Architecture 96
List of Tables
5.1 OpenRAVE: Collision Adjustment Timing Results 88
Chapter 1
Introduction
The fields of robotics and artificial intelligence are growing at a rapid pace. As seen
with many Fortune 500 companies, such as Google, investing heavily in the field of robotics,
people believe the future will be filled with complex robotic systems performing non-trivial tasks. The next major step in robotic advancement will most likely
come in manufacturing. Currently, there are several applications of manufacturing
where everything is completely automated, and other applications where there is little
to no automation. Bridging this gap will be key in the future of robotics. The major
task in bridging this gap is improving robotic artificial intelligence. Currently,
robots that can move from one predetermined place to another in very structured
environments are widely used, for example in automobile manufacturing. These robots are not aware of their surroundings, and have no ability to adapt
to changes.
Robots that are aware of their environments and can adapt rapidly to changes
are necessary to perform many tasks. Most environments are unstructured and unpredictable, especially when humans are involved in interacting with robots. There
are two major advancements that must be made for this to happen. The first is in
sensing. Currently, robotic sensing and vision are not nearly at the level of human perception, and for many tasks, accuracy and precision are necessary. The other aspect
is that of cognitive reasoning or planning. The field of cognitive reasoning has advanced rapidly in recent years, but is still not mature enough to allow for robots with
human-like cognitive ability. Especially in robotics, artificial intelligence is necessary
for reasoning about an environment. This is the aspect we will be focusing on.
1.1
Motivation
Automobile manufacturing has been automated by robots for several years. This is
the same with the manufacturing of most other mass-produced products. The
reason we are able to do this is that we can use an assembly line and just assemble
one piece at a time at each step of the assembly line. Due to the nature of an assembly
line, every piece is almost always where it should be, and the robots don't need to
know anything about the environment. Each robot simply has a predetermined set
of tasks which it can perform to complete the assembly. These robots have limited
to no sensing, and often won't be able to detect failures. In the case that the robot
can detect a failure, it will simply fail and not be able to adjust in order to complete
its task. This process is considered automated; however, many manufacturing tasks
require more intelligent robots.
An example where intelligent manufacturing is necessary is the manufacturing of airplanes by companies like Boeing. These planes are too big to move along
an assembly line, so each piece of the assembly must be brought to the plane. There
is a lot of uncertainty in this environment, making it very difficult to automate. For
this reason, Boeing's manufacturing is done mostly by humans, and very little is automated. It would be nearly impossible to completely eliminate human involvement,
so robots that can reason about their environment and interact with humans are necessary. This is a very complex task, and we will be focusing on one small piece of it.
For a robot to accomplish tasks, it must plan its motions, taking into account
knowledge of the environment. There are currently algorithms that can generate a
motion plan, such as Bidirectional RRT (BiRRT) [5], but these can be slow and costly,
especially when the plans are constantly changing. These also don't necessarily produce optimal plans, as some, such as BiRRT, use a randomized approach that tries to find
a successful plan, not necessarily the best successful plan. We selected BiRRT as our
example motion planner because it is well known and has already been implemented
in the OpenRAVE simulation environment which we used in implementing our algorithms. Other motion planners can find optimal plans, but they run even slower. We
will introduce the reactive trajectory adjustment algorithm. This will lessen the need
for planning algorithms when a minor adjustment to the plan can be made.
For our problem, we are given a robot and a motion plan for the robot to carry
out as well as knowledge about the environment. In an ideal world with no variance,
this plan will be valid, and the robot should be able to accomplish its task with no
error.
Realistically, the environment is constantly changing, as is the goal of the
robot. In order to account for these changes, we need robots that are aware of their
environment and can adjust to disturbances.
We can see an example of this through a simple scenario using MATLAB. In figure
1-1, we see a simple planar 2 Degree of Freedom (DOF) robot as represented by the
blue lines moving into a goal region, as specified by the green box. The green line
denotes the path of the end-effector. This plan can be generated by a planner such
as BiRRT [5]. Now, let's move on to Figure 1-2. As we can see, a blue square obstacle
is blocking the robot's planned trajectory. We could re-plan, but it looks like only
a minor adjustment from the original plan is needed. We would like to make this
adjustment rapidly without needing to call the planner, which can be slow. Figure
1-3 shows an example of this adjustment. Now you can see that the path denoted in
green is valid and the robot doesn't collide with the obstacle.
Figure 1-1: The original planned trajectory. The trajectory created by the planner
with no collisions.
Figure 1-2: The original planned trajectory with an obstacle shown as a blue box.
An obstacle has been introduced that creates a collision.
Figure 1-3: The original planned trajectory with an adjustment shown as the green
line that moves around the box. A proper adjustment has been made to avoid the
collision.
For a more realistic scenario, imagine a car manufacturing plant such as the one
shown in Figure 1-4. One robot's task is to put a new part on a car, so it creates
a plan to accomplish that task. The robot tries to execute this task, but there is
another robot putting a part on a similar location of the car, blocking the
first robot's path. The first robot needs a new plan, so it could call the planner to
make a new plan around the second robot. This can be slow, and the second robot
may have moved by the time the planner has completed, perhaps into collision with the
new plan. This solution to collision adjustment can be very costly.
Figure 1-4: This image shows an automobile manufacturing plant with several robots
working on a car simultaneously.
It is also possible that the goal, or task, of the robot has changed. For instance, the
robot may be trying to pick up a moving object on a conveyor belt. At each step, the
object will move slightly, altering the necessary path to pick up the object. Another
example would be if the robot was sensing the object. As the robot gets closer to the
object, the sensing may improve and the robot's predicted location of the object will
change. We aim to solve these problems by making adjustments to a path that is
mostly valid, but slightly in collision or outside of the goal. This thesis will describe
how we solve these problems.
1.2
Challenges
This problem of reactive trajectory adjustment can be very difficult because not only
are we trying to create an algorithm that can run very fast, we would also like the
adjustment to be near optimal. Since we are making minor adjustments, and want
the adjusted plan to be as close to the original path as possible, we need to detect the
depth of collisions and when a collision is resolved. We want the adjusted trajectory
to be close to the original path, because this adjustment will likely be made while the
robot is executing, and it would be impractical for the robot to have to move a long
distance if a smaller distance is possible. We also want the robot end-effector (the
robot's gripper or manipulator) to be as close to its original goal position as possible.
Optimizing on this as well as resolving a collision can be difficult. There are some
trade-offs that must be made such as a less optimal adjustment, or a slower execution
time.
There are also many different possible types of collisions. For instance, there
can be multiple obstacles in collision with multiple different parts of the robot. The
obstacles can also come in many different shapes and sizes. Our adjustment algorithm
will be able to cope with these challenges; however, we do make some assumptions
on the obstacles the robot is in collision with. There are also different collision types,
such as a collision with the end-effector or with a link of the robot. This can pose
more challenges and requires different methods of adjusting the trajectory.
The main challenge is determining the direction in which to resolve the collision.
In 2 dimensions this isn't too tricky, as there are really only two different direction
vectors, either above or below the obstacle. When moving on to the 3-dimensional
case, the 2 potential adjustment vectors become an infinite set of vectors. Finding the
optimal vector would be nearly impossible, while finding a near optimal adjustment
vector can still be challenging.
These challenges cover the main factors behind making an algorithm that creates
a proper adjustment; however, a significant piece of our algorithm is the speed at
which it runs. This is especially true if the goal of the robot is constantly changing,
necessitating constant adjustments.
Making our algorithm run significantly faster
than BiRRT can be challenging. In order to justify the use of our algorithm, it must
not only create an adjustment that is considered optimal or near optimal, but it must
be much faster than BiRRT. The speed at which BiRRT performs can vary greatly
depending on the scenario, and for the majority of cases we must perform much better
than BiRRT. With these challenges in mind, we will solve this problem.
1.3
Approach
To find an adjusted path, we use an approach of incremental adjustment. First we
detect all poses of the robot that are in collision to find the poses that are invalid. A
robot pose is simply a set of joint angles for the robot. Then, for each pose, we will
detect the collision, and calculate exactly where the collision is on the robot. We will
call this the reaction point. The next step, which will end up being very difficult, is
to compute the reaction vector for each pose. The reaction vector is the direction the
robot will move in order to resolve the collision. Each collision pose may have the
same reaction vector or different reaction vectors depending on whether the collision
is an end-effector or non-end-effector collision. This will be explained in more detail
later.
The next step is the incremental adjustment. For each pose in collision, the pose
will be adjusted by a small increment in the direction of the reaction vector. To
calculate this adjustment, we will use a Jacobian for the reaction point to translate
the reaction vector direction into a set of joint angle adjustments for the robot. This
value is optimized through a quadratic program. This process of incrementing the
pose will continue until the pose is no longer in collision with an obstacle. In order to
minimize disontinuities and create a smooth trajectory, successive poses of a trajectory
are adjusted similarly. Once each pose is adjusted, the robot should be able to move
between each pose without colliding with an obstacle. It is also possible that a minor
adjustment is not sufficient and our algorithm will fail. In this case, re-planning is
necessary.
The robot will constantly be checking for obstacles and performing this adjustment
until the goal is reached. It is also possible that the goal is changing as the robot is
trying to reach the goal. These adjustments will be similar, except the notion of a
reaction vector is no longer needed, because the goal optimization is sufficient
to make an optimal adjustment. This makes the
goal adjustment algorithm much simpler than the collision adjustment algorithm.
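As a rough outline of the approach just described, the sketch below walks through the trajectory, adjusting each colliding pose incrementally. It is only an illustration: the helper names (find_collision_poses, reaction_point, reaction_vector, adjust_pose_qp, in_collision) are hypothetical stand-ins for the components developed in Chapter 4, not Chekhov's actual interfaces.

```python
def adjust_trajectory(trajectory, environment, goal_region, max_iters=100):
    """Incrementally adjust every colliding pose of a planned trajectory."""
    for i in find_collision_poses(trajectory, environment):
        pose = trajectory[i]
        point = reaction_point(pose, environment)              # where the collision is
        direction = reaction_vector(pose, point, environment)  # direction to move out
        for _ in range(max_iters):
            if not in_collision(pose, environment):
                break
            # One small, near-optimal step computed by the quadratic program.
            pose = adjust_pose_qp(pose, point, direction, goal_region)
        else:
            return None   # a minor adjustment is not enough; re-plan instead
        trajectory[i] = pose
    return trajectory
```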
1.3.1
Key Innovations
Throughout the process of solving this problem, we made several key innovations.
The first was formulating the quadratic program. Originally we felt we could just
use the reaction vector and the Jacobian calculation, but in order to take the goal
region into account, it was necessary to have a quadratic program. We optimized on
both the reaction vector calculation and keeping the end-effector as close to the goal
as possible. This gave us much more favorable results as the robot was now able to
adjust out of collision while the end-effector remained in the goal region.
The next key innovation was differentiating between end-effector collisions and
non-end-effector collisions. This became key in calculating the reaction vectors because the method we used was different for each type of collision. The method used
to compute these reaction vectors was also very important. Especially for the end-effector collisions, the way we computed the reaction vectors was crucial in completing
our algorithm. This was a new method we designed and implemented that ended up
working very well.
Finally, to increase the speed for link 1 collisions, or collisions involving the robot
link closest to the robot base, we figured out that only the first and last collision pose
needed to be adjusted. The rest could just use a linear interpolation between the two
adjusted poses. Since adjusting each pose is the slowest step, this greatly increased
our speed for certain scenarios.
Throughout this thesis, we will dive deeper into the topics we have discussed in this
introduction, starting with background information needed in solving the problem,
and followed by formulating the problem more clearly, and separating the problem
into multiple pieces. Then, we examine the methods necessary in solving the problem.
Next, we implement our methods and produce several results. Finally, we analyze
the results and provide insight for future work.
Chapter 2
Background
2.1
Potential Field Method
In order to understand our approach and the reasoning behind it, we first need to
look at some previous work done on this subject. A very important article on collision avoidance is Khatib's work on the potential field approach [4]. This approach
was developed for a robot with several degrees of freedom to avoid obstacles in an
environment while the robot was executing a path. This approach uses force vectors
that the robot is intended to follow. Any obstacle will have forces directing away
from the obstacle, whereas open space will have forces directing toward the goal. An
example of how this method can be used is seen in Figure 2-1 [2]. What this method
accomplishes is very similar to what we are trying to do. The only issue is that it
can be costly to compute the potential field in a dynamic environment. Every time
an obstacle moves, we would need to recompute the potential fields.
Figure 2-1: This diagram shows how the potential field method works. Force vectors direct the robot away from obstacles and toward the goal, and the robot follows the path they create.

Another disadvantage of the potential field method is the possibility of local minima, where the force vectors can leave the robot stuck before it reaches the goal. Ultimately, we would like an algorithm that can make trajectory adjustments as part of a combined planning and execution system.
2.2
BiRRT
Another method of handling disturbances during execution is to re-plan with a motion planning algorithm. One such planning
algorithm is Bidirectional Rapidly-Exploring Random Trees (BiRRT). This is based
on a bidirectional implementation of RRT [5]. BiRRT is a planning algorithm that
will try to find a path from a start position to a goal position. The path found is not
necessarily optimal. This is one of the issues with using BiRRT for making adjustments to a plan that is infeasible. Even if the plan only needs a minor adjustment,
it is possible that BiRRT creates a very different alternate plan that can increase
the operation time of the robot. Other planning algorithms such as RRT* [3] create
optimal plans, but they are slower, and RRT* hasn't been included in the OpenRAVE
simulation environment.
The major problem with BiRRT is the time it takes. It is very variable, depending
on the complexity of the plan necessary, but it usually takes much longer than we
would like. BiRRT usually runs in a couple of seconds, while we would like to be
about an order of magnitude faster. This is especially true in a constantly changing
environment.
If a robot has to create a new plan every few seconds, and it takes
a few seconds to generate that new plan, the robot will never reach its goal. This
is the main reason why a faster method of adjusting plans is necessary.
BiRRT
can be useful for generating the initial plan, but is impractical when only a minor
adjustment is necessary to continue execution because a faster method should be
plausible.
Although there are several planning algorithms that could be used as
comparison to our algorithm, we will use BiRRT as the example planning algorithm
for this thesis because it is reasonably fast and commonly used in the OpenRAVE
simulation environment.
2.3
Autonomous Aerobatic Helicopter Flight
The approach presented in this thesis uses several similar concepts found in Pieter
Abbeel's paper on aerobatic helicopter flight [1]. Although the goal and overall approach are very different, there are some important similarities between their work
and ours. One thing to note is that in both of our problems, speed is important. For
helicopter flight, it would be impractical to have a long planning algorithm or
something similar to allow the helicopter to perform an aerobatic maneuver. Abbeel
used reinforcement learning along with a differential dynamic program to solve their
problem. Although learning was not used in our problem, the differential dynamic
program is very important.
The differential dynamic program in this paper computes a linear approximation
to the dynamics and a quadratic approximation to the reward function around the
trajectory [1]. Then, based on these approximations, it computes the optimal policy.
This is the inspiration for much of our pose adjustment problem.
Although the
approaches differ, the fundamental concepts are the same. For our problem, we use
a Jacobian to translate a vector into approximate joint angle adjustments. We then
use this along with a quadratic program to optimize on this adjustment. The results
shown in this paper were very promising, which led us to believe that we could use
a similar approach for optimization.
Our approach that uses similar concepts to
Abbeel's work is outlined in this thesis.
Chapter 3
Problem Statement
3.1
Overview
What we are developing is a more robust execution system. When a robot needs to
accomplish a task, it first creates a plan to execute. In most real world situations,
environments are constantly changing and plans may become infeasible. The current
method of solving this issue is simply re-planning every time a plan becomes infeasible;
however, this can be very slow. Whether the plan requires a slight alteration or a
complete overhaul, the re-planning algorithm will take a lot of time. We will look at
the case when a plan requires a slight alteration. If a plan is mostly valid, but an
obstacle has moved slightly in the way, or the goal has moved slightly, we shouldn't
need to create a full new plan. Ideally we can create an adjustment to the plan in
much less time than re-planning. This is the problem that we focus on in this thesis.
In our problem, we are given a robot which has a task it needs to carry out. For
instance, the task could be a robot arm picking up a block, or the task could be
moving a mobile robot to some location. We will be focusing on using a robot arm
to carry out tasks. More specifically, the robot arm will begin in some pose, the start
pose, and will need to move into a new pose, the goal pose. The goal pose may be a
pose that the robot arm needs to be in to grasp an object.
To accomplish the task of getting from the start pose to the goal pose, a planner
will create a trajectory, or a sequence of robot poses, for the robot. The trajectory will
be valid and avoid any obstacles that are in the environment. If possible, executing
the trajectory will reach the goal pose. This is the current method of completing
robot tasks. This works well in scenarios such as car manufacturing or assembly line
work where the environment is very structured and there is little uncertainty.
For unstructured environments, simply planning and executing won't necessarily
work. Once the robot begins to execute the planned trajectory, it is possible that an
obstacle could come into contact with its planned trajectory. It is also possible that
the goal region may change. For example, suppose that a robot arm is trying to grab
an object that can move, or was moved by some other member of the environment.
This would make the current planned trajectory infeasible, either because the trajectory will collide with an obstacle, or it will not reach the goal region. If the robot
senses an obstacle obstructing its path, or that the goal has changed, then a new plan
is needed for the robot to complete its task.
The current way to solve an infeasible plan is to create a new plan. The robot will
simply call some planner and a new feasible plan will be found. As mentioned before,
this can be very slow, and by the time the plan has been generated the obstacle may have
moved again, or the goal region could have changed. Therefore, the newly generated
plan will once again become infeasible. Also, it is often the case that only a minor
adjustment needs to be made, and a completely new plan is not needed. In addition,
a new plan created for a minor adjustment might be far from the original path and
take longer for the robot to execute. We would like to create a new solution that
can make reactive adjustments to a trajectory during execution. A new algorithm
is necessary that is tightly coupled with execution. The algorithm should be able to
adapt quickly to changes in the environment and plan around them efficiently.
Our overall goal is to design and develop an algorithm that allows a robot to
quickly adjust to an obstacle or change in goal region during execution. The robot
then should make the adjustment to the trajectory, and continue with the planned
trajectory to reach the goal region. If a slight adjustment can not be made to make
the plan feasible, then the robot should recall the planner asking for an entirely new
plan. Overall, this algorithm should allow robots to execute tasks faster and more
effectively in unstructured environments.
3.2
Pose Adjustment
The first problem we aim to solve is the pose adjustment problem. This is the first
step in solving the overall problem, and it must be solved before the overall problem
can be addressed. A robot's plan consists of several
poses, so any infeasible pose will need to be altered to make the overall plan feasible.
The pose adjustment is simply adjusting a single pose that is no longer valid. This
pose could be invalid due to a collision, or because the goal position of the end-effector has
changed. We want to find a new valid pose for the robot as quickly as possible. This
new pose should also be near the optimal solution. By optimal, we mean the best
possible solution, as if every possible solution were tested. Our criterion for optimal is
having the end-effector as close to the goal position as possible while the pose is out of collision.
We also want the adjusted pose to be close to the initial pose so the robot doesn't
have to move very far. We will separate the pose adjustment problem statements into
the collision pose adjustment and goal pose adjustment.
3.2.1
Collision Pose Adjustment
The collision pose adjustment involves a robot pose where some part of the robot is in
collision with one or more obstacles. This could be some objects in the environment,
another robot, or perhaps a human working with the robot. The collision pose will
place the end-effector in some goal position, while a buffer region surrounding the
goal position accounts for the goal region. For any intermediate pose, where there
is no goal position, we consider the original placement of the end-effector the goal
position. The pose is considered invalid because it is in collision with one or more
obstacles. We want to find a new valid pose which places the robot's end-effector as
close to the goal position as possible. We will consider any pose valid if the robot is
not in collision, and the robot's end-effector is within the bounds of the goal region.
The collision pose adjustment algorithm should find a new valid pose very quickly.
Figure 3-1: The robot pose, shown here as the blue lines, is colliding with an obstacle,
shown as the red line. The black arrow represents the direction of the adjustment,
and the green box represents the goal region. The pose adjusts out of the collision
while the robot's end-effector remains in the goal region.
The new pose found should be near optimal in that the new pose isn't far from the
original pose and the robot's end-effector is as close to the goal position as possible.
Figure 3-1 shows an example of the collision pose adjustment problem. If the collision
pose adjustment algorithm cannot find a new valid pose quickly, then the algorithm
should fail and recall the planner. The collision pose adjustment algorithm is meant
for small adjustments, so if the necessary adjustment is large, then it would be more
efficient to recall the planner.
3.2.2
Goal Pose Adjustment
For the goal pose adjustment, the robot's end-effector is not in the goal position, most
likely because the goal position has been changed. This could be due to the robot
trying to follow a moving object, or if the pose estimate for the object has changed.
What we would like to do is find a new pose where the end-effector is in the goal
position. We would also like it so the robot has to move as little as possible, so the
adjustment should be small. We will assume for simplicity there is no possibility of
collision. The robot should be able to find the new goal pose quickly. If the goal
position has changed by a significant amount, and the goal adjustment algorithm
cannot find an accurate adjustment quickly, then the algorithm should fail and recall
the planner. This is the same case as with the collision pose adjustment.
3.2.3
Formal Problem Formulation
For the pose adjustment, the inputs to the problem are:
1. An environment containing obstacles
2. A plant model representing the kinematics of the robot, including the joint
limits
3. The current pose of the robot
4. A direction vector if the robot is in collision
5. A new goal position if the goal has changed
6. A goal region for the robot's end-effector
The plant model specifies what positions the robot can have its joints in so that
any planned trajectory will be feasible. We also need the current pose of the robot,
which will be the invalid pose. The direction vector is for collisions. If the robot is in
collision, we need to know which direction we need to move in to resolve the collision.
For the goal change case, we need the new goal position. For both, we will need a goal
region, so that we know where the end-effector must be within after the adjustment.
We will use an objective function that is a cost on the error between actual end-effector
position and the goal end-effector position. The problem is formulated as:

    minimize    c(q)
    such that   q ∉ C(E)
                qmin ≤ q ≤ qmax                    (3.1)
                gmin ≤ g(q) ≤ gmax

where c is the cost function, q is the robot joint pose vector, C(E) is the collision
space of the environment, qmin and qmax are the robot joint pose limits, and g is
33
the forward kinematic transform function of the robot. Thus, g gives the position
of the end-effector in work space coordinates, given the joint space coordinates (pose
vector). The gmin and gmax specify the goal region.
Given this input, the pose adjustment algorithm will first check whether the pose
satisfies all the constraints. If it does, then no adjustment is needed. Otherwise, the
pose adjustment algorithm will attempt to compute a new valid pose quickly and
efficiently.
We assume we can rely on the collision environment component to compute the
set of collision points for the pose. If the set is non-empty for the pose, an adjustment
should be attempted. For each collision point, the collision environment computes
the 3D location of the collision and a direction vector indicating the direction the
robot's colliding link should move in to eliminate the collision. Later, we will see that
this is actually computed by the trajectory adjustment algorithm, but regardless it
is used as input to the pose adjustment formulation.
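As an illustration only, the constraints in (3.1) amount to a feasibility test of the following form. The collision predicate, the forward kinematics g, and the limit vectors are assumed to be supplied by the plant model and collision environment; the names below are hypothetical, not part of Chekhov.

```python
import numpy as np

def pose_is_valid(q, in_collision, g, q_min, q_max, g_min, g_max):
    """Check the constraints of (3.1): collision-free, within joint limits,
    and end-effector inside the goal region."""
    q = np.asarray(q, dtype=float)
    x_ef = np.asarray(g(q), dtype=float)   # end-effector position in work space
    within_joint_limits = np.all(q_min <= q) and np.all(q <= q_max)
    within_goal_region = np.all(g_min <= x_ef) and np.all(x_ef <= g_max)
    return (not in_collision(q)) and within_joint_limits and within_goal_region
```

If this test fails, the pose adjustment algorithm attempts to compute a new valid pose; otherwise no adjustment is needed, exactly as described above.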
3.3
Trajectory Adjustment
Once we have solved the pose adjustment, we can look into the more interesting
trajectory adjustment problem.
As we mentioned before, a trajectory will just be
a sequence of several poses, specified by a planner to accomplish some task.
The
trajectory will start with a start pose and end with a goal pose, which places the
robot's end-effector in the goal position. The original planned trajectory should be
valid and allow the robot to move through every pose without collision and reach the
goal pose. As with the pose adjustment, the problem comes in when the trajectory
is no longer valid, either because of a collision in the path, or a change in the goal
position. This is the case because we are working in an unstructured environment
typically with human interaction.
The current solution would be to re-plan every
time there is a change in the environment, but this can be extremely slow. Especially
if the trajectory is nearly valid, and only needs a minor adjustment, we should not
need to fully re-plan the trajectory.
Ideally we would be able to make a slight adjustment to the path quickly. This
algorithm will need to find a near optimal adjustment and execute much faster than
the planner. The algorithm should be tightly coupled with execution so the robot
can accomplish its task as quickly and efficiently as possible. This algorithm should
be the first line of defense against a disturbance. As with the pose adjustment, if
the algorithm can't find an adjusted path quickly, the planner should be called. The
trajectory adjustment problem can be separated into collision trajectory adjustment
and goal adjustment as with the pose adjustment.
3.3.1
Collision Trajectory Adjustment
For the collision trajectory adjustment problem, some obstacle in the environment is
moved into collision with the current planned trajectory of the robot. More specifically, several poses of the trajectory are in collision. There may be multiple collisions
in the trajectory, or potentially multiple objects in collision with a series of poses.
The trajectory adjustment algorithm will need to identify which poses are in collision
as well as what robot links are in collision. Once the trajectory adjustment identifies
the collision parameters, it will then call the pose adjustment algorithm to adjust
each pose in collision.
The trajectory algorithm can easily identify which poses are in collision and the
robot links for each pose.
A more difficult part of the trajectory adjustment al-
gorithm is figuring out which direction to adjust the poses in.
In figure 3-1, the
pose adjustment shows a black arrow representing the direction to adjust in. This is
an input to the pose adjustment algorithm, so it must be calculated in the trajectory
adjustment. Especially in 3 dimensions, this can be a difficult task, and the most
interesting problem the trajectory adjustment solves. Overall, the collision trajectory
adjustment problem involves discovering the poses in collision, which robot links are
in collision, and what direction each pose should adjust in order to avoid the collision.
The pose adjustment problem covers the rest, making a complete system that can
avoid collisions.
3.3.2
Goal Trajectory Adjustment
The goal trajectory adjustment problem is simpler than the collision trajectory adjustment problem, because only one pose needs to be adjusted.
Also, there is no
longer the notion of needing to calculate the direction for the pose adjustment. This
is because the goal pose adjustment problem doesn't need a direction vector out of a
collision; the direction to move is towards the new goal position. The goal trajectory
adjustment will involve a change in the goal pose. This will first require the goal pose
adjustment to find a new goal pose. The next step for the goal trajectory adjustment
is to find a path to the new goal pose. Since the planner already created a plan to
the old goal pose, it shouldn't be too difficult to find a slightly adjusted plan that can
reach the new goal. As we mentioned before, the goal should only have changed slightly;
otherwise our goal adjustment algorithm won't work and we will need to recall the
planner. The goal trajectory adjustment problem is very simple, as it relies heavily
on the goal pose adjustment algorithm.
3.3.3
Formal Problem Formulation
The trajectory adjustment problem is closely related to the pose adjustment.
Given some start pose and a goal pose, a planner will generate a sequence of poses
from the start pose to the goal pose avoiding any known obstacles. The inputs are
similar to that of the pose adjustment, the only difference being now that there are
several poses.
When a new obstacle is introduced, or an obstacle moves into the
trajectory path, our algorithm is called. One or several poses in the trajectory may
be adjusted by the solution to the pose adjustment problem. All the poses in the
collision should adjust so that a new trajectory is created, reusing several of the old
poses, that is once again a valid trajectory from the current pose to the goal region.
Since a trajectory adjustment is simply a sequence of poses, the pose adjustment
does most of the work. The trajectory adjustment needs to deal with sequential
poses. The robot must be able to execute between two sequential poses. So if two poses in a row are
adjusted separately, the robot must still be able to execute from the first pose to the
36
second; this implies that velocity and acceleration constraints need to be observed.
It is also possible that after the adjustment, executing between two sequential poses
would result in a collision, even if each pose is valid and out of collision. The trajectory
adjustment will need to identify if this problem exists and solve accordingly.
The final piece of the trajectory adjustment is making sure that the task gets
accomplished. The sequence of poses must place the end-effector in the overall goal
region.
The trajectory adjustment also needs to be constantly checking for new
changes in the environment in order to test the poses for infeasibility according
to the pose adjustment problem formulation.
The trajectory adjustment problem,
combined with the pose adjustment problem, should solve the overall problem of
quickly adjusting a robot's trajectory in an unstructured environment.
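A minimal sketch of the trajectory-level feasibility check implied here is given below. It assumes the single-pose test from the pose adjustment formulation (pose_is_valid) and a collision predicate as inputs, and it approximates the transition check by sampling intermediate poses between successive waypoints; velocity and acceleration limits would be checked analogously. These names are illustrative assumptions, not Chekhov's interfaces.

```python
import numpy as np

def trajectory_is_valid(poses, pose_is_valid, in_collision, samples=10):
    """Every pose must be valid, and every transition between successive
    poses must stay out of collision (checked by linear interpolation)."""
    if not all(pose_is_valid(q) for q in poses):
        return False
    for q_a, q_b in zip(poses, poses[1:]):
        q_a, q_b = np.asarray(q_a, dtype=float), np.asarray(q_b, dtype=float)
        for s in np.linspace(0.0, 1.0, samples):
            if in_collision((1.0 - s) * q_a + s * q_b):
                return False
    return True
```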
Chapter 4
Methods
In Chapter 3, we explained the problem that we are attempting to solve. In Chapter
4, we describe the methods used in solving this problem. This chapter will be split into
two main sections, Pose Adjustment, and Trajectory Adjustment. As described in the
previous chapter, the trajectory adjustment problem builds on the pose adjustment.
Within these two sections, there are essentially two problems being solved as explained
in the previous chapter, one where a collision requires an adjustment, and the other
where a change to the goal requires an adjustment.
4.1
4.1.1
Pose Adjustment
Overview
As described in chapter 3, we need to find a solution for the pose adjustment problem.
The foundation of the pose adjustment algorithm is the Jacobian of the robot's degrees
of freedom. We know the robot's joint angles and a position vector of where we want
the robot to move.
The position vector emanates from a reaction point, located
somewhere on the robot. From this, we use the Jacobian to convert the position
vector into joint adjustments. We calculate this Jacobian from the robot's current joint
angles, and use it to translate a position vector into a joint pose vector adjustment
for the robot. The position vector Δx will be a 3 × 1 vector which represents the
3-dimensional position vector of where the robot should move to. This position vector
emanates from the robot's reaction point, which will be an approximate point on the
robot of where the collision is occurring. The joint pose adjustment vector ΔΘ is an
n × 1 vector, where n is the number of joints (or degrees of freedom) of the robot.
To convert the position vector into the joint pose adjustment vector, we will use an
m × n Jacobian computed with respect to the reaction point. Using this Jacobian we
can transform coordinate vectors in small increments using the equation:

    Δx = J ΔΘ                                    (4.1)
This equation will be a crucial piece in both the collision pose adjustment and the
goal pose adjustment. The other pieces of the algorithm are explained below for each
type of pose adjustment.
4.1.2
Collision Pose Adjustment
For the collision pose adjustment algorithm we assume the current pose of the robot
is in collision with some object. To solve this problem, we need a direction vector to
tell where the robot should move in order to avoid the collision. We will call this the
reaction vector, and for the pose adjustment problem, we assume this is known. This
becomes the Δx vector from equation (4.1). We also are able to calculate the Jacobian
based on the robot's current joint pose vector, Θ. We are looking to find the new joint
pose adjustment, or the ΔΘ vector from equation (4.1). We can compute the inverse of
the Jacobian and simply solve for ΔΘ from equation (4.1). The Jacobian we computed
is only accurate for small increments, so we need to perform incremental adjustments
to the collision pose until the collision is resolved. We do this by looping indefinitely
computing the Jacobian for the current joint pose vector, and incrementing the joint
pose vector by a small adjustment for every iteration of the loop. The pseudocode is
shown below:
1  Θ ← current joint pose
2  while collision not resolved do
3      Compute J(Θ)
4      ΔΘ ← J⁻¹ Δx
5      Θ ← Θ + ΔΘ
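To make the loop concrete, a minimal Python/NumPy rendering of the same idea is sketched below. The collision check and the reaction-point Jacobian are assumed to be supplied by the robot model and simulation environment (they are placeholders here, not part of Chekhov), and the pseudoinverse is used so that non-square Jacobians are handled as well.

```python
import numpy as np

def naive_collision_adjust(theta, delta_x, compute_jacobian, in_collision,
                           max_iters=100):
    """Incrementally step the joint pose theta along delta_x until the
    collision checker reports the pose is free (or give up)."""
    theta = np.asarray(theta, dtype=float)
    for _ in range(max_iters):
        if not in_collision(theta):
            return theta                        # collision resolved
        J = compute_jacobian(theta)             # Jacobian at the reaction point
        d_theta = np.linalg.pinv(J) @ delta_x   # Cartesian step -> joint step
        theta = theta + d_theta                 # take one small step
    return None                                 # not resolved; caller should re-plan
```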
This is a very simple and naive way to adjust the pose out of the collision. Depending on the magnitude of Δx
and the depth of the collision, this algorithm should
find a pose that is no longer in collision after several iterations. Although this does
work well for removing the pose from the collision, the new resulting pose might be
far different from the original pose. Ideally we want a pose that is no longer in collision while the end-effector of the robot is as close as possible to its original position,
or the goal position. This algorithm has no constraints on the original end-effector
position. Even if an adjustment can be made without moving the end-effector at all,
this algorithm will not necessarily find it. To find a more optimal solution, we need to
incorporate the original position of the end-effector. We use a quadratic program to
accomplish this task. We add constraints to make sure the end-effector stays within
a defined goal region. We also add the distance the end-effector moves as a term in
the objective function. This program will replace step four in the pseudocode above.
The quadratic program is shown below:
minimize
c(Xeferor, AXrperror)
such that
JrpAO + AXrperror
=
AXrp
-Jef AE + Xef =
fi
Xeferror + Xef
=
Xno
HXef
<
k
The cost function minimizes the variables X_ef_error, the distance the adjusted
end-effector is from the original position of the end-effector, and ΔX_rp_error, which
is the reaction vector error, or the difference between the adjustment made by the
quadratic program and the one that would have been made by just solving for the
adjustment directly as in equation (4.1). The first constraint is essentially equation
(4.1) but adds in the reaction vector error. This will ideally be zero, but we add some
flexibility to make sure the end-effector stays close to its original position. In the
next constraint, f_i represents the end-effector's position before the adjustment, while
the rest of the constraint specifies where the end-effector will be after the adjustment;
this is represented by X_ef. The third constraint calculates the X_ef_error, which we are
minimizing on. X_nom is the original position of the end-effector, or the goal position
of the end-effector. The last constraint specifies the goal region. This makes sure the
end-effector isn't too far from the original position. If this constraint cannot be met,
then the algorithm should fail and recall the planner.
This quadratic program will run as step 4 in the pseudocode shown above. The
algorithm will complete when the collision is resolved. Alternatively, if the collision
isn't resolved after some predetermined number of iterations, the algorithm will fail
and recall the planner. The quadratic program will compute the optimal ΔΘ adjustment such that all constraints are met and the cost function is minimized. After
several iterations, depending on the depth of the collision, we should have a new
pose that the robot can execute in order to remove itself from the collision while still
having the end-effector close to its original position.
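As an illustration of how one iteration of this program could be set up with an off-the-shelf convex solver, the sketch below uses cvxpy. The quadratic weights w_ef and w_rp, and the goal-region matrices H and k, are illustrative assumptions; the thesis does not prescribe particular values, and the helper name qp_step is hypothetical.

```python
import cvxpy as cp

def qp_step(J_rp, J_ef, delta_x_rp, x_ef_before, x_nom, H, k, n_joints,
            w_ef=1.0, w_rp=1.0):
    """Solve one incremental adjustment step of the collision pose QP."""
    d_theta = cp.Variable(n_joints)      # joint pose adjustment (ΔΘ)
    x_ef = cp.Variable(3)                # end-effector position after the step
    x_ef_error = cp.Variable(3)          # deviation from the goal position X_nom
    d_x_rp_error = cp.Variable(3)        # reaction vector error

    constraints = [
        J_rp @ d_theta + d_x_rp_error == delta_x_rp,  # follow the reaction vector
        -J_ef @ d_theta + x_ef == x_ef_before,        # end-effector after the step
        x_ef_error + x_ef == x_nom,                   # deviation from goal position
        H @ x_ef <= k,                                # stay inside the goal region
    ]
    objective = cp.Minimize(w_ef * cp.sum_squares(x_ef_error)
                            + w_rp * cp.sum_squares(d_x_rp_error))
    cp.Problem(objective, constraints).solve()
    return d_theta.value
```

Extending this to multiple simultaneous collisions, as described next, only requires appending one more reaction constraint and error term per collision.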
The quadratic program above is currently limited to only one collision; however, it
is possible for the robot to be in collision with multiple obstacles. This program can
be easily adjusted to account for multiple collisions assuming we are given a reaction
vector for each collision. To modify the program we simply add another constraint
identical to the first constraint with new variables, except for the ΔΘ, which will
be the same variable. We will also need to add the new ΔX_rp_error variable to the
cost function. The rest of the quadratic program remains the same. For example a
collision with two obstacles will have the following quadratic program:
    minimize    c(X_ef_error, ΔX_rp_error_1, ΔX_rp_error_2)

    such that   J_rp_1 ΔΘ + ΔX_rp_error_1 = ΔX_rp_1
                J_rp_2 ΔΘ + ΔX_rp_error_2 = ΔX_rp_2
                −J_ef ΔΘ + X_ef = f_i
                X_ef_error + X_ef = X_nom
                H X_ef ≤ k
This shows the overall structure of the quadratic program we use for collision
adjustments. This is the fundamental piece of the pose adjustment algorithm and is
crucial in solving the full trajectory adjustment problem.
Collision Pose Adjustment Algorithm
The full collision pose adjustment algorithm is shown below.
Algorithm 1: Collision Pose Adjustment
input : Pose, Obstacles, RobotDimensions, goalRegion, ReactionVectors
output: New Adjusted Pose
Θ = Pose.theta;
ReactionPoints = Pose.reactionPoints;
while inCollision & numIterations < 100 do
    ΔXrpList <- [ ];
    foreach reactionVector in ReactionVectors do
        ΔXrp <- reactionVector × multiplier;
        ΔXrpList.append(ΔXrp);
    JacG <- ComputeJacobian(GripperPoint, RobotDimensions);
    JacRPs <- [ ];
    foreach Point in ReactionPoints do
        JacRP <- ComputeJacobian(Point, RobotDimensions);
        JacRPs.append(JacRP);
    cost <- createCostFunction(constants);
    equalityConstraints <- createEqualityConstraints(JacG, JacRPs);
    inequalityConstraints <- createInequalityConstraints(goalRegion);
    ΔΘ <- solveQuadraticProgram(cost, equalityConstraints, inequalityConstraints);
    Θ <- Θ + ΔΘ;
    inCollision <- CheckCollision(Obstacles, RobotDimensions);
NewPose = MakePose(Θ);
This algorithm starts by initializing Θ, which is given by the robot pose. It
also initializes the reaction points, which correspond to collisions on the robot. As
mentioned before, there can be multiple collisions, so there is a reaction point for
each collision. The algorithm then enters a while loop conditioned on the robot still
being in collision and on having performed fewer than 100 iterations. The limit of 100
iterations is an arbitrary number chosen to ensure the robot can adjust within this many
iterations or fewer; if not, the algorithm should fail and recall the planner. Inside the
while loop, the first step is to initialize ΔXrp for each collision by multiplying each
reaction vector by some multiplier. We do this because the quadratic program only works
for small adjustments, so we need to make sure each ΔXrp is small. Next we compute the
Jacobian of the end-effector, denoted JacG. The "G" stands for gripper, which is
synonymous with end-effector for our purposes. The next several lines use the reaction
points to compute the reaction point Jacobian, or JacRP, for each collision. The next
part simply sets up the quadratic program as explained in detail earlier. The constants
value that the cost function takes as an argument contains the appropriate weights for
the reaction vector error and the end-effector error in the cost function. Then Θ is
adjusted based on the ΔΘ solved for by the quadratic program. Lastly, we check whether
the adjusted Θ is still in collision, and if so, repeat the while loop. Otherwise, we
return the new pose based on the newly adjusted Θ. For this algorithm we assume that
the reaction vectors are given to us. Calculating the reaction vectors is a non-trivial
task and is a major focus of the trajectory adjustment problem, which we explain later.
For the pose adjustment we assumed the reaction vector was perpendicular to the robot
link that is in collision. In the 3-dimensional case, there are infinitely many
perpendicular vectors, and selecting among them will also be covered in the trajectory
adjustment algorithm.
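To show how the quadratic program is used inside this iteration, here is a hedged Python sketch of the outer loop for the single-collision case, reusing solve_pose_adjustment_qp from the earlier sketch. The robot interface (jacobian, forward_kinematics, gripper_point) and check_collision are hypothetical stand-ins for the corresponding MATLAB/OpenRAVE functionality, and the multiplier is just the small step size discussed above.

```python
def collision_pose_adjustment(theta, reaction_vector, reaction_point, x_nom, H, k,
                              robot, obstacles, multiplier=0.004, max_iterations=100):
    """Iterate small QP adjustments until the pose is collision free (single collision)."""
    for _ in range(max_iterations):
        if not check_collision(robot, theta, obstacles):
            return theta                                   # collision resolved
        dX_rp = reaction_vector * multiplier               # small step along the reaction vector
        J_rp = robot.jacobian(reaction_point, theta)       # reaction-point Jacobian (JacRP)
        J_ef = robot.jacobian(robot.gripper_point, theta)  # end-effector Jacobian (JacG)
        x_ef_before = robot.forward_kinematics(theta)      # end-effector position f_i
        d_theta = solve_pose_adjustment_qp(J_rp, J_ef, dX_rp, x_ef_before, x_nom, H, k)
        theta = theta + d_theta
    return None                                            # failure: recall the planner
```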
This covers the methods we use to solve the collision pose adjustment problem.
This will be a significant part of the overall collision trajectory adjustment problem.
4.1.3 Goal Pose Adjustment
The goal pose adjustment algorithm is simpler than the collision pose adjustment
algorithm. It is used when the goal position of the end-effector changes by a small
amount for some reason. To simplify the algorithm, we assume that only the goal is
changing and that no obstacles are colliding with the robot. As with the collision pose
adjustment, this algorithm only works for minor adjustments because it uses the same
Jacobian calculation as in equation (4.1). The first step in modifying the collision pose
adjustment algorithm is modifying the quadratic program. The only change needed is to
remove the constraint for the reaction point and remove the corresponding term from the
cost function. We therefore get the following:
minimize
$c(X_{ef,error})$
such that
$-J_{ef}\,\Delta\Theta + X_{ef} = f_i$
$X_{ef,error} + X_{ef} = X_{nom}$
$H\,X_{ef} \le k$
There is one other subtle difference. The value $X_{nom}$ is now the new end-effector
goal position, as opposed to the original end-effector position in the collision
adjustment algorithm. Due to this difference, and the fact that we are only minimizing
$X_{ef,error}$, we only need to make one adjustment. Because $X_{ef,error}$ is driven
toward zero, after just one adjustment the new pose will have the end-effector at the new
goal position. As long as there are no collisions and the adjustment is small, this
should always work. Since there are no longer reaction vectors or reaction points,
the overall algorithm is much simpler.
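A corresponding sketch of the goal-adjustment program, again using cvxpy purely for illustration: it is the collision QP with the reaction-point constraint and its error term removed, and it is solved once rather than iterated. The function name and the assumption of 3-dimensional positions are ours.

```python
import cvxpy as cp

def solve_goal_adjustment_qp(J_ef, x_ef_before, x_new_goal, H, k):
    """Single-shot QP for the goal pose adjustment (no reaction vectors)."""
    n = J_ef.shape[1]
    d_theta = cp.Variable(n)
    x_ef = cp.Variable(3)
    x_ef_err = cp.Variable(3)
    constraints = [
        -J_ef @ d_theta + x_ef == x_ef_before,   # end-effector position after the adjustment
        x_ef_err + x_ef == x_new_goal,           # error relative to the new goal (X_nom)
        H @ x_ef <= k,                           # stay inside the goal region
    ]
    cp.Problem(cp.Minimize(cp.sum_squares(x_ef_err)), constraints).solve()
    return d_theta.value
```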
Goal Pose Adjustment Algorithm
The full goal pose adjustment algorithm is shown below.
Algorithm 2: Goal Pose Adjustment
input : Pose, newGoalPosition, RobotDimensions
output: New Adjusted Pose
Θ = Pose.theta;
goalRegion = newGoalPosition + buffer;
JacG <- ComputeJacobian(GripperPoint, RobotDimensions);
cost <- createCostFunction();
equalityConstraints <- createEqualityConstraints(JacG, newGoalPosition);
inequalityConstraints <- createInequalityConstraints(goalRegion);
ΔΘ <- solveQuadraticProgram(cost, equalityConstraints, inequalityConstraints);
Θ <- Θ + ΔΘ;
NewPose = MakePose(Θ);
This algorithm is very similar to the collision adjustment algorithm, with three main
differences. First, the input now includes a new goal position and no obstacles or goal
region. Second, there is no while loop. Lastly, there is nothing relating to the
reaction vectors.
This shows how we can transform the collision pose adjustment algorithm into the
goal pose adjustment algorithm while leaving the core components of the algorithm
intact.
This is the method we will use in solving the goal trajectory adjustment
problem.
4.2 Trajectory Adjustment
4.2.1 Overview
Now that we have described how we solve the pose adjustment problem, we look at
the more interesting trajectory adjustment problem. For our purposes, a trajectory
will simply be a sequence of poses. This is why the pose adjustment algorithm will
be crucial in solving the trajectory adjustment. In theory, we can simply compute all
the poses in collision, and adjust each one based on the pose adjustment algorithm.
This is essentially what we will do, except computing the reaction vectors makes this
much more complicated.
For the trajectory adjustment we are given a sequence of valid poses from some
planner. We call these poses the main poses; in the animation world, they are known as
keyframe poses. These poses are valid and will accomplish the goal. To make sure the
robot is easily able to detect collisions with specific poses, we interpolate several
intermediate poses. We call the added poses intermediate poses, and we call the combined
set of intermediate poses and main poses the all poses set. The interpolation simply
makes the distance the robot has to travel between consecutive poses much smaller.
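The thesis does not spell out how the intermediate poses are produced beyond interpolation; the following is a minimal sketch, assuming straight-line interpolation in joint space and a fixed number of intermediate poses per segment (the name compute_all_poses mirrors computeAllPoses in the pseudocode that follows):

```python
import numpy as np

def compute_all_poses(main_poses, num_intermediate):
    """Insert num_intermediate linearly interpolated joint poses between main poses."""
    all_poses = []
    for start, end in zip(main_poses[:-1], main_poses[1:]):
        start, end = np.asarray(start, dtype=float), np.asarray(end, dtype=float)
        for step in range(num_intermediate + 1):            # includes the starting main pose
            alpha = step / (num_intermediate + 1)
            all_poses.append((1.0 - alpha) * start + alpha * end)
    all_poses.append(np.asarray(main_poses[-1], dtype=float))  # final goal pose
    return all_poses
```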
Once the all poses set is computed, the robot begins executing each pose, checking at
each step for a collision with any future pose or for a goal change. If a new collision
is detected or a new goal position is needed, the trajectory adjustment algorithm is
called. If there is a collision, the reaction vectors are calculated for each collision
pose, as explained in detail below. Then, in the case of the collision adjustment, the
pose adjustment algorithm is called for each pose that is in collision. The new poses
resulting from the adjustments replace the old poses that were in collision. If the goal
position changes, then only the goal pose is adjusted; the new goal pose replaces the old
goal pose in the main poses, and the intermediate poses are recalculated. This process of
executing and checking for collisions or goal changes continues until the goal position
is reached.
It is possible that a collision can't be resolved, or a goal adjustment can't be made.
If the pose adjustment fails, or if moving between two sequential adjusted poses would
still result in a collision, then a valid adjustment can't be made by our algorithm.
It is also possible that two neighboring adjusted poses have discontinuities or violate
velocity and acceleration constraints. In any of these cases, the trajectory adjustment
will fail and recall the planner.
Computing the reaction vectors is the most difficult piece of the trajectory adjustment.
The method of computing them also differs depending on what part of the robot is in
collision: the end-effector or one of the links of the robot. Also, since the goal pose
adjustment does not use reaction vectors, the goal trajectory adjustment is far simpler
than the collision trajectory adjustment. To explain the trajectory adjustment algorithm,
we separate it into three scenarios: end-effector collision, non-end-effector collision,
and goal adjustment. For simplicity, we assume the type of scenario is given to us, so we
do not have to identify it. All three scenarios use the same base algorithm for running
the trajectory. The goal adjustment scenario is incorporated into the run trajectory
algorithm because of its simplicity.
4.2.2 Run Trajectory Algorithm
As explained above, all three scenarios will run the same main algorithm when executing the trajectory. If there are no adjustments, then this will essentially just
execute the trajectory given by the planner. The run trajectory algorithm is below:
Algorithm 3: Run Trajectory
input : MainPoses, Obstacles, goalRegion, RobotDimensions
output: Trajectory
AllPoses <- computeAllPoses(MainPoses, numIntermediatePoses);
foreach Pose in AllPoses do
    Execute Pose;
    if Goal Position changed then
        goalPose <- GoalPoseAdjustment(goalPose, Obstacles, RobotDimensions, newGoalPose);
        AllPoses <- computeAllPoses(MainPoses, numIntermediatePoses);
    if New Obstacle Detected or Obstacle Moved then
        Obstacles.update(newObstacle or movedObstacle);
        collisionPoses <- calculateCollision(AllPoses, Obstacles, RobotDimensions);
        if collisionPoses not Empty then
            foreach collisionPose in collisionPoses do
                collisionPose <- CollisionPoseAdjustment(collisionPose, Obstacles, RobotDimensions, goalRegion);
This algorithm simply loops through the poses, executes the next pose, and then checks
whether the goal position has changed. If it has, the goal pose is adjusted with the goal
pose adjustment algorithm, and the set of all poses is then updated based on the new goal
pose. The algorithm then checks whether any obstacles in the environment have moved, or
whether a new obstacle has been introduced. If so, the environment updates the obstacles
and checks for a collision in a future pose. If there is a collision, all the poses in
collision are added to the collision poses list. If this list is not empty, the algorithm
goes through each pose and computes the proper adjustment using the collision pose
adjustment algorithm. This algorithm assumes the reaction vectors are calculated in the
calculateCollision function. It is within this function that we need to separate
end-effector collisions from non-end-effector collisions.
4.2.3 End-effector Collision Trajectory Adjustment
For the end-effector collision trajectory adjustment algorithm we have a sequence of
poses where the end-effector is in collision with some object. For our algorithm, we
assume the object is convex. This simplifies computing the 2-dimensional plane used for
the 3-dimensional reaction vectors; the 2-dimensional plane is the plane in which our
infinite set of potential reaction vectors lies. The first step in the collision
trajectory adjustment is to look for a collision when the obstacles in the environment
change: the algorithm iterates through each remaining pose and tests whether it is in
collision, which is easily done with built-in functions.
To compute the reaction vectors we first need to calculate the 2-dimensional plane that
they will lie in. The first step is to find the vector of the end-effector's path through
the obstacle, which we call the path vector. The path vector is computed by taking the
difference between the last collision pose and the first collision pose and normalizing
the result. This step is shown visually in the second step of Figure 4-1. Next, we look
at the middle collision pose. The middle collision pose will have a reaction vector
perpendicular to the path vector; however, there are infinitely many perpendicular
vectors in 3-dimensional space, and we want to find a near-optimal one. With the middle
collision pose and the path vector we can calculate the perpendicular plane in which the
ideal reaction vector lies. From this plane we compute a discrete number of potential
reaction vectors lying in the plane. These consist only of vectors in the top half of the
plane relative to the z-axis, due to the orientation of the robot. Figure 4-1 shows how
this is computed visually. Now that we have several potential reaction vectors, we test
each one by running the pose adjustment algorithm with that reaction vector on the middle
collision pose. Whichever pose adjustment completes in the fewest iterations determines
the reaction vector we select for the middle collision pose.
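The following sketch shows one way the path vector, the perpendicular plane, and the discrete candidate reaction vectors could be computed for the middle collision pose. The basis construction with cross products, the candidate count, and the function name are illustrative assumptions; the thesis only specifies that the candidates are evenly spaced and restricted to the top half of the plane.

```python
import numpy as np

def candidate_reaction_vectors(first_collision_ee, last_collision_ee, num_candidates=9):
    """Sample evenly spaced candidate reaction vectors in the plane perpendicular to
    the robot collision path vector, restricted to the upper half of that plane."""
    path = last_collision_ee - first_collision_ee
    path = path / np.linalg.norm(path)                  # robot collision path vector

    # Build an orthonormal basis (u, v) of the plane perpendicular to the path vector.
    z = np.array([0.0, 0.0, 1.0])
    u = np.cross(path, z)
    if np.linalg.norm(u) < 1e-9:                        # path (nearly) vertical: use another axis
        u = np.cross(path, np.array([1.0, 0.0, 0.0]))
    u = u / np.linalg.norm(u)
    v = np.cross(path, u)
    if v[2] < 0.0:                                      # make v point "up" along z
        v = -v

    # Evenly spaced directions over the top half of the perpendicular plane.
    angles = np.linspace(0.0, np.pi, num_candidates)
    return [np.cos(a) * u + np.sin(a) * v for a in angles]
```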
Figure 4-1: The top figure shows the robot as the two blue lines with the robot's
end-effector trajectory as the red line. The next figure shows an obstacle obstructing
its path. From this we extrapolate the robot collision path vector as the green arrow.
From this we can find the perpendicular plane. In this case this is just the y-z plane.
In the bottom figure the axis is changed to reflect a change in view. We are now
looking directly at the y-z plane. The robot collision path vector is now coming
out of the page. The red lines represent the potential reaction vectors for the middle
collision pose. We only use the top half of the plane because of the robot's orientation.
Because the end-effector needs to move around the obstacle, we cannot use one reaction
vector for all collision poses. Instead, we create a fan of reaction vectors. Figure 4-2
shows the process of creating this fan. The last collision pose's reaction vector is
simply the path vector, and the first collision pose's reaction vector is the reverse of
the path vector, as seen in the second step of Figure 4-2. The rest of the reaction
vectors are interpolated between these two vectors. This is where the middle collision
pose's reaction vector becomes important: together with the first and last reaction
vectors, it specifies the 2-dimensional plane in which the reaction vectors are
interpolated. This can be seen visually in Figure 4-2. Figure 4-3 shows why this method
is necessary for the end-effector to safely move around the obstacle: if the reaction
vectors were all the same, there would be collisions when moving between consecutive
poses. Figure 4-3 demonstrates this issue below, and a sketch of the interpolation
follows Figure 4-3.
Figure 4-2: The top figure shows the same scenario as figure 4-1 with the robot
collision path vector given. The next step shows the three main reaction vectors, the
first, middle, and last as the reverse path vector, middle pose reaction vector, and
path vector respectively. From here we can find the rest of the reaction vectors at
the bottom. We simply interpolate evenly spaced vectors for the rest of the collision
poses. In this example there are 9 arrows representing 9 reaction vectors, meaning there
would be 9 collision poses.
Figure 4-3: The top figure again shows the scenario used in Figure 4-1. The next
figure shows the middle collision pose and a potential reaction vector we could use to
adjust all collision poses. The bottom figure shows a non-collision pose transitioning
to the first adjusted collision pose. Although the collision pose was able to make a
valid adjustment, the transition to that pose would cause a collision as seen in the
figure. This is why we can't use a single reaction vector for all collision poses in
end-effector collisions.
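As described above, the fan is built by interpolating evenly spaced vectors between the reversed path vector, the middle pose's reaction vector, and the path vector. The exact interpolation scheme is not specified in the text, so the sketch below assumes a spherical-style interpolation through the middle vector; the function and parameter names are illustrative.

```python
import numpy as np

def fan_reaction_vectors(path_vector, middle_reaction_vector, num_collision_poses):
    """Interpolate a fan of unit reaction vectors: reversed path vector -> middle -> path vector."""
    end = path_vector / np.linalg.norm(path_vector)
    start = -end                                             # reverse path vector
    mid = middle_reaction_vector / np.linalg.norm(middle_reaction_vector)
    if num_collision_poses == 1:
        return [mid]

    def slerp(a, b, t):
        """Constant-speed interpolation between unit vectors a and b."""
        omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
        if omega < 1e-9:
            return a
        return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

    vectors = []
    for i in range(num_collision_poses):
        t = i / (num_collision_poses - 1)                    # 0 at first pose, 1 at last pose
        if t <= 0.5:
            vectors.append(slerp(start, mid, 2.0 * t))       # first half of the fan
        else:
            vectors.append(slerp(mid, end, 2.0 * (t - 0.5))) # second half of the fan
    return vectors
```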
When all the reaction vectors are computed, the trajectory adjustment simply
adjusts all the collision poses and continues execution. If a new end-effector collision
is detected, then the algorithm will repeat the entire adjustment process, including
finding all collision poses, calculating all the reaction vectors, and adjusting all the
collision poses. The following pseudo-code displays these methods more formally.
Algorithm 4: Calculate End-Effector Collision
input : AllPoses, Obstacles, RobotDimensions
output: collisionPoses, reactionVectors
/* Find all the poses in collision. */
collisionPoses <- [ ];
foreach Pose in AllPoses do
    if Pose in collision and Pose not executed then
        collisionPoses.append(Pose);
Point1 <- calcEndEffectorPosition(collisionPoses(First));
Point2 <- calcEndEffectorPosition(collisionPoses(Last));
PathVector <- normalized(Point2 - Point1);
MiddlePose <- collisionPoses(middle);
PerpendicularPlane <- calcPerpPlane(PathVector, MiddlePose);
PlaneTestVectors <- calcPossibleVectors(PerpendicularPlane, RobotDimensions);
/* Find the plane test vector that takes the least number of iterations to adjust. */
minNumIterations <- ∞;
foreach PlaneVec in PlaneTestVectors do
    numIterations <- CollisionPoseAdjustmentIterations(MiddlePose, Obstacles, RobotDimensions);
    minNumIterations <- min(numIterations, minNumIterations);
/* Find all the intermediate reaction vectors. */
MiddleReactionVector <- PlaneTestVectors[minNumIterations.index()];
ReversePathVector <- -1 * PathVector;
reactionVectors <- calcIntermediateVectors(ReversePathVector, MiddleReactionVector, PathVector);
4.2.4 Non-End-effector Collision Trajectory Adjustment
Now that we have seen how we solve the end-effector collision trajectory adjustment
problem, we describe the methods used to solve the non-end-effector collision trajectory
adjustment. We are given a set of poses in which a link of the robot is in collision with
some object. As with the end-effector collision trajectory adjustment, we run the run
trajectory algorithm, constantly checking for a changed environment and for collisions.
Once a collision is detected with a link of the robot, the adjustment algorithm is
called. Again, we are given some set of collision poses that have some specified link
colliding with an object. Knowing which link of the robot is in collision is important,
and this is given to us by built-in functions.
Now that we have the collision poses and know which link is in collision, the next step
is to find the reaction vectors. Unlike the end-effector collision, all collision poses
use the same reaction vector for the non-end-effector collision. This needs to be
different because we no longer need to move the robot around the object; it simply has
to avoid the object in one direction. The issue shown in Figure 4-3 will no longer occur
because we are only moving the robot in one direction to avoid the obstacle. If the robot
needed to move around an obstacle with a non-end-effector collision, it would need a
bigger adjustment than our algorithm is meant to handle; therefore re-planning would be
more efficient. Another reason the fan method will not work for link collisions is the
way the robot would adjust with a fan. Figure 4-5 shows an example of this. Essentially,
the robot makes very inefficient adjustments for all reaction vectors that are not
perpendicular to the link. The adjustments likely will not work, and if they do, they
will take longer because each adjustment is less efficient. Figure 4-5 shows the first
collision pose, where the robot would simply move parallel to the obstacle, never
actually removing itself from the collision. We use the middle collision pose to
approximate all collision poses. The first step in computing the reaction vector is
finding the plane perpendicular to the robot link. Since we know the joint transforms of
the robot, we can simply create a vector between the two joints attached to the link in
collision. With this vector and a collision point, we can find the perpendicular plane,
which will contain the reaction vector. As with the end-effector collision, we search
through a discrete number of evenly spaced possible reaction vectors. Figure 4-4 shows
finding these possible reaction vectors visually. We run the pose adjustment on each
possible reaction vector to see which one adjusts completely in the fewest iterations.
We then use this reaction vector for all collision poses.
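Selecting among the candidates, for both the end-effector and link cases, amounts to running the pose adjustment once per candidate and keeping the one that converges in the fewest iterations. A minimal sketch follows, where pose_adjustment_iterations is an assumed helper that runs the collision pose adjustment with the given reaction vector and returns the iteration count (or None on failure):

```python
def select_reaction_vector(candidates, middle_pose, obstacles, robot_dimensions,
                           pose_adjustment_iterations):
    """Return the candidate reaction vector whose pose adjustment converges fastest."""
    best_vector, best_iterations = None, float("inf")
    for vector in candidates:
        iterations = pose_adjustment_iterations(middle_pose, vector,
                                                obstacles, robot_dimensions)
        if iterations is not None and iterations < best_iterations:
            best_vector, best_iterations = vector, iterations
    return best_vector                                       # None means no candidate worked
```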
Figure 4-4: The top figure shows the path the robot is trying to accomplish. Next an
obstacle is introduced obstructing part of the path. We look at the middle of the
collision poses to find the reaction vector. First we have the Link Vector represented
by the green arrow. Next we can compute the perpendicular plane based on the Link
Vector. In the bottom figure we rotate our view point to align the perpendicular
plane with the plane of the paper. From there we create a discrete number of possible
reaction vectors. These are denoted as the red arrows in the plane. We then test
these possible reaction vectors to find the best one.
Figure 4-5: The top figure shows the path the robot is trying to accomplish. Next an
obstacle is introduced obstructing part of the path. We look at what the reaction
vectors would be if we were using the fan approach as we did for the end-effector
collisions. The last image shows the robot if it were adjusted with the reaction vector
from the fan method. The robot simply moves backward, but not out of collision, so
we need the approach outlined in figure 4-4.
Now that we have the reaction vector for all collision poses we can simply run the
collision pose adjustment on each collision pose. The algorithm will then continue
execution with the run trajectory algorithm constantly checking for new collisions.
The following pseudo-code describes the calculate non-end-effector collision algorithm.
Algorithm 5: Calculate Non-End-Effector Collision
input : AllPoses, Obstacles, RobotDimensions
output: collisionPoses, reactionVector
/* Find all the poses in collision. */
collisionPoses <- [ ];
foreach Pose in AllPoses do
    if Pose in collision and Pose not executed then
        collisionPoses.append(Pose);
MiddlePose <- collisionPoses(middle);
LinkVector <- calcLinkVector(collisionLink, MiddlePose);
PerpendicularPlane <- calcPerpPlane(LinkVector, collisionPoint);
PlaneTestVectors <- calcPossibleVectors(PerpendicularPlane, RobotDimensions);
/* Find the plane test vector that takes the least number of iterations to adjust.
   This will be the reaction vector used for all collision poses. */
minNumIterations <- ∞;
foreach PlaneVec in PlaneTestVectors do
    numIterations <- CollisionPoseAdjustmentIterations(MiddlePose, Obstacles, RobotDimensions);
    minNumIterations <- min(numIterations, minNumIterations);
reactionVector <- PlaneTestVectors[minNumIterations.index()];
Chapter 5
Results
We have identified the problem we are trying to solve in chapter 3 and explained how
this is solved in chapter 4. Now we will see the results from the implementations of
these methods. We started the implementation with only 2 dimensions and 2 degrees
of freedom. We did this in MATLAB because it was simpler and easier to implement.
Much of the functionality necessary for our methods was also previously implemented
in MATLAB, most importantly the calculation of the Jacobian. The MATLAB implementation
was the simplest implementation step and was mostly to prove that our methods would
solve our problem. With the MATLAB scenarios we solved the pose adjustment first, then
went on to the trajectory adjustment.
Once we had proven that our methods worked well in MATLAB for the 2-dimensional case,
we moved on to the 3-dimensional case using the OpenRAVE C++ development environment.
OpenRAVE provided a lot of built-in functionality for implementing our methods, including
a robotic simulation environment, Jacobian calculations, and collision detection, among
other things. We wanted to use C++ so we could test the speed of our algorithm in a fast
environment. Through OpenRAVE, we implemented our methods from MATLAB and added the
3-dimensional functionality as discussed in chapter 4. Again, we started simply with the
pose adjustment problems, and then moved on to the trajectory adjustment.
We tested multiple scenarios in both MATLAB and OpenRAVE. Some of these scenarios
failed, while others succeeded, which is acceptable for our algorithm since the planner
can be re-called on a failure. For our MATLAB scenarios, we were only looking for a
successful adjustment and did not focus on the speed of the implementation. In OpenRAVE,
we tested and optimized for speed, as it is necessary that our algorithm performs
significantly faster than the planner. The results of all tests are described in more
detail in the following sections.
5.1 MATLAB
First we implemented our methods in 2 dimensions in MATLAB. We believed this would be a
good stepping stone in solving the problem, and that converting between the 2- and
3-dimensional cases wouldn't be too difficult. We were also able to implement concepts
much faster and test new methods quickly using MATLAB, which allowed for faster
development and testing of new methods. Since MATLAB was used for quick development, we
focused only on the collision adjustment. We did not implement any goal change scenarios,
as we figured these would be trivial once the collision adjustment was implemented. First
we focused on the pose adjustment, and then moved on to the trajectory adjustment. The
results for both are shown in more detail in this section.
5.1.1 Pose Adjustment
For the MATLAB pose adjustment we looked at 4 different scenarios; however, all of them
used the same initial pose. The difference between them is which link is in collision and
in which direction the pose needs to adjust. For both link collisions, the reaction
vector is perpendicular to the link. For each case, one scenario has an upward reaction
vector and one a downward reaction vector. For each scenario, we did not test when the
collision was resolved, but rather just ran the pose adjustment algorithm for 1000
iterations. This used a very small ΔXrp, specifically a .002 adjustment factor (or .2%
of the normalized reaction vector), for the pose adjustment algorithm. Figure 5-1 shows
the two collisions used for these scenarios: the first image shows the pose with a
collision on the robot's first link, and the second image shows the collision on the
second link. Figure 5-2 then shows each of the four scenarios. They display the
Figure 5-1: The blue lines represent the robot. The red line represents the collision.
For the first image, the collision is with the first link, and in the second image, the
collision is with the second link of the robot. The goal region is outlined with the
dashed green box. The end-effector of the robot is the point of the robot in the goal
region. In both scenarios, the robot will try to adjust out of the collision with the
red obstacle, while keeping the end-effector in the goal region.
reaction vector as well as the successfully adjusted pose after running our algorithm.
As you can see in each scenario, the pose successfully adjusts out of the collision with
the red line. The robot's end-effector also remains in the goal region. These results
proved that our methods for computing the pose adjustment were a viable solution
for getting a correct adjustment. We did not optimize for speed with this scenario,
so the algorithm ran slowly. We focused on optimizing for speed in OpenRAVE with
C++. We then used the implementation that produced these results in implementing
the trajectory adjustment.
Figure 5-2: Each box shows the different adjustments made. The left side shows the
collision with the first link. The reaction vector denoted by the black arrow shows
in which direction the robot was trying to adjust. The right side shows the collision
with the second link. As you can see, in each scenario, the robot makes a successful
adjustment according to the reaction vector. The end-effector remains in the goal
region and the collision is removed between the robot and the obstacle.
5.1.2 Trajectory Adjustment
Now that we have results showing that our pose adjustment algorithm works, we can move
on to the trajectory adjustment. We still used MATLAB in 2 dimensions to rapidly test our
methods. We formulated six different scenarios for the trajectory adjustment. For these
scenarios we used a .004 adjustment factor with the pose adjustment calculations, which
sped up the process relative to the .002 adjustment factor. Even a larger adjustment
factor, such as .01, showed very similar results. We also set up a buffer region of .3
units around each obstacle, where the length of each robot link is 10 units. This allowed
us to see more clearly that the robot was actually avoiding the collision successfully.
The trajectory was specified by a few main poses, including the initial pose and the goal
pose, with 200 intermediate poses interpolated between each pair of main poses. This made
up the entire trajectory.
A pose adjustment was made for any pose in collision with any of the obstacles, including
the obstacle buffer region. The pose adjustment completed when the robot was outside of
the buffer region of the obstacle. The number of adjustments made for each pose varied,
but was capped at 2000; if more than 2000 adjustments were needed, the algorithm would
fail. As mentioned in the problem formulation, it is important that two sequential poses
can be executed without causing a collision. With our adjustment algorithm, it is
possible that two sequential poses have very different adjustments, so executing them in
sequence would result in a collision. This is seen in scenario 4, and it is unclear
whether it may also happen in scenario 3. Figures 5-5 and 5-6 show scenario 3 and why the
adjustment may fail. Figures 5-7 and 5-8 show scenario 4 and why the adjustments are
unsuccessful.
We will now show the results from each scenario, explained below:
1. For this scenario, the path is planned to the goal region aware of the large
rectangular obstacle. The smaller obstacle comes in after the plan is made. As seen by
the red trajectory, this results in a collision. The trajectory adjustment algorithm
successfully finds a similar path, moving the robot around the obstacle and completing
the original path into the goal region. This is an end-effector collision, so the
reaction vectors for the collision were calculated with the end-effector method. This can
be seen below in Figure 5-3.
Figure 5-3: This shows the original planned trajectory in the top section. The path is
denoted in red and collides with the blue square obstacle in the top left. The bottom
row shows the successful adjustment made to this collision. The resulting path is
shown in green. Video Here: https://www.youtube.com/watch?v=9euTFNsBiIO
2. This simple scenario involved the robot moving from its initial state to another state
directly in front of it. The trajectory was a simple straight line to the goal region.
The obstacle is then introduced, resulting in a collision. The trajectory adjustment
algorithm is able to make a successful adjustment. This is an end-effector collision, so
the reaction vectors for the collision were calculated with the end-effector method.
Figure 5-4 shows this scenario.
Figure 5-4: This shows the original planned trajectory in the top section. The path is
denoted in red and collides with the blue square obstacle. The bottom row shows the
successful adjustment made to this collision. The resulting path is shown in green.
Video Here: https://www.youtube.com/watch?v=A28dfyogtvQ
3. This scenario is similar to scenario 2, except now the obstacle is in a different
position, colliding with the second link but not the end-effector. Thus the reaction
vectors are calculated via the link method. The trajectory adjustment algorithm finds a
new trajectory; however, due to the large jump, this may not be a valid trajectory. In
the case that the adjusted trajectory is not valid, the robot would have to recall the
planner. This scenario is displayed in Figure 5-5. The two sequential poses that show the
potential failure are shown more closely in Figure 5-6.
Figure 5-5: This shows the original planned trajectory in the top section. The path
is denoted in red and collides with the blue rectangular obstacle. The bottom row
shows the adjustment made to this collision. The resulting path is shown in green.
This scenario may not be successful because there is a big jump in sequential poses
from the adjustment. The intermediate poses could be in collision but it is unclear.
Video Here: https://www.youtube.com/watch?v=yYJxdaaE3C8
Figure 5-6: This shows the adjusted trajectory of Scenario 3 where two sequential
poses have a large gap between them and going from the first pose to the second pose
may cause a collision.
4. For this scenario, the robot end-effector needs to move in a vertical line from its
initial position to the goal region. An obstacle is introduced into the center of its
path. This is an end-effector collision, so the reaction vectors are calculated using the
end-effector method. There is a large jump in sequential poses because the robot moves
past its zero point, which causes adjustments in the reverse direction on each side of
the zero point. When the two adjusted poses are connected, the collision still isn't
resolved. This is a clear example of our algorithm failing and needing to recall the
planner because of a singularity. Figure 5-7 below shows this scenario. The reason for
failure is seen in Figure 5-8.
Figure 5-7: This shows the original planned trajectory in the top section. The path is
denoted in red and collides with the blue square obstacle. The bottom row shows the
unsuccessful adjustment made to this collision. The resulting path is shown in green.
Because of the way the reaction vectors are used in adjusting the colliding poses, the
two poses above and below the zero point had very different adjustments causing an
unsuccessful new trajectory. The original planner would need to be re-called because
of a singularity. Video Here: https://www.youtube.com/watch?v=6IeTixDKu1A
Figure 5-8: This shows the adjusted trajectory of Scenario 4 where two sequential
poses have a large gap between them and going from the first one to the second one
will result in a collision. This is seen clearly by the green line because it shows the
end-effector path going straight through the obstacle.
5. This scenario has the robot end-effector going from a vertical initial pose to a goal
region following a simple trajectory. An obstacle is introduced which collides with the
first robot link. The trajectory adjustment algorithm performs a simple adjustment to the
colliding poses, resulting in a valid trajectory, though the end-effector is not exactly
at its nominal goal point. Since this was a collision with the first robot link, the link
method was used in calculating the reaction vectors. This scenario can be seen in Figure
5-9.
Figure 5-9: This shows the original planned trajectory in the top section. The path is
denoted in red and collides with the blue square obstacle. The bottom row shows the
successful adjustment made to this collision. The resulting path is shown in green.
Video Here: https://www.youtube.com/watch?v=DE7h17BRhW4
6. In this scenario, the original path is the same as in the first scenario, but now the
new obstacle is at the bottom of the path. This produces several colliding poses. The
trajectory adjustment performs adjustments for two collisions, the first of which is with
the end-effector. This is a minor collision and is easily resolved. The second collision
is more complex: the adjustment would cause a collision with the larger obstacle, so it
must take this into account as well. The resulting trajectory is able to avoid all
obstacles and reach the goal region. Reaction vectors are calculated for each collision;
the first set is calculated using the end-effector method, and the reaction vectors for
the other two collisions (the second collision with the small obstacle and the adjustment
collision with the large obstacle) are calculated using the link method. This is the most
complicated scenario and really shows the power of our algorithm. This is our final
MATLAB scenario and can be seen in Figure 5-10.
Figure 5-10: This shows the original planned trajectory in the top section. The path
is denoted in red and collides with the small square obstacle. The bottom row shows
the successful adjustment made to this collision. In the adjustment, the robot also
needed to adjust to the large rectangle obstacle because it would have collided with
the adjustment to the first obstacle. This is a case where there may be two reaction
vectors. The resulting path is shown in green. Video Here:
https://www.youtube.com/watch?v=NBGN5rezk40
5.2 OpenRAVE
Once we proved that our methods worked in MATLAB for the 2-dimensional case, we
implemented our algorithm in OpenRAVE. OpenRAVE is a more realistic robotic simulation
environment built in C++ and Python that works in 3 dimensions. Since our algorithm is
meant to be fast, we used the C++ version of OpenRAVE. We were able to get very realistic
results in OpenRAVE that could easily be translated onto robot hardware. For the OpenRAVE
simulations, we used the WAM7 robot, a robotic arm with 7 degrees of freedom. We used
this particular robot because we have two WAM7 arms in our lab. Although this algorithm
was implemented only for robotic arms, we believe the overall methods could be used for
various robotic applications. As with MATLAB, we first focused on the pose adjustment
problem and then moved to the trajectory adjustment. In OpenRAVE, we implemented both
collision trajectory adjustment and goal trajectory adjustment scenarios. From these
scenarios, the power of our algorithm and its wide range of use cases should be clear.
The results are shown in the following section.
5.2.1 Pose Adjustment
Our first task was to replicate the pose adjustment results we had in MATLAB.
Implementing scenarios was more difficult in OpenRAVE, so this task was not quite as easy
as expected. Due to this difficulty, we decided to implement only one pose adjustment
scenario. The pose adjustment algorithm also does not really depend on the added
dimensionality, so we assume that if one scenario works, they all should work as they did
in MATLAB. For this scenario, we have a table colliding with the second main link of the
robot. The reaction vector is perpendicular to the robot link in the positive vertical
direction. In OpenRAVE we varied the adjustment factor and the number of iterations, so
each scenario differs; this pose adjustment uses an adjustment factor of .004 and caps
the number of iterations at 200. The OpenRAVE pose adjustment result is shown in Figure
5-11 below.
Figure 5-11: This image shows three different poses of the robot. The top pose just
shows the robot in its initial vertical position. This gives a clear idea of what the
entire robot arm looks like. The next image shows the robot in a collision pose. It is
clear that the pink link is colliding with the table. The third and final image shows
the successful adjustment. It is a little unclear, but the robot is no longer in collision
with the table. The end-effector is still close to the original position. The green block
is just a member of the environment and is not relevant in this scenario. Video Here:
https://www.youtube.com/watch?v=xOZONQExZNQ
5.2.2 Trajectory Adjustment
Once we implemented the pose adjustment in OpenRAVE, we were able to implement more
interesting scenarios for the trajectory adjustment. We implemented scenarios for both
the goal adjustment and collision adjustment algorithms. We also optimized for speed
while trying to keep the adjusted path near optimal. For this, we found an adjustment
factor of .03 to work best. We also limited each pose adjustment to 15 iterations before
considering it a failure, and used 10 intermediate poses between each main pose. This
combination gave us good results with near-optimal adjusted trajectories, and our
algorithm ran very quickly with these parameters. To get a clearer view that the robot is
in fact adjusting out of collision, instead of a buffer region we simply had the robot do
an extra three iterations of the pose adjustment algorithm to give it more space. So the
robot would do up to 15 iterations of the pose adjustment until it was out of collision,
then it would adjust 3 more times regardless of the depth of the collision. The scenarios
in OpenRAVE show what our algorithm is doing and why it is a major improvement in robot
execution. The following scenarios are separated into goal trajectory adjustments and
collision trajectory adjustments.
Goal Trajectory Adjustment
We implemented two different scenarios for the goal adjustment algorithm. The goal
adjustment algorithm runs essentially instantly, so we did not time these scenarios. It
is also not really feasible to use a planning algorithm for these scenarios because the
goal is constantly changing. Each scenario involves one or more moving cups that specify
the goal position. Since the cups are moving, the goal is constantly changing; therefore,
the goal adjustment is constantly being called and the robot's trajectory is constantly
updating.
The first scenario is the simpler of the two. It involves a single cup moving in a box
pattern, with varying speeds depending on which edge of the box the cup is moving along.
The robot has an initial trajectory moving it toward the initial position of the cup. As
the cup begins to move, the robot constantly updates its trajectory based on the new goal
position. At a certain point the robot reaches the cup, and from then on it simply
follows the cup indefinitely. The robot is constantly performing a goal adjustment and
then moving to the new goal as the cup moves. This scenario can be thought of, in the
real world, as a robot trying to pick up something, such as a cup, on a conveyor belt:
the robot can track down the object and then pick it up. The scenario can be seen
visually in Figure 5-12.
Figure 5-12: This image shows six still images of the robot's trajectory. The sequence
of images move right to left and then top to bottom. The first image (top left) shows
the initial pose and the initial position of the cup. The next image (top right) shows
the robot starting its trajectory as the cup has moved. Next (middle left), the robot
begins adjusting its trajectory to track down the cup. In the rest of the images, the
robot continues to follow the cup as it moves at varying speeds. This will happen
indefinitely until OpenRAVE is closed. Video Here: https://www.youtube.com/watch?v=P9PoEUqP-mM
The second scenario is more complex, now with multiple cups. There are 5 cups, each
oscillating along a different path. For each cup, part of its path is within reach of the
robot, and part is out of reach. The robot selects at random a cup that is within its
reach and creates a trajectory to that cup. The robot then follows that cup until it
moves out of reach, at which point the robot selects a new cup at random and repeats the
process. This continues indefinitely until OpenRAVE is closed. This scenario shows how
the robot can adapt to changes in the environment. Returning to the conveyor belt
example, if a cup the robot is reaching for is picked up and removed from the conveyor
belt by either a human or another robot, the robot will have to adapt. In this example,
the robot can easily adapt to a new cup and move toward it.
This scenario shows the true power of the goal adjustment algorithm. The goal is
constantly changing, either because the robot is following the moving cup or because it
moves to a new cup, and the robot is able to adapt nearly instantaneously to the new goal
position. This is very important for unstructured environments where the goal is
changing, especially with multiple robots or humans working in the environment. With
multiple controllers in the environment, the robot needs to be able to adapt quickly to
the other controllers' actions that could affect its objective. This scenario highlights
capabilities that could be used for these types of tasks. This goal adjustment scenario
is captured in Figure 5-13.
Figure 5-13: This image shows six still images of the robot's trajectory. The sequence of
images move right to left and then top to bottom. The first image (top left) shows the
initial pose and the initial position of the cups. The next image (top right) shows the
robot starting its trajectory as the cups begin to spread out. Each cup has its own path
that it follows. Next (middle left), the robot begins adjusting its trajectory to track
down a randomly selected cup that it can reach. In the next image (middle right), the
robot follows the cup it initially selected until the cup moves out of reach. Once the
cup is out of reach, the robot randomly selects a new cup that it can reach (bottom
left). Finally, in the last image (bottom right), the robot reaches the new cup and will
then follow it until it moves out of reach. This process repeats indefinitely until
OpenRAVE is closed. Video Here: https://www.youtube.com/watch?v=TWIivTorNTI
Collision Trajectory Adjustment
The collision trajectory adjustment was the more difficult implementation due to the
complexity of computing the reaction vectors. The goal adjustment did not use reaction
vectors, so that piece could be eliminated. We implemented the end-effector collisions
and non-end-effector collisions separately, although, as you can see in the second
scenario, both pieces were needed: it looks like an end-effector collision, but at one
point only one of the links is in collision, and adjusting with the end-effector reaction
vectors would not adjust properly. For all scenarios the robot has the same goal
position; however, the placement of the obstacles and the initial position change. There
are four total scenarios: two for end-effector collisions and two for non-end-effector
collisions. They are described in detail below.
End-Effector Collisions
The first scenario for the end-effector collision is a very simple one that involves the
robot adjusting to a table in the planned trajectory. This scenario is similar to MATLAB
scenario 2. It is essentially a 2-dimensional adjustment, so we were able to implement it
more easily as we became familiar with OpenRAVE. As this is an end-effector collision, it
uses the end-effector method to calculate the reaction vectors. Scenario 1 is shown
below.
Figure 5-14: This image shows four still images of the scenario. The sequence of
images move right to left and then top to bottom. The first image (top left) shows
the initial pose of the robot. Next, (top right) we see the robot's original trajectory
which is in collision with the table. The bottom two images show the adjustment
being made and the robot ending in the goal region collision free. Video Here:
https://www.youtube.com/watch?v=IiVOW-slyy4
The second end-effector collision scenario is a little more complicated. It has to use
the non-end-effector collision algorithm at the end of the adjustment, because the
obstacle is no longer in collision with the end-effector and adjusting with the initially
calculated reaction vectors won't work. This scenario also uses our 3-dimensional
methods, showing that they are valid in 3 dimensions. It involves the same planned
trajectory as the first scenario, but now the obstacle is a pole that the robot would
have trouble adjusting over, so it must go to the side. This can be seen more clearly in
the image below.
Figure 5-15: This image shows four still images of the scenario. The sequence of
images move right to left and then top to bottom. The first image (top left) shows
the initial pose of the robot. Next, (top right) we see the robot's original trajectory
which is in collision with the red pole. The bottom two images show the adjustment
being made and the robot ending in the goal region collision free. Video Here:
https://www.youtube.com/watch?v=nlYqTM8vxU4
Non-End-Effector Collisions
The first scenario for the non-end-effector collisions involves the robot's first link
colliding with a pole. The robot is able to adjust to this collision and keep its
end-effector in the goal position. This uses the non-end-effector reaction vector method
to compute one reaction vector for all poses. The scenario is shown below.
Figure 5-16: This image shows four still images of the scenario. The sequence of
images move right to left and then top to bottom. The green block in the background
is not relevant for this scenario. The first image (top left) shows the initial pose
of the robot. Next, (top right) we see the robot's original trajectory which is in
collision with the red pole. The bottom two images show the adjustment being
made and the robot ending in the goal region collision free. Video Here:
https://www.youtube.com/watch?v=Q5mffXlxpxO
The second non-end-effector scenario looks very similar to the first, except now the pole
is moved in such a way that the calculated reaction vector is different. The robot now
adjusts vertically out of the collision. The robot's end-effector is not at the goal
position in this scenario; however, it is still within the goal region, so the trajectory
is valid. This scenario is nearly identical to MATLAB scenario 5. The scenario is shown
below.
Figure 5-17: This image shows four still images of the scenario. The sequence of
images move right to left and then top to bottom. The first image (top left) shows
the initial pose of the robot. Next, (top right) we see the robot's original trajectory
which is in collision with the red pole. The bottom two images show the adjustment
being made and the robot ending in the goal region collision free. Video Here:
https://www.youtube.com/watch?v=IU8gpPBgK3s
Collision Trajectory Adjustment Timing
As we mentioned before, a major improvement of our algorithm over current implementations
is that it runs much faster than traditional planning algorithms. We were able to test
the timing of our algorithm and of a planning algorithm in OpenRAVE. The planning
algorithm we used was BiRRT, which is built into OpenRAVE. We did not test the goal
adjustment scenarios with BiRRT, as this seemed impractical given how often the goal was
changing, and the results would have been difficult to compare. For the collision
adjustment scenarios, we timed our algorithm making the adjustment as well as BiRRT
planning a path based on the goal position and the obstacle in the environment. The
results of these tests can be seen in the table below. The times are displayed in
seconds.
Scenario                  BiRRT Time (s)   Trajectory Adjustment Time (s)
End-Effector # 1          1.80             .24
End-Effector # 2          2.30             .28
Non-End-Effector # 1      2.89             .24
Non-End-Effector # 2      .27              .24
Average                   1.82             .25

Table 5.1: The timing of running our algorithm vs. running BiRRT on the implemented
scenarios.
As you can see from the results in the table, our algorithm showed significant
improvement over BiRRT. In some cases, our algorithm adjusted an order of magnitude
faster than BiRRT. BiRRT has also been used and optimized for a very long time; with more
research and optimization, the trajectory adjustment algorithm may be able to perform
much faster. For now, these results seem promising and support our idea that re-planning
is not always the best way to compensate for disturbances in unstructured environments.
Chapter 6
Discussion
In the previous chapter, we demonstrated the results of our implementation of the
Reactive Trajectory Adjustment algorithm. We will now discuss the implications of these
results and what still needs to be done to create a more complete and usable algorithm.
The results demonstrate the effectiveness of the algorithm as well as highlight its
limitations. There is clearly more work to be done, but we believe we have made
significant progress toward solving the overall problem.
6.1 Analysis of Results
The overall goal of our algorithm is to make valid adjustments to a path that has become
invalid. We want the adjustments to be made quickly and to be near optimal. For the
MATLAB results, we only tested that the path was near optimal and ignored speed. The
MATLAB scenarios were very promising in the adjustments made. Other than scenarios 3 and
4, the adjusted paths were exactly what we were hoping for. Scenario 3 was tough to
judge, but it may have made a successful adjustment as well. Even in scenario 6, with
multiple collisions, the algorithm made a correct adjustment and the end-effector
remained in the goal region.
Scenario 4 was the one bad scenario. It shows the limitations of the algorithm and why
this is only the first line of defense. This scenario fails because the robot goes
through its zero pose, where the robot is completely straight. Due to the fan style of
the reaction vectors, the poses before and after the zero pose adjust in very different
directions, causing the invalid path seen in the results. Although this scenario is
considered a failure, it wouldn't be a problem in the overall system: if this scenario
came up and the algorithm failed, the planner would be recalled. This will likely be
slower, but the robot will still be able to successfully plan a new path around the
obstacle.
With the success of the MATLAB scenarios, we moved on to the OpenRAVE scenarios. With
OpenRAVE we implemented the goal adjustment scenarios as well as the collision scenarios.
The goal results were very promising: the robot performed exactly as we hoped. Not only
could the robot follow the moving object once it reached it, but it could also adjust its
trajectory as it was reaching toward the cup. This is very useful for accomplishing tasks
with multiple robots working together in an environment and executing tasks quickly and
efficiently. A real-world example of this could be a conveyor belt with multiple robots
working to pick up objects. If two robots lock onto one object and one robot picks it up,
the other robot can successfully adjust and pick up a different object. There are many
real-world scenarios where this algorithm could be useful.
Next, we implemented the collision adjustment algorithms. As seen from the figures and videos, all of the adjustments made were successful. One potential limitation appeared in the second end-effector collision scenario: at the end, it was necessary to switch to treating the collision as a non-end-effector collision. We were able to detect this switch for this scenario, but in other scenarios it might not be as easy. It is difficult to tell whether the methods we used for detecting the switch will work for more complex scenarios. With more testing and development, we should be able to solve this problem for all scenarios. Regardless, the algorithm worked very well for this scenario.
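As a rough illustration of the kind of check involved, the sketch below classifies a reported collision as end-effector or non-end-effector based on which robot link the collision checker reports; the link names and the report format are hypothetical placeholders, not the interface used in our OpenRAVE implementation.

# Hypothetical sketch: choose the adjustment mode for a reported collision.
EE_LINKS = {"wrist", "palm", "finger_left", "finger_right"}  # assumed end-effector links

def classify_collision(colliding_link_name):
    # Return which adjustment mode to use for this collision.
    if colliding_link_name in EE_LINKS:
        return "end_effector"
    return "non_end_effector"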
Another important analysis is the timing of our algorithm versus the BiRRT running time. Our algorithm performed better than BiRRT in every scenario, and performed much better on average. We expect that we could get even more significant results with more optimization. The results also show that BiRRT is far more variable than our algorithm. We believe BiRRT may have performed very well in simpler scenarios, and not quite as well in more difficult scenarios. Our algorithm performed relatively similarly in all scenarios. We think this is the biggest advantage our algorithm has. In very complex scenarios, we believe our algorithm will be able to make the minor adjustments in times similar to those shown in the results, around a quarter of a second, while BiRRT will take longer in those complex scenarios. Although the time difference between our algorithm and BiRRT is not quite as large as we think is possible, it still shows improvement and a solid starting ground for optimizing and perfecting our algorithm.
Overall, the results from our implementation are very promising. The algorithm provided very good adjustments and was able to make them much faster than BiRRT on average. This is also the first implementation of our algorithm, with little optimization, while BiRRT has been around for a while and is highly optimized for performance and accuracy. We believe we have found a very good starting point for solving the problem of robust execution with our reactive trajectory adjustment algorithm.
6.2 Contributions
In this thesis, we have described a new system to replace the task of re-planning in unstructured environments. We have made a few key contributions to the development of the reactive trajectory adjustment algorithm. Specifically, we developed a method for computing a near-optimal adjustment and a method for calculating the correct reaction vectors.
Our first major contribution is the quadratic program used to find the best incremental adjustment. This method was very important in maintaining a valid goal pose while making a successful adjustment out of collisions. Prior work has used a Jacobian and a reaction vector to move a multi-degree-of-freedom robot, but optimizing over the reaction term together with how close the end-effector stays to its original position is a contribution of this thesis. This ended up being very valuable, as it was not overly computationally expensive and computed accurate adjustments. This quadratic program may not be perfect, but it provides a very good starting point for computing accurate adjustments. Some things require tweaking, such as the cost function weighting: how much to weight the goal position error versus the reaction vector error is difficult to get perfectly right. We feel that this quadratic program is a valuable contribution to solving the original problem of adjusting to disturbances in an unstructured environment.
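To make the structure of this trade-off concrete, one way an incremental-adjustment quadratic program of this kind can be written is sketched below; the weights $w_g$, $w_r$, and $\lambda$ and the exact terms are illustrative assumptions, not the precise cost function used in our implementation:

$$\min_{\Delta q}\; w_g \left\| J_{ee}\,\Delta q - e_{goal} \right\|^2 \;+\; w_r \left\| J_{c}\,\Delta q - d\,\hat{r} \right\|^2 \;+\; \lambda \left\| \Delta q \right\|^2$$

Here $\Delta q$ is the adjustment to the joint angles of a pose, $J_{ee}$ is the end-effector Jacobian, $e_{goal}$ is the position error between the end-effector and the goal region, $J_{c}$ is the Jacobian at the colliding point on the robot, $\hat{r}$ is the unit reaction vector, and $d$ is the desired step out of the collision. Raising $w_r$ relative to $w_g$ favors escaping the collision over staying close to the goal, which is exactly the weighting trade-off discussed above.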
The more significant contribution is the method for calculating the reaction vectors. This was one of the more novel discoveries made while experimenting with the trajectory adjustment techniques, and it turned out to work very well. We understand our current methods are not perfect, and it is quite likely that a better method for computing reaction vectors could be found; however, we feel that we have made significant progress in how to reason about the reaction vectors needed to avoid collisions. Distinguishing between end-effector and non-end-effector collisions was also a valuable contribution, although this work is definitely not complete. As we saw in one of the OpenRAVE scenarios, it can be necessary to use both. Again, these methods are not perfect, but they are valuable for continuing development of the reactive trajectory adjustment algorithm. Separating collisions into end-effector and non-end-effector cases, and using this distinction in how we calculate the reaction vectors, were valuable contributions of this thesis toward a more robust execution system.
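As an illustration of the idea rather than the exact geometry used in our implementation, a reaction vector for a colliding link can be sketched as the normalized direction from the nearest point on the obstacle to the corresponding point on the link; the contact points are assumed to be provided by the collision checker.

import numpy as np

def reaction_vector(link_point, obstacle_point):
    # Hypothetical sketch: direction that moves the colliding link away from the
    # obstacle, given two 3D contact points reported by a collision checker.
    direction = np.asarray(link_point, dtype=float) - np.asarray(obstacle_point, dtype=float)
    norm = np.linalg.norm(direction)
    if norm < 1e-9:
        # Degenerate contact; these two points alone give no escape direction.
        return np.zeros(3)
    return direction / norm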
This thesis provides a detailed explanation of these contributions so that others can develop and enhance them for a variety of robotic systems. Not only do these contributions provide a working algorithm for adjusting robot trajectories in unstructured environments, but they also lay the groundwork for building a more robust and efficient execution system.
6.3 Future Work
Although we think we have made a lot of progress with the reactive trajectory adjustment algorithm, there is still a lot of work to be done. The first step is a more complete implementation: one that can determine what type of collision the robot is in and then make the adjustment accordingly. For our results, we separated each type of scenario. We also never implemented the multiple-collision functionality in OpenRAVE as we did in MATLAB. Based on the MATLAB results, we are confident this can be done, but it may be tricky, as we need to detect each obstacle and classify each collision separately. Putting everything into one complete program is very important for any real-world use.
Once a complete program is made, there still needs to be a lot of optimization. The adjustments in our scenarios seem to be near optimal, but there is still room for improvement. The main optimization needed is in timing. There are several aspects that could improve the time, and parameters that could be tweaked for optimal timing. For instance, the size of the adjustment step is very important. We looked into this a little, but finding the perfect value that works quickly while still providing a good adjustment is difficult. Along those lines, the number of intermediate poses between main poses can have a drastic effect on the timing. Too few intermediate poses will perform very quickly, but the adjustment may be far off. Finding the optimal value for this is tricky, and it may change depending on the scenario or the distance between main poses.
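To illustrate the intermediate-pose parameter, the sketch below linearly interpolates joint-space poses between consecutive main poses; the function name and the choice of linear interpolation are assumptions for illustration, not the scheme used in our implementation.

import numpy as np

def densify(main_poses, n_intermediate):
    # Hypothetical sketch: insert n_intermediate joint-space poses between each
    # pair of consecutive main poses. More intermediate poses give finer collision
    # checks and adjustments, at the cost of more poses to adjust per disturbance.
    dense = []
    for q0, q1 in zip(main_poses[:-1], main_poses[1:]):
        for i in range(n_intermediate + 1):
            alpha = i / (n_intermediate + 1)
            dense.append((1.0 - alpha) * np.asarray(q0, dtype=float)
                         + alpha * np.asarray(q1, dtype=float))
    dense.append(np.asarray(main_poses[-1], dtype=float))
    return dense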
Another important timing optimization is deciding when our algorithm should give up. If the adjustment needed is too big, our algorithm should fail, as it would be more efficient to call the planner. This is more an issue for the overall system, but it is still relevant to our algorithm. Deciding how many adjustments should be attempted before declaring failure is important for making a complete, robust executive system. There are likely many other timing optimizations that could be done in order to run our algorithm more effectively.
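One simple way to decide when to give up is to cap the number of adjustment attempts before falling back to the planner; the sketch below is a hypothetical executive loop, with try_adjustment, replan, and is_valid as placeholder functions rather than parts of our implementation.

MAX_ATTEMPTS = 5  # hypothetical cap on incremental adjustments before re-planning

def execute_with_fallback(plan, try_adjustment, replan, is_valid):
    # Hypothetical sketch: attempt small adjustments first; if the plan is still
    # invalid after MAX_ATTEMPTS attempts, fall back to the (slower) motion planner.
    attempts = 0
    while not is_valid(plan) and attempts < MAX_ATTEMPTS:
        plan = try_adjustment(plan)
        attempts += 1
    if not is_valid(plan):
        plan = replan(plan)  # first line of defense failed; call the planner
    return plan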
Aside from our specific algorithm, the overall goal is to create a full reactive execution system. Once we have a complete implementation of our algorithm, the next step would be to integrate it with the full Chekhov execution system. This system would solve our overall problem of allowing robots to react and adjust to disturbances in unstructured environments. Our algorithm is the first line of defense, but the full system is necessary to run in real-world situations. The architecture of the full Chekhov system can be seen in Figure 6-1. The reactive trajectory adjustment algorithm falls into the Motion Executive subsystem. Another
member of our lab, Justin Helbert, is working on a piece for the Reactive Motion
Planner. Justin is implementing an incremental All Pairs Shortest Path Algorithm
for use with Chekhov. Currently, the algorithm has a high initial cost, but it can generate new plans quickly when there is a disturbance. Whereas our algorithm is used for small adjustments, Justin's is used when a larger adjustment is needed. These will both be important pieces in creating a complete, robust executive system.
Figure 6-1: This diagram shows the overall system architecture of Chekhov. The
algorithm covered in this thesis falls in the Motion Executive subsystem.
Bibliography
[1] Pieter Abbeel, Adam Coates, Morgan Quigley, and Andrew Y. Ng. An application of reinforcement learning to aerobatic helicopter flight. In Advances in Neural Information Processing Systems 19. MIT Press, 2007.
[2] Michael Goodrich. Potential fields tutorial.
[3] S. Karaman and E. Frazzoli. Sampling-based algorithms for optimal motion planning. International Journal of Robotics Research, 30(7):846-894, June 2011.
[4] Oussama Khatib. Real-time obstacle avoidance for manipulators and mobile robots. The International Journal of Robotics Research, 1986.
[5] Steven LaValle. Rapidly-exploring random trees: A new tool for path planning. Technical report, 1998.