Optical Proximity Sensor and Orientation Control of
Autonomous, Underwater Robot
by
Martin Lozano, Jr.
B.S., Mechanical Engineering, Massachusetts Institute of Technology, 2012
Submitted to the Department of Mechanical Engineering in partial fulfillment of the
requirements for the degree of
Master of Science in Mechanical Engineering
at the
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
June 2014
@ Massachusetts Institute of Technology 2014. All rights reserved.
Signature redacted
Signature of Author............................
Department of Mechanical Engineering
May 9, 2014
Signature redacted
Certified by...................................
H. Harry Asada
Ford Professor of Mechanical Engineering
Thesis Supervisor
Signature redacted
Accepted by .............................................................................
David E. Hardt
Chairman, Department Committee on Graduate Students
Department of Mechanical Engineering
Optical Proximity Sensor and Orientation Control of
Autonomous, Underwater Robot
by
Martin Lozano, Jr.
Submitted to the Department of Mechanical Engineering
on May 9, 2014, in partial fulfillment of the
requirements for the degree of
Master of Science in Mechanical Engineering
Abstract
Autonomous mobile robots need a reliable means of navigation to reach their target
while avoiding collisions. This requires continuous knowledge of the vehicle's position,
orientation, and motion as well as a way to identify its surroundings. Exploratory
robots and those traveling in complex environments may have difficulty determining
their global location. They often rely on data from sensors to estimate their position.
While various proximity sensors have been developed for land vehicles, options for
underwater vehicles are limited.
We detail the design of an optical orientation sensor for fine positioning of highly
maneuverable underwater robots. The sensor consists of a camera-laser system (CLS)
to geometrically estimate distances to points on a surface. By aggregating and analyzing several data points from multiple lasers, estimates of the robot's distance, yaw,
and pitch are determined. A prototype sensor is constructed and shown to achieve
highly accurate distance estimates (±1 mm) at close ranges within 270 mm and yaw
rotation estimates of ±2° within a range of ±30°. We also show the successful integration of a gyro with the CLS on an autonomous surface vehicle. The fused estimate
of the two sensors results in better dynamic performance than either sensor alone.
The optical sensor corrects the unbounded position error of the gyro measurements
with the added benefit of external feedback to avoid collisions in dynamic environments. The gyro provides high frequency orientation estimation in between optical
measurements, greatly reduces transient behavior, and generally smooths vehicle
motion.
Using this sensor, an underwater robot exploring a complex environment can estimate its orientation relative to a surface in real-time, allowing the robot to avoid
collisions with the sensitive environment or maintain a desired orientation while autonomously tracking objects of interest.
Thesis Supervisor: H. Harry Asada
Title: Ford Professor of Mechanical Engineering
Acknowledgments
First and foremost I would like to thank Professor Asada for his guidance over
the past two years. His advice, support, and encouragement have been invaluable to
me, and I feel like I have made great strides as an engineer under his tutelage.
I am also grateful to my colleagues at the D'Arbeloff Laboratory for not only
providing me with great insights but also for making the lab environment thoroughly
enjoyable and exciting. In particular I'd like to thank my good friend Anirban Mazumdar for introducing me to the lab. He's been a great friend, colleague, and mentor. I
feel privileged to be able to work alongside so many bright and interesting men and
women.
I must thank my friend Ramya Swamy for making my final year at MIT so enjoyable and full of laughs. I have tremendously enjoyed the time we have spent attending
concerts, watching movies, and hanging out. I hope there will be many more such
times in the future.
I would also like to thank all my other friends, study buddies, concertgoers, film
buffs, and fine diners, whose company I deeply cherished during the last six years.
Finally, I must thank my parents for their constant love and encouragement.
Contents
1 Introduction 17
  1.1 Underwater Robots for Cluttered Environments 17
  1.2 Localization of Underwater Robots 18
  1.3 Nuclear Power Case Study 19
  1.4 Functional Requirements of Sensor 21
    1.4.1 Measures Three Degrees of Freedom 21
    1.4.2 Featureless Surface Detection 22
    1.4.3 Capable of Fine Positioning 24
    1.4.4 Compact and Lightweight 24
    1.4.5 Stable Vehicle Control 25
  1.5 Design Concept 25
  1.6 Thesis Overview 27

2 Previous Work 29
  2.1 Control Configured Spheroidal Vehicle 29
    2.1.1 Nomenclature 31
    2.1.2 Multi-DOF Propulsion 31
    2.1.3 Compact Internal Propulsion System 33
    2.1.4 Unstable Vehicle Dynamics 35
    2.1.5 Feedback Controller Design 37
  2.2 Existing Localization Technologies 38
    2.2.1 Sonar 38
    2.2.2 Infrared 39
    2.2.3 GPS 39
    2.2.4 Vision-based Mapping 40
    2.2.5 Laser Rangefinders 40

3 Camera-Laser Sensor Measurement Principle 41
  3.1 Horizontal Case 42
  3.2 Vertical Case 42
  3.3 Direct Distance Estimation 44
  3.4 Rotation Estimation 46
  3.5 Perpendicular Distance Estimation 48
  3.6 Working Range vs Precision (Sensitivity at Large Distances) 48
    3.6.1 Large Working Range 50
    3.6.2 High Precision 51

4 Implementation and Experimental Characterization of Camera-Laser Sensor 53
  4.1 Tuned CLS Design 53
  4.2 Implementation 56
    4.2.1 Hardware 56
    4.2.2 Image Processing 58
    4.2.3 Sensor Calibration 58
  4.3 Experimental Characterization of Sensor 62
    4.3.1 Yaw Estimations 62
    4.3.2 Perpendicular Distance 62

5 Autonomous Surface Robot 67
  5.1 Design 67
  5.2 Dynamics and Stabilization Through Feedback Control 69
    5.2.1 Quasi-stationary Rotation 69
    5.2.2 Forward Translation 71

6 Control Implementation 73
  6.1 Digital Control (Tustin's Method) 73
  6.2 Rotation Rate Feedback (Gyro) 74
  6.3 Rotation Angle Feedback (CLS) 74
  6.4 Comparison of Results 75
    6.4.1 Gyro Control 75
    6.4.2 CLS Control 76

7 Sensor Fusion 79
  7.1 Introduction to Sensor Fusion 79
  7.2 Gyro with Optical Control 80
  7.3 Results 81

8 Conclusion 85
  8.1 Overview 85
  8.2 Potential Research Directions 86
List of Figures
1-1 A diagram illustrating a GE Boiling Water Reactor system. 20
1-2 Three parameters of interest for a robot system facing a surface. 22
1-3 Submerged weld seams. 23
1-4 A common pipe inspection trajectory. 25
1-5 Early design prototype of the laser-camera localization sensor. 26
2-1 Photographs of the outside of the CCSV prototype. 30
2-2 Photograph of the CCSV and its controllable motions. 30
2-3 An illustration of the coordinate frame convention. 31
2-4 A diagram illustrating the jet arrangement for the CCSV design. 32
2-5 An illustration of the bistable fluidic amplifier concept. 33
2-6 An illustration showing the full pump-valve concept. 34
2-7 Photograph of a BAU prototype. Two fluidic valves are combined with an orthogonal dual output port pump to generate forces in 4 directions. 35
2-8 A rendering and photograph of the CCSV maneuvering system. 36
2-9 An illustration of the Munk moment and how the stagnation points create a turning motion on streamlined shapes. 37
3-1 A diagram illustrating the camera-laser layout (a), the viewable image frame of the camera (b), and the centerline where the left and right lasers appear (c). 43
3-2 A diagram illustrating the centerline where the top and bottom lasers appear. 44
3-3 Top view of a single-laser CLS (a), placed before a flat surface for distance estimation (b), and the resulting distances to points-of-interest (c). 45
3-4 The camera image with a laser dot reflecting off a surface and the pixel distances to points-of-interest. 46
3-5 Top view of a CLS with yaw angle ψ relative to a surface (a), the direct distances measured with the left and right lasers (b), and the geometric relation of the yaw angle with the measured distances and sensor parameters (c). 47
3-6 A diagram illustrating the direct laser distances, d_l and d_r, and perpendicular distance, d⊥, measurable by the CLS (a), and derivation of the perpendicular distance using the left laser values (b). 49
3-7 Two configurations of the camera-laser system. The laser angle in configuration A is nearly parallel to the camera's viewing angle, remaining in view for larger distances. Laser B passes through the camera's entire field of view over a much shorter range of distances. 50
3-8 CLS configuration A shows only slight changes in pixel location despite large changes in distance. 51
3-9 Configuration B has a larger laser angle, decreasing the overall working range of the sensor and improving the accuracy within that range. 52
4-1 The two tunable design parameters of the CLS, l and γ. 54
4-2 A plot of the sensitivity of pixel location to sensor distance, for various laser positions, l. Maximizing improves sensor performance. 55
4-3 A plot of the sensitivity of estimated distance to pixel location, for various laser angles, γ. Minimizing improves sensor performance. 56
4-4 A photo of the CLS prototype for distance and yaw estimations. 57
4-5 Images taken directly from the CLS camera at each end of its designed working range. 57
4-6 Steps to locate the laser points. The CLS takes an image (a), converts to grayscale (b), places a threshold (c), finds contours (d), and then finds the circles that enclose those contours (e). All images taken directly from the CLS. 58
4-7 Plot of the CLS-estimated distances before calibration using ideal parameter values and a plot of the actual distances. 60
4-8 Plot of the calibrated CLS-estimated distances, giving near-perfect distance readings. 61
4-9 Photos of the CLS taking yaw measurements at various angles. 62
4-10 Box plot of yaw estimation error. 63
4-11 Box plot of perpendicular distance estimation error. 65
5-1 Size comparison of the CCSV inspection vehicle and the CLS test vehicle. 68
5-2 CAD model of the robot (a) and the assembled vehicle (b). 68
5-3 A diagram of the four vehicle jets acting about the center with moment arm c. 70
5-4 Closed-loop yaw-control block diagram with angle (ψ) and rotation rate (ψ̇) feedback. 71
5-5 Forward translation open-loop pole-zero diagram (a) and the root-locus plot with a PD controller. 72
6-1 Closed-loop yaw-control using a gyro for direct rotation rate (ψ̇) feedback and angle estimation (ψ) through integration. 74
6-2 Closed-loop yaw-control using a gyro and the CLS for direct rotation rate (ψ̇) and angle (ψ) feedback, respectively. 75
6-3 Step response of gyro-based yaw estimation feedback to a 30° disturbance input. 76
6-4 Step response of CLS-based yaw estimation feedback to a 30° disturbance input. 77
7-1 Example sensor fusion diagram to estimate yaw angle, ψ. 80
7-2 Closed-loop yaw-control using a gyro for direct rotation rate (ψ̇) feedback fused with the CLS angle measurements (ψ). 82
7-3 Step response of fused gyro-CLS yaw estimation feedback to a 30° disturbance input. 82
List of Tables
2.1 Summary of Maneuvering Primitives. 32
4.1 Summary of Yaw Experiments. 64
4.2 Summary of Perpendicular Distance Experiments. 64
Chapter 1
Introduction
1.1 Underwater Robots for Cluttered Environments
Modern societies have become increasingly dependent on water-based infrastructure
such as power systems, ports, piping systems and water treatment plants. As these
systems age, they require repairs and inspections with increasing frequency. Since
many of these systems are essential to public safety, there exist strict protocols for inspection. For example, for Boiling Water Reactor (BWR) nuclear powerplants, there
exist very strict and specific visual inspection protocols. For many of these applications, it is difficult, costly, dangerous, and sometimes even impossible to send humans
to inspect and assess. In addition, for many systems, inspections cannot be performed
with the system running and must instead be performed during a shutdown. These
shutdowns are not only inconvenient and disruptive, they can be extremely costly economically. As a result, the inspection of cluttered aquatic environments is a rapidly
growing area of research and technical innovation.
Underwater robots are already being developed and deployed for the inspection
of ports [1], [2], dams [3], water piping systems [4], shipwrecks [5], and nuclear power
plants [6], [7].
Some key challenges for these types of robots relate to accessing
and inspecting complex environments where small size and high maneuverability are
required. In addition, collision avoidance is an important characteristic for vehicles
operating in these constrained and highly sensitive environments.
1.2 Localization of Underwater Robots
Localization plays an important role in robotic and mobile system control. Accurate
localization feedback, such as position and orientation measurements relative to the
environment, can help avoid collisions or incorrect positioning of the robot, the result
of which can cause physical damage to the vehicle or environment, or compromise
its mission or purpose. For these reasons, navigation and collision avoidance have
remained a key issue of robotic systems. Several solutions exist to provide accurate
localization for land-based robots. Three of the most common methods include human navigation, track-following, and sensor feedback. Human navigation involves a
human operator who has a clear line-of-sight with the vehicle, or who otherwise
understands the robot's location at all times, controlling the robot's movement. A
track-following robot can be commanded to follow a pre-defined, collision-free trajectory that has been checked and planned ahead of time, or the robot may be physically
constrained to run on a track that has been designed through the environment. Finally, sensors are capable of monitoring the robot's position and motion, in some form
or another, and this information can be used to detect would-be collisions and adjust
the robot's trajectory accordingly.
Unfortunately the first two of these methods do not work well for underwater
robots. As these robots operate within a submerged environment, a human operator may not always have a clear view of the vehicle. Furthermore, a pre-determined
trajectory will not always provide satisfactory results due to the constant and random disturbances from water waves, currents, and general drift present in fluid
environments. Construction of a physical track for the vehicle to ride along is often
not possible for the inspection of sensitive or spontaneous environments. For these
reasons, most underwater robots rely on sensor feedback to control their location, yet
much work is still being performed to find sensors that are as reliable underwater as
they are on land.
1.3 Nuclear Power Case Study
The challenging nature of underwater infrastructure inspection is perhaps best illustrated with boiling water nuclear plants. Boiling water reactors serve as a good
case study example because they are complex water-filled systems, highly regulated,
and must satisfy strict inspection protocols. In addition, nuclear power plants are
clearly areas where direct inspection using human workers is something that must be
avoided.
As Figure 1-1 illustrates, the inspection of the reactor environment is a very
challenging problem. The system can be treated roughly as a 15m diameter, 40m deep
pool of water filled with nozzles, guides, pipes and tubes that must all be navigated
and inspected. An illustration of the inside components of a BWR plant can be
found in Figure 1-1, which provides a popular diagram of a General Electric (GE)
reactor assembly [8]. As Figure 1-1 shows, the environment is very complex with
small areas such as the top guide (item 12 in the figure) placing restrictions on size
and access. Deploying and then finely maneuvering tools within this environment is
clearly challenging and requires robots that are relatively small and nimble as well as
robust.
The sensitive equipment within a nuclear reactor also demonstrates a need for obstacle detection and collision avoidance. If an inspecting vehicle were to accidentally
hit one of the tuned sensors or other delicate machinery monitoring the core, the
resulting physical damage or displacement of the sensor could make the reactor unusable and defeat the purpose of the initial inspection. Collisions may also produce
debris, which could incite larger problems in the system. Nuclear power plants are
subject to foreign material exclusion (FME) rules that stipulate that no outside materials can be left within the plant after an inspection. This means the robot must
be able to survive collisions without breaking, and as an added precaution should
actively avoid obstacles.
As mentioned previously, nuclear powerplants are shut down during inspection
(usually during the refuel cycle). This means that there is a clear economic incentive
[Figure 1-1 is a labeled GE BWR/6 reactor assembly diagram. Its numbered callouts include the vent and head spray, steam dryer assembly, steam outlet, core spray inlet, line, and sparger, steam separator assembly, feedwater inlet and sparger, low pressure coolant injection inlet, top guide (item 12), jet pump assemblies, core shroud, fuel assemblies, control blades, core plate, recirculation water inlet and outlet, vessel support skirt, shield wall, control rod drives and hydraulic lines, and in-core flux monitor.]
Figure 1-1: A diagram illustrating a GE Boiling Water Reactor system.
for rapid inspections. Inspections that are slowed by sluggish or unreliable equipment
can cost power companies and inconvenience thousands of people and businesses.
Therefore, it is not surprising that developing robotic systems which can enter the reactor environment, maneuver precisely, and obtain visual data is an area of growing
innovation and research.
1.4 Functional Requirements of Sensor
Robots intended for use within nuclear reactor environments are subjected to strict
requirements. We can consider a few key functional requirements, outlined below, as
a valuable starting point for the research and development of a prototype localization
sensor for one of these direct inspection robots. These functional requirements are
very general and summarize the needs for a wide class of underwater localization
sensors.
Therefore, sensors that meet these requirements will likely be generally
applicable to a wide class of emerging inspection applications.
1.4.1 Measures Three Degrees of Freedom
Visual inspections using cameras are the most common inspection methodology of
BWR plants. During these inspections the vehicle must deliver cameras precisely to
a specified site, then perform a controlled motion to thoroughly inspect the area. We
can then consider our problem as controlling the placement of the onboard camera so
the operator may sufficiently evaluate the images and plant conditions. Specifically
we are concerned with three of the camera's degrees of freedom: its distance and
rotations relative to a surface. Figure 1-2 illustrates these parameters.
Consider an inspection vehicle as represented by the black dot. The black, body-fixed coordinate frame represents the robot's initial orientation as it sits a certain distance, d, from a flat surface, perfectly perpendicular to that surface. This is the
perpendicular distance from the surface and remains unchanged if the robot rotates
about its center. Now imagine the robot rotates counter-clockwise about its z-axis
(yaw, ψ), then clockwise about its y-axis (pitch, θ). The robot now has a normal
Figure 1-2: Three parameters of interest for a robot system facing a surface.
heading vector, n. While the distance measurement alone is enough to avoid obstacles,
additional control of the yaw and pitch is required for proper visual inspection.
Roll, rotation about the x-axis, is generally not useful for visual inspection tasks
as it adds no improvement over the viewable area and instead rotates the image which
could become disorienting for the operator. In terms of maneuverability, rolling does
not help symmetric vehicles fit into confined areas.
1.4.2 Featureless Surface Detection
It has been found that cracks within the BWR plant frequently begin forming at
the welded joints of dissimilar metals [9]. For this reason submerged weld seams and
their surrounding areas are most often visually inspected, as shown in Figure 1-3.
Depending on the types of metals joined, these areas can be smooth and reflective
in texture, or rough like concrete. Similarly, depending on how the weld lines were
finished, they can be smooth, almost seamless, or can have a distinct direction or
flow. A proper underwater localization sensor should perform well with all of these
types of surfaces. It must detect reflective, nonreflective, smooth, and rough surfaces,
and provide accurate distance and angle measurements for each.
Figure 1-3: Submerged weld seams.
Structurally, most of the internal surfaces within the reactor are featureless. They
are often uniform blocks of metal with no color and only occasional markings. Furthermore, the camera must be kept fairly close to the surfaces for inspection. This
means that throughout most of the inspection, a nondescript image is displayed from
the camera of either a uniform wall, or a weld seam on a uniform wall. As a result,
the sensor cannot rely on special markings or unique identifiers during the inspection
task.
1.4.3 Capable of Fine Positioning
Due to the cluttered and sensitive environment, precision maneuvering of the robot is
required. The ideal localization sensor should offer such fine position control to ensure
collision avoidance and safely guide the robot around obstacles or stop it when too
close. Additionally, proper visual inspection is dependent on fine positioning and
correct camera placement. The camera must be positioned the correct distance away
from the surface to offer the best view, and rotations must be accurately controlled
to ensure all areas of the plant can be viewed. As a starting point, our group has set
the requirement of one millimeter accuracy in distance measurements and two degree
accuracy in angle measurements.
Cluttered environments also set a functional requirement for the sensor's range.
Because of the numerous sensors, structures, and equipment within the BWR, a
large working range is unnecessary. Instead the sensor should work at close ranges
due to the fine-positioning nature of the inspection task and the average distance
between objects in the reactor. Our group again has set a working range of 30 to 300
millimeters distance estimation, and -30 to +30 degrees angle estimation, with the
understanding that the inspection vehicle is manually driven to the inspection site.
Once there the robot takes over for automated distance control and visual scanning.
1.4.4 Compact and Lightweight
For accessing a confined area the vehicle must be compact.
To this end, sensor
components must be internal, miniaturized, and battery size must be minimized. As
mentioned previously, the top guide shown in Figure 1-1 limits the maximum size
of the vehicle to about one square foot. Additionally the sensor can't be too heavy
or the robot may sink to the bottom. The more mass the robot has, the more momentum it carries, and the harder it is to accelerate, decelerate, and generally control. Power
consumption also tends to scale with larger sensors, requiring more or denser batteries.
1.4.5 Stable Vehicle Control
The sensor measurements and the frequency of new measurements must be properly
incorporated into the vehicle's control system to ensure stable motion and positioning.
Here the rate of the sensor could potentially be an issue. If the sensor is too slow, the
information received may no longer apply to a moving vehicle and instead cause the
robot to move further in an unwanted direction. Such instabilities can cause collisions
or make it impossible to visually focus on an area for inspection. Figure 1-4 shows a
proposed use case of automated orientation control during a visual inspection task.
After the robot has been manually driven to the desired pipe, the localization sensor
then takes control. It maneuvers the robot around the outer surface of the pipe,
maintaining a specified distance and looking inward at its surface.
Figure 1-4: A common pipe inspection trajectory.
1.5 Design Concept
There are several proximity sensors available today for underwater use. These include
sonar, infrared (IR) proximity sensors, global positioning systems (GPS), vision-based
methods, and laser rangefinders. Unfortunately none meet all the requirements outlined above. In response, we propose a camera-laser localization sensor that meets the
above functional requirements. Figure 1-5 shows an early prototype of this vision-based localization sensor.
Four laser diodes have been placed in a ring around a
camera. The laser beams are angled to cross into the camera's field of view, and an
onboard processor analyzes the images. It locates the laser dots and computes the
distance to each one. In this way we create our own features to detect on an otherwise
uniform, blank wall. Next, with a simple geometric measurement principle, pitch and
yaw rotation angles can be estimated from these individual distances.
Figure 1-5: Early design prototype of the laser-camera localization sensor.
To further reduce size, we use the onboard camera already in use for visual inspection as the optical sensor for laser detection. In all, this solution allows the robot
to remain compact, lightweight, and measure fine distances and orientation angles.
Furthermore, because the laser beam is emitted from the sensor, reflections can be
made on all types of surfaces and even in poor lighting conditions, making it widely
applicable for a variety of use cases.
In our view there are two main technical challenges that make designs like this
sensor concept difficult to realize. The first is the design of a compact camera and laser
system that can reliably measure perpendicular distance and orientation angles within
the desired range (30 to 300 millimeters and -30 to +30 degrees) and with the desired
accuracy (±1 mm, ±2°).
The second major technical challenge is associated with
maintaining stable vehicle control despite the low sampling rate of our implemented
sensor.
1.6 Thesis Overview
This work will focus on the analysis, design and evaluation of this new localization
sensor concept. We discuss the development of a novel camera-laser system tuned
specifically for fine orientation measurements. We will also discuss how this system
can be used as an enabling technology to achieve very precise control of small, underwater robots. We will also analyze in depth the nature of the robot's control
feedback and propose a unique and realizable stabilizing controller. Additionally full
prototypes of the sensor and robot are constructed and used as a test-bed for these
new concepts and ideas.
Chapter 2 of this thesis focuses on previous work and available technologies. We
describe a new type of robotic underwater vehicle designed specifically for the inspection of critical infrastructures such as boiling water reactor nuclear power plants. The
vehicle can access confined areas, maneuver precisely, and move easily in several directions. We then survey existing proximity sensors and localization methods, noting
the issues inherent with each for such complex inspection tasks.
Chapter 3 explains the underlying measurement principles of the proposed proximity sensor and discusses an important limitation regarding working range and precision.
Chapter 4 describes an optimal design around the limitation presented in Chapter
3 to meet our range and accuracy requirements. Experiments are used to evaluate
sensor performance and the results are shown to correspond well to the design goals.
Chapter 5 discusses the design of an autonomous surface vehicle (ASV) for use in
testing the proximity sensor. Designs are based off the omnidirectional submersible
vehicle detailed in Chapter 2. The dynamics of the ASV are analyzed and a feedback
controller is developed to achieve stable planar orientation control.
Chapter 6 converts the stabilizing control system from Chapter 5 into the digital
domain to be implemented on the ASV. Experiments are conducted using yaw-rate
feedback from a gyroscope sensor, then with yaw-angle feedback from the camera-laser
sensor. In both cases the robot is shown to stabilize even in the face of substantial
disturbances. However, limitations of each are identified, showing a tradeoff between
system performance time and accuracy.
Chapter 7 introduces the idea of sensor fusion, combining the gyroscope and
optical sensors to achieve an overall better system performance. The fused stabilized
vehicle is shown to provide substantially improved performance, especially with regard
to sensing changes in the environment.
The thesis concludes with Chapter 8, which provides a final overview and possible
directions for continuing research.
Chapter 2
Previous Work
2.1 Control Configured Spheroidal Vehicle
Our group has identified the main functional requirements of a submersible robot for
the inspection of critical infrastructures: motions in multiple directions, bidirectional
motions, maneuverability at a range of speeds, and robustness to collisions. Based on
these requirements we developed a conceptual design which is a novel and innovative
approach to the challenge of infrastructure inspection [10]. Specifically, we developed
a vehicle that is completely smooth and spheroidal in shape, shown in Figure 2-1.
Figure 2-2 displays the vehicle's controllable motions. The vehicle propels itself and
maneuvers using water jets that can be modulated and switched between various exit
ports. The symmetric and smooth nature of the shape allows for bi-directional motions, high maneuverability, and robustness to collisions. The use of jets for propulsion
rather than propellers means that the risk of a spinning propeller breaking during a
collision or becoming tangled is removed. In addition, the absence of stabilizers such
as fins means that there are no components that can snag on obstacles.
We describe this approach as the Control Configured Spheroidal Vehicle (CCSV).
We use this title because we emulate the Control Configured Vehicle ideas from aeronautical engineering [11]. The vehicle is designed specifically to achieve multi-degree-of-freedom (DOF) motions and high fidelity feedback control performance. Finally,
just like many modern CCV type vehicles, our design is open loop unstable and uses
feedback control instead of passive stabilizers to achieve superior performance.
Figure 2-1: Photographs of the outside of the CCSV prototype: (a) top view; (b) front view.
Figure 2-2: Photograph of the CCSV and its controllable motions.
There were two main technical challenges of this CCSV design concept.
The
first was the design of a propulsion and maneuvering system that can fit within a
small, streamlined shell. The most popular approach for underwater vehicles is to
use propeller thrusters, but these have to sit in the ambient fluid to operate properly.
In addition, combining several external propellers to achieve multi-DOF propulsion
results in several propellers on the outside of the vehicle, making it less hydrodynamically efficient and more difficult to position precisely. The second major technical
challenge was associated with the presence of hydrodynamic instability. This instability
is generally dealt with by adding fins at the tail of the vehicle. However, these are not
only large appendages, but they will destabilize the vehicle if the vehicle direction is
reversed, making bidirectional motions challenging.
2.1.1 Nomenclature
Throughout this thesis we will be using terminology and nomenclature derived from
the field of Ocean Engineering. Specifically we will use kinematics and dynamics
based on a body-centered coordinate system shown in Figure 2-3. This system was
developed by the Society of Naval Architects and Marine Engineers in 1952 and is
prevalent in the underwater vehicle literature [12], [13].
As the figure illustrates, u, v, w represent translational velocities along the x, y,
and z axes respectively. These motions are also described as surge, sway, and heave,
respectively. Similarly, rotational velocities about the x, y, z axes are referred to as
p, q, r respectively. These motions are also known as roll, pitch, and yaw.
Figure 2-3: An illustration of the coordinate frame convention.
2.1.2 Multi-DOF Propulsion
As we described earlier, the vehicles we seek to design require motions in 5 directions
(surge, sway, heave, yaw, pitch). In Figure 2-4 we provide an illustration of the coordinate system as well as our proposed jet propulsion design. Figure 2-4 and Table
2.1 show an arrangement of water jets that allow us to achieve these motions. We
number the jets in order to simplify implementation, discussion, and to provide physical intuition. While Figure 2-4 provides a visual illustration of the jet configurations,
Table 2.1 provides an in depth summary of how various jet combinations can be used
to achieve the desired motions.
Figure 2-4: A diagram illustrating the jet arrangement for the CCSV design.
Table 2.1: Summary of Maneuvering Primitives.
DOF    First Jet    Second Jet
+u     +Jet 1       +Jet 2
-u     -Jet 1       -Jet 2
+v     +Jet 2       -Jet 2
-v     +Jet 1       -Jet 1
+r     +Jet 1       -Jet 2
-r     -Jet 1       +Jet 2
+w     +Jet 3       +Jet 4
-w     -Jet 3       -Jet 4
+q     +Jet 3       -Jet 4
-q     -Jet 3       +Jet 4
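As an illustration, a vehicle controller might encode these primitives as a simple lookup from each signed degree-of-freedom command to the pair of jet exits that produces it. The Python sketch below is hypothetical and simply mirrors Table 2.1; it is not code from the CCSV.

# Hypothetical encoding of Table 2.1: each motion primitive fires two jet exits.
MANEUVER_PRIMITIVES = {
    "+u": ("+Jet 1", "+Jet 2"),  # surge forward
    "-u": ("-Jet 1", "-Jet 2"),  # surge backward
    "+v": ("+Jet 2", "-Jet 2"),  # sway
    "-v": ("+Jet 1", "-Jet 1"),
    "+r": ("+Jet 1", "-Jet 2"),  # yaw
    "-r": ("-Jet 1", "+Jet 2"),
    "+w": ("+Jet 3", "+Jet 4"),  # heave
    "-w": ("-Jet 3", "-Jet 4"),
    "+q": ("+Jet 3", "-Jet 4"),  # pitch
    "-q": ("-Jet 3", "+Jet 4"),
}

def jets_for(primitive):
    """Return the pair of jet exits that realizes the requested motion primitive."""
    return MANEUVER_PRIMITIVES[primitive]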
2.1.3 Compact Internal Propulsion System
An integrated pump-valve maneuvering system is developed by combining powerful
centrifugal pumps with compact Coanda-effect valves. This system is used to design and construct a compact, multi-degree-of-freedom (DOF) prototype vehicle. To
achieve precision orientation control, high speed valve switching is exploited using
a unique Pulse Width Modulation (PWM) control scheme. Dead zones and other
complex nonlinear dynamics of traditional propeller thrusters and water jet pumps
are avoided with use of integrated pump-valve control.
We constructed a propulsion system with powerful DC motor based centrifugal
pumps. The pumps are used to generate high velocity water jets that can be used to
propel the robot. We choose centrifugal pumps due to their mechanical and electrical
simplicity, small size, and commercial availability. To achieve the desired 8 jets, the
most obvious approach would be to use a centrifugal pump for each one. However,
such an approach increases the size and weight of the robot substantially.
In order to achieve bidirectional forces 180° apart, we draw inspiration from the
field of fluidic technology. One such component that is particularly relevant is the
"bistable fluidic amplifier". This device which emerged as early as the 1960s was used
to switch the direction of a powerful input jet by modulating two small control ports
[14]. The system exploits the Coanda Effect (discovered by Henri Coanda in 1932),
or the tendency of fluid jets to attach themselves to curved surfaces [15], [16], [17].
Figure 2-5: An illustration of the bistable fluidic amplifier concept.
As Figure 2-5 illustrates, the device can be used to switch a high velocity fluid jet
between two output ports. The device sits in the ambient fluid, and an input flow Q
is injected at the input. The control ports, C1 and C2, are used to switch the direction
of the jet. If control port C1 is closed while control port C2 is open to the ambient
fluid, a small amount of fluid will be entrained through port C2 and the jet will bend
and exit through exit E1. If control port C1 is open and control port C2 is closed,
the jet will then switch and exit through exit E2. Depending on the dimensions and
jet parameters, very high switching speeds can be achieved using this type of system
[18].
We develop a pump-plus-fluidic-valve system by attaching one of these fluidic
valves to the output of a centrifugal pump. A visual illustration of this system is
provided in Figure 2-6. The pump draws fluid inward radially (blue arrows) and then
injects the jet into the fluidic valve. If C1 is open and C2 is closed, the output water
jet will exit in the positive X direction (jet labeled in red).
Figure 2-6: An illustration showing the full pump-valve concept.
The use of fluidic valves allows us to reduce the number of pumps from 8 to 4.
Using dual-output port pumps can reduce this number to 2. We combine 2
fluidic valves with one orthogonal dual-output pump to create what we call a Bi-axis
Actuation Unit. This propulsion unit can generate two sets of in-line force (4 total)
using only a single pump. The pump impeller direction can be used to switch the
exit nozzle between two outputs, which in turn are switched using a fluidic valve. A
photograph of a fully assembled BAU is provided in Figure 2-7.
Figure 2-7: Photograph of a BAU prototype. Two fluidic valves are combined with
an orthogonal dual output port pump to generate forces in 4 directions.
A detailed view of the maneuvering system is provided in Figure 2-8. The BAUs
are clearly visible, and it is important to emphasize how the construction of these
geometries would have been very difficult without 3D printing technologies. Additional photographs are provided in Figure 2-1 and illustrate the jet exits as well as
the vehicle dimensions.
2.1.4 Unstable Vehicle Dynamics
The hydrodynamic equations of motion for the maneuvering of a general 6DOF underwater vehicle are complex, coupled, and nonlinear [13]. However, for the spheroidal
vehicles that we describe in this work, the equations of motion simplify considerably
due to symmetry. A key factor in the vehicle dynamics is the Munk moment. The Munk
moment was first observed for dirigibles in the 1920s, and is the tendency of streamlined bodies to rotate so that they are oriented perpendicular to the flow. This effect
Figure 2-8: A rendering and photograph of the CCSV maneuvering system.
stems from the asymmetric distribution of stagnation points on the vehicle body. This
leads to local pressure gradients that create a pure moment. A simplified illustration
is provided in Figure 2-9, which shows how the high and low pressures, P at each
end of the vehicle body create a pure moment. This moment tends to destabilize the
vehicle, and prevent it from moving straight.
Figure 2-9: An illustration of the Munk moment and how the stagnation points create
a turning motion on streamlined shapes.
The Munk moment can act to create pitching moments, K_m, as well as yaw
moments, N_m. The moment is a function of the free stream velocity, U∞, and the
angle between the flow and the vehicle. The moment is also related to the shape
of the vehicle. Many simplified models use the difference between the added masses
associated with translations to approximate the Munk moment.
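For reference, a common added-mass approximation of the yaw Munk moment (a standard result from the underwater-vehicle literature rather than a formula given here, and with a sign that depends on the chosen coordinate conventions) is

N_Munk ≈ (m_22 − m_11)·u·v,

where m_11 and m_22 are the added masses in surge and sway and u, v are the corresponding body-frame velocity components. Because m_22 > m_11 for a streamlined body, any sideslip produces a moment that turns the vehicle further away from straight-ahead motion.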
2.1.5 Feedback Controller Design
Since the Munk moment leads to destabilization, methods have been developed for
countering it. We formulated a nonlinear hydrodynamic model of the vehicle, then
linearized the dynamics. With this linearized model, we explored using feedback control to stabilize the system. Full state feedback, which is a powerful technique, is
not the most practical for this application. Measuring the sway velocity, v, is a challenging task: the velocities are small, making the use of differential pressure difficult.
Accelerometers are another option but they are sensitive to coupled motions and drift.
An observer can also be constructed, but this introduces additional computational
and analytical complexity to the problem.
A simpler approach is to use only the yaw rate, r, and the yaw angle, ψ. These
measurements can be obtained using affordable and simple off-the-shelf components
such as compasses or Inertial Measurement Units (IMUs). This is a well known technique which is used for ships that do not have the sensors for full state feedback
[19]. We used the SISO model to design the appropriate stabilizing control system.
Specifically, we designed a PD controller to track a desired heading, ψ_d(t).
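A minimal sketch of such a heading controller is given below as discrete-time Python; the class name, gains, and structure are illustrative assumptions, not the controller designed for the CCSV.

class HeadingPD:
    """PD yaw control: moment command from heading error and measured yaw rate."""

    def __init__(self, kp=2.0, kd=0.8):
        self.kp = kp  # proportional gain on heading error (illustrative value)
        self.kd = kd  # derivative gain applied to the measured yaw rate r

    def command(self, psi_desired, psi_measured, yaw_rate_r):
        # Damping uses the yaw rate measured by the gyro/IMU directly,
        # rather than differentiating the heading estimate.
        error = psi_desired - psi_measured
        return self.kp * error - self.kd * yaw_rate_r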
2.2 Existing Localization Technologies
As stated earlier, several underwater proximity sensors have been developed and are
well-suited for several other applications. This section explains why these available
options fail to meet our specific criteria for fine, underwater positioning in cluttered
environments.
2.2.1 Sonar
One of the most prevalent technologies in underwater localization is sonar. High
frequency multi-beam sonar systems are capable of capturing complete 3D digital
point-cloud representations of underwater environments. There are, however, physical
limitations to the resolution capability of these technologies for understanding small
yet important features of structures such as cracks and erosion in concrete structures
or welds and dents in metallic marine infrastructure [20], [21], [22].
Active sonar ranging is based on emitting a pulse of sound energy to a target
surface and measuring the time taken for an echo to return. Based on the speed
of sound in water, the distance to the target surface can be calculated. 3D sonar
systems assume that the contact point is in the center of the beam of sound energy
when converting the return to a 3D point. Sound energy expands as it propagates
38
through water, however, and a sonar pulse returns not only from the center of the
beam but also from any surface in its cone-shaped acoustic beam. The precision of
sonar systems is limited because they must approximate a relatively large footprint
area with a single point. Details such as cracks that are smaller than the footprint
cannot be resolved, and sudden changes in structure such as edges cannot be located
more precisely than the size of the footprint.
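As a brief worked example of the underlying time-of-flight relation (taking a nominal sound speed in water of roughly 1500 m/s; the exact value varies with temperature, salinity, and depth): the one-way distance follows d = c·Δt/2, so a measured round-trip echo time of Δt = 1 ms corresponds to d ≈ 1500 × 0.001 / 2 = 0.75 m.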
Sonar also suffers from sound wave scattering and interference, which would become a major problem in cluttered environments. Each object within the acoustic
beam can reflect the sound wave, creating a mix of signals from different directions
and adding noise to the measurements. For these reasons sonar is most often used in
large, open environments to detect relatively large structures.
2.2.2 Infrared
IR sensors emit a beam of infrared light from an LED, and measure the intensity of
light that is bounced back using a phototransistor. They are cheap, compact modules
and with some care, can be waterproofed. Unfortunately the intensity data acquired
with IR sensors depends highly on the surface properties and the configuration of
the sensors with respect to the surface. Therefore the sensor cannot provide reliable
distance estimates across all types of surface textures and angle orientations. However
work has been done to improve position estimation without the need to determine
the surface properties [23]. Ultimately, even with these improvements, the minimum
measurable distance is too large and they lack orientation angle feedback.
2.2.3 GPS
In general, most BWR plants do not receive a GPS signal in the reactor due to
certain sections being underground and the total assembly encased in thick concrete.
Furthermore, GPS fails to provide location and time information under water since
the electromagnetic signals from the orbiting satellites are heavily damped in water
and hence can not be detected by the receiver in most cases of interest [24]. Finally,
on land the accuracy of most GPS systems is often only reliable to the order of one
meter at best [25].
2.2.4 Vision-based Mapping
A lot of research has been conducted for various vision-based localization systems,
particularly in robotics [26], [27], [28]. The problem with these methods is they usually require multiple sensing components to create a three-dimensional point cloud of
the environment, then heavy computing power to convert this to a map representation. Stereovision, however, is one vision-based technique that has been successfully
miniaturized into a compact form factor. This type of sensing uses two cameras and
locates key points in both images. One can then compute the distance to these points
based on the geometry of the two cameras relative to each other and other physical
properties. Alternatively one camera may be used if paired with inertial sensors or
other image processing techniques. It can then track key points as the camera moves
in space [29], [30]. The drawback of these types of vision methods is that they require
discernible features in the camera image which can then serve as reference points for
distance calculations. As mentioned in the functional requirements, close-proximity
visual inspection often means looking at and scanning along featureless surfaces.
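For reference, the stereo ranging relation alluded to above is the textbook result for a rectified two-camera pair (not a formula taken from the cited systems): the depth of a matched feature is Z = f·B/δ, where f is the focal length, B the baseline between the cameras, and δ the disparity, i.e., the pixel offset of the feature between the two images. The relation only applies where a feature can actually be matched, which is exactly what featureless surfaces prevent.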
2.2.5 Laser Rangefinders
Laser rangefinders are capable of very precise measurements, even underwater [31],
[32]. Unfortunately most commercial products are too large and heavy for our purpose. However, we have identified the benefits of single-point distance measurements,
particularly for computing orientation angles, and have adopted a similar technique
in our sensor design.
Chapter 3
Camera-Laser Sensor Measurement
Principle
In reviewing the current methods we identified two promising technologies, laser
rangefinders and stereovision, and built off the core concept of using lasers and an
optical sensor to measure a distance at a specific point. This paper details the development of a new device, a single, compact and lightweight camera-laser sensor (CLS)
capable of real-time position and orientation estimation relative to the environment.
We designed this sensor as an aid for the human operator or for complete autonomous
control of the robot at close proximities.
The CLS consists of a camera and several point lasers for visual estimation. Using
the reflected laser points in the camera image, the sensor computes its distance and
orientation from the reflecting surface. The camera-laser system currently estimates
three degrees of freedom: perpendicular distance, d, yaw, ψ, and pitch, θ, relative to
a planar surface. These three parameters are shown in Figure 1-2 for a robot system
with unit normal heading vector, n.
This chapter continues with a description of the CLS measurement principle for
distance and angle estimations and ends with an informative look at sensor sensitivity
issues at larger distances.
3.1 Horizontal Case
The sensor estimates orientation by locating specific points in the camera image and
relating them to physical points in the environment. This section goes through the
formulation of distance and angle estimation based off these points. Before we begin,
we can simplify the discussion by first identifying two measurement cases. Assuming
a camera-laser layout as shown in Figure 1-5 with a laser to the left and right of the
camera, then one above and below, these two cases can be divided into horizontal
and vertical.
The horizontal case is capable of measuring perpendicular distance and yaw, not
pitch. It only uses the camera's xy-plane, the coordinate frame in Figure 3-1-a, and
the left and right point lasers. Figure 3-1-a shows the spheroidal robot (ellipse) with
four point laser sources (four outer dots) mounted on the front. At the center of these
lasers sits the camera (center dot). The camera has a specific vertical and horizontal
viewing angle, allowing it to only see objects that fall within its rectangular image
frame. Figure 3-1-b shows this image frame as a dashed rectangle, which grows in
area at greater distances. Finally, the camera's xy-plane bisects the image through
its horizontal center (dashed line in Figure 3-1-c). If the left and right lasers are in
line with the camera's xy-plane, then all laser dots in the image will appear along
this horizontal bisecting line. As will be shown, yaw angles can be estimated based
on where the laser dots appear on this line. Pitch rotations, however, do not affect
the left and right laser dot placement.
3.2 Vertical Case
Similarly the vertical case is capable of measuring perpendicular distance and pitch,
but not yaw. It only uses the camera's xz-plane and the top and bottom point lasers.
The xz-plane bisects the image through its vertical center (dashed line in Figure 3-2),
where the top and bottom laser dots will appear in the image. Pitch angles can be
estimated based on where the laser dots appear on this line, and yaw rotations do
Figure 3-1: A diagram illustrating the camera-laser layout (a), the viewable image
frame of the camera (b), and the centerline where the left and right lasers appear (c).
not affect the top and bottom laser dot placement. In the following sections, we will
present the principles to estimate distance and yaw rotation using the horizontal case
before providing the corresponding equations for pitch estimation.
Figure 3-2: A diagram illustrating the centerline where the top and bottom lasers
appear.
3.3 Direct Distance Estimation
Consider a CLS that consists of a camera with a single laser placed to its left. Figure
3-3-a shows an overhead view of this camera with a laser pointer placed a distance
l away and with a tilt angle γ towards the camera's field of view. The camera has
a horizontal viewing angle β. These three parameters, l, γ, and β, are fixed and
determined during the sensor design. Now consider placing the CLS a distance d
away from a flat, perpendicular surface, Figure 3-3-b. The laser beam passes into
the camera's view and hits the surface, creating a laser point. We can then write
equations for the distance from the camera-center to the laser point,
laser_global = d·tanγ − l,

and the distance from the camera-center to the edge of the camera's view,

edge_global = d·tanβ,
in terms of the three sensor parameters and the perpendicular distance, d, Figure
3-3-c.
Figure 3-3: Top view of a single-laser CLS (a), placed before a flat surface for distance
estimation (b), and the resulting distances to points-of-interest (c).
When looking at the image taken by the camera, Figure 3-4, we see a red laser
point reflected on the surface. Similarly, we can find the pixel distance from the
image-center to the laser dot,
laser_pixel = p_x,
and the pixel distance from the image-center to the edge of the image,
edge_pixel = p_x,max.
Figure 3-4: The camera image with a laser dot reflecting off a surface and the pixel
distances to points-of-interest.
Finally we can relate the global distances to the pixel distance,
laser_global / edge_global = laser_pixel / edge_pixel,    that is,    (d·tanγ − l) / (d·tanβ) = p_x / p_x,max.
Rearranging this equation brings us to two formulas. The pixel location of the laser
dot in the image as a function of distance,
p_x(d) = p_x,max · (tanγ − l/d) / tanβ,    (3.1)

and the estimated distance of the vehicle given the laser dot's pixel location,

d_est(p_x) = l / (tanγ − (p_x / p_x,max)·tanβ).    (3.2)
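A short Python sketch of Equations 3.1 and 3.2 follows; the numeric sensor parameters are illustrative placeholders, not the values chosen for the actual prototype in Chapter 4.

import math

# Illustrative sensor parameters (placeholders, not the prototype's values)
L_OFFSET = 0.030            # laser offset from the camera center, l [m]
GAMMA = math.radians(40.0)  # laser tilt angle toward the camera's view, gamma
BETA = math.radians(45.0)   # camera half viewing angle, beta
PX_MAX = 320.0              # pixels from the image center to the image edge


def pixel_from_distance(d):
    """Equation 3.1: pixel location of the laser dot for a surface at distance d."""
    return PX_MAX * (math.tan(GAMMA) - L_OFFSET / d) / math.tan(BETA)


def distance_from_pixel(p_x):
    """Equation 3.2: estimated distance given the laser dot's pixel location."""
    return L_OFFSET / (math.tan(GAMMA) - (p_x / PX_MAX) * math.tan(BETA))


if __name__ == "__main__":
    d_true = 0.150                   # surface 150 mm away
    p = pixel_from_distance(d_true)  # where the dot appears in the image
    print(round(p, 1), distance_from_pixel(p))  # recovers 0.150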
3.4 Rotation Estimation
Once capable of measuring direct distances using a single laser, we can then add
a second laser in the horizontal plane to measure rotations in the horizontal plane
(yaw). For symmetry we add a laser to the right of the camera. Figure 3-5-a shows
this CLS setup with some yaw rotation angle, ψ, relative to a surface. Using the sensor
parameters (the lasers' tilt angles and distances from the camera) and Equation 3.2,
we can calculate the perpendicular distances from the camera to the laser points on
the surface, Figure 3-5-b. Note that although the surface itself is now at an angle
relative to the CLS and there is no single perpendicular distance from the sensor to
the surface, the sensor's measurement principle is such that we can find perpendicular
distances to single points anywhere on that surface.
Figure 3-5: Top view of a CLS with yaw angle ψ relative to a surface (a), the direct
distances measured with the left and right lasers (b), and the geometric relation of
the yaw angle with the measured distances and sensor parameters (c).
Finally, using these two distance measurements and the CLS parameters, we can
determine a geometric formula for the yaw rotation angle, Figure 3-5-c. Specifically,
the estimated yaw rotation is given by
ψ_est(d_l, d_r) = arctan[ (d_l − d_r) / (d_l·tanγ_l + d_r·tanγ_r − l_l − l_r) ].    (3.3)
Following the same principles for a top and bottom laser in the vertical case, the estimated pitch rotation is given by

θ_est(d_t, d_b) = arctan[ (d_t − d_b) / (d_t tan γ_t + d_b tan γ_b − l_t − l_b) ].    (3.4)
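A minimal Python sketch of Equations 3.3 and 3.4 is shown below; the example parameter values are hypothetical, and the sign convention of the numerator follows the reconstruction above.

```python
import math

def yaw_from_distances(d_l, d_r, gamma_l, gamma_r, l_l, l_r):
    """Equation 3.3: estimated yaw angle [rad] from the left/right direct distances [mm].

    gamma_l, gamma_r are the laser tilt angles [rad]; l_l, l_r are the laser-to-camera offsets [mm].
    """
    lateral = d_l * math.tan(gamma_l) + d_r * math.tan(gamma_r) - l_l - l_r
    return math.atan2(d_l - d_r, lateral)

def pitch_from_distances(d_t, d_b, gamma_t, gamma_b, l_t, l_b):
    """Equation 3.4: same construction for a top/bottom laser pair."""
    lateral = d_t * math.tan(gamma_t) + d_b * math.tan(gamma_b) - l_t - l_b
    return math.atan2(d_t - d_b, lateral)

# Example with hypothetical, symmetric sensor parameters
psi = yaw_from_distances(160.0, 150.0, math.radians(30), math.radians(30), 30.0, 30.0)
print(math.degrees(psi))
```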
3.5 Perpendicular Distance Estimation
So far we have only shown how to estimate direct distances, that is, the distances from the sensor to the reflected laser points, d_l and d_r in Figure 3-6-a. It would be most useful in inspection applications to estimate the sensor's perpendicular distance to the surface, centered on the camera, d_⊥. After determining the yaw angle, the perpendicular distance can be estimated using Equation 3.5. Figure 3-6-b derives this formula using the geometric properties of the left laser.

d_⊥,l = (d_l tan γ_l − l_l) sin ψ + d_l cos ψ.    (3.5)
An identical formula can be derived using the right laser and its distance measurement, Equation 3.6. In practice, we compute both perpendicular distances and average the two together to reduce noise effects.

d_⊥,r = (l_r − d_r tan γ_r) sin ψ + d_r cos ψ.    (3.6)
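The averaging described above might be implemented as in the following sketch (illustrative only; the parameter names mirror Equations 3.5 and 3.6).

```python
import math

def perpendicular_distance(d_l, d_r, psi, gamma_l, gamma_r, l_l, l_r):
    """Average of Equations 3.5 and 3.6: camera-centered perpendicular distance.

    d_l, d_r are the direct distances from the left/right lasers; psi is the
    estimated yaw angle [rad] from Equation 3.3.
    """
    d_perp_l = (d_l * math.tan(gamma_l) - l_l) * math.sin(psi) + d_l * math.cos(psi)
    d_perp_r = (l_r - d_r * math.tan(gamma_r)) * math.sin(psi) + d_r * math.cos(psi)
    return 0.5 * (d_perp_l + d_perp_r)
```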
3.6 Working Range vs Precision (Sensitivity at Large Distances)
There is a critical relationship to be aware of regarding the laser angles. The camera has a set viewing angle and is only capable of seeing objects within this region. It is important to consider when the lasers enter and leave the camera's view, as this determines the working distance of the sensor. In addition, the larger the working distance (the laser staying in view for a larger range of distances), the less precise the sensor readings may be. In essence, there exists a tradeoff between working range of estimation and precision of estimation.

Figure 3-6: A diagram illustrating the direct laser distances, d_l and d_r, and the perpendicular distance, d_⊥, measurable by the CLS (a), and derivation of the perpendicular distance using the left laser values (b): R_1 = d_l tan γ_l − l_l, R_2 = R_1 tan ψ, d_⊥ = (R_2 + d_l) cos ψ.
Let us illustrate this idea using a series of figures. Figure 3-7 presents two differently configured camera-laser systems. The system in Figure 3-7-A is configured for a large working distance; the laser beam remains in the camera's view for distances greater than some small threshold, and very gradually exits the field of view, if at all. On the other hand, the system in Figure 3-7-B has been configured for a smaller working range by increasing the tilt angle of the laser. As a result, the laser beam passes through the field of the camera much more quickly and exits more abruptly.
Figure 3-7: Two configurations of the camera-laser system. The laser angle in configuration A is nearly parallel to the camera's viewing angle, remaining in view for
larger distances. Laser B passes through the camera's entire field of view over a much
shorter range of distances.
3.6.1 Large Working Range
Consider configuration A an arbitrary distance d_1 from a surface. If d_1 is large enough, the resulting camera image sees the laser point further to the right in the image due to the left laser beam having more distance to pass across the camera's view.
Let's then imagine a second surface a distance d_2 away, significantly greater than d_1, Figure 3-8. When we compare the location of this new laser point in the camera image, we see that there is only a small difference in pixel location, Δp, which corresponds to a large difference in distance, Δd. If the laser angle is too gradual, laser points at further distances appear close together in the camera image, causing large variability in distance measurement. Furthermore, the measured pixel location can vary slightly due to the lasers, slight angles between the surface and sensor, and variances of the image processing program. As a result, at large enough distances, when Δp becomes small enough, this can lead to noisy, inaccurate measurements.
Figure 3-8: CLS configuration A shows only slight changes in pixel location despite
large changes in distance.
3.6.2 High Precision
Looking at configuration B now, the system has a smaller working range, so d_1 cannot be too large. Within this range, however, distances can be measured much more accurately. The camera image in Figure 3-9 shows larger pixel differences, Δp, corresponding to smaller differences in distance, Δd, as compared to configuration A.
Figure 3-9: Configuration B has a larger laser angle, decreasing the overall working
range of the sensor and improving the accuracy within that range.
Chapter 4
Implementation and Experimental Characterization of Camera-Laser Sensor
This chapter discusses methods taken for optimizing the sensor design, then covers
the physical implementation of a sensor prototype, and finally presents experimental
results.
4.1 Tuned CLS Design
The CLS has an inherent tradeoff between working range and precision of estimation. With this in mind we can now design a CLS optimized to work well within our desired range of 30 to 300mm. To this end we first identified the tunable design parameters, shown in Figure 4-1. The first parameter is the distance from the laser to the camera, l. The second is the tilt angle of the laser, γ. The camera's viewing angle, β, is dependent on the camera and unchangeable without adding alternate lenses.
Looking back at Equations 3.1 and 3.2, we can see how these two parameters each affect our distance estimate. Specifically, we can calculate how the sensitivity of the measurement estimation equations changes due to laser position, l, and angle, γ. Taking the partial derivative of the pixel location estimate, Equation 3.1, with respect to the sensor distance gives us

∂p_x/∂d = l / (c d²),   where c = tan β / p_x,max.    (4.1)

This partial derivative reveals how the pixel location changes due to distance, which we would like to maximize for accurate measurements. Of the tunable parameters, it depends only on the laser position, and because it is directly proportional to l, maximizing this expression is trivial. Figure 4-2 plots the sensitivity at different distances for various values of l.

Figure 4-1: The two tunable design parameters of the CLS, l and γ.
After choosing a reasonable laser distance while keeping in mind vehicle size constraints, we can then analyze the sensitivity of the distance estimate to the pixel location of the laser dot,

∂d_est/∂p_x = l c / (tan γ − c p_x)².    (4.2)

This reveals how the distance estimate changes based on the laser point's pixel location. Minimizing this formula will help reduce sensor noise. Having set the laser distance, l, we can now tune the laser angle, γ. Figure 4-3 plots the sensitivity at all the pixel locations for various values of γ.
Figure 4-3 supports the claims discussed in Chapter 3 regarding the sensitivity
issue. When the laser angle is too gradual, the sensitivity greatly increases and causes
high variance in sensor estimations.
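Sweeps of this kind, similar in spirit to Figures 4-2 and 4-3, can be reproduced with a short script such as the sketch below. The pixel half-width, viewing angle, laser offset, and the ranges swept are illustrative assumptions, not the values used to generate the plots above.

```python
import math

PX_MAX = 200                 # assumed pixel half-width of the image
BETA = math.radians(13.5)    # assumed angle from optical axis to image edge
c = math.tan(BETA) / PX_MAX  # lumped constant from Equation 4.1

def dpx_dd(d, l):
    """Equation 4.1: sensitivity of pixel location to sensor distance."""
    return l / (c * d ** 2)

def ddest_dpx(px, gamma, l):
    """Equation 4.2: sensitivity of estimated distance to pixel location."""
    return l * c / (math.tan(gamma) - c * px) ** 2

# Sweep the laser angle and report the worst-case (largest) Eq. 4.2 sensitivity
# over the whole image; gentler angles give larger, noisier sensitivities.
for gamma_deg in (20, 25, 30, 35):
    worst = max(ddest_dpx(px, math.radians(gamma_deg), l=30.0)
                for px in range(-PX_MAX, PX_MAX + 1))
    print(gamma_deg, round(worst, 2))
```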
Figure 4-2: A plot of the sensitivity of pixel location to sensor distance, for various laser positions l. Maximizing this sensitivity improves sensor performance.
Figure 4-3: A plot of the sensitivity of estimated distance to pixel location, for various laser angles, γ. Minimizing this sensitivity improves sensor performance.
4.2 Implementation
A prototype of the sensor was built to test the horizontal measurement principles
presented in Chapter 3. It consists of a single-board computer, two point-lasers, and
a custom-designed plastic housing. It was tested and calibrated for a dry, in-air
environment.
4.2.1 Hardware
The prototype camera-laser sensor is shown below in Figure 4-4. A Raspberry Pi serves as the brains of the sensor, running the vision and control code and interfacing with the camera and lasers. The two point lasers are placed mm away from the camera and have a 30° inward angle. The camera module has a viewing angle of 27° horizontally and 20° vertically. It is set to capture images at a resolution of 400x300 pixels. This specific CLS was designed to operate between 27mm and 304mm. Figure
4-5-a is an image taken from the camera 25mm away from a wall just as the lasers
cross into view. Figure 4-5-b is taken 294mm away and shows the left and right lasers
crossing each other and beginning to exit the camera's view.
Figure 4-4: A photo of the CLS prototype for distance and yaw estimations.
Figure 4-5: Images taken directly from the CLS camera at each end of its designed
working range.
This sensor uses only two lasers to measure distance and yaw rotation. When using three or more lasers, however, it is important to consider having at least three different laser tilt angles. If they all shared the same tilt angle, there would exist a certain distance from a surface where all lasers overlap each other. The camera would see a single dot on the surface, and the sensor could only get one single distance estimate regardless of which laser it activates: a singularity point. It could not get any more information from that specific distance to improve its estimate, nor could it get any information to estimate a yaw or pitch angle. Varying the angles to at least three unique values ensures there is enough information to fully estimate the orientation at any distance.
4.2.2 Image Processing
To accurately locate the centers of each reflected point, an image processing program
was written to activate each laser and search the camera images for a bright, nearly
white spot surrounded by red. Figure 4-6 shows the steps. The original image (a) is
converted to grayscale (b). Then a threshold is applied to turn the brightest pixels
black, and everything else to white (c). From here we find the borders of the black
blobs (d, plotted as green rings), then find the circles that enclose those rings (e,
plotted as green circles). This process runs at 3 Hz.
Figure 4-6: Steps to locate the laser points. The CLS takes an image (a), converts to grayscale (b), places a threshold (c), finds contours (d), and then finds the circles that enclose those contours (e). All images taken directly from the CLS.
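A minimal OpenCV sketch of these steps is shown below. The threshold value and minimum blob radius are assumptions, and the sketch keeps the brightest pixels as white foreground (the inverse of the black-blob convention described above, which is equivalent for contour finding); the actual on-board program may differ.

```python
import cv2

def locate_laser_dot(bgr_image, thresh=240, min_radius=2):
    """Approximate the steps of Figure 4-6: grayscale, threshold, contours, enclosing circle.

    Returns the (x, y) pixel center of the largest bright blob, or None if nothing is found.
    thresh and min_radius are assumed values, not the prototype's settings.
    """
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)               # step (b)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)    # step (c): keep brightest pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)          # step (d)
    best = None
    for cnt in contours:
        (x, y), r = cv2.minEnclosingCircle(cnt)                      # step (e)
        if r >= min_radius and (best is None or r > best[2]):
            best = (x, y, r)
    return None if best is None else (best[0], best[1])
```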
4.2.3 Sensor Calibration
The sensor operates under the assumption that we know certain physical parameters of the system. These are the distances from the camera to each laser, l, the tilt angle of each laser, γ, and the camera's horizontal and vertical viewing angles, β_h and β_v. These parameters appear in all of our estimate equations, Equations 3.2, 3.3, and 3.4. While these are all ideally known (measured in the CAD model as they were designed), slight variations and tolerances from manufacturing and assembly cause these values to differ in reality. We can account for these differences by calibrating the system. As an example, consider Equation 3.2 for estimating distance when activating the left laser, rewritten below.
d_est(p_x) = l_left / (tan γ_left − (p_x / p_x,max) tan β_h).

The variables l_left, γ_left, β_h, and p_x,max are all constant physical parameters. If we lump them together into three new constants, a_1 = tan γ_left, a_2 = tan β_h / p_x,max, and a_3 = l_left, we can get the simplified, lumped-parameter formula below. The lumped-parameter formula shows more clearly how we can estimate distance given only a pixel location, p_x.

d_est(p_x) = a_3 / (a_1 − a_2 p_x).    (4.3)
Next we placed the sensor at known distances from a surface and recorded the measured pixel values. For example, we placed the sensor 80mm from the surface, fired the left laser, and recorded the pixel location of the point as seen by the camera. This was done for both the left and right lasers, from 25mm to 300mm, every 10mm. We then plotted, from this table, the estimated distance (d_est) as a function of pixel location (p_x) and found the equation-of-best-fit in the form of Equation 4.3. This gave us the constants a_1, a_2, and a_3. Figure 4-7 shows plots of the distances we expected to measure (the actual distance the sensor was placed at), and the experimentally estimated distances before calibration. The two were fairly close. The experiment was repeated using the calibrated lumped parameter values and showed a near perfect match, Figure 4-8.
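One way to recover the lumped constants is a nonlinear least-squares fit of Equation 4.3 to the recorded (pixel, distance) pairs. The SciPy sketch below uses synthetic data in place of the actual calibration table, so the fitted values are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def d_est(px, a1, a2, a3):
    """Equation 4.3: lumped-parameter distance estimate."""
    return a3 / (a1 - a2 * px)

# Synthetic calibration pairs for illustration only (the real table used the
# wall measurements from 25mm to 300mm described above).
rng = np.random.default_rng(0)
px_data = np.linspace(-180, 180, 13)
d_data = d_est(px_data, 0.58, 0.0025, 30.0) + rng.normal(0, 0.5, px_data.size)

# Fit, starting from the ideal (CAD-derived) parameter values as the initial guess
(a1, a2, a3), _ = curve_fit(d_est, px_data, d_data, p0=(0.6, 0.002, 28.0))
print(a1, a2, a3)
```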
Figure 4-7: Plot of the CLS-estimated distances before calibration using ideal parameter values and a plot of the actual distances.
Figure 4-8: Plot of the calibrated CLS-estimated distances, giving near-perfect distance readings.
4.3 Experimental Characterization of Sensor
This section summarizes the experiments conducted to determine the accuracy of the
CLS.
4.3.1 Yaw Estimations
To test the camera-laser sensor's yaw-estimation, the sensor was placed in front of a wall with yaw angles between 30° and −30°, at 10° intervals, Figure 4-9. Table 4.1 summarizes the results of the experiments, detailing the number of samples in each test and the average of the estimations. A box plot of the estimation error is given in Figure 4-10. On each box, the central mark is the median, the edges of the box are the 25th and 75th percentiles, the whiskers extend to the most extreme data points not considered outliers, and outliers are plotted individually. The sensor remains within ±2°, meeting our functional requirement.
Figure 4-9: Photos of the CLS taking yaw measurements at various angles.
4.3.2 Perpendicular Distance
The CLS was then placed in front of a wall at perpendicular distances between 30mm and 300mm, at 10mm intervals. Table 4.2 summarizes the results of the experiment, detailing the number of samples in each test and the average of the estimations. A box plot of the estimation error is given in Figure 4-11. For the most part the sensor remains within ±1mm, again meeting our functional requirement. At 270mm to 300mm, the sensor readings begin to vary by ±5mm.
Figure 4-10: Box plot of yaw estimation error.
Table 4.1: Summary of Yaw Experiments.

Actual Angle (degrees)   Average of Estimates (degrees)   # of Samples
-30                      -28                              48
-10                      -9                               49
0                        0                                49
10                       9                                49
20                       18                               49
30                       28                               49
Table 4.2: Summary of Perpendicular Distance Experiments.

Actual Dist. (mm)   Average of Estimates (mm)   # of Samples
30                  29                          46
40                  39                          47
50                  50                          49
60                  60                          45
70                  70                          41
80                  80                          43
90                  90                          41
100                 100                         44
110                 110                         46
120                 119                         44
130                 130                         43
140                 140                         41
150                 149                         45
160                 160                         45
170                 169                         39
180                 180                         44
190                 190                         38
200                 200                         40
210                 210                         49
220                 220                         49
230                 230                         43
240                 240                         48
250                 250                         43
260                 262                         42
270                 272                         47
280                 283                         43
290                 291                         41
300                 296                         43
Figure 4-11: Box plot of perpendicular distance estimation error.
Chapter 5
Autonomous Surface Robot
We next built a water-surface robot as a test platform for the sensor. This allowed
us to characterize the sensor in dynamic conditions, specifically in a fluid environment, and to develop a feedback control structure. This chapter discusses the design,
dynamics, and basic stabilization control of the surface vehicle.
5.1 Design
The autonomous surface robot was designed to emulate the CCSV inspection vehicle
described in Chapter 2. While it is slightly larger to support the CLS, it retains the
spheroidal geometry and uses the same aspect ratio for its elliptical profile, Figure
5-1.
When designing the CCSV, this aspect ratio was determined to improve vehicle
stability. Similarly, the angle of the propulsion jets was also optimized, and the CLS
test vehicle adopts this design choice. Under these guidelines a CAD model was
designed, and the robot was built, Figure 5-2.
Figure 5-1: Size comparison of the CCSV inspection vehicle (a′ = 73mm, b′ = 54mm) and the CLS test vehicle (a = 85.8mm, b = 63.5mm); both share the aspect ratio a/b = 1.35.
Figure 5-2: CAD model of the robot (a) and the assembled vehicle (b).
5.2 Dynamics and Stabilization Through Feedback Control
As it was modeled after the previous design, the new robot shares very similar vehicle
dynamics [10].
While the 6DOF equations of motion for underwater vehicles are
usually highly coupled and nonlinear, we see some simplifications from spheroidal
geometry; the inertia, added mass, and drag matrices are all diagonal. The center of
mass is assumed to be at the center of the robot. We assume quadratic drag to be
the dominant drag force and neglect linear damping. Under these assumptions, the
vehicle dynamics within the xy-plane are decoupled from the other DOFs. Within
the xy-plane, however, the surge, sway, and yaw dynamics are in general coupled
and their dynamic behaviors vary depending on the vehicle speed as well as on the
direction of motion.
5.2.1 Quasi-stationary Rotation
For the control analysis we will focus on a simplified 1 DOF example. Specifically, we will analyze the yaw angle, ψ. Heading control is critical for many applications as it is necessary for navigation, maneuvering around obstacles, and aiming sensors such as cameras. The CLS surface vehicle is capable of turning in place without translating; the jets can be activated to create a pure yaw moment. This means that the idea of decoupling the yaw dynamics from the other motions is realistic and realizable.
When rotating in place, the vehicle has no dominating velocity, and all three
translational velocities are kept relatively small. In this case, the vehicle is nearly
stationary and is slowly adjusting its xy-position or orientation: quasi-stationary
dynamics. Due to the small velocities, the quadratic drag, centrifugal terms, and
Munk moment all vanish.
The open-loop system consists of the four jet forces creating moments about the
vehicle's center. Figure 5-3 shows these four jets acting about the symmetric moment
arm, c.
Figure 5-3: A diagram of the four vehicle jets acting about the center with moment
arm c.
If we superimpose the four jet forces as F_jets, we can write the rotational equation of motion as

(I_zz + m_66) d²ψ/dt² = c F_jets,    (5.1)

where I_zz is the vehicle's moment of inertia about the z-axis, and m_66 represents the added inertia associated with yaw. Looking at the pump schematics, we can rewrite the jet force as a series of gains based on a desired rotation angle, F_jets = 2 K_pump K_F ψ_des. We can now take the Laplace transform of Equation 5.1 to get the open-loop transfer function

ψ(s)/ψ_des(s) = 2 c K_pump K_F / ((I_zz + m_66) s²) = a/s²,    (5.2)

where a is a lumped constant parameter. This system has two poles at the origin and is marginally stable.
To improve performance, let's assume we have yaw and yaw-rate feedback (ψ and ψ̇), Figure 5-4. The closed-loop transfer function becomes

ψ(s)/ψ_des(s) = a K_p / (s² + a K_d s + a K_p).    (5.3)

The system can now be stabilized for any number of combinations of PD gains, K_p and K_d. We can now look at the forward motion dynamics for more insight on how to set our gains.

Figure 5-4: Closed-loop yaw-control block diagram with angle (ψ) and rotation rate (ψ̇) feedback.
5.2.2 Forward Translation
As the vehicle traverses a relatively long distance along the vehicle x-axis, the surge velocity will grow and dominate the dynamics. We assume that the vehicle is moving at a constant longitudinal cruising speed, U_c, and all other velocities (v, w, p, q, r) are small. The linearized surge dynamics decouple completely from the sway-yaw dynamics. The linearized surge dynamics are trivial, while the sway-yaw dynamics are considerably more complex. When moving forward, the rear horizontal jets (jets 2 and 4 in Figure 5-3) generate a thrust force, while the front two (jets 1 and 3) are inactive. The linearized sway-yaw dynamics are then given by the following state-space expression.
d/dt [v, r, ψ]^T = A [v, r, ψ]^T + B [F_2, F_4]^T,    (5.4)

where the system matrix A contains the cruising-speed coupling term −U_c(m_22 − m_11)/(I_zz + m_66) together with terms in 1/(m + m_22), the input matrix B maps the rear-jet forces through 1/(m + m_22) and ±c/(I_zz + m_66), and the third state equation is simply ψ̇ = r. Here we use the standard notation for added mass, where m_11 and m_22 represent the added mass associated with surge and sway respectively. F_i, i = 1...4, are the propulsive forces of the four jets.
After simplifying the added masses, we can convert Equation 5.4 to a single transfer function,

ψ(s)/ψ_des(s) = 2 c K_pump K_F (m_y s + 1.7 m_a U_c) / [s (m_y I_zz s² − U_c² m m_a)],    (5.5)

where m_y and m_a are lumped (added-)mass terms. This system has both an unstable and a marginally stable pole, Figure 5-5-a. We can now design our PD controller gains from the quasi-stationary rotational dynamics to simultaneously stabilize the forward translation dynamics. We chose gains such that the poles were negative and real, to both stabilize the system and avoid unwanted oscillations.
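As a sketch of this gain-selection step, the script below checks which candidate PD gains give real, strictly negative closed-loop poles for the quasi-stationary model of Equation 5.3. The plant gain a and the candidate gains are hypothetical values; in practice the same check would be repeated on the forward-translation poles of Equation 5.5.

```python
import numpy as np

# Hypothetical lumped plant gain from Equation 5.2: a = 2*c*K_pump*K_F / (I_zz + m_66)
a = 3.0

def closed_loop_poles(k_p, k_d):
    """Poles of the closed loop in Equation 5.3: roots of s^2 + a*k_d*s + a*k_p."""
    return np.roots([1.0, a * k_d, a * k_p])

# Scan candidate PD gains and keep those giving real, negative poles
# (stable with no oscillation), as described above.
for k_p in (0.5, 1.0, 2.0):
    for k_d in (0.5, 1.0, 2.0, 3.0):
        discriminant = (a * k_d) ** 2 - 4.0 * a * k_p
        poles = closed_loop_poles(k_p, k_d)
        if discriminant >= 0.0 and np.all(poles.real < 0.0):
            print(f"k_p={k_p}, k_d={k_d}: poles at {np.sort(poles.real)}")
```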
Figure 5-5: Forward translation open-loop pole-zero diagram (a) and the root-locus plot with a PD controller (b).
Chapter 6
Control Implementation
In this chapter we implement and analyze the performance of the feedback compensator designed in Chapter 5.
6.1 Digital Control (Tustin's Method)
The previous chapter developed and tuned a control system based on analog feedback
of a linearized dynamic system. Because we plan on using digital sensors for our state
feedback, we must now design a digitized equivalent of the continuous compensation.
We are particularly interested in discrete integration techniques. Tustin's method is
one simple digitization technique which uses trapezoidal integration to approximate
compensator integration [33]. Suppose we have the transfer function
U(s)/E(s) = 1/s,

which is integration. Digitization gives us

u(kT) = u(kT − T) + ∫_{kT−T}^{kT} e(t) dt,    (6.1)
where T is the sample period and k can be any integer.
For Tustin's method, the task at each step is to use trapezoidal integration, that is, to approximate e(t) by a straight line between the two samples. Writing u(kT) as u(k) and u(kT − T) as u(k − 1) for short, we convert Equation 6.1 to

u(k) = u(k − 1) + (T/2)[e(k − 1) + e(k)].    (6.2)
We can now use Equation 6.2 in our digital controller to integrate sensor measurements.
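A minimal sketch of Equation 6.2, as it might be used to integrate a 50Hz gyro rate into a yaw angle, is shown below; it is illustrative only and not the on-board implementation.

```python
class TrapezoidalIntegrator:
    """Discrete integrator using Tustin's (trapezoidal) rule, Equation 6.2."""

    def __init__(self, sample_period):
        self.T = sample_period
        self.u = 0.0        # u(k-1), the previous integral value
        self.e_prev = 0.0   # e(k-1), the previous input sample

    def update(self, e):
        """Advance one sample: u(k) = u(k-1) + (T/2) * [e(k-1) + e(k)]."""
        self.u += 0.5 * self.T * (self.e_prev + e)
        self.e_prev = e
        return self.u

# Example: integrating a 50Hz rate signal into an angle estimate
integrator = TrapezoidalIntegrator(sample_period=1.0 / 50.0)
yaw = integrator.update(2.0)   # one sample at 2.0 deg/s
```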
6.2 Rotation Rate Feedback (Gyro)
The test vehicle features an IMU consisting of three accelerometers and three orthogonal gyroscopes to provide measurements of accelerations in three dimensions and rotation rates about the three axes. We first implemented the controller using the gyroscope for real-time, 50Hz, yaw rotation rate feedback, ψ̇. Integrating this signal allowed us to estimate the yaw angle, ψ, and we included a basic algorithm to remove bias. Figure 6-1 shows the block diagram of this system.
Figure 6-1: Closed-loop yaw-control using a gyro for direct rotation rate (ψ̇) feedback and angle estimation (ψ) through integration.

6.3 Rotation Angle Feedback (CLS)
We also tested the vehicle using the camera-laser sensor to measure yaw angle directly,
while still using the gyro for yaw rate feedback. Because the CLS has a much slower
sampling rate of 3Hz, we implemented a zero-order hold (ZOH) on the CLS signal,
allowing us to approximate a continuous reading and match the gyro's frequency.
Figure 6-2 shows the block diagram of this setup.
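The zero-order hold can be sketched as a simple latch between the two rates, as below; this is an illustration of the mechanism, not the vehicle's code.

```python
class ZeroOrderHold:
    """Hold the most recent CLS yaw measurement between its ~3Hz updates so the
    50Hz control loop always has a value to use (sketch of the ZOH described above)."""

    def __init__(self, initial=0.0):
        self.value = initial

    def push(self, new_measurement):
        # Called whenever a fresh CLS reading arrives (roughly every 0.3s)
        self.value = new_measurement

    def sample(self):
        # Called every control step (50Hz); returns the held value
        return self.value
```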
Figure 6-2: Closed-loop yaw-control using a gyro and the CLS for direct rotation rate (ψ̇) and angle (ψ) feedback, respectively.
6.4 Comparison of Results
Both implementation methods had their pros and cons. This section discusses each
in more detail.
6.4.1 Gyro Control
While in theory the determination of bias leads to the determination of true angular
velocity, in practice the mathematical integration leads to unbounded growth in angle
errors with time due to the noise associated with the measurements and the nonlinearity of the sensor. The vehicle was capable of driving relatively straight, maintaining
a general heading angle, and very quickly rejected disturbances. Over time, however, the estimation drift became much more apparent and continued to worsen.
Figure 6-3 shows the vehicle response to a disturbance input. It was commanded to remain perpendicular to a surface (ψ = 0) but immediately began drifting at a rate of −0.83°/sec. An external sensor is needed to correct the error accumulation of the IMU. Despite this, the system rejects the 30° disturbance input quickly and smoothly. The transient response showed roughly 2% overshoot and a settling time of about 3.3 seconds.
Figure 6-3: Step response of gyro-based yaw estimation feedback to a 30° disturbance input.
6.4.2 CLS Control
The optical-based response also performed well in some regards, mainly that it remained within ±2.5° of the desired angle, Figure 6-4. However, its transient response to a 30° disturbance now had a 66% overshoot and a settling time of about 9.3 seconds. The large overshoot, settling time, and steady-state oscillations can be attributed to the low sampling rate of the optical sensor. The zero-order hold was implemented to continuously supply a measurement value to the system; otherwise the controller couldn't update its state estimate quickly enough. Unfortunately this also means the controller believes the vehicle is stationary at a fixed yaw angle throughout the whole sampling period, which it may then try to overcompensate for.
As an example, consider Figure 6-4. At t = 0s, the sensor reports a 30° yaw angle, and the controller commands the robot to rotate and correct the error. In the middle of this rotation, the sensor then reports a 20° angle. In reality this angle is continuously decreasing, but due to the ZOH, the controller believes the robot has briefly stopped at 20° for 0.3sec (the CLS sampling period). This error builds up during those 0.3 seconds, and the controller sends a command to the pumps to rotate with more force. As this happens throughout the system response, the vehicle overshoots its target and exhibits small steady-state oscillations. A faster sampling rate is needed to correct these overshoots.

Figure 6-4: Step response of CLS-based yaw estimation feedback to a 30° disturbance input.
Chapter 7
Sensor Fusion
In this chapter we discuss fusing the data signals from the CLS and gyro to better
estimate the yaw angle at all times.
7.1 Introduction to Sensor Fusion
Techniques to integrate multiple sensor signals for estimating otherwise difficult signals are referred to as sensor fusion, an important concept in robotics. The resulting information is in some sense better than these sensors can provide individually: the estimate can be more accurate, more complete, or more dependable. Stereoscopic vision uses sensor fusion, as it calculates depth information by combining two-dimensional images from two cameras at slightly different viewpoints. However, the data sources for a fusion process are not required to originate from identical sensors [34].
There are many reports of using an IMU to improve the accuracy of other sensors, particularly with GPS [35], [36]. In [37] a GPS/IMU integration is proposed for an autonomous land vehicle, estimating the position and velocity of the vehicle by feeding a multisensor Kalman filter directly with the accelerations measured from the IMU. The integration of a Doppler velocity log sensor with an IMU for an underwater vehicle was studied in [38], [39] using a Kalman filter.
7.2 Gyro with Optical Control
As shown in the previous chapter, integrating the gyro signal gives us the yaw angle, with bias. This yaw angle is also measured by the CLS directly. Therefore, the gyro signal and the CLS signal must be related. Exploiting this relationship we can obtain a better estimate of the yaw angle than that obtained from either sensor alone. The CLS-based yaw measurement is limited to steady state when the robot is not rotating, and simply integrating the gyro signal produces drift that can grow over time. However, the gyro signal provides a reliable estimate when the robot is rotating. Therefore, the two sensor modalities can supplement each other.

In our case, the CLS-based measurement is valid only for steady state or at low frequencies, while the gyro-integration is valid for high frequencies. It is rational to combine both sensors sharing different frequency ranges. Figure 7-1 shows a frequency-division sensor fusion filter that integrates the CLS-based yaw measurement ψ_opt with the gyro signal ψ̇_gyro.
Figure 7-1: Example sensor fusion diagram to estimate the yaw angle, ψ.
The gyro signal goes through an integrator block before reaching the yaw estimate ψ̂, while the optical reading is fed into the filter with Proportional-and-Integral control. The output estimate is given by

ψ̂(s) = [s ψ̇_gyro(s) + (k_p s + k_i) ψ_opt(s)] / (s² + k_p s + k_i).    (7.1)

Note that, at low frequencies, (s → 0), only the CLS-based measurement ψ_opt
shows up at the output estimate. Although the gyro signal contains some bias or drift, it is filtered out in the output estimate. In contrast, at high frequencies, (s → ∞), the CLS signal vanishes. Since the gyro sensor input is a derivative, ψ̇_gyro = s ψ_gyro, we can rewrite the transfer function, Equation 7.1, as

ψ̂(s) = [s² ψ_gyro(s) + (k_p s + k_i) ψ_opt(s)] / (s² + k_p s + k_i),    (7.2)
so at high frequencies, the gyro signal dominates the output estimate.
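One way to realize the filter of Equation 7.1 in discrete time is the feedback form sketched below: the gyro rate drives the estimate every step while the PI terms pull it toward the CLS angle. The structure reproduces the transfer function, but the gains and the forward-Euler integration are illustrative assumptions, not the values tuned in Matlab.

```python
class GyroCLSFusion:
    """Frequency-division fusion of Equation 7.1 in discrete time (forward-Euler sketch).

    The gyro rate drives the estimate at every step; the slower CLS angle pulls the
    estimate back through the PI terms, removing the gyro's integration drift.
    k_p and k_i here are illustrative values.
    """

    def __init__(self, dt, k_p=4.0, k_i=4.0):
        self.dt = dt
        self.k_p = k_p
        self.k_i = k_i
        self.psi_hat = 0.0        # fused yaw estimate
        self.error_integral = 0.0

    def update(self, gyro_rate, cls_yaw):
        error = cls_yaw - self.psi_hat
        self.error_integral += error * self.dt
        psi_dot = gyro_rate + self.k_p * error + self.k_i * self.error_integral
        self.psi_hat += psi_dot * self.dt
        return self.psi_hat

# 50Hz loop: pass the latest gyro rate and the zero-order-held CLS angle each step
fusion = GyroCLSFusion(dt=1.0 / 50.0)
estimate = fusion.update(gyro_rate=0.5, cls_yaw=0.2)
```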
Note that, if both the CLS measurement and gyro-integration signal were perfect and consistent, i.e. ψ_gyro = ψ_opt = ψ_perfect, the output estimate would become

ψ̂(s) = [(s² + k_p s + k_i) / (s² + k_p s + k_i)] ψ_perfect(s) = ψ_perfect(s).    (7.3)
Therefore, the sensor fusion filter transmits the correct sensor signal to the output
estimate for all frequencies.
Adding this frequency-division filter to our system, we get the block diagram
shown in Figure 7-2. This increases the order of the total system and introduces two
new gains. To set these gains, we modeled the entire system in Matlab where we
could graphically tune the gains to achieve an overdamped step response, ideally.
7.3 Results
Figure 7-3 shows the same disturbance test conducted previously, but now with a fused CLS-gyro controller. Like the CLS-only controller, it removes estimation drift and remains within ±2.4°. However, now thanks to the high-frequency gyro readings, the overshoot has dropped to 33% (as opposed to 66%) and the transient settles in about 6.0 seconds (compared to 9.3sec).
The CLS calculates the yaw angle of the vehicle with respect to an external surface with great accuracy but at a low frequency. In parallel, the gyro measures the inertial angular velocities of the vehicle at a high frequency but is susceptible to bias and error build-up. Both sensor data are fused in a system that is based on a combination of low- and high-pass filters.

Figure 7-2: Closed-loop yaw-control using a gyro for direct rotation rate (ψ̇) feedback fused with the CLS angle measurements (ψ_opt).
Figure 7-3: Step response of fused gyro-CLS yaw estimation feedback to a 30° disturbance input.
The experimental results prove that the yaw angle estimated by the fused controller can be successfully used as the feedback input in a closed-loop heading control system. In addition, the experimental results show that the system is able to calculate a satisfactory estimate of the yaw angle even though there is no update measurement from the CLS for a period of approximately 0.3 seconds, which is a significant delay considering an error convergence requirement of ±2° precision in a dynamically unstable environment.
Chapter 8
Conclusion
8.1 Overview
In this report we have presented and verified the design and development of a unidirectional optical orientation sensor for fine positioning of highly maneuverable underwater inspection robots. The sensor consists of a configured camera-laser system to geometrically estimate distances to single points on a surface. By aggregating and analyzing several data points from multiple lasers, while considering robot dynamics measured by on-board inertial measurement units, an estimate of the robot's distance and orientation angles can be determined.
A prototype sensor based on these principles was designed and evaluated. The sensor was shown experimentally to achieve highly accurate distance estimates (±1mm) at close ranges within 270mm and yaw rotation estimates of ±2° within the range of ±30°, making it highly useful for fine distance and orientation positioning. This
type of sensor design is capable of the close-range measurements needed for underwater infrastructure inspection and provides a valuable case study for the techniques
and tools outlined in this paper. Using this sensor, an underwater robot exploring a
complex environment can estimate its orientation relative to a surface in real-time,
allowing the robot to avoid collisions with the sensitive environment, or maintain a
desired orientation while autonomously tracking objects of interest.
We have also exhibited the successful integration of a gyro with the proposed camera-laser sensor on an autonomous surface vehicle. The fused estimate of the two sensors resulted in better dynamic performance than either sensor alone. The optical sensor corrects the unbounded position error of the gyro measurements, with the added benefit of external feedback to avoid collisions in dynamic environments. The gyro provides high-frequency orientation estimation in between optical measurements, greatly reduces transient behavior, and generally smoothens vehicle motion.
In summary, the five functional requirements outlined in Chapter 1 have been met.
The proposed camera-laser sensor currently measures distance and yaw and can easily
be extended to detect pitch, the final degree-of-freedom of interest. The sensor is also
capable of making measurements on featureless surfaces by creating its own points
of interest, and it meets the fine accuracy requirements within the desired working
range for fine positioning. The device is compact and lightweight, particularly as it
uses the same camera already on board for visual inspection. Finally its measurement
readings can provide stable motion control of an inspection vehicle when fused with
high-speed inertial sensors.
8.2 Potential Research Directions
This remains an area of exciting research, and there exist many directions for future
work. The most critical areas for further development on the CLS are 1) distance
control and 2) pitch estimation.
As we have mentioned in previous chapters, closed loop distance control is a necessity due to the challenges with focusing a camera or other sensors on an underwater
feature. Distance control would follow the same basic framework as presented in
this thesis, except the controller must fuse optical distance readings with acceleration measurements from the IMU. One potential problem with this is the noise and
coupling present in accelerometer readings and the difficulty in distinguishing true
linear acceleration from angular acceleration and gravity effects. Furthermore the
acceleration measurements would require a double integrator to estimate distance,
compounding the accumulated error.
Designing the sensor for pitch estimation is less challenging as it follows the same
principles as the horizontal case. However because pitching lowers parts of the vehicle
underwater, a fully submersible vehicle must be designed.
Again this vehicle can
follow the same design principles as the CCSV, but special attention must be paid
when designing the housing for the optical sensor. The lasers and camera must have
an unobstructed view of the environment while still being sealed from the water.
Depending on how quickly the water attenuates the laser beam, stronger laser diodes
may be needed to ensure the sensor still works throughout the necessary range. Aside
from this, after being calibrated underwater, the CLS should work as well as in air. It
is possible the laser beams could be deflected due to waves or temperature differentials
in the water. This should not affect the sensor estimation, however, since the CLS
computes distances at specific points. After the beam is deflected and lands on a
surface, the CLS computes the distance to that visible spot.
Bibliography
[1] J. Ramirez, R. Vasquez, L. Gutierrez, and D. Florez, "Mechanical/naval design
of an underwater remotely operated vehicle (rov) for surveillance and inspection
of port facilities." Proc. of the 2007 ASME International Mechanical Engineering
Congress and Exposition, 2007.
[2] J.-K. Choi, H. Sakai, and T. Tanaka, "Autonomous towed vehicle for underwater
inspection on a port area." Proc. of the 2005 IEEE International Conference on
Robotics and Automation, 2005.
[3] P. Ridao, M. Carreras, D. Ribas, and R. Garcie, "Visual inspection of hydroelectric dams using an autonomous underwater vehicle," Journal of Field Robotics,
vol. 27, no. 6, pp. 759-778, 2010.
[4] A. Halme, M. Vanio, P. Appelqvist, P. Jakubik, T. Schonberg, and A. Visala,
"Underwater robot society doing internal inspection and leak monitoring of water
systems," Proc. of the 1997 SPIE, vol. 3209, 1997.
[5] B. Bingham, B. Foley, H. Singh, R. Camilli, K. Delaporta, R. Eustice, A. Mallios,
D. Mindell, C. Roman, and D. Sakellariou, "Robotic tools for deep water archaeology: Surveying an ancient shipwreck with an autonomous underwater vehicle,"
Journal of Field Robotics, vol. 27, no. 6, pp. 702-717, 2010.
[6] K. Koji, "Underwater inspection robot - airis-21," Nuclear Engineering and Design, vol. 188, 1999.
[7] B.-H. Cho, S.-H. Byun, C.-H. Shin, J.-B. Yang, S.-I. Song, and J.-M. Oh, "Keprovt: Underwater robotic system for visual inspection of nuclear reactor internals," Nuclear Engineering and Design, vol. 231, 2004.
[8] Fourwinds, "West coast usa in danger if japan nuclear reactor meltdown." Website, 2011. http://www.fourwinds10.net.
[9] J. R. Davis, Corrosion of Weldments, ch. 10: Weld Corrosion in Specific Industries and Environments, pp. 183-188. ASM International, 2006.
[10] A. Mazumdar, M. Lozano, A. Fittery, and H. Asada, "A compact, maneuverable,
underwater robot for direct inspection of nuclear power piping systems," Robotics
and Automation (ICRA), 2012 IEEE International Conference on, pp. 2818-2823, May 2012.
[11] R. Poyneer, "Design and evaluation of a multi-surface control system for the ccv
b-52," Journal of Aircraft, vol. 12, no. 3, pp. 135-138, 1975.
[12] J. Anderson and N. Chhabra, "Maneuvering and stability performance of a
robotic tuna," Integrative and Comparative Biology, vol. 42, no. 1, pp. 118-126,
2002.
[13] M. Triantafyllou and F. Hover Maneuvering and Control of Marine Vehicles,
2003. Cambridge, MA: MIT Department of Ocean Engineering.
[14] J. Kirshner Design Theory of Fluidic Components, 1975. New York: Academic
Press.
[15] A. Metral and F. Zerner, "L'effet Coanda," Publications Scientifiques et Techniques du Ministère de l'Air, 1948.
[16] R. Willie and H. Fernholz, "Report on first european mechanics colloquim on
coanda effect," Journal of Fluid Mechanics, vol. 23, no. 4, pp. 801-819, 1965.
[17] E. Natarajan and N. Onubogu, "Application of coanda effect in robots- a review,"
Mechanical Engineering and Technology, vol. 125, no. 11, pp. 1111-1121, 2012.
[18] Y. Xu, I. Hunter, and J. M. Hollerbach, "A portable air jet actuator device for
mechanical system identification," IEEE Transactions on Biomedical Engineering, vol. 38, 1991.
[19] T. Fossen Handbook of Marine Craft Hydrodynamics and Motion Control, 2011.
United Kingdom: Wiley and Sons.
[20] L. Bjorno, "Developments in sonar and array technologies," Underwater Technology (UT), 2011 IEEE Symposium on and 2011 Workshop on Scientific Use
of Submarine Cables and Related Technologies (SSC), pp. 1-11, April 2011.
[21] X. Liu, W. Zhu, C. Fang, W. Xu, F. Zhang, and Y. Sun, "Shallow water high
resolution bathymetric side scan sonar," OCEANS 2007 - Europe, pp. 1-6, June
2007.
[22] R. Wicks, "SONAR Versus Laser for Underwater Measurement: A comparison
study." CIDCO Connference, 2013.
[23] C. Yuzbasioglu and B. Barshan, "A new method for range estimation using
simple infrared sensors," Intelligent Robots and Systems, 2005. (IROS 2005).
2005 IEEE/RSJ InternationalConference on, pp. 1066-1071, Aug 2005.
[24] G. Taraldsen, T. Reinen, and T. Berg, "The underwater gps problem," OCEANS,
2011 IEEE - Spain, pp. 1-8, June 2011.
[25] N. Drawil, H. Amar, and O. Basir, "A solution to the ill-conditioned gps accuracy
classification problem: Context based classifier," GLOBECOM Workshops (GC
Wkshps), 2011 IEEE, pp. 1077-1082, Dec 2011.
[26] H. Morioka, S. Yi, and O. Hasegawa, "Vision-based mobile robot's slam and
navigation in crowded environments," Intelligent Robots and Systems (IROS),
2011 IEEE/RSJ International Conference on, pp. 3998-4005, Sept 2011.
[27] F. Li, R. Tang, C. Liu, and H. Yu, "A method for object reconstruction based
on point-cloud data via 3d scanning," Audio Language and Image Processing
(ICALIP), 2010 International Conference on, pp. 302-306, Nov 2010.
[28] D. Ionescu, V. Suse, C. Gadea, B. Solomon, B. Ionescu, and S. Islam, "An
infrared-based depth camera for gesture-based control of virtual environments,"
Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA), 2013 IEEE International Conference on, pp. 13-18, July 2013.
[29] M. Zaman, "High precision relative localization using a single camera," Robotics
and Automation, 2007 IEEE International Conference on, pp. 3908-3914, April
2007.
[30] Google, "Project tango." Website, 2014. https://www.google.com/atap/projecttango/.
[31] C.-C. Chang, C. Y. Chang, and Y. T. Cheng, "Distance measurement technology
development at remotely teleoperated robotic manipulator system for underwater constructions," Underwater Technology, 2004. UT '04. 2004 International
Symposium, pp. 333-338, April 2004.
[32] C. Wang, S. Shyue, H. C. Hsu, J. S. Sue, and T. C. Huang, "Ccd camera calibration for underwater laser scanning system," OCEANS, 2001. MTS/IEEE
Conference and Exhibition, vol. 4, pp. 2511-2517, 2001.
[33] G. Franklin, D. Powell, and A. Emami-Naeini Feedback Control of Dynamic
Systems, 2010. New Jersey: Prentice-Hall.
[34] G. Franklin, D. Powell, and M. Workman Digital Control of Dynamic Systems,
1990. United States: Addison-Wesley.
[35] S. Sukkarieh, "Low cost, high integrity, aided inertial navigation systems for
autonomous land vehicles," 2000. PhD Thesis, University of Sydney.
[36] E. Shin, "Accuracy improvement of low cost ins/gps for land applications," 2001.
PhD Thesis, University of Calgary.
[37] F. Caron, E. Duflos, D. Pomorski, and P. Vanheeghe, "Gps/imu data fusion using
multisensor kalman filtering: introduction of contextual aspects," Information
Fusion 7, pp. 221-230, 2006.
[38] C. moo Lee, S.-W. Hong, and W.-J. Seong, "An integrated dvl/imu system
for precise navigation of an autonomous underwater vehicle," OCEANS 2003
Conference Proceedings, 2003.
[39] C.-M. Lee, P.-M. Lee, S.-W. Hong, and S.-M. Kim, "Underwater navigation
system based on inertial sensor and doppler velocity log using indirect feedback
kalman filter," International Journal of Offshore and Polar Engineering, vol. 15, pp. 88-95, 2005.