Developing a Mobile Stereo Vision System for
Accurate Target Triangulation
Alex Z. Zhu
Duke University
Department of Electrical and Computer Engineering
Department of Computer Science
Advisor: Dr. Michael Zavlanos
Contents

1 Introduction
2 Robot Operating System
3 iRobot Create
4 OptiTrack System
5 Closed Loop Motion Controllers
6 Stereo Camera System
7 Camera Calibration
8 Target Detection
9 Target Triangulation
10 Next Best View Algorithm for Stereo Vision
11 Algorithm Simulations
12 Compensating for a Non-Holonomic Base
13 Algorithm Implementation
14 Future Work
15 Acknowledgements
Abstract

In order for mobile robots to operate truly independently of human operators, they must be able to accurately observe their environments and make informed decisions based on this information. One of the fundamental tasks that a robot must perform in order to understand its surroundings is target localization, that is, determining the locations of the objects around it. Using this information, the robot can generate a map of its surroundings, find specific objects, and even determine its own location.

In this report, I demonstrate an autonomous mobile robot system that is capable of recognizing unique markers within its environment using a stereo camera system, and of obtaining triangulation estimates accurate to within a centimeter of ground truth. I then use this stereo camera system to implement a stereo vision algorithm which aims to reduce the quantization noise of the system by solving the next best view problem.
1 Introduction
One of the major problems in robotics is determining a robot's state with respect to its surroundings. In order for a robot to move around and interact with its environment, it must know where it is and where the objects around it are. This process, known as simultaneous localization and mapping (SLAM), has been heavily researched, and there are many solutions using a variety of sensors, such as sonar [1], stereo vision [2] and laser range finders [3].

In this work, I focus on the target triangulation aspect of sensing, where the robot must accurately find the positions of objects around it with respect to itself. Using a calibrated stereo vision system, these positions can be
calculated from the difference in pixel coordinates of the object in each camera’s image. However, there
exists a substantial amount of error introduced into this triangulation estimate due to the discrete nature of
pixels. Because of this error, the reliability of traditional stereo vision systems drops quickly with distance [7].
My contribution through this project has been to develop a mobile stereo vision platform that is able to autonomously triangulate targets in its environment. With this system, we plan to design and implement algorithms that reduce the error in stereo measurements over long distances, to improve the viability of stereo vision in robotic sensing. We will then be able to implement existing algorithms for triangulation and SLAM, as well as experimentally validate work done in this field in the future. While there are many research grade stereo cameras on the market [4, 5, 6], the current options all feature a relatively small baseline, which limits their use when detecting distant targets. As our goal is to utilize the stereo vision system in outdoor environments, tracking distant targets will be crucial for our robot to navigate. Through this work, I have also developed a large code base of libraries and packages that allows any future researcher in the lab to use most of its robotic systems with minimal effort.
One problem of note is the next best view problem, which requires a robot to determine the next best
position from which to observe a target in order to minimize triangulation error [8]. In this report, I will
demonstrate an implementation of a solution to the next best view problem for stereo vision systems.
Through my research, I have learned an enormous amount about the fields of robotics and computer vision, and developed a broad set of skills in implementing physical autonomous systems.
I will also take with me a large code base which will undoubtedly be useful in my future research.
2 Robot Operating System
The Robot Operating System (ROS) is a framework developed by Willow Garage to facilitate the creation of robotic systems. ROS consists of a central master node which coordinates communication between all of the other nodes. It provides a communication infrastructure in which each node can publish information over a bus (a topic) that is accessible to every other node connected to the same master. It also has a high level of support for coordinate transforms, which are used frequently in robotics, as well as a large number of libraries with useful functions for many robotic systems.

These features have allowed us to structure our code very modularly, with each robotic subsystem, such as the cameras, represented as its own node. Each subsystem can therefore be switched in and out depending on the application, allowing us to achieve an extremely high level of code reuse, with the components communicating with each other through the ROS master.

Currently, all of our systems have ROS packages that we have written for them, and each can be controlled simply by running the relevant package as a ROS node and communicating with it over a topic.
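As an illustration of this publish/subscribe pattern, a minimal ROS node in C++ looks like the following. The node and topic names here are placeholders, not names from our actual packages.

```cpp
// Minimal sketch of a ROS node that publishes to a topic.
// Node and topic names are illustrative only.
#include <ros/ros.h>
#include <std_msgs/String.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "example_publisher");   // register with the ROS master
  ros::NodeHandle nh;

  // Advertise a topic; any other node connected to the same master can subscribe.
  ros::Publisher pub = nh.advertise<std_msgs::String>("example_topic", 10);

  ros::Rate rate(10);  // publish at 10 Hz
  while (ros::ok())
  {
    std_msgs::String msg;
    msg.data = "hello from a subsystem node";
    pub.publish(msg);
    rate.sleep();
  }
  return 0;
}
```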
3 iRobot Create
Our main experimental robotic system is the iRobot Create (Figure 1), a development platform from iRobot that is very similar to their commercial Roomba units with the vacuum component removed. The robot's pose has three degrees of freedom (position and heading in the plane), but it can only move forwards or backwards while rotating, or rotate on the spot. Thus, the Creates are not fully holonomic, but they can reach any position in the 2D plane through a combination of pure rotations and pure translations.
Figure 1: iRobot Create
Creates have been widely distributed by Willow Garage as part of their Turtlebot [9] systems, so there is a lot of support for these robots in ROS. As a result, much of the low level control of the robot (e.g., motor speeds) has been abstracted away, and there is an API which allows us to drive the robot with a linear and an angular velocity. In this work, we have used this package to create a Create controller object which will move the Create to any given global position using the equations in the next section.
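Concretely, commanding the Create through this API amounts to publishing a linear and an angular velocity as a geometry_msgs/Twist message. The sketch below assumes the common "cmd_vel" topic name, which may differ in a given driver configuration.

```cpp
// Sketch of commanding the Create over ROS with a linear and angular velocity.
// The topic name "cmd_vel" is a common convention and may differ per driver.
#include <ros/ros.h>
#include <geometry_msgs/Twist.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "create_velocity_example");
  ros::NodeHandle nh;
  ros::Publisher cmd_pub = nh.advertise<geometry_msgs::Twist>("cmd_vel", 1);

  ros::Rate rate(10);
  while (ros::ok())
  {
    geometry_msgs::Twist cmd;
    cmd.linear.x  = 0.2;   // forward velocity [m/s]
    cmd.angular.z = 0.5;   // rotation rate [rad/s]
    cmd_pub.publish(cmd);
    rate.sleep();
  }
  return 0;
}
```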
4 OptiTrack System
The OptiTrack system in our lab consists of 24 infra-red cameras (Figure 2) that shine IR light onto IR-reflective markers placed on each of our robots. By triangulating each marker from every camera and performing a least squares minimization on the resulting positions, we obtain live pose data for each robot with sub-millimeter accuracy at a frame rate of up to 100 Hz. This data gives us very accurate ground truth positions for each robot and target, and also allows us to implement a number of simple feedback controllers to move our robots within the room.

The OptiTrack system outputs this position data over WiFi using the VRPN protocol [22]. This information is received by a ROS node and published to the master. We have developed a ROS object to pull this data from the master and distribute it to each node in our system. Thus, each of our robots knows exactly where the others are, allowing us to triangulate objects in the global (OptiTrack) frame, maneuver around other robots, and perform high-frequency closed-loop motion control.
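A node that consumes this pose stream is then just a standard subscriber. The topic name below is a hypothetical example of how the VRPN client might be configured in our setup.

```cpp
// Sketch of a node that consumes OptiTrack pose data republished over ROS.
// The topic name "/optitrack/create1/pose" is hypothetical.
#include <ros/ros.h>
#include <geometry_msgs/PoseStamped.h>

void poseCallback(const geometry_msgs::PoseStamped::ConstPtr& msg)
{
  ROS_INFO("Create at (%.3f, %.3f, %.3f)",
           msg->pose.position.x, msg->pose.position.y, msg->pose.position.z);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "optitrack_listener");
  ros::NodeHandle nh;
  ros::Subscriber sub =
      nh.subscribe("/optitrack/create1/pose", 10, poseCallback);
  ros::spin();  // process incoming pose messages as they arrive
  return 0;
}
```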
Figure 2: OptiTrack Camera with Hub
5 Closed Loop Motion Controllers
In order to autonomously control the Creates, we have implemented a simple potential-function-based controller to convert a position target into a set of linear and angular velocity commands with which we can directly control the robot. First, we define a potential function over the 2D space in which the robot can travel:
\[
  \varphi(x, y) = (x - c_x)^2 + (y - c_y)^2
\]
where
\[
  \vec{c} = \begin{bmatrix} c_x \\ c_y \end{bmatrix}
\]
is the target position to which the robot must travel (Figure 3).
Figure 3: Potential Function for Autonomous Navigation Centered at [1,1]
We can then perform gradient descent from the robot's current position within this potential field, which allows us to follow the gradient of the function down to the nearby minimum, which is the target position for the robot. We also normalize this velocity and scale it by a constant k, which sets the linear speed of the robot as it moves.
Thus, the velocities that we must give the robot are:
\[
  \begin{bmatrix} v_x \\ v_y \end{bmatrix}
  = -k
  \begin{bmatrix}
    \dfrac{x - c_x}{\sqrt{(x - c_x)^2 + (y - c_y)^2}} \\[2.5ex]
    \dfrac{y - c_y}{\sqrt{(x - c_x)^2 + (y - c_y)^2}}
  \end{bmatrix}
\]
Finally, we convert the velocity vector into a linear and angular velocity, which can directly be fed into the
Create’s motor controller to control its motion.
We model the Create as a unicycle, which has a kinematic model of:
\[
  \dot{x} = v \cos(\theta), \qquad \dot{y} = v \sin(\theta), \qquad \dot{\theta} = \omega
\]
By computing the inverse of this equation, we can obtain the linear and angular velocities for the Create:
\begin{align*}
  v^2 \cos^2(\theta) + v^2 \sin^2(\theta) &= \dot{x}^2 + \dot{y}^2 \\
  v &= \sqrt{\dot{x}^2 + \dot{y}^2} \\
  \cos(\theta) &= \frac{\dot{x}}{\sqrt{\dot{x}^2 + \dot{y}^2}}, \qquad
  \sin(\theta) = \frac{\dot{y}}{\sqrt{\dot{x}^2 + \dot{y}^2}} \\
  \tan(\theta) &= \frac{\dot{y}}{\dot{x}} \\
  \theta &= \operatorname{atan2}\!\left(\dot{y}, \dot{x}\right) \\
  \omega &= \dot{\theta} = \frac{\ddot{y}\dot{x} - \ddot{x}\dot{y}}{\dot{x}^2 + \dot{y}^2}
\end{align*}
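Putting the controller together, the sketch below computes the normalized gradient-descent velocity and converts it to unicycle commands. Note that, for simplicity, it uses a proportional controller on the heading error for ω rather than the analytic θ̇ expression above, and the gains are illustrative.

```cpp
// Minimal sketch of the potential-field controller: given the robot pose
// (x, y, theta) and a target (cx, cy), compute the gradient-descent velocity
// and convert it to unicycle commands (v, omega). Gains are illustrative.
#include <cmath>

struct UnicycleCmd { double v; double omega; };

UnicycleCmd potentialFieldControl(double x, double y, double theta,
                                  double cx, double cy,
                                  double k = 0.3, double k_omega = 1.0)
{
  // Negative normalized gradient of phi = (x - cx)^2 + (y - cy)^2, scaled by k.
  double dx = cx - x, dy = cy - y;
  double dist = std::sqrt(dx * dx + dy * dy);
  if (dist < 1e-6) return {0.0, 0.0};          // already at the target

  double vx = k * dx / dist;
  double vy = k * dy / dist;

  // Unicycle conversion: v is the speed along the heading, omega turns the
  // robot towards the desired velocity direction (proportional heading control).
  UnicycleCmd cmd;
  cmd.v = std::sqrt(vx * vx + vy * vy);
  double heading_error = std::atan2(vy, vx) - theta;
  heading_error = std::atan2(std::sin(heading_error), std::cos(heading_error));
  cmd.omega = k_omega * heading_error;
  return cmd;
}
```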
6 Stereo Camera System
Our camera system had a number of requirements. Firstly, the cameras needed to be small enough to be mounted on our robots, while offering enough resolution to maximize the depth at which the robot could perceive targets. They also needed a wide field of view to maximize the overlap between the two images, a global shutter to allow true synchronization between the two cameras, and a high frame rate for observing fast moving targets.
After a significant amount of research, we decided to use the Point Grey Flea 3 USB 3.0 cameras [10] (Figure 4). These cameras meet all of the above requirements, and also support USB 3.0, which allows for easy integration with most modern systems while providing a high data transfer rate. The cameras also provide a GPIO port through which we have been able to synchronize our camera pair with an external trigger.

Figure 4: A Pair of Flea 3 USB 3.0 Cameras
Point Grey also provides a C++ API for communicating with their cameras, named FlyCapture [11]. This API contains functions to connect to the cameras, manage synchronization, capture images, and much more. We have used FlyCapture as the basis of a set of drivers for controlling the cameras through ROS. With our drivers, we can connect to any FlyCapture compatible camera, perform image acquisition, and send the resulting images as ROS messages to any ROS node. Through ROS, we have also been able to offload the undistortion and rectification of the images to another ROS node. With this package, we can integrate new cameras simply by running the relevant ROS nodes.
In order to rigidly mount the cameras and fix their relative positions, we have designed an acrylic base to which both cameras are attached, with a distance of 20 cm between them. This distance, known as the baseline, determines how far away a target can be before triangulation estimates begin to fail due to quantization noise, and so we have tried to make it as large as possible.
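To see why the baseline matters, a quick back-of-the-envelope calculation relates depth error to the baseline: for a rectified pair, depth is Z = bf/d, so a one-pixel disparity error produces roughly Z²/(bf) of depth error. The focal length value below is illustrative, not our calibrated value.

```cpp
// Back-of-the-envelope sketch of depth quantization error versus range.
// With Z = b*f/d, a disparity error Delta_d gives roughly Z^2/(b*f) * Delta_d
// of depth error. The focal length here is illustrative.
#include <cstdio>

int main()
{
  double baseline = 0.20;    // b, metres (our acrylic mount)
  double focal_px = 800.0;   // f, pixels (illustrative)
  double delta_d  = 1.0;     // one pixel of quantization error

  for (double Z = 1.0; Z <= 5.0; Z += 1.0)
  {
    double dZ = Z * Z / (baseline * focal_px) * delta_d;
    std::printf("Z = %.1f m -> depth uncertainty ~ %.3f m\n", Z, dZ);
  }
  return 0;
}
```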
7 Camera Calibration
In order to implement an accurate stereo vision system, we must first calibrate the cameras to remove radial distortion [13] from the images and rectify the image pair so that they fulfill the epipolar constraint [14]. To perform this rectification, we used the MATLAB camera calibration toolbox developed by Dr. Jean-Yves Bouguet [15]. The package provides an automated calibration procedure which computes the intrinsic parameters of each camera by analyzing the corners of a checkerboard pattern (Figure 5) across a large number (15+) of images. We then perform rectification in a similar manner by combining the calibration results of both cameras.

Figure 5: Checkerboard Used for Camera Calibration
From this method, we obtain four matrices for each camera, representing its intrinsic parameters, distortion coefficients, rectification matrix and projection matrix after rectification. We can then pass these parameters into the stereo image processing package [17] provided by ROS to perform the necessary operations.
With this system in place, we are able to quickly calibrate new cameras to obtain the relevant matrices, and
simply stream images to the ROS image processing node which streams back a pair of rectified images.
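For illustration, the sketch below shows how calibration output of this form is typically applied with OpenCV; in our system the equivalent remapping is performed by the ROS stereo image processing node [17]. The variable names are placeholders for the matrices loaded from the calibration results.

```cpp
// Sketch of applying stereo calibration results with OpenCV, assuming the
// intrinsics (K), distortion coefficients (D), and stereo rotation/translation
// (R, T) from the calibration toolbox have been loaded into cv::Mat objects.
#include <opencv2/opencv.hpp>

void rectifyPair(const cv::Mat& K1, const cv::Mat& D1,
                 const cv::Mat& K2, const cv::Mat& D2,
                 const cv::Mat& R,  const cv::Mat& T,
                 const cv::Size& imageSize,
                 const cv::Mat& leftRaw, const cv::Mat& rightRaw,
                 cv::Mat& leftRect, cv::Mat& rightRect)
{
  cv::Mat R1, R2, P1, P2, Q;
  // Compute the rectification rotations and new projection matrices.
  cv::stereoRectify(K1, D1, K2, D2, imageSize, R, T, R1, R2, P1, P2, Q);

  // Build undistort/rectify maps for each camera and remap the raw images.
  cv::Mat map1x, map1y, map2x, map2y;
  cv::initUndistortRectifyMap(K1, D1, R1, P1, imageSize, CV_32FC1, map1x, map1y);
  cv::initUndistortRectifyMap(K2, D2, R2, P2, imageSize, CV_32FC1, map2x, map2y);
  cv::remap(leftRaw,  leftRect,  map1x, map1y, cv::INTER_LINEAR);
  cv::remap(rightRaw, rightRect, map2x, map2y, cv::INTER_LINEAR);
}
```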
The second form of calibration that we must perform is extrinsic calibration, also known as hand-eye calibration. Traditionally, this is used to determine the transformation between the hand holding a camera and the camera itself. In our case, our estimate of the camera position from the OptiTrack markers is initially determined by eye. Thus, there is around one degree of error in the rotation measurement, and potentially a few centimeters of error in the position measurement. These errors, while small, can cause much larger errors when projected out into space, especially for distant targets.
To perform this calibration, we used an add-on to the Bouguet toolbox developed at ETH Zurich [16], which
determines this transformation between the OptiTrack camera position and the true camera position.
8 Target Detection
In our experiments, we initially used colored table tennis balls (Figure 6) as targets. The balls' spherical shape allowed for quick calculation of their centroids from any viewing position, and their distinct colors allowed us to use color segmentation techniques to uniquely identify and track each ball individually. In our image processing stages, we have used the OpenCV [12] (Open Source Computer Vision) C++ library to perform each processing step.
Figure 6: Colored balls used as markers
After the cameras have taken their images, we convert the images from the Red-Green-Blue (RGB) space into the Hue-Saturation-Value (HSV) space. In the HSV space, a single channel (hue) determines what we perceive as the color of the image, allowing for much easier segmentation of specific colors. By determining the hue range that corresponds to the color of each ball, we are able to zero out any pixels outside of this range using the OpenCV inRange function. This results in a binary image with 1s at the pixels that match our desired colors.
Due to other elements in our lab environment that may share similar hues with our balls, we found that this process also includes a number of outlier pixels that do not belong to the targets. Thus, we implemented the Hough Circle Transform in OpenCV to identify any circles in the binary image. By tweaking the parameters of the transform, our algorithm can uniquely identify the circle that most likely represents our target (Figure 7) or, if no such target exists, return an error value that the rest of the algorithm can use to ignore that target for this iteration.

Figure 7: Detected Ball after Hue Thresholding
Because our camera images have been fully rectified [14], any point in space that is projected onto the left camera must lie in the same row (have the same y value) in the right camera's image. Thus, we can limit our search in the right image to only the row of pixels in which we found the target in the left image. This greatly reduced the time needed for target detection, while also greatly reducing the number of false positives.
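A minimal sketch of this detection pipeline in OpenCV is shown below. The hue bounds and Hough transform parameters are illustrative stand-ins for the values we tuned per ball, not our exact settings.

```cpp
// Sketch of the ball detection pipeline: HSV conversion, hue thresholding
// with inRange, and the Hough circle transform. Parameters are illustrative.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Vec3f> detectBall(const cv::Mat& bgrImage,
                                  const cv::Scalar& hsvLow,
                                  const cv::Scalar& hsvHigh)
{
  cv::Mat hsv, mask;
  cv::cvtColor(bgrImage, hsv, cv::COLOR_BGR2HSV);    // RGB -> HSV
  cv::inRange(hsv, hsvLow, hsvHigh, mask);           // keep only the ball's hue range

  cv::GaussianBlur(mask, mask, cv::Size(9, 9), 2.0); // smooth the binary image before Hough

  // Each detected circle is (x, y, radius); the parameters control the edge and
  // accumulator thresholds and the allowed radius range.
  std::vector<cv::Vec3f> circles;
  cv::HoughCircles(mask, circles, cv::HOUGH_GRADIENT,
                   /*dp=*/1, /*minDist=*/50,
                   /*param1=*/100, /*param2=*/20,
                   /*minRadius=*/5, /*maxRadius=*/100);
  return circles;
}
```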
We have found that this tracking algorithm identified the targets correctly with a high degree of consistency, with a localization uncertainty of 5 cm at a range of 3 m. Under variable lighting conditions, darker areas of a target may not be identified as part of it, and so the perceived centroid of the target shifts as the detected shape of the ball shrinks. We have taken this detection error into account by increasing the uncertainty of each localization step.
However, our current applications for this stereo system assume minimal or zero detection noise, and so the localization uncertainty introduced by not detecting the correct center of the ball exceeds our limits. Thus, we have had to switch to another type of target. Currently, we are using the ARToolKit [18] (Figure 8), produced by the HIT Lab at the University of Washington. This package allows us to track any number of unique markers with sub-pixel detection accuracy. However, due to the planarity of the targets, we are unable to view them from the side or from behind. Thus, we must restrict our robot's motion to positions in front of the targets.

Figure 8: Marker used by the ARToolKit
9 Target Triangulation
As our images have been rectified, the triangulation process is highly simplified compared to general multiview triangulation.
Given the image coordinates of the target in each camera, as well as the baseline and focal lengths of both
cameras, we can calculate the position of the target in the cameras’ reference frame using the following
equations [19]:
\[
  \vec{r}_{\text{relative}} =
  \begin{bmatrix}
    \dfrac{b\,(x_L + x_R)}{2\,(x_L - x_R)} \\[2.5ex]
    \dfrac{b\,y}{x_L - x_R} \\[2.5ex]
    \dfrac{b\,f}{x_L - x_R}
  \end{bmatrix}
\]
This relative translation can then be transformed into the global frame using the position and rotation of
the cameras with respect to the global coordinate frame:
\[
  \vec{r}_{\text{global}} = R_{\text{cameras}} \cdot \vec{r}_{\text{relative}} + \vec{r}_{\text{cameras}}
\]
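These equations translate directly into code. The sketch below assumes rectified pixel coordinates measured relative to the principal point, with the baseline b in meters and the focal length f in pixels.

```cpp
// Direct implementation of the rectified-stereo triangulation equations above.
// (xL, y) and (xR, y) are rectified image coordinates relative to the principal
// point, b is the baseline, and f is the focal length in pixels.
#include <cmath>

struct Point3 { double x, y, z; };

Point3 triangulate(double xL, double xR, double y, double b, double f)
{
  double disparity = xL - xR;   // positive for points in front of the rig
  Point3 p;
  p.x = b * (xL + xR) / (2.0 * disparity);
  p.y = b * y / disparity;
  p.z = b * f / disparity;
  return p;
}

// The global position then follows from the camera pose:
//   r_global = R_cameras * r_relative + r_cameras
```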
10 Next Best View Algorithm for Stereo Vision
One major source of error in stereo vision, particularly with distant objects, is quantization error. Because of the discrete nature of images, a pixel does not represent an infinitely small point, and so a pair of pixels from a stereo camera actually represents a volume when projected out into space, as can be seen in Figure 9. This error grows with distance from the cameras, as each 3D volume grows larger; the volume is also longer than it is wide, meaning that depth estimates are worse than lateral estimates.
We have worked on implementing an algorithm that tries to minimize this error by solving the next best view problem with respect to quantization error. The next best view problem is defined as choosing the best position from which a robot should make its next observation, given the last observation. The algorithm [19] was designed by Charlie Freundlich, a graduate student in our lab, with Dr. Zavlanos and Dr. Philippos Mordohai at the Stevens Institute of Technology.

Figure 9: Quantization Error in Stereo Vision
This algorithm solves the problem by finding the next position that minimizes the triangulation uncertainty of the next observation. It first computes the next best relative position of the cameras with respect to the targets by performing gradient descent on a potential function representing the trace of the triangulation uncertainty. It then calculates the velocity and rotation that would realize this relative position within the camera frame by performing another gradient descent step on a potential function which minimizes the distance between the targets' actual relative position and the desired relative position. Finally, it computes the next pose by performing Euler and Runge-Kutta updates on the position and rotation respectively. With each iteration of the algorithm, observations are fused using a Kalman filter to iteratively improve the triangulation estimate.
Heuristically, we expect the robot to perform two kinds of motion. Firstly, it should move closer to the targets, as this reduces the volume of space in which each target could lie. Secondly, it should move around the targets: because each error volume is longer than it is wide, viewing the targets from different directions overlaps the narrow widths of these volumes, which the Kalman filter fuses into a roughly circular error comparable to the width.
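As an illustration of the fusion step, the sketch below shows a generic static-target Kalman update of the kind described above, using Eigen; it is not the exact implementation from [19].

```cpp
// Minimal sketch of fusing successive triangulation measurements of a static
// target. The state x is the 3D target position, P its covariance, and R the
// (quantization) covariance of each new measurement z. Generic update, not the
// authors' exact implementation.
#include <Eigen/Dense>

void fuseObservation(Eigen::Vector3d& x, Eigen::Matrix3d& P,
                     const Eigen::Vector3d& z, const Eigen::Matrix3d& R)
{
  // Kalman gain for a static target with an identity measurement model.
  Eigen::Matrix3d K = P * (P + R).inverse();
  x = x + K * (z - x);                        // fused position estimate
  P = (Eigen::Matrix3d::Identity() - K) * P;  // fused covariance (never grows)
}
```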
11 Algorithm Simulations
At present, we have all of the components for the mobile stereo vision system running, and have the code for
the next best view algorithm written in C++. We have tested the code in simulation by moving a virtual
robot and obtaining virtual image coordinates by projecting the target positions into each camera. When
running the simulations, we have been able to obtain excellent results, with triangulation errors dropping to
sub-millimeter levels within 20 iterations of the algorithm.
In Figure 10a, we can see that the robot (arrows) successfully moves towards and around the targets (circles) while maintaining them within its field of view. From Figure 10b, we can also see that the triangulation error drops below a millimeter in around 20-30 iterations.
(a) Simulated Robot Path
(b) Simulated Triangulation Estimates
Figure 10: Results from NBV Simulations
12 Compensating for a Non-Holonomic Base
Unfortunately, the algorithm described in the previous section was designed for a fully holonomic system, and so frequently commands velocity vectors that point upwards or perpendicular to the robot's viewing direction. Due to the limitations of the Create, we were unable to realize a large number of the motions required by the algorithm. Initially, we tried to work around this limitation with two approaches.

Firstly, we simply ignored any velocity in the z direction, as these motions are unrealizable by the Create. We were careful not to simply set the z velocity to 0 and renormalize the resulting vector to its original length, as this would disproportionately scale vectors with a large z component. Instead, we ran the position update iterations of the algorithm until the desired position in the xy plane was reached, and then moved the robot towards that position.
In order to realize motions that required the Create to move sideways with respect to its viewing direction, we broke the desired motion down into a series of pure rotations and pure translations. Thus, in order to move the Create directly to the side, we would rotate the Create to face the next desired position, move directly towards that position, and then rotate back towards the desired viewing direction. As this is a much less efficient process than simply moving the robot sideways, this approach required much more time to move between observations, and thus a larger minimum distance between observations. In order to reduce the error introduced by this larger distance, we attempted to take simulated images by projecting the estimated target position onto the cameras, and passing these coordinates through the algorithm once more. Unfortunately, this process proved very time intensive, and the larger distance introduced enough error that the output of the algorithm failed.
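The decomposition itself is straightforward; the sketch below shows one way to compute the rotate/translate/rotate triple from the current and goal poses. It is an illustrative helper, not our exact controller code.

```cpp
// Sketch of the rotate / translate / rotate decomposition used to realize a
// sideways motion with the non-holonomic Create. Angles are in radians, and
// the returned triple is executed as three separate closed-loop motions.
#include <cmath>

struct RTR { double firstRotation; double translation; double finalRotation; };

RTR decomposeMotion(double x, double y, double theta,        // current pose
                    double gx, double gy, double gtheta)     // goal pose
{
  double dx = gx - x, dy = gy - y;
  double headingToGoal = std::atan2(dy, dx);

  // Wrap an angle to (-pi, pi].
  auto wrap = [](double a) { return std::atan2(std::sin(a), std::cos(a)); };

  RTR motion;
  motion.firstRotation = wrap(headingToGoal - theta);        // turn to face the goal
  motion.translation   = std::sqrt(dx * dx + dy * dy);       // drive straight to it
  motion.finalRotation = wrap(gtheta - headingToGoal);       // restore the viewing direction
  return motion;
}
```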
After attempting the previous method, we decided to mount the cameras onto a servo platform to allow them to rotate independently of the robot. This way, the Create can move perpendicular to the cameras' viewing direction, although it still cannot realize instantaneous sideways translations.

We have mounted the cameras to a RobotZone SGP5485ABM-360 360° Rotation Gearbox with a Hitec HS-5485HB servo (Figure 11), obtained from ServoCity. We chose to use the gearbox because the servo alone would not be able to support the torque exerted by the relatively long camera platform, and normal servos do not rotate beyond 180° with positional feedback, so a gear system was required to provide 360° rotation.

Figure 11: Servo Gearbox

We control the servo using a Pololu Micro Maestro servo controller. Pololu provides a standard serial-based API [20] for communicating with the servo, but we have had to write our own driver to wrap the communication with this API in a ROS-friendly interface. This servo driver node allows other nodes to send an actionlib [21] goal that commands the servo to move to a certain angle.
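For reference, sending a goal to such a driver node looks roughly like the following. The package name, action type, and server name ("servo_driver", SetAngleAction, "set_angle") are hypothetical placeholders for our in-lab driver.

```cpp
// Sketch of commanding the servo driver node through actionlib [21].
// The action package, type, and server name are hypothetical placeholders.
#include <ros/ros.h>
#include <actionlib/client/simple_action_client.h>
#include <servo_driver/SetAngleAction.h>  // hypothetical generated action header

int main(int argc, char** argv)
{
  ros::init(argc, argv, "servo_client_example");

  actionlib::SimpleActionClient<servo_driver::SetAngleAction> client("set_angle", true);
  client.waitForServer();                   // block until the driver node is up

  servo_driver::SetAngleGoal goal;
  goal.angle = 1.57;                        // rotate the camera platform 90 degrees
  client.sendGoal(goal);
  client.waitForResult(ros::Duration(5.0)); // wait for the servo to reach the angle
  return 0;
}
```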
13 Algorithm Implementation
Currently, we are able to run our implementation of the next best view algorithm on our mobile stereo vision system. The robot moves along a path (Figure 12a) similar to those generated in simulation, and maintains the targets within its field of view. From this, we have concluded that our code is mostly correct. However, we are observing a strange bug in the Kalman filter, where the filtered triangulation estimate's error spikes much higher than the measurement error at the beginning of the experiment, and remains higher than the measurement error for the majority of the experiment.

In Figure 12b, we can see that there is an initial large spike in the filtered error before it starts to fall and approach the measurement error. However, given the definition of our Kalman filter, the filtered error should be an average of the measurement error and the past filtered error (which is initialized to the initial measurement error), and so should never rise above the measurement error. Because of this behavior, we have been unable to show that our implementation of this algorithm achieves the next best view.
We are working on isolating this issue by removing all other sources of error from the system and closely examining the code. One interesting observation we have made is that this spike does not occur if we use ideal projected image coordinates instead of the experimental ones. We believe that the issue stems from the fact that the covariance estimate for each observation only takes into account the quantization error [19], as the algorithm assumes no other sources of noise in the system. In our real world system, however, we encounter many other sources of noise, such as noise due to the camera, lighting, position estimates, and detection errors. While we have tried to minimize these errors wherever possible, it seems that they are still overwhelming the quantization noise, and thus making our Kalman filter updates highly inaccurate.
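As a sanity check on the expected behavior, the standard information-form Kalman update for a static target with an identity measurement model bounds the filtered covariance by the measurement covariance:
\[
  P_{\text{filtered}} = \left( P_{\text{prior}}^{-1} + R^{-1} \right)^{-1} \preceq R,
\]
so the filtered error should never exceed the reported measurement error. The observed spike is therefore consistent with the measurement covariance R (quantization only) underestimating the true noise in our experiments.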
(a) Experimental Robot Path
(b) Experimental Triangulation Estimates
Figure 12: Results from NBV Experiments
After comparing the image coordinates to the images themselves, our current hypothesis is that one main source of external noise comes from how we gather the pose data for the cameras. We do not have an exact location for the true optical center of each camera, and so this introduces a few millimeters and degrees of error into the pose. We are working on improving our technique for hand-eye calibration [16], which should reduce this source of error. Thus, this project is still ongoing, but should hopefully see a resolution within the next week.
14 Future Work
Once we demonstrate that the system is capable of finding the next best viewing position and obtains sub-millimeter localization errors as in the original paper, we plan to extend the experiments to multiple, moving targets. In the long term, we hope to use this error correction technique to perform dense stereo mapping by tracking a large number of points over a surface. This will allow us to perform very accurate 3D reconstruction of objects, as well as to implement simultaneous localization and mapping. We also plan to mount a stereo camera onto a quadrotor, to allow us to perform reconstruction with a fully holonomic robot, and to observe the entire surface of an object rather than just its sides.
Aside from these experiments, we also now have a large number of resources available for future projects.
The systems that we have designed will be used in future projects in fields such as controls, communications,
networks and many more.
15 Acknowledgements
Throughout my work, I have been very lucky to have had the guidance and advice of a number of highly talented individuals.
My advisor, Dr. Michael Zavlanos, has been extremely patient and helpful from day one, and has guided
me from knowing nothing about the field to developing a full system on my own. I could not have done
this without him, and he has provided me with an excellent entry point into the field of robotics, something
which will undoubtedly shape my career from here.
Dr. Philippos Mordohai has been very generous in helping me learn many of the computer vision concepts that I have needed, as well as in helping me develop my vision pipeline and debug vision-related issues.
Charlie Freundlich, who developed the algorithm that has occupied most of my time, has also provided a lot
of support with regards to the algorithm, and helped me understand a lot of its complexities.
Dean Absher has also been invaluable to my experience, connecting me with Dr. Zavlanos through the Pratt Fellows program.
References

[1] Leonard J.J., Durrant-Whyte H.F., "Simultaneous Map Building and Localization for an Autonomous Mobile Robot", IROS 1991, Nov 3-5, pp. 1442-1447.
[2] Konolige K., Agarwal M., "Real-time Localization in Outdoor Environments using Stereo Vision and Inexpensive GPS", Springer Tracts in Advanced Robotics, Volume 39, 2006, pp. 179-190.
[3] Cole D.M., Newman P.M., "Using Laser Range Data for 3D SLAM in Outdoor Environments", ICRA 2006, Orlando, Florida.
[4] Point Grey Research, "Bumblebee2", Internet: http://ww2.ptgrey.com/stereo-vision/bumblebee-2, Apr 14, 2014.
[5] Videre Design, "Products", Internet: http://users.rcn.com/mclaughl.dnai/products.htm, Apr 14, 2014.
[6] Surveyor Systems, "Stereo Information", Internet: http://www.surveyor.com/stereo/stereo_info.html, Apr 14, 2014.
[7] Matthies L.H., Shafer S.A., "Error Modelling in Stereo Navigation", IEEE Journal of Robotics and Automation, vol. 3, no. 3, pp. 239-250, 1987.
[8] Connolly C., "The Determination of Next Best Views", ICRA 1985, pp. 432-435.
[9] ROS, "Turtlebot", Internet: http://www.turtlebot.com/, Apr 14, 2014.
[10] Point Grey, "Point Grey Flea 3 USB 3.0", Internet: http://ww2.ptgrey.com/USB3/Flea3, Apr 14, 2014.
[11] Point Grey, "FlyCapture API Overview", Internet: http://www.ptgrey.com/support/downloads/documents/flycapture/API%20Overview.html, Apr 14, 2014.
[12] "OpenCV", Internet: http://opencv.org/, Apr 14, 2014.
[13] Tomasi C., "Image Formation", Compsci 527, Fall 2013, Duke University.
[14] Hartley R.I., "Theory and Practice of Projective Rectification", International Journal of Computer Vision, 35(2), pp. 115-127, 1999.
[15] Bouguet J.Y., "Camera Calibration Toolbox for MATLAB", Internet: http://www.vision.caltech.edu/bouguetj/calib_doc/, Apr 14, 2014.
[16] Wengart C., "Fully Automatic Camera and Hand to Eye Calibration", Internet: http://www.vision.ee.ethz.ch/software/calibration_toolbox/calibration_toolbox.php, Apr 14, 2014.
[17] ROS, "stereo_image_proc", Internet: http://wiki.ros.org/stereo_image_proc, Apr 14, 2014.
[18] Kato H., Billinghurst M., "Marker Tracking and HMD Calibration for a Video-based Augmented Reality Conferencing System", 2nd IEEE and ACM International Workshop on Augmented Reality, pp. 85-94, 1999.
[19] Freundlich C., Zavlanos M., Mordohai P., "A Hybrid Control Approach to the Next-Best-View Problem using Stereo Vision", IEEE International Conference on Robotics and Automation (ICRA), 2013, pp. 4478-4483.
[20] Pololu, "Cross Platform C Example", Internet: http://www.pololu.com/docs/0J40/5.h.1, Apr 14, 2014.
[21] ROS, "actionlib", Internet: http://wiki.ros.org/actionlib, Apr 14, 2014.
[22] Taylor R.M., Hudson T.C., et al., "VRPN: A Device-Independent, Network-Transparent VR Peripheral System", in VRST '01: Proceedings of the ACM Symposium on Virtual Reality Software and Technology, 2001, pp. 55-61.