Senior Design II Documentation

University of Central Florida
IRIM
INEXPENSIVE ROBOT WITH
INTERIOR MAPPING
Senior Design Documentation
TEAM#: 27
HUNG LE
ALVARO MONTOYA
IBRAHIM SHOFIQUEL
JAIME QUINTERO
Table of Contents

1. Introduction
   1.1. Executive Summary
   1.2. Motivation and Goals
2. Project Details
   2.1. Project Requirement and Specification
      2.1.1. Designed Features
      2.1.2. Additional Features
   2.2. Milestones
3. Research
   3.1. Existing or Similar Projects
      3.1.1. Mapping Solutions for Low Cost Mobile Robot
      3.1.2. Building Mobile Robot and Creating Applications for 2D Map Building and Trajectory Control
      3.1.3. ROS-based Mapping, Localization and Autonomous Navigation using a Pioneer 3-DX Robot and their Relevant Issues
      3.1.4. Development of a 3D Mapping using 2D/3D Sensors for Mobile Robot Locomotion
   3.2. 3D Scanning Devices
      3.2.1 PrimeSense Sensors
      3.2.2 Cubify Sense
      3.2.3 Kinect Variations
      3.2.4 Lidar Sensors
      3.2.5 Stereo Camera
      3.2.6 Ultrasonic Sensor
      3.2.7 IR sensors
      3.2.8 Sensors Comparison (14-1)
   3.3. 3D Scanning Libraries
      3.3.1 Point Cloud Library
      3.3.2 3D Display and Development Library
         3.3.2.1 Libfreenect
         3.3.2.2 Kinect SDK 1.8
      3.3.3 Image Processing Library
         3.3.3.1 OpenCV
   3.4. Robot Platform
      3.4.1 J-Bot v2.0
      3.4.2 RobotGeek Geekbot Bare Bones Kit
      3.4.3 Lynxmotion Tri-track Chassis Kit
   3.5. Motors
      3.5.1. Hobby motor
      3.5.2. Gearhead motor
      3.5.3. Voltage range
      3.5.4. Motor control
      3.5.5. Load capacity
      3.5.6. Torque
   3.6. Locomotion sensor
      3.6.1 Motor Encoder
      3.6.2 Accelerometer
      3.6.3 Magnetometer
   3.7. Microcontroller
      3.7.1. Tiva C
      3.7.2. MSP430G2
   3.8. Wireless Components
      3.8.1. Sub-GHz Radio Frequency Transceivers
      3.8.2. Bluetooth BLE
      3.8.3. Internet of Things (Wireless Fidelity)
   3.9. Power
      3.9.1. Nickel-metal hydride battery
      3.9.2. Lithium ion battery
      3.9.3. Lithium Iron Phosphate battery
      3.9.4. Power set up
      3.9.5. Voltage Regulation
4. Design
   4.1. Hardware Design
      4.1.1. Robot Platform
      4.1.2. Power Requirements
      4.1.3. Computational Requirement
      4.1.4. PCB Design
      4.1.5. Motion
      4.1.6. On Board Computer
   4.2. Software Design
      4.2.1. Software Design Overview
      4.2.2. Robot Operating System (ROS)
      4.2.3. Point Cloud Library (PCL)
      4.2.4. 3D Display
      4.2.5. Simultaneous Localization and Mapping (SLAM)
      4.2.6. Navigation
      4.2.7. Autonomous System
      4.2.8. Accelerometer Communication
      4.2.9. Magnetometer Communication
      4.2.10. Encoder Communication
      4.2.11. Wireless Communication
      4.2.12. Programming Languages
      4.2.13. IDE
      4.2.14. Linux
      4.2.15. Windows
   4.3. Design Summary
      4.3.1. Design Overview
      4.3.2. Software System
      4.3.3. Hardware Summary
5. Construction
   5.1. Robot Platform Assembly
   5.2. 3D Scanner Mounting
   5.3. Software Installation
6. Testing and Evaluation
   6.1. Unit testing
      6.1.1. Robot Platform Testing
      6.1.2. Power and Regulation Testing
      6.1.3. Input Output Testing
      6.1.4. Circuit Testing
   6.2. Software Testing
      6.2.1. 3D Scanner Testing
   6.3. Performance Testing
      6.3.1. Known Environment
      6.3.2. Unknown Environment
7. Financial and Administrative
   7.1. Bill of Materials
   7.2. Budget
8. Conclusion
9. Standards and Realistic Design Constraints
   9.1 Realistic Design Constraints
   9.2 Standards
10. Reference
11. Copyright Permissions
1. Introduction
1.1. Executive Summary
While robots have been used in industry for over a decade, unmanned robots are being developed for military purposes, and autonomous vehicles are on the rise, there has been comparatively little development for domestic purposes. The main reason for this slow development is the price and development cost of a robot, which means only major corporations or governments can afford one. However, as technology advances, with faster processors being developed and sensor technology becoming cheaper, there is no doubt that robots will become a much more common sight, following the same progression as cellphones and, more recently, smart devices.

Still, building a robot is a complex process, much less making the robot move, navigate within an environment, or build a map of that environment. Building a robot often requires extensive knowledge across a wide range of topics: mechanical design, electrical systems, and software programming. Online instructions are often complex or aimed at a specific type of reader, so building a robot can become time-consuming, costly, and frustrating.

Unfortunately, modern engineers often focus on a single aspect of the robot, making the task of building a functional robot even more discouraging. Typically, a software engineer will buy an expensive mobile platform and try to make their software work with it, while an electrical engineer will build their own robot and then struggle to get a program to function on the custom-made hardware. The project Inexpensive Robot with Interior Mapping (IRIM) is intended to address this issue by providing a simple mobile robot platform that allows software engineers to spend less time and money on the robot itself and focus on the software, while letting electrical engineers design their own robot with minimal need for software knowledge.
1.2. Motivation and Goals
What does it mean to be an engineer? It is by developing technology that serves ourselves, other people, and the environment that we become engineers. As engineers, it is our desire to make life more convenient, to make a house a home, to entertain people, and to have fun doing engineering work.

The project originated from a discussion with a fellow student majoring in computer science, who wanted a functional mobile platform that could be controlled from a remote computer, and a friend majoring in electrical engineering, who wanted a robot with built-in software that could be extended and improved. As it turns out, software-focused students often lack an actual platform on which to study the dynamic environment of a robot, while electrical students frequently have trouble writing a program to accommodate their robot.

The goal of this project is to assemble a robot chassis with electrical circuitry that maximizes extensibility, allowing any electrical engineer to modify and extend the design while still having a functional robot. At the same time, the project abstracts the electrical system and hardware communication away from the control software, allowing software developers to experiment with the robot without needing to understand the electrical system within it.
2. Project Details
2.1. Project Requirement and Specification
This project is composed of two main components.

The first is a mobile robot platform with a differential drive capable of performing basic two-degree-of-freedom maneuvers: movement along the y-axis (back and forth) and rotation about the z-axis (yaw). Preliminary specifications of the mobile robot platform are provided in the table below (Table 2.1.a).
Robot Platform Specification
Width                    10 inches
Length                   12 inches
Height                   4 inches
Average Speed            2 feet/second
Max Speed                5 feet/second
Average Rotation Speed   0.35 radians/second (~20 degrees/second)
Max Rotation Speed       1.05 radians/second (~60 degrees/second)
Table 2.1.a - Robot Platform Specification
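Since the platform is a differential drive, the translation and rotation figures in Table 2.1.a map directly onto left and right wheel speeds. The sketch below shows the standard mixing arithmetic; the assumption that the wheel separation roughly equals the 10-inch platform width, and the function and type names, are illustrative rather than part of the design.

#include <algorithm>

// Differential-drive mixing sketch: converts a desired forward speed and
// yaw rate into left/right wheel speeds. The 10-inch track width and the
// 5 ft/s limit are assumptions taken from Table 2.1.a, not measured values.
struct WheelSpeeds { double left, right; };            // feet/second

WheelSpeeds mixDrive(double forwardFtPerS, double yawRadPerS,
                     double trackFt = 10.0 / 12.0) {
    WheelSpeeds s{ forwardFtPerS - yawRadPerS * trackFt / 2.0,
                   forwardFtPerS + yawRadPerS * trackFt / 2.0 };
    s.left  = std::clamp(s.left,  -5.0, 5.0);          // max speed from spec
    s.right = std::clamp(s.right, -5.0, 5.0);
    return s;
}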
The other main component is a 3D scanning device capable of providing depth images or point cloud data. The expected specifications of the device are listed in the table below (Table 2.1.b).
3D Scanning Specification
Width           ≤ 5 inches
Length          ≤ 12 inches
Height          ≤ 4 inches
Minimum Range   5 feet
Max Range       10 feet
Table 2.1.b – 3D Scanning Device Specification
2.1.1. Designed Features
The designed features are requirements that define the project's character and integrity. They are the features that are expected and must be satisfied before any additional features. The designed features are listed below:

• The robot platform is expected to drive with at least two degrees of freedom (back and forth, and yaw).

• The robot platform is designed to be controlled from a distance, either through a remote controller (radio wave), handheld devices (Bluetooth), or a computer (wireless connection). The remote feature allows users to fully manipulate the robot whenever they desire, or in case the robot's autonomous system fails to function.

• The robot platform is designed to avoid falling off a cliff (such as down a staircase). This feature is meant to protect the robot from damage, but it is expected to be overridden by the user's remote control should a need arise.

• When controlled from a handheld device or computer, the robot is designed to display its vision (camera image) to the user. This feature allows the user to control the robot from a distance without keeping it within their field of vision.

• The robot platform is designed to autonomously map and navigate a given environment. The robot should have at least two autonomous modes: an exploration mode, in which the robot tries to map the environment in as much detail as possible, and a navigation mode, in which the robot is given a destination (within the mapped environment) by the user and moves to it without colliding with any obstacles (including humans) along the way.

• The robot software should maximize extensibility. The project is designed to be used by hobbyists, students, and engineers as a means to test their own ideas, so the robot software should allow others to integrate their designs into the robot with as little work as possible.
2.1.2. Additional Features
The additional features are less important and are not required. They are meant to add more depth and functionality to the robot, ranging from extra developer displays to additional autonomous modes, features that users could normally extend from the robot themselves. A few are listed below:

• The robot will display 3D data of the known environment. This gives the user a better view of the environment.

• The robot will be able to detect motion. This allows the robot to act as a small mobile security camera during periods when the environment is supposed to be free of motion (at night or during working hours).

• The robot will be able to be controlled by the user over long distances through a wireless connection. This allows users to leave on a long trip and still monitor their house from any position (more flexible than a stationary security camera).
2.2. Milestones
Although this project is complex in terms of functionality and performance quality, it is composed of multiple modules that can be studied, researched, designed, and implemented separately. This is very important for the project as well as for future extensions, since the main goal of the project is to allow engineers of different disciplines to try out their ideas independently and put the pieces together with minimal effort. The following table (Table 2.2.a) is the estimated schedule for the project in the following semester (Senior Design 2).
Month      Goals
December   • Buy all materials
           • Prepare for following semester
           • Allocate jobs
January    • Begin building subsystems
           • Begin programming software
           • Begin building testing circuits
           • Begin testing communication
February   • Continue programming software
           • Finish building circuit boards
           • Begin programming motor control board
March      • Finish software programming
           • Get boards manufactured
           • Begin unit testing
           • Begin integration testing
April      • Finish unit testing
           • Finish integration testing
May        • Turn in and present
Table 2.2.a: Milestone chart for senior design 2
3. Research
As stated in the project description, this autonomous robot is expected to work indoors: the robot gathers its data from a Kinect, and a Kinect is not suitable for outdoor environments because its internal projector works with IR light, which is overwhelmed by sunlight, leaving the sensor unable to recognize anything outside.

People have experimented with this mapping technology for quite a long time, so there are many projects similar to the senior design idea presented here. This section discusses the similarities and differences between this project and some previous projects.

Analyzing previous projects provides vital information about which parts are useful and inexpensive, as well as the most common challenges other groups faced during implementation, thus facilitating the realization of the project and narrowing the scope of the research. Since many features are common from project to project, these are analyzed in detail in this section.
3.1. Existing or Similar Projects
Many research projects and prototypes have been done in the area of 3D mapping. In this section, some of the projects that most closely resemble the one discussed in this paper are analyzed; the goal is to compare and contrast features such as the hardware parts and software programs used to realize each project. This analysis allows for improvement in phases where other groups failed or did not have the means to achieve the desired results.
3.1.1. Mapping Solutions for Low Cost Mobile Robot
The 2D mapping mobile robot is a thesis for a master's degree in computer science. The student aimed to provide an inexpensive yet reliable and robust way to map, localize, and perform path planning with a mobile robot, using TI's OMAP4430 PandaBoard to power the robot and realize all the planned functions.

This particular project focused on the mapping and path planning side of robotics. It was realized so as to have a robot that is fully functional in both indoor and outdoor environments by using a Panasonic time-of-flight camera, which, adjusted the right way, is suitable for outdoor use thanks to its high tolerance of ambient light.

The paper reviews the hardware parts that may provide the best results while remaining low cost. Various sensors are discussed in detail, focusing mainly on their pros and cons; on this basis, the project used the time-of-flight (TOF) camera, which has decent performance relative to its cost. The author also researched various algorithms for path planning and formats for mapping, and used a grid-based map, which provides a high-resolution image; its advantages and disadvantages are also discussed in detail.
In order to realize path planning, a clear and clean map is necessary for generating the optimal paths the robot will travel. The image below (Figure 3.1.a) was generated by the TOF camera; this is the raw data. After this data is acquired, modifications such as filtering noise and removing redundant points are performed to get a clean image of the obstacles and facilitate the path planning task.

Figure 3.1.a: Image captured by the Kinect (this is known as raw data) (Printed with courtesy of Xuan Wang, author of the research paper)
After the author applied an algorithm called a featured-grid hybrid map to filter out the undesired points and make the image clearer, the processed image looks much simpler and more suitable for path planning, which was one of the author's main goals in his thesis (Figure 3.1.b).

Figure 3.1.b: The raw data captured by the Kinect after processing by the algorithm (Reproduced with permission from Xuan Wang, author of the paper)
3.1.2. Building Mobile Robot and Creating Applications for 2D Map Building and Trajectory Control
This particular project focused on mapping and planning the robot's trajectories based on information captured by IR sensors. The project was broken into three stages for better organization and to ensure all aspects of the planned project were covered. The first stage was designing the robot to provide flexible mobility around obstacles in its path and to position the different components on the platform. The second stage was ensuring the constructed map was accurate enough for the robot to create a correct trajectory or path. The third and last stage focused on path generation based on the information collected by the IR sensors.
That project and the project described in this paper are related in terms of robot-to-PC communication, which both accomplish wirelessly. Both projects use a master-and-slave system: the computer, as master, sends commands based on the results given by the IR sensors to the microcontroller, the slave, which executes the commands transmitted serially. Since the project uses IR sensors, the robot's functions are carried out in an indoor environment.

Figure 3.1.c: Robot using IR sensors to capture images to construct the map (Reproduced with permission of Senka Krivić, one of the authors of the paper).
The data used to generate the map is collected by the IR sensors. Likewise, the mapping described in that paper is realized by a pair of IR sensors: one captures the image and the other provides its depth, resulting in valuable data to be transformed into a point cloud map for navigation purposes.
3.1.3. ROS-based Mapping, Localization and Autonomous Navigation using a Pioneer 3-DX Robot and their Relevant Issues
Three authors realized this project; the main goal was to improve on research and previous work done with autonomous robots used primarily for mapping and localization.

The group focused mostly on the software side of the project. Little hardware design was needed, since the project was realized with a Pioneer 3-DX robot, which already comes with an embedded controller, motors with ready-to-use encoders, and ultrasonic sensors at the front and rear; the robot only needs to be programmed and connected to a computer for control. In this case, the group decided to add a laser range finder, which works better than the sensors mounted by the manufacturer (Figure 3.1.d).

Figure 3.1.d: The image above shows the artificial environment created by the researchers for testing purposes (Reproduced with the permission of Safdar Zaman, coauthor of the research paper)
The project was implemented using the Robot Operating System (ROS), a Linux-based framework designed specifically for operating robots. ROS has many features for sharing resources among the different parts of the robot and takes charge of synchronizing the hardware components, resulting in efficient communication. It is also able to communicate with other computers through packages, which contain executable programs in charge of collecting data from the sensors.

ROS provides various packages implementing functions essential to a robot. Programming the robot to perform mapping, localization, and autonomous navigation is facilitated by these packages, and the group of experimenters made use of this advantage.
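As an illustration of how ROS programs exchange data, the following is a minimal ROS 1 (roscpp) node that publishes velocity commands the way a navigation or teleoperation package would. The node name and the "cmd_vel" topic are common conventions assumed for illustration, not details taken from the cited project.

#include <ros/ros.h>
#include <geometry_msgs/Twist.h>

// Minimal ROS 1 node sketch: publishes drive commands on the conventional
// "cmd_vel" topic at 10 Hz. The velocities below are placeholder values.
int main(int argc, char** argv) {
    ros::init(argc, argv, "drive_commander");   // register with the ROS master
    ros::NodeHandle nh;
    ros::Publisher pub = nh.advertise<geometry_msgs::Twist>("cmd_vel", 10);

    ros::Rate rate(10);                          // 10 Hz command loop
    while (ros::ok()) {
        geometry_msgs::Twist cmd;
        cmd.linear.x  = 0.3;                     // forward speed, m/s
        cmd.angular.z = 0.1;                     // yaw rate, rad/s
        pub.publish(cmd);
        rate.sleep();
    }
    return 0;
}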
The next image (Figure 3.1.e) shows the resulting map of the artificial environment without modification (raw data).
Figure 3.1.e: Mapping of the indoor artificial environment (Reproduced with
the permission given by Safdar Zaman, a coauthor of the paper)
3.1.4. Development of a 3D Mapping using 2D/3D Sensors for Mobile Robot Locomotion
This particular project focused on the simultaneous localization and mapping (SLAM) algorithm. The main sensor used to generate images for building the map is a PMD camera. This device captures 3D images fast enough to be considered real-time capture, which is essential when the project aims to build a map while the robot is in motion. Since this camera only generates grayscale images, the authors used an RGB camera to provide a colored map.

The PMD camera is capable of generating depth images thanks to its time-of-flight feature, that is, measuring the time light takes to travel to a particular object and back to the receiver. The depth images are very accurate due to the information every pixel provides back to the camera.

The authors of the project also achieved a realistic 3D map by merging the images generated by the RGB camera and the PMD camera (Figure 3.1.f). This process involved matching depth values to pixels in the 2D image. The researchers admit there is data loss in the process, but not enough to significantly affect generation of the map.

The four-wheeled robot has a small platform providing just enough space for the PMD and RGB cameras, a microcontroller, and an embedded PC, making it significantly lighter than the robot being developed in this paper.
Figure 3.1.f: The resulting map from the RGB and PMD cameras (Reproduced with permission given by C. Joochim and H. Roth, authors of the original article)
3.2. 3D Scanning Devices
Nowadays, the technology for modeling real-world objects has advanced and improved faster than expected, mainly due to the introduction of 3D printers to the market. Even though 3D printers have been around for quite a while, many people are still unaware of how these devices work.

3D scanners are powerful electronic devices employed for many tasks, such as modeling objects, facilitating improvements to already created models, and accurately modeling the real world. In this project, the intention is to employ a scanning device to create a map of the robot's surroundings. This map is then used for localization, path planning, and generation of 3D images.

These devices are capable of taking multiple measurements; they mainly use lasers and in most cases generate point cloud maps. By compiling all these measurements, and with the help of specialized software to refine their output, many applications can be realized.
There are various scanning devices on the market today, classified as short-range and middle-to-long-range scanners. The distance range determines how the scanner works. A short-range scanner usually employs a single-laser method: the laser points toward the object, and a sensor detects the laser as it bounces off the object's surface; since the angle between the sensor and the laser is known, the distance can be calculated. Another method is called structured light: a laser projects a grid, and the sensor calculates distance from the deformation of the edges of the lines. Both methods have their pros and cons. The single-laser method, known as laser triangulation, is resistant to ambient light, meaning a scanner with this implementation is suitable for outdoor environments; its disadvantage is low accuracy. The structured-light method is more accurate, but it may require specific lighting to function accurately. (1)
Long-range sensors, which detect objects from a distance of two meters or more, employ different methods; two are most popular. The first is pulse-based: it calculates the distance to the object by measuring the time the signal takes to travel from the laser to the sensor. Since the speed of light is known, this eases the task of calculating the distance. The other method is known as phase shift: as the name suggests, the phase difference between the outgoing laser and the returning laser is compared to calculate the distance. The pulse-based method is suitable for long-range sensing, whereas phase shift is more suitable for medium range but is more accurate than pulse-based sensors. (1)
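Both ranging principles reduce to short formulas. The sketch below shows the arithmetic for each, assuming the round-trip time (pulse-based) or the measured phase difference and modulation frequency (phase shift) are already available from the hardware; the function names are illustrative.

#include <cmath>

constexpr double C  = 299'792'458.0;              // speed of light, m/s
constexpr double PI = 3.14159265358979323846;

// Pulse-based (time of flight): the pulse travels out and back, so the
// one-way distance is half the round-trip time multiplied by c.
double pulseRange(double roundTripSeconds) {
    return C * roundTripSeconds / 2.0;
}

// Phase shift: for a laser amplitude-modulated at frequency f, a measured
// phase difference dphi (radians) over the round trip corresponds to a
// distance d = c * dphi / (4 * pi * f), within one ambiguity interval.
double phaseShiftRange(double dphiRadians, double modulationHz) {
    return (C * dphiRadians) / (4.0 * PI * modulationHz);
}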
In this section, various scanning devices are analyzed to weigh the advantages and disadvantages of their functionality and to explain the choice made when selecting a scanning device that fulfills the expectations of the project while staying within the financial limits.
3.2.1 PrimeSense Sensors
PrimeSense sensors are robust, high-performance sensors that generate 3D images. These sensors work with structured-light technology: the sensor projects a pattern of points onto a surface, and as the pattern deforms, objects on the surface become recognizable to the sensor, and the distance from the camera to the object (the depth of the image) is determined using this strategy.

PrimeSense partnered with Microsoft on the first Kinect for the Xbox 360; the depth-finding technology in the Kinect is the same technology implemented in the PrimeSense sensors. However, this section focuses on the most powerful sensors manufactured by the company. (3)

At least three types of these sensors are popular and have been used in a series of projects. PrimeSense manufactured a long-range sensor along with a short-range sensor. The long-range sensor, known as the Carmine 1.08, can scan from a distance of 0.8 meters to 3.5 meters and has a depth resolution of about 1.2 cm at 30 frames per second.

The short-range sensor is called the Carmine 1.09. It can capture objects from a distance of 0.35 meters to about 1.5 meters with an average depth resolution of 0.5 cm. The main software supporting these sensors is the OpenNI SDK, but the Windows SDK for Kinect may also be used. (4)
The following table shows some of the advantages and disadvantages of the
PrimeSense Sensors (Table 3.2.a):
Pros                                    Cons
A more compact device                   Lower driver quality
Does not require a power supply,        Does not work with USB 3.0
only a USB connection
The size of the sensor is small         Price out of budget
Lightweight sensor, easy to handle
Table 3.2.a: Pros and cons of PrimeSense sensors
The following table summarizes most of the specifications for the Carmine 1.08
and 1.09: (Table 3.2.b) (2)
Feature                          Carmine 1.08      Carmine 1.09      Unit
Operating Temperature            5 – 40            10 – 40           °C
Data Interface                   USB 2.0/3.0       USB 2.0/3.0       N/A
Field of View (H, V, D)          57.5, 45, 69      57.5, 45, 69      Degrees
Data Format                      16                16                Bit
Depth Image Size                 640 x 480 (VGA)   640 x 480 (VGA)   Pixel x pixel
Dimensions                       18 x 2.5 x 3.5    18 x 2.5 x 3.5    cm
Maximal Power Consumption        2.25              2.25              Watt
Maximal Frames per Second Rate   60                60                N/A
Table 3.2.b: Carmine specifications
As the table shows, the two sensors are similar in almost every aspect.
3.2.2 Cubify Sense
The Cubify Sense is a highly portable, lightweight 3D scanner. This sensor was developed as a handheld device, friendly for daily use. The device has a rectangular shape with a rectangular hole in the middle that lets the user grip it while scanning.

The device has to be connected to the computer via USB (both USB 2.0 and 3.0 are supported), and it uses dedicated software to display the images and send all the data collected while scanning.

The software, called Sense, compensates for gaps in the original scan created by scanning too fast or at a non-ideal distance from the object. In addition, the software can tell users whether they are scanning too far from or too close to the object. The Sense was developed originally with Windows support only; however, 3D Systems recently expanded platform support to Macintosh. (6)
The following table displays the technical specifications for the sensor: (Table
3.2.c) (5)
Table 3.2.d weighs the pros and cons of the Cubify Sensor.
Feature                     Specifications                        Unit
Platform Supported          Windows 7 (32 and 64 bit),            N/A
                            Windows 8 (32 and 64 bit), and
                            Mac OS X 10.8 or later
Dimensions                  5.08 (W) x 7.08 (H) x 1.3 (D)         Inches
Operating Range             0.35 – 3                              Meters
Maximum Power Consumption   2.25                                  Watt
Field of View (H, V, D)     45, 57.5, 69                          Degrees
Data Format                 16                                    Bit
Operating Temperature       10 – 40                               °C
Depth Image Size            240 x 320                             Pixel x Pixel
Maximum Image Throughput    30                                    Frames per second
Table 3.2.c: Technical specifications for Cubify Sense
Pros                     Cons
High portability         High price for the budget
Multi-platform support
Table 3.2.d: Pros and cons of Cubify Sense.
3.2.3 Kinect Variations
In this section, the different versions of the Kinect are analyzed and their features compared. The Kinect for Windows and the Kinect for Xbox are similar in hardware as well as in application software support. The hardware of both devices is almost identical; the Kinect for Windows has a shorter USB cable, which is more reliable when used across multiple computers. Both devices work with the same APIs: the Kinect for Windows SDK, the OpenKinect SDK, and the OpenNI SDK. (7)

It is the Kinect for Windows that offers a variety of extra features the Xbox Kinect does not contain. The differences are few, yet the user has to be aware of the features not included with the Xbox Kinect. When choosing the right device, users should consider the functions their project aims to accomplish, compare the features of both Kinects on that basis, and make their choice after a thorough evaluation.
The differences between the Windows Kinect and the Xbox Kinect lie mainly in software. As previously mentioned, the Windows Kinect offers more features. One of them is near mode: an object may be as close as 40 centimeters, and the Kinect will handle it without compromising accuracy or more than small degradation. In terms of skeletal tracking, the Windows Kinect is more powerful, since it can detect a person's head, neck, and arms whether seated or standing. The options for camera settings are also broader in the Windows version. The Windows Kinect has a feature called Kinect Fusion, which facilitates 3D mapping of objects. The process of generating a 3D map involves collecting depth images taken from different viewpoints; the camera pose is tracked as the sensor moves, and the tracking is later used to relate the frames, thus generating the 3D model. (8)
In terms of price, the Windows Kinect is the more expensive device, due mainly to its commercial license: developers who want to create and sell their own applications may do so, unlike with the Xbox Kinect, which does not provide any license and therefore cannot be used for commercial purposes. The Windows Kinect is priced at $250, while the Xbox Kinect stands at about $70. (7)

The Xbox Kinect should be considered for experimental and non-commercial purposes. For researchers on a budget, the best option is the Xbox Kinect, which has most of the functionality of the Windows version at a lower price.
3.2.4 Lidar Sensors
Lidar is a method in which a transmitter sends out a laser pulse; the laser bounces off molecules and aerosols in the atmosphere, or whatever objects it hits along the way, and a receiver detects the returning pulse. When the laser is captured back at the receiver, the range from the transmitter to the object can be calculated: knowing the speed of light, the receiver can determine how far and how long the pulse has traveled. (9)

Lidar technology has been used for many purposes: scanning the surface of the earth to provide 3D maps for scientists to study, collecting information about the atmosphere such as pollution levels, and even enabling law enforcement to check the speed of a car (Figure 3.2.a). (10)

A device employing this method usually consists of a laser, a scanner, and a GPS unit used to georeference the measurements.

One sensor that uses Lidar technology is the HDL-64E. This sensor has powerful capabilities and thus many uses, from autonomous navigation for vehicles and vessels to obstacle detection. It would provide far more power than the experimental autonomous robot described in this paper needs; the robot's performance would be exceptional, but the concept of creating a mapping robot at an inexpensive price would no longer apply. (11)
The following table displays the specifications for the HDL-64E sensor (Table 3.2.e).
Specification                Value                  Unit
Field of View                360                    Degree
Field of View Update         5 – 15                 Hertz
Lasers/Detectors             64                     N/A
Operating Temperature        -10 – 50               °C
Vertical Field of View       26.8                   Degree
Wavelength                   905                    nm
Voltage                      15 ± 1.5 at 4 amps     Volt
Weight                       29                     lbs.
Dimensions                   10 Height x 8 Radius   Inches
Spin Rate                    300 – 900              Rotations per Minute
Table 3.2.e: HDL-64E sensor specifications
The table displays the different specifications for the sensor. It is heavy and outside the size range for the robot platform this project intends to use. (11)

Figure 3.2.a: The image shows the point cloud of the surroundings generated by the HDL-64E S2. (Reproduced with permission from velodyne.com)
3.2.5 Stereo Camera
A stereo camera is designed with two lenses to capture its surroundings in a human-like way, also known as binocular vision. It has many uses due to the different features it possesses.

The camera identifies each segment of a scene from the two viewpoints; after the different views are collected, a process called triangulation determines the location of a point in three-dimensional space from other known points, given the angles they make to the specific point. (12)
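For rectified stereo pairs, triangulation reduces to a one-line depth formula based on disparity. The sketch below assumes the focal length and baseline are known from calibration; the function name and the guard against non-positive disparity are illustrative, not Bumblebee2 specifics.

// Stereo triangulation sketch: for rectified cameras with focal length f
// (pixels) and baseline B (meters), a feature seen at column xL in the left
// image and xR in the right image has disparity d = xL - xR, and its depth
// is Z = f * B / d.
double depthFromDisparity(double focalPx, double baselineM, double disparityPx) {
    if (disparityPx <= 0.0) return -1.0;   // point at infinity or a bad match
    return focalPx * baselineM / disparityPx;
}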
Pros (13)                                    Cons (13)
Easy integration with other machines         It needs calibration before use
Fast processing of collected data into       Data is ambient light sensitive
3D images
Relatively low price                         The accuracy of the distance is not exact
No lasers or projectors required
Full field of view from a single image
Table 3.2.f: Pros and cons of a stereo camera
3.2.6 Ultrasonic Sensor
Ultrasonic sensors are mainly used to measure distances to moving or stationary objects. They are suitable for this type of measurement because the method employed is based on sound waves rather than the light most other sensors use. Since the sensor has a relatively low price, it is employed for many tasks.
Some of the most typical applications for ultrasonic sensors are:

• Robot navigation
• Security systems
• Monitoring levels of liquids
• Parking assistance systems
This sensor sends out a pulse, also known as an ultrasonic burst, and then receives its echo; the distance can be measured from the time the pulse takes to return. A microcontroller should be used to make use of the sensor's different features. The interfacing is relatively easy, since the sensor and the microcontroller communicate through a single I/O pin. Table 3.2.g details the specifications of the sensor. (14)
Feature                 Value                                  Unit
Range                   0.083 to 10                            Feet
Power Requirements      5 at 35 mA                             VDC
Operating Temperature   0 – 70                                 °C
Dimensions              0.81 x 1.8 x 0.6                       Inches
Communication           Positive transistor-transistor pulse   N/A
Table 3.2.g: Ultrasonic sensor specifications
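The time-of-flight arithmetic this sensor relies on is straightforward; a minimal sketch is shown below. The temperature-compensated speed of sound is an approximation, and the default temperature is an assumption for illustration.

// Ultrasonic ranging sketch: the sensor reports the round-trip time of the
// burst, so the distance is half the echo time times the speed of sound.
// Speed of sound in air varies with temperature, roughly 331.4 + 0.6*T m/s.
double ultrasonicRangeMeters(double echoSeconds, double airTempC = 20.0) {
    const double vSound = 331.4 + 0.6 * airTempC;   // m/s, approximation
    return vSound * echoSeconds / 2.0;
}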
3.2.7 IR sensors
IR technology is widely used in a variety of wireless devices, mostly for sensing. In the electromagnetic spectrum, IR light sits above microwaves in frequency and below visible light.

IR sensors are listeners for IR light, which is invisible to the naked eye. These sensors are most commonly used in TVs and DVD players, where they sense the IR signal from the remote control, which contains the emitter.
These small sensors are used for a variety of applications, such as: (17)

• Security – movement, fire alarms, etc.
• Computers – mice and keyboards.
• TVs, DVDs – remote control.
• Industrial – measurement, motor encoders.
The following table (3.2.h) displays the specifications of the TSOP38238 sensor manufactured by Vishay Semiconductors: (16) (15) (18)
Feature                       Value                      Unit
Sensitivity Range             800 – 1100                 Nanometer
Operating Power               3 – 5                      Volt
Frequency Range               35k – 41k                  Hertz
Dimensions                    5.20 L x 6.98 W x 7.96 H   Millimeters
Weight                        0.43                       Grams
Operating Temperature Range   -25 to 85                  °C
Power Consumption             0.010                      Watt
Table 3.2.h: Specifications of TSOP38238. (19)
Pros                      Cons
Low cost                  Prone to miss small objects
Popular in use            If fixed straight ahead, sensors may
Lightweight, small size   miss objects not in the way
Table 3.2.i: Pros and cons of TSOP38238
3.2.8 Sensors Comparison (14-1)
The following table compares the sensors discussed in the previous sections, with the exception of the IR sensor, since the PrimeSense Carmine 1.08 and the Xbox Kinect are already equipped with IR sensors.
Feature            PrimeSense          Cubify Sense        Xbox 360 Kinect   Lidar               Stereo Camera      Ultrasonic
Field of View      57.5° H, 45° V,     45° H, 57.5° V,     57° H, 43° V      360°                40° H              40° H
                   69° D               69° D
Resolution         640 x 480           240 x 320           640 x 480         N/A                 752 x 480          N/A
Data Interface     USB 2.0/3.0         USB 2.0             USB 2.0           RS232               FireWire 1394a     1 I/O pin
Size               18W x 2.5H x        12.9W x 17.8H x     7.2W x 12H x      10H x 8r in         47.4W x 36H x      22H x 46W x
                   3.5D cm             3.3D cm             11.5D in                              157D mm            16D mm
Weight             0.5 lbs             Not available       3 lbs             29 lbs              342 g              9 g
Range              0.8 – 3.5 m         0.35 – 3 m          2 m               50 m on pavement,   Not available      2 cm – 3 m
                                                                             120 m for cars
Platform Support   Mac OS X,           Win 7 (32 or 64     Windows XP        N/A                 Win 7, Win 8,      Win, Mac OS X,
                   Windows, Linux      bit), Win 8 (32     Service Pack 1                        Linux, Win         Android, iOS
                                       or 64 bit)                                                Embedded 7 or 8
Price              $294.99             $399.00             $99.99            $75,000             $1,895             $29.99
Table 3.2.j: Types of sensors
The previous table compares the technical specifications of each sensor. Some of these sensors, such as the ultrasonic sensor, do not provide any imaging data, and the ultrasonic sensor's platform support ultimately depends on the microcontroller it is interfaced with rather than on a PC operating system; its sole purpose is to report to a microcontroller the time the sound wave took to return to the receiver. In the table, the PING ultrasonic distance sensor is used as the reference. For the PrimeSense entry, the Carmine 1.08, the long-range version of the Carmine models, was used for information purposes. The Lidar sensor used for comparison was the HDL-64E S2, a very robust sensor with many powerful features, as described in Section 3.2.4. The Bumblebee2 stereo camera was used as the reference for the stereo camera column. After a thorough analysis and comparison of the sensors, a decision was made according to the requirements of the project and the budget.
The final choice for a depth sensor was the Xbox Kinect; this device provides the features required to realize an autonomous robot and to generate a 3D map for display purposes.

Some of the listed prices are based on unofficial websites rather than the manufacturers' official sites. Such is the case for the PrimeSense Carmine 1.08: since the company was acquired, its website stopped selling products, so the reference price was taken from Amazon. The same applies to the PING ultrasonic sensor and the HDL-64E S2; neither had its price listed on its official site, so the reference prices were gathered from unofficial sites selling the devices.
3.3. 3D Scanning Libraries
The various libraries created by communities of developers, or by specific groups of developers, are designed to facilitate the use of software and to provide the tools necessary to realize many kinds of applications.

With the rapid changes in technology and the new challenges that come with them, developers tend to think ahead when working on complex technologies. To ease the process of developing, and of consolidating newly discovered methods for realizing a function in any specific language, developer communities have created libraries that provide the tools necessary to process, modify, contribute to, or create projects.

Software languages contain multiple packages, each made up of methods for realizing the different applications a user may want to develop. In the case of Java, for instance, the library provided by Oracle is designed so that applications run on any platform regardless of the operating system.
3.3.1 Point Cloud Library
The Point Cloud Library is an open-source, large-scale project that aims to provide tools for point cloud processing and for modifying 2D and 3D images.

This library is suited for commercial and experimental purposes. It contains many modules, which makes the library extensive; to keep it manageable, only the modules needed are used and deployed rather than deploying every module. (20)

Some of the features the library provides for development are surface reconstruction, model fitting, segmentation, feature estimation, and data filtering, among other methods.

Our project aims to employ the data generated by the Xbox Kinect, namely the point clouds generated from the surroundings. The data generated as the robot navigates a given environment is raw data, distorted by many factors: ambient light, distance, and how fast the scan was performed are just a few of the factors that may corrupt it. To process and refine the raw data, using these libraries is essential.

The Point Cloud Library was developed to be highly portable and is composed of modules, each providing the tools for a specific raw-data manipulation task. This modularity benefits platforms with limited resources, since a platform only has to use the modules needed for its task.

Another advantage of the Point Cloud Library is hassle-free use across different platforms. The library has been used on Linux, Mac OS, and Windows, and on mobile platforms such as Android and iOS.
The following modules are the most popular in use (a usage sketch follows the list):

• Pcl_filters: Filters out undesired points in the cloud, which may distort the data and complicate any processing built on it.

• Pcl_io: The I/O module is clearly essential for working with any sensor. In this project we are working with the Kinect, and this module eases the task of reading and writing the data generated.

• Pcl_keypoints: Key points allow the original data to be represented compactly when used with other features. We are considering this module in case some data is lost in transfer from the sensor to the microcontroller.

• Pcl_features: Contains data structures that construct patterns based on the points close to a specific one.

• Pcl_kdtree: Facilitates searches for the nearest neighbors of a specific point; we must use this module, since such searches are vital to point cloud processing.

• Pcl_octree: Allows the user to create hierarchical trees from the data. These trees speed up searches for the closest points, an important part of the processing, and functions for serialization and deserialization allow the trees to be encoded as binary data. We may consider this module for speeding up formatting of the data.

• Pcl_segmentation: This powerful module is used when the point cloud has many isolated parts; it segments them and processes each independently. The group may use this module depending on the raw data generated by the Kinect; after some testing, the decision of whether to use it can be made.

• Pcl_sample_consensus: Facilitates the recognition of objects with common geometric shapes such as planes, cylinders, etc. We may use this module if it improves point cloud processing times, since object recognition becomes faster; however, the real benefit will become obvious in testing.

• Pcl_recognition: The library's object recognition module. It differs from sample consensus in how it recognizes objects: the goal is achieved through various algorithms rather than by comparing the data with common geometric shapes.

• Pcl_surface: Allows reconstruction of the point cloud when the data is noisy. This library becomes especially useful when several scans of the same area or object must be matched point-to-point to clean up the data.

• Pcl_visualization: The main goal of this module is to quickly prototype and visualize the results of the algorithms; it has many features for modifying raw data. We are going to use this module in the early stages of testing, since prototypes of the data are needed to estimate how the point cloud will be displayed and whether the results are satisfactory.
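As a concrete illustration of the pcl_filters module described above, the sketch below downsamples a Kinect-style cloud with a voxel grid and then crops it to a plausible depth range with a pass-through filter. The leaf size and range limits are illustrative assumptions, not tuned values.

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/filters/passthrough.h>

// Sketch: reduce a raw Kinect cloud to a manageable, range-limited subset.
pcl::PointCloud<pcl::PointXYZ>::Ptr
cleanCloud(const pcl::PointCloud<pcl::PointXYZ>::Ptr& raw) {
    pcl::PointCloud<pcl::PointXYZ>::Ptr down(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::VoxelGrid<pcl::PointXYZ> grid;
    grid.setInputCloud(raw);
    grid.setLeafSize(0.01f, 0.01f, 0.01f);     // 1 cm voxels (assumed)
    grid.filter(*down);

    pcl::PointCloud<pcl::PointXYZ>::Ptr cropped(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PassThrough<pcl::PointXYZ> pass;
    pass.setInputCloud(down);
    pass.setFilterFieldName("z");
    pass.setFilterLimits(0.5f, 4.0f);          // keep points 0.5 m to 4 m away
    pass.filter(*cropped);
    return cropped;
}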
3.3.2 3D Display and Development Library
In this section, two development libraries are discussed. Libfreenect is an open-source project in which many developers cooperate to expand and add functionality.
3.3.2.1 Libfreenect
A community of developers known as OpenKinect is currently developing this
library, which aims at facilitating the use of the Xbox Kinect on different
platforms, namely Windows, Linux, and Macintosh. (21)
Like the Point Cloud Library, this library is an open project, but it has a
specific interest in employing the Xbox Kinect for purposes other than
gaming. Open libraries such as this one can be modified by the community,
and the developers welcome newcomers who want to help expand the library,
benefiting not only the developer community but also researchers who may
want to put the Xbox Kinect to useful work.
In our case, we would explore the various classes and functions provided by this
library to realize an autonomous robot using the Kinect as our main sensor to
gather data about the surroundings.
Libfreenect contains all the software necessary for initializing and
communicating with the Xbox Kinect, and it includes an API that works on
Windows, Linux, and OS X. The languages supported as of now include C, C++,
.NET, Java, and Python, which are the most popular. Since the OpenKinect
community is focused on this library, support for other languages will be
added as developers become interested in them, and the API is expected to
support more applications in the near future.
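As a rough illustration of what using this library looks like (a sketch assuming libfreenect and its synchronous wrapper are installed; this is not code from our design), grabbing a single depth frame could be done like this:

#include <libfreenect/libfreenect_sync.h>  // header path may vary by install
#include <cstdio>
#include <cstdint>

int main()
{
    void* frame = nullptr;
    uint32_t timestamp = 0;
    // Ask device 0 for an 11-bit depth frame (a 640x480 array of uint16_t).
    if (freenect_sync_get_depth(&frame, &timestamp, 0, FREENECT_DEPTH_11BIT) < 0) {
        std::fprintf(stderr, "No Kinect found\n");
        return 1;
    }
    const uint16_t* depth = static_cast<const uint16_t*>(frame);
    std::printf("Raw depth at image center: %u\n",
                (unsigned)depth[240 * 640 + 320]);
    freenect_sync_stop();  // shut down the background capture thread
    return 0;
}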
3.3.2.2 Kinect SDK 1.8
The Kinect SDK provides the APIs necessary to develop applications for the
Xbox or Windows Kinect. Microsoft designed the SDK with the Windows Kinect
in mind, but since the Xbox Kinect has similar hardware, development and
testing can be done with the Xbox device; those applications simply may not
be sold commercially. The group sees this as an advantage, since we do not
have to spend the money required to buy the Windows version, which is
expensive if we want to stay within budget, and our use of the Kinect is
strictly experimental. The difference in price is mainly due to the license
for commercial applications that the Windows Kinect carries.
The SDK allows developers to build robust applications: recognizing and
tracking people using the skeletal view the sensor provides, finding the
source of audio using noise and echo cancellation, building voice-command
applications with speech recognition, and determining the distance of an
object from the depth data are just some of the applications the Kinect can
support.
The following is a brief list of the contents included with the download of
the Windows SDK 1.8:
• Drivers and instructions to help developers build applications. (23)
• Documentation for programming managed code, that is, code executing under
the control of the common language runtime (CLR), and for programming
unmanaged code, which executes outside the control of the CLR. (24)
• Samples showing how to work with the SDK properly. (23)
3.3.3 Image Processing Library
3.3.3.1 OpenCV
OpenCV is a library that aims to provide the tools necessary for work in
the field of computer vision. It supports essentially the same operating
systems as the Point Cloud Library: it is cross-platform and runs on
Windows, Linux, Mac OS X, iOS, and Android. The languages supported are C,
C++, Java, and Python. Our project's main goal is to take the raw data
generated by the Xbox Kinect and process the point clouds to refine them
and make them useful for navigation. Another function we would like to
realize is displaying the map as an image in real time; for display
purposes OpenCV is the library we are considering, since most of its
modules reduce the time needed for image processing. We are still
discussing the image processing. At this stage of the project, the most
important task is to gather the data necessary to generate the point clouds
and make that data available for navigation.
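A minimal sketch of the kind of real-time display we have in mind (the depthFrame buffer is hypothetical and would be filled elsewhere, e.g. by the Kinect driver):

#include <opencv2/opencv.hpp>
#include <cstdint>

void showDepth(const uint16_t* depthFrame)
{
    // Wrap the raw 640x480 buffer without copying, then scale the 11-bit
    // depth values into an 8-bit grayscale image for display.
    cv::Mat depth16(480, 640, CV_16UC1, const_cast<uint16_t*>(depthFrame));
    cv::Mat depth8;
    depth16.convertTo(depth8, CV_8UC1, 255.0 / 2047.0);
    cv::imshow("Kinect depth", depth8);
    cv::waitKey(1);  // refresh the window; call in a loop for live display
}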
3.4. Robot Platform
3.4.1 J-Bot v2.0
The J-Bot v2.0 is a four-wheel-drive, two-tier rover robot. Four 3-12 VDC
high-quality micro-speed motors drive each tire on the J-Bot. The
dimensions of the J-Bot are 200 mm x 170 mm x 105 mm. The base is a hollow
case that houses the four motors and a five-battery holder. Spacers
separate the tiers, leaving room for batteries, microcontrollers, boards,
etc. The height of the second tier can be customized by adding extenders to
the spacers. Both tiers have holes of different sizes and shapes for
fastening attachments to the plates as needed. The J-Bot v2.0's speed is
listed as 90 cm/s; in a more familiar unit, the J-Bot reaches speeds of
2.95 ft/s. This is a great speed compared to the other platforms considered
for the project. (25)
The J-Bot v2.0 was chosen for this project because its basic design met all
of our requirements for the rover. The first requirement was that the rover
have a basic chassis design, so as not to overcomplicate the motor control.
In addition, a basic design usually translates to a cheaper price, and
price is important because, as the title of the project suggests, the rover
should be 'inexpensive'. The next requirement is that the robot be big
enough to hold everything that needs to go on the rover, most importantly
the Microsoft Kinect. The Kinect matters most because it is not only what
we will use to create a point cloud but also the largest device on the
rover: it is about 11 inches at its widest point, the top piece, while the
base is about 4 inches wide. It was also decided that the chosen platform
should be customizable, not just in space but in parts as well. This will
allow us to test different vantage points for the Kinect to see which works
best, and the motors can be swapped for others if more power is required.
Another factor that worked in the J-Bot's favor is the size and material of
the wheels. The wheels are about 1.5 inches wide, and a rubber tire wraps
each plastic rim. These are much better than the Geekbot's wheels. (25)
3.4.2 RobotGeek Geekbot Bare Bones Kit
The RobotGeek Geekbot is a two-wheeled, two-tier robot. A 6V RG-4C
continuous-rotation servomotor powers each wheel on the Geekbot. The
dimensions of the Geekbot are 220 mm x 190 mm x 100 mm. The body of the
Geekbot is made up of two tiers separated by pillar-like spacers similar to
the ones on the J-Bot, and the height of the second tier can be changed by
adding extenders to the pillars. The tiers are made of Plexiglas. Similar
to the J-Bot, the tiers have holes all over to accommodate attachments,
although unlike the J-Bot the holes are only circular (1 cm x 1 cm) and
organized in rows and columns across the Plexiglas. The Geekbot's speed is
listed at 1.3 ft/s, significantly slower than the J-Bot. (26)
The RobotGeek Geekbot met most of the rover requirements set for this
project. Its simple design was desirable, and that simplicity translated
into its price, which was an important factor in the decision-making. The
Geekbot is also customizable, another great quality required for this
project. In fact, the Geekbot's selling point is its modularity and how
easy it is to customize: with an array of sensors, turrets, arms, guns, and
controllers, the Geekbot is easily the most customizable platform of the
bunch. The reasons the Geekbot was not chosen were its two-wheeled chassis
design, the speed its included motors produce, and the material and design
of its wheels. The two-wheeled chassis did not look as dependable and
sturdy as a traditional four-wheeled design. The speed of the motors
included with the Geekbot kit left much to be desired; a greater top speed
creates the luxury of choice. Some situations, like tight spaces or turns,
may call for a lower speed, while higher speeds may be useful in areas with
ample room or on straightaways with no obstacles to avoid. Lastly, the
wheels seem to be made of the same material as the decks, which makes us
question their grip on slicker surfaces like laminate flooring, linoleum,
or certain carpets. In addition, the wheels are only about a quarter inch
wide; compared to the J-Bot's, the Geekbot's wheels are about 4-5 times
thinner. (26)
3.4.3 Lynxmotion Tri-track Chassis Kit
The Lynxmotion Tri-Track chassis kit is a tank rover with a triangular
track on each side. The overall dimensions of the Tri-Track are 280 mm x
25.5 mm x 120 mm, but this is not very representative because the tracks
take up a considerable amount of the overall footprint; the dimensions of
the base (the usable space on the Tri-Track) are 130 mm x 180 mm x 120 mm.
The Tri-Track body is made of laser-cut Lexan structural components and
custom aluminum brackets. This rover has excellent traction thanks to
heavy-duty polypropylene and rubber tracks and durable ABS molded
sprockets. The motors included with the Tri-Track are two 12 VDC gearhead
motors. Like the previous two platforms the Tri-Track is two-tiered, but
unlike them the height of the second tier cannot be changed. The body is
hollow, and most of this space houses the wiring for the motors and the
Sabertooth motor controller. There are extra holes on the Tri-Track plates
available for customization. The price of the Lynxmotion is the highest of
all three platforms compared, probably due to the complex design of the
tracks and the materials used (aluminum, rubber, and polypropylene). (27)
The Lynxmotion Tri-Track was not chosen as the platform for this project
because it did not meet all of the requirements, and the ones it did meet
it met unsatisfactorily. The design of the tracks was too complex: if a
piece of the track were to break, it could not be replaced as easily (or at
all) as the J-Bot's. Figure 3-4.a and Figure 3-4.b show the track of the
Tri-Track mid-assembly, so one can gain an understanding of the complexity
of the design. The price of the Tri-Track is not exactly what you would
call inexpensive. The rover also did not allow much room for customization:
the tiers could not be elevated if needed, which matters because during
prototyping and assembly things may need to be moved around, added, and
removed to adjust to the situation. Customizability is further reduced by
the lack of holes on the tiers. Although the overall size of the Tri-Track
was within the range we were looking for, the usable space (the base) was
too small. (27)
Figure 3-4.a: Lynxmotion Tri-Track chassis assembly (Permission pending) (28)
Figure 3-4.b: Lynxmotion Tri-Track track piece (Permission pending) (29)
3.5. Motors
3.5.1. Hobby motor
Hobby motors are usually used in small vehicles and robots. A hobby motor
can drive the propeller of a boat, the wheels on a car, the blades on a
helicopter, or a fan. 'Hobby motor' is a very general term that can
encompass many different kinds of motors with different power outputs;
better terms would be brushed DC motor and brushless DC motor. A brushed DC
motor is an internally commutated electric motor designed to run from a
direct-current source. Brushed DC motors have been around for a very long
time and are used in many places in industry; a few applications that still
utilize brushed DC motors are electrical propulsion, paper machines,
cranes, and steel rolling mills. (30) (31)
Brushless DC motors are synchronous motors powered by an integrated
switching power supply. They have replaced brushed DC motors in many
applications, but not all; the main reason is that the brushes in a brushed
DC motor wear down and must be replaced. Brushless DC motors are widely
used in hard disks, CD/DVD drives, radio-controlled cars, cordless power
tools, and cooling fans in electronic devices. To put their efficiency into
perspective, in a radio-controlled car a brushless DC motor outperforms
nitro and gasoline engines: a small brushless DC motor can produce twice as
much torque and about four times more horsepower than these engines. (30)
(31)
Neither brushed nor brushless DC motors were used in this project. Two main
reasons made them inconvenient. Firstly, the shaft on these motors is a
smooth cylinder with no grooves, which makes attaching things like wheels a
challenge. Some kind of adhesive or epoxy may be used, but it may come
loose with heavy use; this is a real problem because the rover should work
every single time it is used and require as little maintenance as possible.
Secondly, the spinning shaft on these motors is in line with the motor
body, which makes mounting a problem if there are no holes to fasten the
motor with a screw.
3.5.2. Gearhead motor
A gearhead motor is a DC motor that uses gears to produce more torque.
These motors also come in different sizes, shapes, and output capabilities,
and this is the kind that came with the J-Bot kit. One factor in choosing
the J-Bot over other platforms was these motors: they are powerful and they
are ergonomic. They produced the greatest speed of the platforms
considered. They are ergonomic because they are L-shaped, which is
convenient when space is a limited, precious resource; this L-shape is more
formally known as a motor with a 90-degree shaft. The best thing about
these motors is the shape of the tip of the shaft: the shaft is cylindrical
but tapers toward the tip, ending in a half-cylinder shape. This
conveniently allows parts like wheels with a matching slot to be attached
with no worry about the shaft slipping in the slot or adhesive breaking
off. Figure 3-4 is an image of the geared motor that comes in the J-Bot
kit. (34)
Figure 3-4: gearhead motor included in J-Bot Kit (Permission Pending) (32)
3.5.3. Voltage range
The voltage range of a motor is the range of voltages that will make the
motor run well. Motors also have a nominal voltage, at which the motor runs
with the highest efficiency. Operating a motor outside its voltage range
risks burning it out: apply too much voltage and you risk burning out the
motor; apply too little and the motor will not turn due to low torque,
which will also burn it out. Robot motors usually work around 3, 6, 9, 12,
or 24 volts, because robots are usually powered by batteries and these
values are easily attained with battery packs. (33) (35) (36) (37)
The motors that come with the J-Bot have a voltage range of 3 V - 12 V. The
specification sheet gives values for speed, current, torque, and stall
current at 3 V and 6 V; these are probably the voltages at which the
manufacturer expects the motors to be used. Testing will be required to see
how much voltage is most appropriate for locomotion, which will depend on
the overall load the rover has to carry. Minimizing the load on these
motors will be a priority. Alternatively, swapping in stronger motors is an
option, though this would most likely require a different battery because
the voltage range may differ with new motors. (33) (35) (36) (37)
3.5.4. Motor control
Motor control is an important factor in directing where the robot moves. If
you simply apply a voltage source to the motors, they will move forward
indiscriminately. This is not acceptable: the robot should be able to move
forward, move backwards, turn left, and turn right. Achieving this would be
very difficult without a motor controller. (38) (40)
Motor controllers control the voltage and current supplied to the motors.
It is simple to make a motor move forward, but not everything else is so
simple. To move backwards you need to reverse the flow of current through
the motor. This would be a simple task done manually with a switch, but
since this robot requires automation, a switch cannot be used. What can be
used is an H-bridge. An H-bridge is a circuit that contains four MOSFETs
with the load (in this case the motor) connected in the middle. By turning
on the correct pair, you can make the motor move forwards or backwards.
Turning on A' and A connects the left lead of the motor to the power supply
and the right lead to ground, making the motor turn forward. Conversely,
turning on B' and B connects the right lead of the motor to the power
supply and the left lead to ground, making the motor turn backward. Caution
must be exercised when using an H-bridge: if A' and B, or B' and A, are
turned on together, you create a low-resistance path between the power
source and ground. This is effectively a short circuit, and it can fry the
H-bridge or something else in your circuit. (38) (40)
H-bridges can be bought as integrated circuits from different companies.
The one being used in this project is the L293, manufactured by Texas
Instruments, a quadruple half-H driver. The L293 works with a voltage range
of 4.5 V - 36 V, which is perfect because this range covers the range of
the motors on the J-Bot and will probably cover the range of any other
motors we might have to use. (38) (40)
Figure 3-5: Typical H-Bridge design (Permission Pending) (39)
Turning is also a concern because it is not as straightforward as it may
seem. The J-Bot has no steering wheel, nor even the capability to turn its
wheels. This limits you to turning by making the wheels on one side run
slower than the ones on the opposite side: to turn right, the wheels on the
right side have to turn slower than the ones on the left; to turn left, the
wheels on the left have to run slower than the ones on the right. The
greater the difference between the two sides, the sharper the turn. (38)
(40)
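The following conceptual sketch ties the H-bridge truth table and the differential steering together. The setPin/setPwm helpers are stand-ins for a platform's GPIO/PWM calls (here they only print), and the pin numbers are illustrative, not from our board design:

#include <cstdio>

void setPin(int pin, bool level) { std::printf("pin %d = %d\n", pin, level ? 1 : 0); }
void setPwm(int pin, int duty)   { std::printf("pwm %d = %d\n", pin, duty); }

struct Side { int in1, in2, en; };         // one L293 channel pair per side
const Side LEFT  = {1, 2, 3};              // illustrative pin numbers
const Side RIGHT = {4, 5, 6};

void drive(const Side& s, bool forward, int duty)
{
    // Driving IN1/IN2 to opposite levels picks the direction of current
    // flow through the motor, like closing A/A' or B/B' in the H-bridge.
    setPin(s.in1, forward);
    setPin(s.in2, !forward);
    setPwm(s.en, duty);                    // speed from PWM duty (0-255)
}

int main()
{
    drive(LEFT, true, 200); drive(RIGHT, true, 200);  // straight ahead
    drive(LEFT, true, 200); drive(RIGHT, true, 120);  // right turn:
                                                      // right side slower
    return 0;
}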
All these things must be considered when choosing a motor controller and
any ICs one may want to implement on the board. Choosing the right
microcontroller for the job is also important. One factor in choosing a
microcontroller is making sure you are familiar with its language: many
different architectures exist, each with its own syntax and way of doing
things. Choosing a microcontroller that can get the job done and that you
are comfortable with makes coding instructions for the motors easier. (38)
(40)
3.5.5. Load capacity
Load capacity, or max load, is the maximum weight the motors can carry.
This is an important factor because care is needed when adding things to
the robot: weight should be minimized so the motors are not overworked. One
requirement of the chosen platform was customizability, which matters here
because if the robot's final load is unknown, being able to exchange the
motors for more powerful ones is imperative. (33) (35) (36) (37)
3.5.6. Torque
Torque is defined as the tendency of an object to rotate around a pivot,
axis, or fulcrum; a good way to understand it is as the rotational force
around a point, for example the torque of a wheel around the axle of a car.
Torque is usually measured as a unit of force multiplied by a unit of
length, classically the newton-meter (N • m). A good way to picture this:
if a plank rotating around a point produces 1 N • m, then applying 1 N of
force against the direction of rotation, 1 m from the point of rotation,
would exactly hold the plank in place.
If a motor cannot put out sufficient torque, it will fail to move the load
on it. This is why finding the right motor for the rover and minimizing the
load on it are important. The motors on the J-Bot put out 1.92 kg-cm each.
(33) (35) (36) (37)
3.6. Locomotion sensor
In order to properly utilize the waypoint algorithms, we need to know the
position of our robot at all times. We also need to know whether the robot
has reached the waypoint, and determining the direction in which it should
travel requires knowledge of its current heading. In addition, a redundant
system is necessary to confirm that the wheels have rotated the proper
number of times to reach the waypoint. For each of these conditions we need
a separate module that provides the necessary data. Ideally we wish to
receive these unique pieces of data simultaneously, to reduce unnecessary
latency in receiving and processing.
3.6.1 Motor Encoder
In order to make the robot move we need an electric motor. The motor itself
is a simple electrical machine with a discrete nature: if powered on, the
motor spins; if there is no power, it stops. Of course, the rate at which
the motor shaft rotates is variable and directly related to the current
supplied to it, and the duration of rotation is likewise variable and
dependent on the mechanism that applies current to the motor. To control
these two variables, duration of rotation (time) and the speed at which the
shaft/wheel rotates (velocity), we would use a motor encoder. The motor
encoder helps verify whether the motors have spun the right number of times
within the right number of seconds. This yields two important pieces of
information. First, it gives us velocity: by knowing the rate at which the
wheels are turning and the wheels' circumference, we can determine the
velocity of the robot, much like an automobile's speedometer. That
velocity, once we factor in time, provides the distance traveled so far,
and with both the distance traveled and the desired distance we can
determine the robot's current position in real time.
Normally, to get feedback from a motor we would need rotor sensors. These
sensors provide the data needed to determine the angular velocity of the
motor, and we would also need detailed information about the electrical
makeup of the motor. The added hardware would make the design bulkier and
more costly. It also affects development time, since extra code would be
needed to receive, interpret, and react to the data, which, coupled with
the amount of testing required to get even rudimentary data about rotation
speeds under different loads, would drastically increase development time.
Powering such a system would also require a larger power source. It would
be more cost-effective if the system intelligently determined all the
necessary data and understood it, and handling these many functions calls
for a robust software package. One such package is the InstaSPIN-FOC and
FAST that TI provides with its Piccolo motor-controller MCU. (41) (42)
InstaSPIN-FOC (FOC = Field-Oriented motor Control) is capable of
identifying major details about a motor within minutes. The software
responsible for this is the FAST technology. The acronym FAST is derived
from the four key pieces of information needed for effective motor control:
Flux, flux Angle, motor shaft Speed, and Torque. FAST is available in ROM
and provides the necessary data in real time by taking in various inputs
from the motor, thereby allowing function calls that use this information
to make real-time decisions. Below are pictorial diagrams of the FAST
architecture and the InstaSPIN-FOC architecture, respectively: (41) (42)
Figure 3.6.1.A: FAST Software Architecture (reprinted with Texas
Instruments Permission) (43)
Along with FAST, InstaSPIN-MOTION, which utilizes both the SpinTAC Motion
Control Suite and the FAST estimator (both embedded in ROM), helps control
how the user wants the motor to spin. It also provides accurate speed and
position control. Accuracy of speed and position is of great importance,
since this robot will be implementing waypoint calculations for navigation.
To determine velocity and position values accurately, SpinTAC utilizes four
components: Identify, Control, Move, and Plan. Identify focuses on
estimating the inertia (an object's resistance to rotational acceleration
around its axis). This matters because when the load increases, the torque
required increases as well; since FAST is able to control torque, we can
increase torque when necessary to achieve a desired velocity. However,
since other forces besides load, such as friction (slipping), can always
affect the velocity, the SpinTAC Control component proactively estimates
and compensates for such disturbances in real time. Compared to PI
controllers, SpinTAC Control is better at handling and rejecting
disturbances that may affect the motor, and it also performs better in
trajectory tracking; the user can specify the reactivity of this system.
The SpinTAC Move component computes the fastest route between two points by
finding the smoothest transitions between speeds. It allows the user to
input starting velocities/positions, ending velocities/positions,
acceleration, and jerk (the change in acceleration).
Figure 3.6.1.B: InstaSpin-FOC Software Architecture Overview (reprinted
with permission from Texas Instruments) (44)
Since there is always a possibility that a system will make sharp movements
between two points, the change in acceleration can become high, which
causes mechanical strain on the motor. To control this, the jerk can be set
to be infinite (jerk may take any value), bounded (jerk cannot exceed or
fall below set values, which is discrete in nature but still produces
judder), or continuous (jerk changes gradually, reducing judder while
keeping it under control). Thankfully, TI has its own proprietary curve,
the st-curve, which keeps position, velocity, and acceleration smooth and
the jerk continuous. Below is a diagram of the different available curves:
(41) (42)
Figure 3.6.1.C Curves Available in SpinTAC Move (reprinted with
permission from Texas Instruments) (45)
SpinTAC also allows for pre-planning of how the system should behave, i.e.
state diagrams. The SpinTAC Plan component allows easy-to-build state
diagrams that describe how the system should behave, reducing development
time by increasing abstraction. In our case we will not need this, since
such predefined behaviors are not applicable. Below is a diagram of
InstaSPIN-MOTION:
Figure 3.6.1.D InstaSpin-MOTION Software Overview (reprinted with
permission from Texas Instruments) (46)
However, due to the complexity of this software and the learning curve
associated with it, it would be more cost-effective to implement physical
encoders. The encoder of choice is an optical encoder.
The optical encoder consists of an LED, a light detector, and a clear wheel
with tick marks, mounted on the motor. The basic idea is that whenever the
light detector notices a tick mark blocking the light, it produces a
voltage the microprocessor reads as high; when the light is not blocked,
the output voltage is at its lowest and the microprocessor reads it as low.
Since the number of ticks and the circumference of the wheel are known
ahead of time, the displacement per tick can be determined. Moreover, if we
run a timer, we have both displacement and a time frame, which allows us to
determine velocity. Depending on how the desired distance is represented,
either as a velocity and time or as a displacement, we can make sure the
robot reaches its destination. Below is a picture of an optical encoder:
(41) (42)
Figure 3.6.1.E Encoder IR Detector and Encoder Wheel (Permission
Pending) (47)
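A small illustrative sketch of the tick-to-distance arithmetic just described; the tick count per revolution and the wheel diameter are assumed values, not measured J-Bot figures:

#include <cmath>

const int    TICKS_PER_REV     = 20;   // slots on the encoder disc (assumed)
const double WHEEL_DIAMETER_CM = 6.5;  // wheel diameter (assumed)
const double CM_PER_TICK = (M_PI * WHEEL_DIAMETER_CM) / TICKS_PER_REV;

// Called periodically with the ticks accumulated since the last call;
// returns velocity and updates the running distance checked against the
// waypoint's desired distance.
double updateOdometry(int newTicks, double dtSeconds, double& totalDistanceCm)
{
    double distance = newTicks * CM_PER_TICK;  // displacement this interval
    totalDistanceCm += distance;               // running total vs. waypoint
    return distance / dtSeconds;               // velocity in cm/s
}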
3.6.2 Accelerometer
Although the motor encoder described above is robust enough to handle
disturbances in real time, all calculations done by a processor use binary
arithmetic, which carries some marginal error when converted to base ten.
To ensure proper arrival at the waypoint, we need a redundant system to
handle this marginal error, and the accelerometer is one such system. The
accelerometer provides the acceleration of an object, which is essential in
determining whether the object has covered a specified distance. The robot
will compute waypoints for it to arrive at, and these waypoint calculations
provide the desired distance for the robot to travel. Considering that all
movements of the robot will be linear in nature, we can determine whether
the robot has covered the distance by tracking the seconds that pass during
the movement. The microcontroller will keep a "traversed distance" variable
that is checked against the desired distance. The distance is calculated
using the following formula: (48-54)
$$ s = \int_{t_i}^{t_f}\!\int_{t_i}^{t_f} a \; dt\, dt $$
where "s" is the displacement and "a" is the acceleration. We can use this
double-integration method to determine displacement. The problem is that
over time the error grows and generally does not give the accuracy needed.
The reason is that the accelerometer has noise, and this noise has a
non-zero mean; that is, when averaged over an interval it does not come out
to zero. The noise adds to the true acceleration to yield an inaccurate
acceleration that is then integrated not once but twice over the time
interval. This type of process is known as dead reckoning, and the
particular error is known as sensor drift. If the interval is small enough,
the error is not large enough to cause a problem, but since integration is
the area under a curve, i.e. the sum of many small rectangular areas, we
can see how accumulated noise is detrimental to the accuracy of the result.
Therefore, even if the first integration (velocity) is relatively accurate,
the second integration (position) will be very inaccurate. To handle this
we need a digital filter to remove the non-zero noise. In addition, we keep
a working distance to check against the target distance: each time we
sample, we integrate the current acceleration from zero up to the moment of
sampling. We must also make sure there is no discernible delay in motion,
so the speed and acceleration sampling must be tuned. The accelerometer of
choice is the one in TI's SensorHub BoosterPack, which works in conjunction
with TI's Tiva™ C Series TM4C LaunchPad. This board has a nine-axis MEMS
motion-tracking device, the InvenSense MPU9150 9-Axis Motion Sensor (from
here on, the MPU9150), with three types of sensors: a 3-axis gyroscope,
which does not pertain to our project and will not be discussed in detail;
the 3-axis accelerometer, which is the focus of this section; and the
3-axis compass, which is explained in detail in the following section.
(48-54)
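The following sketch shows the naive double integration the text warns about, using the trapezoidal rule; without filtering, the displacement estimate drifts exactly as described and must be checked against the encoder's distance:

struct DeadReckoner {
    double velocity     = 0.0;  // first integral of acceleration (m/s)
    double displacement = 0.0;  // second integral (m); accumulates drift
    double prevAccel    = 0.0;

    // Call once per sample with the (filtered) acceleration and the time
    // step since the previous sample.
    void update(double accel, double dt)
    {
        double prevVel = velocity;
        velocity     += 0.5 * (prevAccel + accel) * dt;   // integrate a -> v
        displacement += 0.5 * (prevVel + velocity) * dt;  // integrate v -> s
        prevAccel = accel;
    }
};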
The MPU9150 provides digital outputs for acceleration in the X, Y, and Z
axes. These digital outputs are produced in the following manner. Inside
the MPU9150 is a micro-electro-mechanical system (MEMS): the MEMS device
consists of a proof mass and a cantilever, each with its own capacitance.
When acceleration occurs, the capacitance between the two changes,
producing a voltage. This series of voltages is converted from analog to
digital by an ADC. The digital signal-conditioning unit then converts the
result into an appropriate digital value that can be stored in the
dedicated sensor register and read over I2C by the host processor, which
then makes decisions about motor control from the integration.
The different axes are digitized by analog-to-digital converters (ADCs) and
then transmitted over the MPU9150's Inter-Integrated Circuit (I2C)
interfaces. The MPU9150 is a system-in-package (SiP) made up of an MPU6050,
which contains the 3-axis accelerometer, and a Digital Motion Processor
(DMP) capable of computing the complex MotionFusion algorithms. The MPU9150
then interfaces with the application processor, an ARM-based processor of
the Tiva C series, as shown below. (48-54)
Additionally, this data can be interfaced from the Tiva C to a computer so
that any motion-related application can utilize it. The aforementioned
InvenSense MotionFusion™ combines the data from all three sensors into a
single bit stream, and the MPU9150's I2C interface is primarily used to
communicate this information to the ARM processor of the Tiva C series. The
MPU9150's bus connection with the ARM processor is always slave, meaning it
can only be used by the ARM processor and nothing else. (48-54)
Figure 3.6.2.A MPU-9150 Interfacing
3.6.3 Magnetometer
The current direction of the robot is necessary for waypoint calculation.
Since the robot must take into account that there may be curved obstacles
it has to traverse around, it is not guaranteed that the waypoint will lie
in the robot's current direction. In order for the robot to calibrate and
orient itself toward the waypoint, we need the magnetometer to tell us
whether to turn right or left, and the duration of the turn is controlled
by constantly sampling and comparing the robot's heading to the desired
heading. For example, if the current heading of the robot is NE 45° and the
waypoint falls at NE 60°, the robot will turn right until its heading is NE
60°. The magnetometer of choice is the one on TI's SensorHub, which
utilizes AKM's AK8975 3-axis digital compass. The AK8975 uses magnetic
sensors (Hall sensors) that detect terrestrial magnetism in all three axes,
and it uses an I2C bus interface to send its data to an external CPU. Hall
sensors are relatively small, dissipate little power, and are cheap. The
Hall effect is the phenomenon that occurs when a magnetic field is applied
to a conductive plate: the electrons flow along the curves of the field,
thereby producing a voltage. Depending on the polarity and strength of this
voltage, one can determine magnetic north. Coupling this with a second
conductive plate at a right angle to the first provides two axes for the
magnetic field, which in turn gives us one of eight directions. Below is a
diagram that depicts this phenomenon: (55-60)
However, this requires that no other magnetic fields be present, because
Earth's magnetic field is weak, and with a two-axis Hall sensor the device
would also need to be kept flat and level with the earth. To compensate, a
third axis is implemented. The third axis performs "electronic gimbaling,"
also known as "electronic tilt compensation." This uses a dual-axis tilt
sensor that measures the X and Y deviation, or pitch and roll. By combining
this tilt information with the Z-axis reading, we can determine the
deviation of the X and Y axes from the horizontal plane and then
mathematically rotate the reading back. The analog signal from the Hall
sensor is converted to a digital output string along with the accelerometer
outputs, as mentioned in the previous section. (55-60)
Figure 3.6.3.A: Hall Effect sensor (Reprinted with Permission from
electronics-tutorials.ws) (61)
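A sketch of the heading math described above, using the standard e-compass tilt-compensation equations (sign conventions vary with axis orientation, so this is illustrative and not the AK8975 datasheet formulation):

#include <cmath>

double headingDegrees(double mx, double my, double mz,
                      double pitchRad, double rollRad)
{
    // Rotate the measured field back onto the horizontal plane
    // (the "electronic gimbaling" described in the text).
    double xh = mx * std::cos(pitchRad) + mz * std::sin(pitchRad);
    double yh = mx * std::sin(rollRad) * std::sin(pitchRad)
              + my * std::cos(rollRad)
              - mz * std::sin(rollRad) * std::cos(pitchRad);
    double heading = std::atan2(yh, xh) * 180.0 / M_PI;
    return (heading < 0.0) ? heading + 360.0 : heading;  // 0-360 degrees
}

// Positive result: turn right (clockwise); negative: turn left.
double turnNeeded(double currentDeg, double targetDeg)
{
    double diff = targetDeg - currentDeg;
    while (diff > 180.0)  diff -= 360.0;  // wrap into [-180, 180]
    while (diff < -180.0) diff += 360.0;
    return diff;                          // e.g. 45 -> 60 gives +15: turn right
}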
3.7. Microcontroller
3.7.1. Tiva C
The Texas Instruments Tiva C Series TM4C123G is a low-cost microcontroller
built around the ARM Cortex-M4F processor. The TM4C123G interfaces as a USB
2.0 device and features an on-board In-Circuit Debug Interface (ICDI), a
USB Micro-B to USB-A cable, an LCD controller, serial communication, and a
preloaded RGB quick-start application. The attractive thing about the
TM4C123G is that it easily interfaces with BoosterPacks, packages, memory,
and other peripheral devices. Since the Tiva C Series TM4C123G comes from
Texas Instruments, it carries the reliability of a name-brand company; not
only that, but Texas Instruments provides TivaWare, a software suite that
makes it easy to get started with the TM4C123G. (62-67)
The TM4C123G is driven by the ARM Cortex-M4F microprocessor. The Cortex-M4F
was created to satisfy the demand for an efficient, easy-to-use blend of
control and signal-processing capability, with cores focused on being
highly efficient yet low-cost and low-power. One reason the Tiva C's
Cortex-M4F was considered is its focus on motor control, automation, and
power management. The Cortex-M4F can reach speeds of up to 80 MHz. The core
has a 3-stage pipeline with branch speculation and a single-precision
floating-point unit (FPU) that is IEEE 754 compliant. It has an optional
eight-region memory protection unit (MPU) with sub-regions and a background
region. A non-maskable interrupt and up to 240 physical interrupts are
implemented on this core. Power-saving sleep and deep-sleep modes are
included, and the Cortex-M4F consumes anywhere from 12.3 µW/MHz to 157
µW/MHz. (62-67)
Size being an important factor in this project, the Tiva C Series TM4C123G
is a great fit for the J-Bot platform. The official dimensions are 2.0 in x
2.25 in x 0.425 in (L x W x H), so the TM4C123G will definitely not pose
any size problems; even with BoosterPacks attached, the space taken up is
small, not to mention well used. (62-67)
One focus of this project is to make an inexpensive rover, and the price of
the TM4C123G is among the best: it is the second cheapest of all the
microcontrollers considered. This is one advantage the TM4C123G has over
the other candidates for this project. (62-67)
Another point working in the Tiva C Series TM4C123G's favor is that Texas
Instruments makes it. Support matters because no project is without bugs
and problems, and having the backing of the company and the communities
built around its technology is a priceless commodity. Texas Instruments'
E2E is a huge community on the Texas Instruments website. It includes Q&A
forums with over 1,506,000 answered questions on almost any subject one
could think of; these forums answer questions about ARM processors, digital
signal processors, microcontrollers, amplifiers, power management,
interfaces, applications, etc. Additionally, the community has ongoing
blogs one can follow in a subject area of interest, for example the Texas
Instruments MSP430xx microcontroller line. You can make a TI account and
post questions on the forums and blogs, and other users who have
encountered the same problems, TI aficionados well versed in TI's
technology, and even TI employees can answer your questions. Truly, the
amount of support available with any Texas Instruments product is
overwhelming. (62-67)
As mentioned before, the TM4C123G comes with TivaWare, a suite of software
tools designed to simplify and speed up the development of applications
based on the Tiva C MCUs. TivaWare is completely royalty-free and freely
licensed, and it is written in C to allow efficient, easy development. It
ships with royalty-free libraries of all kinds, plus notes and
documentation. These are all important because with royalty-free libraries
users can create fully functional, easy-to-maintain code, and answers to
questions can often be found in the code examples and documentation for the
TM4C123x devices.
As far as memory goes, the TM4C123x microcontrollers have 256 KB of flash
memory. Additionally, the TM4C123G has 2 KB of EEPROM and 32 KB of SRAM.
This microcontroller allows direct memory access (DMA), in which certain
subsystems access main memory independently, without the CPU. (62-67)
Some additions that make the TM4C123G an attractive choice for the IRIM
project are its six 32-bit pulse-width modulators, six 64-bit pulse-width
modulators, and LDO voltage regulator, all of which are useful for driving
the four motors on the J-Bot. Unfortunately, the voltage regulator on the
TM4C123G is a linear regulator, which brings along all the unfavorable
characteristics of linear regulation; this is probably why Texas
Instruments included a temperature sensor on the MCU. Space is limited, and
it would be a waste of space to use a linear regulator, as it requires a
heat sink. (62-67)
In the end, the Tiva C Series TM4C123G was not chosen for this project, for
a few reasons. As previously stated, the LDO voltage regulator on the
TM4C123G is a linear regulator, which comes with an array of problems and
unwanted characteristics. Also, everyone on the team has already worked
with the MSP430 and is familiar with its environment and programming style.
One thing the TM4C123G really had going for it was the support from the
company and community; luckily, the MSP430 is also a Texas Instruments
product and has an even bigger support community behind it. (62-67)
3.7.2. MSP430G2
Like the Tiva C Series TM4C123G, the Texas Instruments MSP430F5529 is a
low-cost, low-power microcontroller. The MSP430F5529 is a great choice of
microcontroller unit for any project because it is designed for
general-purpose applications, meaning it has a wide array of instructions
applicable to many kinds of projects. The MSP430F5529 CPU is designed
around the Von Neumann architecture and uses a compact 16-bit RISC
instruction set. The MSP430F5529 is very versatile: it comes with 40 pins
that can connect to any BoosterPack in its line. It is attractive because
of its versatility, compatibility, size, price, and familiarity. Like the
Tiva C Series MCU, the MSP430 has great support from the company and the
community, and the MSP430xx line has an even stronger community than the
TM4C123G. Along with that support, the software provided by Texas
Instruments makes it easy to go from purchase to programming in no time.
(68-72)
The MSP430F5529 CPU is designed around the Von Neumann architecture. This
means there is one common memory for data and code, buses are used to
access memory and input/output, and direct memory access is allowed. Memory
is byte-addressed, and 16-bit words are stored in little-endian format. The
CPU contains sixteen 16-bit registers and supports a very simple
instruction set of only 27 core instructions, separated into three formats
(dual-operand, single-operand, and jump). The CPU runs at about 25 MHz. The
MSP430F5529 is ultra-low power: at 2.2 V it draws only 220 µA, it wakes
from standby mode in under 1 µs, and in standby it draws only 0.4 µA.
(68-72)
Because size is such an important factor in this project, it must be taken
into consideration when choosing an MCU. The MSP430F5529 is very small, in
fact slightly smaller than the MCU from the Tiva C Series, with dimensions
of 2.55 in x 1.95 in x 0.432 in (L x W x H). This small but powerful MCU
will not take up much space on the J-Bot.
Another small thing about the MSP430F5529 is its price: of all the
microcontrollers considered for this project, it was the cheapest. It is
very fortunate that a microcontroller that can handle what we need from it
is so cheap. (68-72)
Figure 4-1: Functional block diagram for the MSP430G2 (Reprinted with
permission from Texas Instruments) (71)
Since the MSP430F5529 is a Texas Instruments product, it has a vast amount
of support available online. The E2E community is open to anyone, and it is
likely that someone has already encountered your problem and a solution has
already been reached. Both the question forums and the blogs can be
searched, and an attractive addition is that the MCU has its own blog.
Apart from the E2E community, there is a lot of literature online with
solutions, other projects, and forums. All this support, literature,
expertise, video, and discussion speaks volumes about a product's
reputation: so many people would not dedicate their time to a product that
does not work or that they do not like. (68-72)
Just as the Tiva C Series TM4C123G comes with TivaWare, the MSP430xx line
comes with Code Composer Studio (CCS). Code Composer Studio is an
integrated development environment (IDE) comprising a suite of tools used
to develop and debug applications, including a C/C++ compiler, source code
editor, project build environment, debugger, profiler, and other features.
With this IDE one can program the MSP430F5529 easily and quickly. Code
Composer Studio uses the Eclipse framework along with TI's advanced
debugging capabilities, resulting in a development environment well suited
to embedded developers. With Texas Instruments you can be assured of an
updated and fluid environment. (68-72)
The MSP430F5529 has 128 KB of flash memory and 8 KB of RAM. Flash memory is
a widely used, nonvolatile, and reliable memory for storing data and code.
One of the advantages and deliberate design points of the MSP430F5529 is
its compatibility with other parts and BoosterPacks. It has already been
discussed that voltage regulation will play an important role in the
creation of the motor control board. For example, the MSP430F5529 can be
paired with a TPS62120, a 15 V, 74 mA high-efficiency buck converter;
essentially, a highly efficient switching voltage regulator. The TPS62120
has a 96% efficiency rating and thus generates little heat, which is
important because no heat sink is needed with this regulator. (68-72)
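As a rough worked comparison (the 12 V supply and 3.3 V rail are assumed values, chosen only to make the heat argument concrete):

$$ P_{\text{linear}} = (V_{in} - V_{out})\,I = (12 - 3.3)\ \text{V} \times 0.074\ \text{A} \approx 0.64\ \text{W} $$

$$ P_{\text{buck}} = P_{out}\left(\frac{1}{\eta} - 1\right) = 3.3\ \text{V} \times 0.074\ \text{A} \times \left(\frac{1}{0.96} - 1\right) \approx 0.01\ \text{W} $$

The switching regulator dissipates roughly two orders of magnitude less heat under these assumptions, which is why it can run without a heat sink.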
The MSP430F5529 was chosen as the MCU for this project for a few reasons.
Firstly, the MSP430F5529 is low cost, which matters because the project
focuses on making an inexpensive robot with interior mapping (IRIM). The
MCU is also ultra-low power, which is important for battery life; the
MSP430F5529's power consumption is unmatched in the market. Next, the
MSP430F5529's compatibility is important because it is almost inevitable
that this MCU will have to interface with other parts. Most importantly,
the MSP430F5529 was chosen because all members of the group are familiar
with it and have programmed one prior to taking on this project, which
saves a lot of time in designing and programming the motor controller. This
was ultimately the deciding factor: if an MCU that everyone is familiar
with is available and will get the job done, it is hard to argue against
using it. Learning a new instruction set and how to program with it is not
an easy task to take on in the middle of a project. (68-72)
3.8. Wireless Components
Navigation and obstacle-avoidance systems require visual input and heavy
processing power. Our robot is meant to be autonomous and capable of roving
freely with no strings attached, so to speak. To achieve this it could use
a small computer to process the data on board; a small computer, because
the size of the robot is restrictive and the space available for mounting
components is limited. One such component would be TI's OMAP4430
PandaBoard, which has a dual-core ARM® Cortex™-A9 processor. However, the
PandaBoard costs roughly $200, and our objective is to design an
inexpensive robot. Therefore, like any business that sees the expense of
doing a job in-house, it would be best to outsource the job; in our case,
to outsource the processing to an ordinary desktop computer. To do this we
need wireless communication. There are three possible communication
methods and nine possible combinations for sending and receiving data:
radio frequency transceivers, Wireless Fidelity (Wi-Fi), or Bluetooth.
3.8.1. Sub-GHz Radio Frequency Transceivers
Radio frequency transceivers are one of the oldest forms of wireless
communication. They can transmit at carrier frequencies up to 100 GHz;
however, we will not be using frequencies over 1 GHz, due to the FCC's
bandwidth allocations. Instead, our RF transceiver modules will operate in
the 902-928 MHz ISM band. The ISM radio bands are sections of the radio
spectrum reserved internationally for industrial, scientific, and medical
purposes other than telecommunications. The main advantages of RF are that,
unlike Wi-Fi, it does not need a third-party network to establish a
connection, and it provides the same data rate as a Bluetooth transmission
over a longer distance. This is primarily because Bluetooth operates at 2.4
GHz whereas the RF transceiver operates at sub-GHz frequencies, and the
higher the frequency, the shorter the range: as frequency increases, so
does the tendency of the signal to be absorbed by physical objects in the
environment. To make up for the signal loss, additional signals must be
transmitted, causing delays in message completion and increased power
consumption. Thanks to this quality, low-frequency signals tend to "hug"
the earth's surface, since they are able to bounce off the atmospheric
layers. (73-75)
The module of choice is TI's CC110L RF BoosterPack, which works in
conjunction with the MSP-EXP430G2 LaunchPad development kit as an
extension. The CC110L uses Anaren's A110LR09A integrated radio. The
A110LR09A is capable of implementing a plethora of wireless networks; most
importantly and obviously, it can implement point-to-point networks. In our
robot system we will be transmitting data to and from a computer. Wireless
connections that utilize RF in this way are half-duplex; that is, they
cannot send and receive data simultaneously. This is a major drawback for
an autonomous system that must react to obstacles as close to real time as
possible: in half-duplex, a receiving device must wait until the
transmitting device is finished before it can transmit. (73-75)
It is feasible to implement a full-duplex connection by using
self-interference cancellation, which works as follows:
“The receiver “listens” to determine the RF environment and uses algorithms to
predict how the transmission will be altered by the environment and cancel it out
from data received from the receiver. By doing so, the problem of self-interference
goes away, and the receiver can “hear” the signal it is supposed to hear while the
transmitter is active. The major cost for this approach is increasing complexity in
terms of software, which can slightly impact battery life,” (76)
One additional drawback is that communication between the robot and
computer would be unencrypted, thereby allowing for malicious interference.
This would require more development time than it is worth. Although RF has
an advantage over Bluetooth in distance and over Wi-Fi in network
reliability, it will be much simpler to implement one of the other two.
However, if TI or Anaren provides a predefined library, then we will most
definitely take that course. (73-75)
3.8.2. Bluetooth BLE
One method, out of the three, that is quite simple is to implement
Bluetooth as a UART. Bluetooth technology has been around for a while and
has extended from phones interfacing with headsets and speakers to
full-blown data transmission from computer to computer. The main advantages
Bluetooth has over RF are that it requires less hardware and less
development to interface two devices with high throughput, and all
communications are AES-encrypted, which allows for safe communication.
Instead of having to design two Bluetooth transceivers, we only need to
design the Bluetooth transceiver on the robot side and purchase a Bluetooth
dongle for the computer. Since RF is restricted to sub-GHz frequencies, its
throughput can be lower than Bluetooth's when the computer and robot are in
the same vicinity with no physical interference. Another advantage is over
Wi-Fi: with a Wi-Fi network we have to consider a third-party system (the
Internet), and our connection reliability rests on the assumption that the
Internet will be available; in addition, an Internet of Things (IoT)
connection restricts where the robot and computer can be located. A
Bluetooth connection does not rely on a third-party system, so those
restrictions do not apply. The downside to Bluetooth is its restricted
range: it lacks the range of Wi-Fi (when both the robot and the computer
are in Wi-Fi hotspots) and the range of the sub-GHz RF transceivers, whose
lower frequency gives them longer reach for the reasons mentioned in
Section 3.8.1. (78-87)
The Bluetooth module we will be using is the Emmoco Development Board
Bluetooth Low Energy (EDB-BLE) BoosterPack for the TI MSP430. The EDB-BLE
utilizes TI's CC2541 system-on-chip (SoC). Bluetooth currently comes in
multiple flavors: Bluetooth Smart, Bluetooth Smart Ready, and Bluetooth
Classic. The CC2541 is the first type, falling in the classification of
so-called "single-mode" devices. The main advantages of these devices are
that they connect to both single-mode devices and Bluetooth Smart Ready
("dual-mode") devices, and that they have low energy consumption. However,
the benefit of low energy consumption will be lost in our implementation.
Low-energy Bluetooth devices are not on all the time; they are active only
in short bursts, when they need to do a particular job, and then go back to
sleep. These are devices with short duty cycles, an example being a
heart-rate belt monitor: it sends only a few bytes per second, and only
during a small percentage of the time it is worn, usually in the single
digits. In our implementation of BLE we will be using the Bluetooth as a
UART, so it will be in constant use, meaning a high duty cycle.
Nonetheless, the benefits of Bluetooth are good enough for consideration.
(78-87)
The CC2541 interfaces with the MCU through I2C. The MCU uses the CC2541 as
a wireless version of a serial link, like RS232 serial communications. It
runs in the unlicensed ISM bands along with Wi-Fi and ZigBee. The CC2541
can work in point-to-point networks: one Bluetooth module is the master and
the other a slave. It is possible for the CC2541 to trade places between
master and slave, but this is not necessary for a point-to-point network,
because the idea behind a master-slave network is that the master can
request data from and send data to any slave, while a slave can only
exchange data with the master. Our master will be the computer, and our
only slave the CC2541 on the robot. However, since the CC2541 allows
piconets, we can add more Bluetooth modules for different connections,
allowing a more loosely coupled system: the Kinect can route data straight
to the master computer, the SensorHub can route its data straight to the
computer, and the Piccolo can do the same. This will increase the
throughput of sent data, since a single CC2541 will not have to queue all
of this data and artificially cause delay. The only CC2541 that needs to be
spoken back to is the one in charge of motor control. The only challenge is
to let the computer see these as parallel connections through one
communication port.
The Bluetooth connection process takes three steps. The first is the
initial inquiry of the devices. Each Bluetooth device has a unique 48-bit
address, usually written as 12 hexadecimal digits; the upper (most
significant) half identifies the manufacturer, and the lower 24 bits are
the unique address of that module. This is important, since we need a
destination address for each device that will pair up, ensuring that data
packets are not sent to the wrong recipient. The next stage is called
paging; in this stage the actual connection is established (the handshake),
and once established the connection becomes encrypted. Once a device is
connected, it is in one of four states: Active Mode, Sniff Mode, Hold Mode,
or Park Mode. Active mode, in which devices actively transmit and receive,
is the mode our Bluetooth device will be in. Sniff mode is when the device
sleeps and only wakes periodically to listen for transmissions. Hold mode
is when the device is told to sleep and wake after some time. Park mode is
when the device is put to sleep indefinitely, to be woken by the master
explicitly. (78-87)
Since our focus is to replace the typical RS232 serial connection with the
Bluetooth, (the Bluetooth was originally designed for this). The way to do that is to
use Serial Port Profile (SPP). The idea is that the applications that are involved
see an emulated wired connection. These devices are legacy applications and they
do not know how the
Bluetooth connecting will work so a separate application will be needed. This is
known as the Bluetooth-aware helper. The Bluetooth-aware helper will then utilize
the Bluetooth stack to transport data to and from the other device. Figure 3.8.2.A
describes the stack:
Application A and Application B are the legacy applications that use RS-232 (wired) connections. They can only process and transmit RS-232 signals, so Application A will try to transmit RS-232 signals through an emulated connection. The serial port emulation layer, or any other API acting as the Bluetooth-aware helper, then initiates RFCOMM, which transmits data on the TX connection and receives data on the RX connection. RFCOMM is also known as the serial port emulation protocol.
The SDP (service discovery protocol) checks whether the services that Application A requests are valid and available from Application B. It should be noted that both RFCOMM and SDP receive inputs and outputs from the serial port emulation API. The LMP (link management protocol) is the protocol in charge of establishing the connection between the two devices. It also handles power control and queries device capabilities. Once the connection has been established and data is being transmitted, it is up to the L2CAP (logical link control and adaptation protocol) to segment the data to be sent out.
Figure 3.8.2.A Bluetooth Protocol Stack
Each segment will have a header with the part number. This is essential since data is being transmitted wirelessly and there is a possibility of lost or corrupted packets; therefore, the data must be segmented. The receiving Bluetooth device uses the L2CAP to reassemble the data. The packets are formed from three distinct sections: access code, header, and payload. The access codes are used in paging and inquiry procedures. The header contains the member's address, the code type, flow control, acknowledgement indication, the sequence number, and a header error check. The payload holds the actual data that is being sent or received. The L2CAP is responsible for maintaining the quality of service that the higher levels demand. The baseband is responsible for acknowledgements and retransmission of data. It controls the synchronization of data by adding an offset to the free-running clock of the system, which provides a temporarily synchronized clock for both Bluetooth devices. Another aspect of the baseband is frequency hopping. The Bluetooth protocol utilizes the 2.4GHz band, which is 83MHz wide. Within that band, the baseband uses frequency hopping spread spectrum (FHSS) to "hop" between 79 different 1MHz-wide channels. This provides three advantages. The first is that FHSS signals are resilient to narrowband interference: when the signal is collected across different parts of the spectrum, the interference is also spread out, making it much smaller and easier to handle; at small enough magnitudes it can even fade into the background. The second advantage is that spread signals are difficult to intercept. Not only do eavesdroppers pick up jumbled noise, but jamming the signal is just as difficult, because of the pseudorandom hopping sequence; unless this pattern is known, the signal is hard to intercept or jam. The third advantage is that since data is transmitted across various parts of the spectrum, the possibility of interference with another device that uses the same 2.4GHz band in a narrow channel is very low. Noise levels will also be low, which results in a more efficient use of bandwidth. The baseband also uses time division multiplexing (TDM), which allows a signal to use the entire spectrum for a short amount of time. This is ideal since data packets are relatively small and transmission times are short; we are able to transport an entire data packet within the period at high bandwidth. Bluetooth's TDM variety is synchronous: it allocates a fixed time slot of 625 microseconds to each signal. These bit streams are sent ahead of the logical data packets so that bit manipulations can be done to increase reliability and security. The bits are used for power control, link supervision, encryption, channel data rate (which changes with link quality), and multi-slot packet control. However, the biggest weakness is range. Bluetooth devices have a maximum range of about 100 meters, and some devices (such as low energy devices) reach 50 meters at most. This would require the computer and robot to be within the same vicinity. The solution to this problem would be to use Wi-Fi as our medium, which would mean applying the Internet of Things (IoT) to our project. (78-87)
3.8.3. Internet of Things (Wireless Fidelity)
Modern-day communication is heavily dependent on wireless connectivity. Things as important as GPS and things as insignificant as a celebrity's tweet all rely on some mobile, internet-capable device and the internet. The Internet of Things is the movement in which physical devices that have no inherent need for the internet and its plethora of content use the internet's complex and robust network to achieve connectivity. It is difficult to create an entire network robust enough to handle such different activities. Ideas such as monitoring a patient's heartbeat, sugar levels, and blood pressure in real time from miles away were difficult to realize without developing a dedicated network. Having a hotel adjust the light and temperature settings to a customer's preference without adding extra hardware was vexing. However, all of this can be realized by using the internet as a medium. We can have pacemakers that send data out to an IP address over Wi-Fi. We can have hotel employees modify the hotel temperature and lighting ahead of time with an app. The Internet of Things simplifies the process of data transmission and reception: all we have to do is specify which IP address the data should go to and come from, which removes unnecessary low-level software development. In addition, the data rates are much faster than those of the previous two implementations. This environment is the ideal setting for our project. (88-94)
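To make this concrete, the following is a minimal sketch of what IoT-style transport reduces to on the computer side: open a TCP socket to a known IP address and port and write bytes to it. The address, port, and command string below are hypothetical placeholders, not values defined by this project.

    // Minimal computer-side transport sketch (POSIX sockets, Linux).
    // The robot's IP, port, and command format are hypothetical placeholders.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
      int fd = socket(AF_INET, SOCK_STREAM, 0);
      if (fd < 0) { perror("socket"); return 1; }
      sockaddr_in addr{};
      addr.sin_family = AF_INET;
      addr.sin_port = htons(5000);                        // assumed command port
      inet_pton(AF_INET, "192.168.1.42", &addr.sin_addr); // assumed robot IP
      if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) != 0) {
        perror("connect");
        return 1;
      }
      const char cmd[] = "FWD 0.2\n";     // hypothetical drive command
      send(fd, cmd, sizeof(cmd) - 1, 0);  // the radio details are entirely hidden
      close(fd);
      return 0;
    }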
Our testing environment is indoors, since the Kinect can only work indoors. The location of our testing environment is any building at the University of Central Florida; every building at UCF is Wi-Fi ready. With the other implementations we would need to stay within a reasonable distance of the robot, but with Wi-Fi the distance no longer matters. Considering our small time frame for developing an autonomous system, developing with IoT will drastically reduce code development and debugging. The Wi-Fi module of our choice is the TI CC3100. (88-94)
TI's CC3100 is a complete IoT solution for any MCU. It integrates everything necessary for the internet and Wi-Fi protocols. This alleviates development on the MCU, since normally these protocols must be implemented per MCU. It also eases hardware development because it is available in an easy-to-use QFN (quad flat no-lead) package, which is easy to work with when it comes to PCB design. (88-94)
The CC3100 supports the following security modes for personal networks: WEP, WPA, and WPA2, with its on-chip security accelerator. It supports an 802.11 b/g radio in the 2.4GHz ISM band with a throughput of 1-54 Mbps. It also includes an integrated IPv4 TCP/IP stack. In addition to its development usefulness, it also excels in power consumption: with its own DC-DC converter, it is able to draw its low energy requirements from a variety of power supplies, and with its own ARM MCU it is able to go into low-power modes and consume as little as four microamps. (88-94)
3.9. Power
3.9.1. Nickel-metal hydride battery
A nickel-metal hydride (NiMH) battery is a type of rechargeable battery. A NiMH battery uses positive electrodes of nickel oxyhydroxide and negative electrodes of a hydrogen-absorbing alloy. NiMH batteries are very efficient: they have three times the capacity of a NiCad battery of the same size, and the energy density of a NiMH cell is close to that of a lithium-ion cell. NiMH batteries usually come in AA size. In the early 2000s, NiMH batteries were on the rise, but they have since fallen in popularity. (95)
Although these batteries are very efficient, they do have a downside. NiMH batteries lose about 4% of their charge per day of storage. A low self-discharge variant was introduced in 2005, but it lowered capacity by 20%. One advantage of using these batteries is that they already come in AA and AAA sizes and shapes, which simplifies their use in a project. (95)
3.9.2. Lithium ion battery
A lithium-ion battery is a type of rechargeable battery in which lithium ions move from the negative electrode to the positive electrode during discharge and back during recharge. Lithium-ion batteries are common in consumer electronics such as cell phones, tablets, and laptops. Their popularity stems from their high energy density, lack of memory effect, and slow loss of charge. An advantage to using these batteries is that a battery pack can be built to provide the same voltage as lead-acid batteries, which is convenient for robot projects that require the voltages produced by batteries of that kind. (97)
3.9.3. Lithium Iron Phosphate battery
A lithium iron phosphate battery is another kind of rechargeable battery. These batteries use LiFePO4 as the cathode. Although lithium iron phosphate batteries have lower energy densities, they offer longer lifetimes and better power densities. A higher power density means that the rate at which energy can be drawn from them is higher. These batteries are also safer than their lithium-ion counterparts. Lithium iron phosphate batteries, like nickel-based batteries and unlike lithium-ion batteries, have a constant discharge voltage. With their slow capacity loss, these batteries may be a good choice for a project with longevity in mind. In a robot rover situation, these batteries can be put in a battery pack to produce 12.8V with four cells, which is equivalent to a six-cell lead-acid battery. (96)
3.9.4. Power set up
There are a few options when it comes to powering a robot's microcontrollers and motors. The two main options are having multiple batteries power different components of the robot or having one battery power everything. Both have their advantages and disadvantages. (98-100)
One way to set up the power design is to use one battery to power every component of the robot. The advantages to this are having only one battery to change and minimizing the weight added to the rover, since there is only one power source. One disadvantage is that voltage regulators are needed to control how much voltage each component receives. Microcontrollers typically operate at around 3.3V to 5V, motors can operate anywhere between 3V and 9V, and most other electronics, such as sensors, work at around 5V. This makes the wiring complex and requires more work. (98-100)
The other option is to have multiple batteries that power different components of the project. The advantages of doing this are a simpler design, and thus a much shorter design time. Another advantage is that the project can work more efficiently, because having multiple batteries gives the project a modular layout. The disadvantage of using multiple batteries is that some components will stop working before others do. This can be easily avoided if batteries are recharged regularly and appropriately. (98-100)
This project will use the one-battery approach. This was chosen to keep the design simple, lower cost, and minimize the use of space on the robot. A simple design is favorable because with more parts come more problems and complications. One complication is that if multiple parts of the robot have their own power source, each source will require some kind of monitoring to ensure the right amount of voltage and current is being supplied at any given time, complicating the design of the motor controller board. Keeping the cost of this project down is important because the point is for it to be inexpensive. Beyond that, lowering cost should be a priority of any project, whether academic or corporate in nature. Performance and meeting requirements do take priority over cost, but if equal results can be achieved with both methods, then the cheaper one is the more attractive. Minimizing space usage on the robot is important because space is limited. (98-100)
The battery chosen is a lithium-ion battery: the LG 18650 3.7V rechargeable cell. These cells are rated at 2600mAh, or 9.62Wh each. A battery pack will be needed to provide the required voltage. The pack used here holds four cells in series and will produce 14.8V, which will be more than enough. (98-100)
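As a sanity check on those numbers: cells in series add their voltages while the amp-hour capacity stays that of a single cell. The short sketch below reproduces the pack figures from the cell ratings quoted above; the 1A average load used for the runtime estimate is an assumption, not a measured value.

    // Series-pack arithmetic for four LG 18650 cells (values quoted above).
    #include <cstdio>

    int main() {
      const double cell_v = 3.7, cell_ah = 2.6; // one 18650 cell: 3.7 V, 2600 mAh
      const int cells = 4;
      double pack_v  = cell_v * cells;          // series: voltages add -> 14.8 V
      double pack_ah = cell_ah;                 // series: capacity unchanged
      double pack_wh = pack_v * pack_ah;        // stored energy -> ~38.5 Wh
      std::printf("%.1f V, %.1f Ah, %.2f Wh\n", pack_v, pack_ah, pack_wh);
      std::printf("~%.1f h at an assumed 1 A load\n", pack_ah / 1.0);
      return 0;
    }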
3.9.5. Voltage Regulation
Voltage regulators output a predetermined voltage that remains the same regardless of changes to the input or load conditions. There are two kinds of voltage regulators: linear and switching. (101-106)
Linear regulators utilize active pass devices, such as BJTs or MOSFETs, controlled by a high-gain differential amplifier. By comparing the output voltage with an accurate reference voltage, the regulator adjusts the pass device to maintain a constant output voltage.
Switching regulators differ from linear regulators in that, while a linear regulator acts as an adjustable resistor, a switching regulator acts, as the name implies, more like a switch. A switching regulator periodically stores energy in the magnetic field of an inductor and then discharges that energy into the load. The switching frequency is usually between 50kHz and 100kHz, depending on the circuit. The time the switch remains closed in each cycle varies in order to maintain a constant output voltage. While the switch is closed, the inductor is charging and the diode, which is reverse biased, acts like an open circuit. When the switch is opened, the energy stored in the magnetic field of the inductor is discharged and the diode conducts. (101-106)
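For a feel for the numbers, an ideal buck (step-down) switching regulator satisfies Vout = D x Vin, where D is the fraction of each cycle the switch stays closed. The sketch below evaluates this for the project's 14.8V pack feeding a 6V motor rail; the 50kHz switching frequency is an assumption taken from the range quoted above, and real regulators deviate from the ideal relation.

    // Ideal buck converter duty cycle: Vout = D * Vin, so D = Vout / Vin.
    #include <cstdio>

    int main() {
      const double vin = 14.8, vout = 6.0; // battery pack -> motor rail
      double duty = vout / vin;            // fraction of period the switch is closed
      const double f_sw = 50e3;            // assumed 50 kHz switching frequency
      double t_on = duty / f_sw;           // switch on-time per period
      std::printf("D = %.2f, t_on = %.1f us\n", duty, t_on * 1e6);
      return 0;
    }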
Both kinds of regulators produce the same result, but at different costs. Linear voltage regulators are very cheap, which makes them favorable in projects with limited funds or a tight budget, and they are great for low-power applications. Compared to switching regulators, they are easy to use as well. Although cheap in price, linear regulators are expensive when it comes to power, because they use up so much of it, mostly as waste. Linear voltage regulators, as previously stated, behave like adjustable resistors, which makes them highly inefficient. Typical efficiencies for linear voltage regulators range from about 40% down to as low as 14%. As with resistors, this wasted power is released as heat. In some situations, so much heat is released that large and expensive heat sinks must be used to dissipate it. This wasted power also translates to reduced battery life for the project. Even with advances in linear voltage regulation technology, they are still highly inefficient. For example, LDO (low dropout) regulators are inefficient, but they do allow operation with small input-to-output voltage drops. Like resistors, linear voltage regulators can only step down voltages, which can be highly impractical for a project that requires more flexibility from its regulators. One further advantage of linear voltage regulators is that they produce much less noise than switching voltage regulators. (101-106)
Switching voltage regulators are almost the opposite of linear regulators as far as advantages and disadvantages are concerned. Switching voltage regulators are more expensive than linear regulators, which may dissuade their use in projects. Although it is not favorable to pay a lot for a single regulator, if a project does not require many regulators the advantages of these voltage regulators far outweigh the price tag. Switching voltage regulators are highly efficient: anywhere from 80% to 95%. Since their efficiency is less dependent on the input voltage, these regulators can power loads from higher-voltage sources. This is important because the regulators can be used more flexibly with a wider variety of power supplies and voltage ranges. Because of their efficiency, they do not require as much cooling. This is great because spending a little more on a switching voltage regulator can save money by avoiding the purchase of an expensive heat sink; in a way, the cost balances itself out. Additionally, with this higher efficiency, battery life is less affected and will last longer. Switching voltage regulation circuits are more complex than linear voltage regulation circuits, which complicates the design. Fortunately, many companies sell switching voltage regulators as ICs (integrated circuits), taking the complexity out of the hands of the designer. These integrated circuits even come in a three-pin format, as linear regulators do. Unlike linear voltage regulators, switching voltage regulators can step the output voltage both up and down, which is something linear voltage regulators cannot do. Although more efficient, switching voltage regulators have higher noise than linear regulators: with higher switching frequencies come smaller inductors and capacitors, which means more noise in the circuit. Switching voltage regulators require a means to vary their output according to input and output voltage changes. This can be achieved with a pulse width modulator (PWM) controlling the rate at which the switch opens and closes; a constant output can then be maintained by adjusting to changes in the output voltage of the regulator. (101-106)
Voltage regulation plays a pivotal role in the design of the PCB because without it the one-battery design would not be possible. Many components on the motor control board work at different nominal voltages, so the need for voltage regulation is clear. There are so many different kinds of regulators that one must be careful when choosing a regulator for a project. Most importantly, one must identify what functions are required of the regulator; step-up, step-down, or both are the most common. Next, the type of regulator must be chosen: linear or switching. To drive the J-Bot, it has been decided that a switching step-up/down voltage regulator would be best. Although a linear step-down or a switching step-down regulator could be used, the switching step-up/down was chosen because it allows for greater flexibility when testing the board. Three voltage regulators were considered for this project: the Texas Instruments TPS71501, the Pololu S18V20ALV, and the Motorola MC34063AP1. The Motorola MC34063AP1 was chosen as the voltage regulator to be used on the board. (101-106)
The Texas Instruments TPS71501 is a linear step-down LDO, single-output voltage regulator. The TPS71501 is advantageous because it offers a high input voltage, low dropout voltage, low-power operation, low quiescent current, and small packaging. The TPS71501 operates from a 2.5V to 24V input voltage and can output from 1.2V to 15V. The TPS71501 has an adjustable output voltage, which is imperative for this project because the motors require different speeds to make turns; one side of the rover will need to be moving more slowly than the other to make turning possible, and this cannot be achieved with a fixed-output voltage regulator. The TPS71501 outputs a maximum of 0.05A (50mA) and has an output accuracy of 4%. This voltage regulator was not used because it is a linear regulator and the concern of wasted power came into play. Dissipating the heat produced by this regulator is a concern because space is limited on the J-Bot. In addition, battery life is an important factor in any project, and the PCB will be using a minimum of five regulators that would each waste too much power and produce too much heat. The design of the board is not low-power enough to justify the use of linear regulators and accept their inefficiencies. In addition, the maximum output current of the TPS71501 is too low for the application it is needed for. The motors on the J-Bot have a no-load current of 120mA and 160mA at 3V and 6V respectively, and a locked-rotor current of 1.5A and 2.8A respectively. The 0.05A maximum that the TPS71501 puts out is far too small to meet these values. (101-106)
Figure 3-6: The circuit diagram inside the TPS71501. (Reprinted with permission from Texas Instruments) (108)
The Pololu S18V20ALV is a step-up/down switching voltage regulator. The S18V20ALV is advantageous because it offers a 2.9V to 32V input voltage, outputs 4V to 12V, high efficiency (80%-90%), flexibility, low heat, small size, and good output current. The motor controller will be powered by a 14.8V supply, so the input range of the S18V20ALV is perfect. The motors can be operated from 3V to 12V depending on what speed is required of each one, and this is one reason the Pololu S18V20ALV was not chosen: its output voltage range of 4V to 12V is almost perfect, but the two nominal voltages of the motors, as suggested by the manufacturer, are 3V and 6V, and 3V is just outside the range of the S18V20ALV. Another reason the S18V20ALV was not chosen is its price; it is far more expensive than either of the other voltage regulators. Since the S18V20ALV is a switching voltage regulator, it has a high efficiency, which translates to good battery life and low heat. Low heat is important because it means no heat dissipation hardware is required, which would add to the complexity of the design and take up space, a precious commodity. While on the topic of space, the S18V20ALV is no bigger than a quarter! The official dimensions are 0.825″ × 1.7″ × 0.38″. Finally, the S18V20ALV has a good maximum output current, measured at 2A. It is safe to say that the Pololu S18V20ALV just barely misses the cut. It could be included in this project, but with all the options available in the world of ICs, settling for an almost-ideal part does not make much sense if the perfect one is available! (101-106)
The Motorola MC34063AP1 is a step-up/down, single-output, 8-pin switching voltage regulator. The MC34063AP1 is advantageous because of its input voltage range of 3V to 40V, output voltage range of 1.25V to 40V, high efficiency (87.7% at 12V in), precision, low heat, small size, and flexibility. As stated before, the motors need to be operated anywhere between 3V and 12V; the MC34063AP1 has a very wide output voltage range, making it a great choice for a regulator. The power source will supply 14.8V, which is fine because the input voltage range encompasses this voltage with ease. Since the MC34063AP1 is a switching voltage regulator, it is highly efficient: for example, 87.7% at 12V in, 83.7% at 25V in, 62.2% at 5V in, etc. Because it is a switching regulator, it will not experience high amounts of heat, and no heat sink will be required to cool the board. The MC34063AP1 output is accurate to within 2%, with typical values around 4%. The MC34063AP1 is very small; it is not bigger than a penny! The official dimensions are 10.16 x 6.6 x 3.44 mm. (101-106)
Figure 3-7: Circuit diagram for the Motorola MC34063AP1. (Permission
Pending) (109)
I.R.I.M. will use the 5V variant of the LP2950 linear regulator. Although it was previously stated that linear regulators are highly inefficient and can produce a lot of heat, it was decided that the LP2950 will be used to supply the logic voltage for the L293D H-bridge. In this case the rewards outweigh the risks because the LP2950 is very small and requires very little board space and few extra components. The heat problem does not arise here because the L293D logic does not require much current. The inefficiency is still an issue, but at the voltage and current needed by the L293D it does not pose much of a threat to the power system.
4. Design
4.1. Hardware Design
4.1.1. Robot Platform
The J-Bot must be organized to optimize space and make its use count. The inside of the body will house the four motors that come with the J-Bot kit and the battery holder that holds four 3.7V lithium-ion batteries. The first tier will hold the motor controller board, along with the battery pack and all other connections that will power the Microsoft Kinect. The second tier will hold the Kinect, surrounding it with four pegs around its base to hold it in place while the J-Bot moves around and turns.
4.1.2. Power Requirements
The power requirements of this project are different for each component. Data sheets will be an important resource, because parts must not be supplied with currents or voltages that they are not rated for. The MSP430 processor that will be used for the motor controller runs at 5V. All four motors will usually be operated at 6V each. Other parts, such as the proximity sensor, magnetometer, accelerometer, and encoder, require 3-5V. A 14.8V lithium-ion battery will power all of these. It is evident that voltage regulation will play a pivotal role in the design of the motor controller.
Voltage regulation will be taken care of by the Motorola MC34063AP1. This
switching voltage regulator will be used to step down the voltage from the power
supply to the processor. Additionally, the MC34063AP1 and the L293 will be used
to adjust the voltage given to the motors.
Although a one-battery design has been implemented to power the motors and the PCB motor controller, the Microsoft Kinect will have its own power supply because it requires 12V. Normally the Kinect's 12V comes from a wall socket connection. Here, the 12V will be supplied by the same setup as the power supply to the motor controller board: four 3.7V lithium-ion batteries placed in a battery holder that outputs 14.8V. Since the Kinect runs at 12V and 1.08A, the voltage and current will be regulated to match these values as closely as possible. Using the MC34063AP1 to regulate this should be possible because of its wide range of voltage and current outputs. Since the Kinect normally takes power from a wall socket, the cable will be cut after the AC-to-DC converter box and we will supply the 12V and 1.08A directly into the cable.
4.1.3. Computational Requirement
4.1.4. PCB Design
This is a project where data processing is essential. The first concern is the visual feed that will come from the Kinect. The Kinect provides four streams: three visual and one auditory. The auditory stream is not needed, so it will be discarded. The three visual streams are the image stream, the depth stream, and the skeletal stream. The skeletal stream is not needed, so we can exclude it. The image stream is purely aesthetic and holds no value in the navigation of the robot. The only stream that will be of use is the depth stream, which provides a 16-bit value per pixel that tells the computer how far that pixel is from the Kinect. The PCB will have all of the following modules on it: the Wi-Fi module, a communication module that links all other modules to the Wi-Fi module, a locomotion module that controls the motors and receives feedback about the motion from the robot and the robot's wheels, the voltage regulator, a power supply that supplies the raw voltage values to the multiple motor assemblies, and the Kinect. Although the computer is displayed, it is not on the actual PCB but rather is part of the high-level abstract representation.
One of the most fundamental parts of our project is the communication between the robot and the computer. In order to do this, we decided to pick one of three different methods. We could use RF signals and achieve a long distance without interference, but we would still have to follow the robot around; the connection would not be encrypted, the design of the data packets would be entirely up to us, and it adds hardware, since both the robot and the computer would need RF transceivers to communicate. Bluetooth was a candidate, since it has a well-established library of protocols and requires one less component than an RF transceiver, in that a Bluetooth dongle could be bought to communicate with the robot's Bluetooth transceiver. The only problem was that Bluetooth has a shorter range. Therefore, the final decision was to use Wi-Fi as our medium. Since our robot will be functioning indoors, and more specifically in a wireless-internet-rich area, it was the best option, especially since the protocols have already been defined and the libraries for them have already been developed. Our Wi-Fi module of choice is TI's CC3100, as shown below:
Figure 4.1.4.C: Schematic diagrams for the MSP430F5529 and CC3100 respectively (reprinted with permission from Texas Instruments) (113-114)
MSP430F5529 to CC3100 Pin Assignments
Pin Number    Assignment
7             SPI_CLK
18            SPI_CS
15            SPI_MOSI
14            SPI_MISO
Table 4.1.4.A MSP430F5529 to CC3100 Pin Assignments
The antenna circuit design that is recommended by Texas Instruments is as
follows:
Figure 4.1.4.D: Suggested Antenna Design for CC3000 by Texas Instruments
(reprinted with permission from Texas Instruments) (113-114)
Because we decided to use the Microsoft Kinect as our main sensor, the project requires an on-board computer capable of handling the Kinect's data; that computer, the Odroid XU3, is covered in Section 4.1.6.
To be able to determine where the robot needs to go, and whether it has gotten there in time, we need accelerometers and magnetometers. Our choice is the MPU-9150.
Figure 4.1.4.I: SparkFun 9 Degrees of Freedom Breakout - MPU-9150 (120)
This chip is capable of providing the accelerometer and magnetometer information as digital output that can be read by a processor. Below is the schematic diagram of the MPU-9150:
Figure 4.1.4.J: MPU-9150 diagrams (reprinted with permission from Texas
Instruments) (121-122)
MPU-9150 Pin Assignments
Pin Number    Assignment
1             GND
2             NC
3             VCC
4-7           NC
8-9           VCC
10-11         GND
12-13         VCC
14            NC
15            GND
16            NC
17-18         GND
19            NC
20            GND
21-22         NC
23            9
24            10
Table 4.1.4.C MPU-9150 to MSP430F5529 Pin Assignments
TBC assignments refer to the fact that those pins will be assigned to a later module. In particular, they will be assigned for motor control and motor feedback. It is necessary to be able to control the power state of the motor, the rotation direction of the motor, and the rate at which the motor spins. Each of these conditions requires its own piece of hardware. For the first condition, controlling the power state of the motor, we can use a transistor and a microcontroller. The microcontroller provides a discrete voltage to the motor through the transistor. The transistor acts like a switch, or to be more accurate, a gate: when the voltage is below the "turn-on voltage," the transistor will not allow current to flow into the motor. However, once the microcontroller sends a voltage that exceeds the turn-on voltage of the transistor, the "gate is opened" and current "floods" into the motor, providing a discrete on-off state for the motor. However, this is not enough. We must also be able to control the direction of rotation. In order to spin the motor backward we must apply the voltage in reverse, which can be accomplished with an H-bridge. The H-bridge allows forward and reverse rotation by applying a forward or reverse bias voltage. There is still the matter of speed control. The nature of a DC motor is that when a voltage is applied the motor spins, and once the voltage is removed the motor stops spinning. The longer the voltage is applied, the more rotations are achieved; the speed is the number of rotations per unit time. Each application of voltage can be called a "pulse." Microcontrollers such as the MSP430 are capable of generating these pulses with pulse-width modulation (PWM), which completes the motor control in its entirety (a sketch of this appears after the pin table below). The following diagram and table show one set of PCB connections between the MSP430F5529 and one of the H-bridges:
Figure 4.1.4.K L293D Schematic Diagrams (123)
L293D Pin Assignments
Pin Number       Assignment
1                40
2                34
3                MOTOR1
4,5,9,12,13      GND
6                MOTOR1
7                33
8                VCC1
10,11,14,15      NC
16               VCC2
It should be noted that one H-bridge is sufficient to control all four wheels. However, due to the unique nature of each DC motor, this is not encouraged.
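As referenced above, the following is a minimal PWM sketch for the speed-control step. It assumes an MSP430F5529 with Timer A0's CCR1 output routed to pin P1.2 and a roughly 1MHz SMCLK; pin routing and clock setup vary by board, so treat this as an illustration rather than the final motor-control firmware.

    #include <msp430.h>

    // Set up a ~1 kHz PWM signal on P1.2 (TA0.1) for motor speed control.
    void pwm_init(void) {
        P1DIR |= BIT2;             // P1.2 as output
        P1SEL |= BIT2;             // route P1.2 to Timer A0 CCR1 output
        TA0CCR0 = 1000 - 1;        // PWM period: 1000 SMCLK ticks (~1 kHz)
        TA0CCTL1 = OUTMOD_7;       // reset/set output mode
        TA0CCR1 = 500;             // start at 50% duty cycle
        TA0CTL = TASSEL_2 | MC_1;  // SMCLK source, up mode
    }

    // Duty cycle sets the average voltage the motor sees, and thus its speed.
    void motor_set_speed(unsigned int percent) {
        if (percent > 100) percent = 100;
        TA0CCR1 = (unsigned int)(((unsigned long)(TA0CCR0 + 1) * percent) / 100);
    }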
4.1.5. Motion
The J-Bot will move with a combination of autonomous and coordinated movements. The autonomous movement will come from the sensors on the J-Bot; for example, a proximity sensor will alert the robot that it is about to run into something, and the robot will stop moving before doing so. The coordinated movements will come from the computer the robot is communicating with: using accelerometer, encoder, and magnetometer information, a program will calculate where the robot should go next.
The IRIM can work in several different settings and environments. The usual environment will be some kind of room. As long as the floor can be navigated, the J-Bot rover will go around the room. The J-Bot rover will not be able to climb stairs, so if a certain area is inaccessible, that area will not be scanned beyond the depth range of the Microsoft Kinect, which is about 4m, or ~13ft. Any obstacles impeding the J-Bot rover's movement will be avoided using the 2D map made from the Kinect's 3D point cloud. The J-Bot's tires are made of rubber and should easily travel on most surfaces such as carpet, hardwood floors, tile, concrete, brick, etc. Since the Kinect's depth perception reaches only about 13ft, it will not be very useful in wide-open areas where there is nothing to make a point cloud of (not that you would want a map of an open area anyway). The IRIM works in most temperatures; no promises are made in extreme conditions (absolute zero, volcanoes).
Accuracy is a big issue in a project like this. The position of the robot rover and where it should be going are important. Small errors can accumulate into large ones over longer runs. For example, if the J-Bot is scanning a very large room, the errors may add up toward the end and the rover might encounter obstacles it should not be encountering. If the actual position of the robot and the position the computer thinks the robot is in differ too greatly, this will cause problems. This is why the magnetometer, accelerometer, and encoder are being used: with information from these modules, the J-Bot rover can keep track of its position and correct itself when deviating from its path. It is rare for a vehicle to move in a straight line or follow a path perfectly. Extensive testing will have to be done, and data must be recorded, to figure out the tendencies of the rover; for example, if the rover pulls to the right after a certain distance traveled, or the number of degrees deviated from its original intended heading passes a threshold, it will correct itself. Error can also occur when making complex movements. For the Kinect to scan optimally, the J-Bot will make a 360-degree turn at waypoints; if the turn is not exactly 360 degrees, errors arise, but the magnetometer will correct these mistakes. The goal is to achieve a 98% accuracy level. Data will be collected through testing to set the distance and angle thresholds for error detection and correction.
Turning will be an important aspect of moving the rover. As mentioned before, it will be easy to move the robot forward or backward; turning is a different story. To turn, the pair of motors on one side has to spin more slowly than the pair on the opposite side, and the greater the difference between the two, the sharper the turn. This means changing the voltage the regulators put out to the motors, so the voltage regulation for each side will need to be independent of the other.
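The relationship between the two side speeds and turn sharpness is the standard differential-drive equation. The sketch below computes per-side speeds from a desired forward speed and turn curvature; the 0.15m track width is an assumed placeholder, and the resulting speeds would still have to be mapped to motor voltages or PWM duty cycles.

    // Differential-drive speed split: for yaw rate w = v * curvature, the
    // inner side slows by w*track/2 and the outer side speeds up by the same.
    struct WheelSpeeds { double left; double right; };

    // v: forward speed (m/s); curvature: 1 / turn radius (1/m);
    // track: wheel separation (m, assumed value for this sketch)
    WheelSpeeds differentialSpeeds(double v, double curvature, double track = 0.15) {
        double w = v * curvature;          // yaw rate for this curvature
        return { v - w * track / 2.0,      // left side (inner for a left turn)
                 v + w * track / 2.0 };    // right side
    }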
As far as speed is concerned, it will not be an important factor; we will not be drag racing with the J-Bot rover. The J-Bot only needs to move fast enough not to be boring to look at. What is important is that the motors provide enough torque to move the weight on the rover. The motors will usually be operated at 6V, which gives them a no-load speed of 200 RPM; at 3V the no-load speed is 100 RPM. The J-Bot should not be moving that quickly anyway, because doing so could degrade the Kinect's ability to create a point cloud. In addition, if the J-Bot takes off too quickly, or is moving too fast and comes to a sudden stop, it could knock the Kinect out of its holding place. This would not be good at all.
4.1.6. On Board Computer
For our project, we decided to use the Microsoft Kinect as our main sensor. In order for the Kinect to function, we need an operating system with a compatible driver and a small but reasonably high-performance computer to handle the data generated by the Kinect. After testing different small boards such as the Beaglebone Black, Odroid C1, and Odroid XU3, we decided to go with the Odroid XU3 (Figure 4.1.6.a).
Figure 4.1.6.a – Odroid XU3
The Odroid XU3 is a small yet powerful computer with two processors: a quad-core Samsung Exynos 5422 Cortex™-A15 at 2.0GHz and a quad-core Cortex™-A7. With 2GB of LPDDR3 RAM at 933MHz, an Ethernet port, and four USB 2.0 ports, the Odroid XU3 was not only able to process the point cloud data generated by the Kinect and send it over the network, it was also capable of performing the conversion from Kinect depth image to laser scan, allowing the robot to be more responsive to obstacles.
4.2. Software Design
4.2.1. Software Design Overview
The software is the brain of the project; thus the software design is an important factor that determines the complexity, reusability, and extensibility of the whole project. The following figure (Figure 4.2.1.a – Software Overview Diagram) describes the general design of the high-level software components (those with little or no direct interaction with hardware devices).
In Figure 4.2.1.a, the topmost ovals are the list of expected inputs from device drivers, simulated data, or previously collected data. The boxes are the subsystems that handle specific tasks. The directed arrows represent the flow of data from drivers or one subsystem to another. For 3D scanning data, the input goes through a point cloud multiplexer to be converted into point cloud data that can be used for mapping and navigation. This is also the point where different settings determine how the data is handled and processed. The dead reckoning subsystem, as the name implies, uses feedback from the motor encoders and data from the inertial measurement unit (IMU) to calculate the robot's displacement, and thus determines the current location and rotation of the robot in the environment. Robot Operating System (ROS) is a set of open-source software libraries that help handle the data; specifically for this project, ROS is used to handle resource sharing and threading and to increase the modularity of the project. The mapping subsystem attempts to map the environment given the point cloud data, the robot orientation, and possibly landmarks from images. This subsystem may use simultaneous localization and mapping (SLAM) to create a map as the robot moves within the environment. The navigation subsystem takes a map produced by the mapping subsystem and runs a pathfinding algorithm to find the shortest available path to the destination; it needs to take into account the size of the robot, its turning radius, and obstacles. Some common algorithms being considered include iterative deepening, A*, and D*. Finally, the communication subsystem mainly handles sending commands to the robot and receiving the sensor data from the robot. Currently there are two approaches being considered: one is to convert the commands directly into byte code and send them over the network; the other is to divide each command into sub-commands and have corresponding drivers convert the sub-commands to byte code and forward them to the system transmitting the signal. (128-130)
Figure 4.2.1.a – Software Overview Diagram. (The diagram shows indoor scanning data, outdoor scanning data, collected data, encoder data, IMU data, and a camera as inputs. A point cloud multiplexer merges data from multiple inputs, switches input sources depending on environment conditions, and publishes point cloud data; dead reckoning uses the motor encoders and IMU to calculate the robot position and publishes the robot orientation vector. Point cloud data, images, and the orientation vector feed the mapping subsystem, which produces the environment map; given the map, the navigation subsystem navigates through the environment and issues robot commands, which the communication subsystem sends to the robot platform over the network, via Bluetooth or wireless.)
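Of the pathfinding algorithms named above, A* is the most likely workhorse. The following is a hedged sketch of grid-based A* over the kind of 2D occupancy grid the mapping subsystem would produce (4-connected cells, Manhattan-distance heuristic). It returns only the path cost; a real navigation subsystem would also recover the path and account for the robot's footprint and turning radius.

    #include <climits>
    #include <cstdlib>
    #include <queue>
    #include <vector>

    struct Node { int f, g, x, y; };  // f = g + heuristic, g = cost so far
    struct Cmp { bool operator()(const Node& a, const Node& b) const { return a.f > b.f; } };

    // grid: 0 = free cell, 1 = obstacle. Returns step count to goal, or -1.
    int astar(const std::vector<std::vector<int>>& grid,
              int sx, int sy, int gx, int gy) {
      const int h = grid.size(), w = grid[0].size();
      auto heur = [&](int x, int y) { return std::abs(x - gx) + std::abs(y - gy); };
      std::vector<std::vector<int>> best(h, std::vector<int>(w, INT_MAX));
      std::priority_queue<Node, std::vector<Node>, Cmp> open;
      open.push({heur(sx, sy), 0, sx, sy});
      best[sy][sx] = 0;
      const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
      while (!open.empty()) {
        Node n = open.top(); open.pop();
        if (n.x == gx && n.y == gy) return n.g;   // reached the goal
        if (n.g > best[n.y][n.x]) continue;       // stale queue entry
        for (int i = 0; i < 4; ++i) {
          int nx = n.x + dx[i], ny = n.y + dy[i];
          if (nx < 0 || ny < 0 || nx >= w || ny >= h || grid[ny][nx]) continue;
          int ng = n.g + 1;
          if (ng < best[ny][nx]) {                // found a cheaper route
            best[ny][nx] = ng;
            open.push({ng + heur(nx, ny), ng, nx, ny});
          }
        }
      }
      return -1;                                  // goal unreachable
    }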
4.2.2. Robot Operating System (ROS)
For this project, aside from being a functional robot, extensibility and exchangeability are essential. Robot Operating System is a set of open-source libraries that helps speed up the process of building a functional robot. Specifically, ROS abstracts away threading, allowing the software to function more efficiently and independently. ROS also supports sharing resources between different processes, which typically reduces memory usage. (128)
The software component of the project will be based on ROS's publish-subscribe architecture. A simple way to describe it: in this project, ROS acts as a blackboard where data can be published by multiple different processes, and at the same time the same data can be subscribed to by multiple processes. (128)
In Figure 4.2.2.a, a piece of data can be subscribed to by multiple processes even when there is no process publishing it. In this event, no actual data will be processed until some process publishes the data. When the data is received, ROS triggers the callback functions in the subscribing processes; the whole system is therefore event-driven rather than polling.
In Figure 4.2.2.b, a piece of data can be published by multiple processes. In this case, ROS can be configured to maintain a queue of a certain size, reducing the problems of process collision and synchronization between multiple processes.
Figure 4.2.2.a – Multiple Subscribers (several processes subscribed to one "Point Cloud Data" topic)
Figure 4.2.2.b – Multiple Publishers (several processes publishing "Robot Commands" through a queue)
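Below is a minimal roscpp sketch of this publish-subscribe pattern, using the same "/chatter" string topic as the terminal demos that follow; the node name, queue size of 10, and 1Hz rate are illustrative choices, not project-mandated values.

    #include <ros/ros.h>
    #include <std_msgs/String.h>

    // Callback fires only when data arrives on the topic (event driven).
    void chatterCallback(const std_msgs::String::ConstPtr& msg) {
      ROS_INFO("heard: %s", msg->data.c_str());
    }

    int main(int argc, char** argv) {
      ros::init(argc, argv, "chatter_demo");
      ros::NodeHandle nh;

      // The queue (size 10 here) buffers messages when publishers outpace subscribers.
      ros::Publisher pub = nh.advertise<std_msgs::String>("/chatter", 10);
      ros::Subscriber sub = nh.subscribe("/chatter", 10, chatterCallback);

      ros::Rate rate(1);  // publish once per second
      while (ros::ok()) {
        std_msgs::String msg;
        msg.data = "hello";
        pub.publish(msg);
        ros::spinOnce();  // dispatch any pending callbacks
        rate.sleep();
      }
      return 0;
    }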
Figure 4.2.2.c is a demonstration of two processes subscribed to a topic called "/chatter," which carries a string. Since there is no process publishing the data, no actual data is received. ROS, however, provides a tool called "rqt_graph" that allows the connections between processes and topics to be monitored.
Figure 4.2.2.d is a demonstration of two processes publishing to the same topic, "/chatter." Again, since there is no process subscribed to the data, no actual data is processed.
Figure 4.2.2.c – Multiple Subscribers demo on Terminal
Figure 4.2.2.d – Multiple Publishers demo on Terminal
Figure 4.2.2.e is the combination of two processes publishing to the same topic and two processes subscribing to it. In this case, the two publishing processes are simulated to send data at different rates from each other; this is similar to a real-life scenario where different sensors have different output rates, or where the result from one process is needed more urgently than others. As a result, the different publish/subscribe rates allow developers to prioritize resources for important processes rather than distributing computational power evenly among all processes.
Figure 4.2.2.e – Multiple Publishers and Subscribers with different Publish rate
Furthermore, ROS also has a large number of open-source packages written by the community, which helps reduce the complexity of interacting with complex algorithms. An example of a useful package is the openni_launch package, which accesses the Kinect data through the OpenNI library and provides ROS-compatible nodes and topics.
Figure 4.2.2.f is the overview diagram of the ROS components, their relationships, and how they interact with each other. The diagram is separated into two areas: one contains the nodes that run on the stationary computer/laptop, and the other contains the nodes and software that run on the mobile robot platform. The rectangular boxes represent ROS nodes, which can publish or subscribe to topic(s). The oval shapes represent the topics that can be published or subscribed to by node(s).
Starting from the right of Figure 4.2.2.f, the Odroid XU3, which is mounted on the robot platform and connected to the Kinect, runs the OpenNI node that publishes the point cloud data, which is subscribed to by Octomap (the 3D mapping node) to build the 3D map of the environment. At the same time, OpenNI also provides the depth image for the Depth_Image_to_Laserscan node, a node that uses the depth image to create data similar to that obtained from expensive LIDAR devices. The laser scan data is much smaller in size and thus can be transferred over the network much more quickly and consistently than the point cloud data. The result of this setup is that the robot is more responsive to obstacles, reducing the risk of collision.
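To illustrate the depth-image-to-laser-scan idea (this is not the actual node's implementation), the sketch below converts one row of a 16-bit Kinect depth image into angle/range pairs. The 57-degree horizontal field of view and millimeter depth units are assumptions about the Kinect; a real conversion would also choose the scan row carefully and cope with sensor noise.

    #include <cmath>
    #include <cstdint>
    #include <vector>

    struct ScanPoint { float angle_rad; float range_m; };

    // depth_row: one row of a depth image, values in millimeters, 0 = no reading.
    std::vector<ScanPoint> rowToScan(const uint16_t* depth_row, int width) {
      const float fov = 57.0f * 3.14159265f / 180.0f;  // assumed horizontal FOV
      std::vector<ScanPoint> scan;
      scan.reserve(width);
      for (int u = 0; u < width; ++u) {
        if (depth_row[u] == 0) continue;               // no depth at this pixel
        float angle = (u - width / 2) * (fov / width); // ray angle for column u
        float z = depth_row[u] / 1000.0f;              // forward distance in meters
        scan.push_back({angle, z / std::cos(angle)});  // range along the ray
      }
      return scan;
    }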
On the stationary computer, one of the main nodes is Octomap, which takes the point cloud data and the transform coordinates of the sensor to create a 3D map. The Gmapping node is responsible for taking the laser scan data and building a 2D map, which can be used for navigation as well as allowing the Frontier Exploration node to determine unknown areas. Frontier Exploration uses the map to search for unexplored areas and creates a goal so the robot will move toward them to explore. The most important node of the project is the navigation node; it takes in the map data provided by Gmapping, the goal (either from the user or from another node such as Frontier Exploration), the transform coordinates of the robot and sensor, and the feedback from the robot. The output of the navigation node is the velocity at which the robot should move. The robot, however, does not need to strictly follow the exact velocity; instead, the velocity is more or less a suggestion about which direction the robot should move. It is the feedback from the robot that allows the navigation node to accurately guide the robot. The command velocity from the navigation node is read by the Robot Communication node, which translates it into robot commands and sends them over the network so the robot will move. The robot receives the commands and moves, at the same time sending feedback to the stationary computer reporting whether it actually moved and how fast it is moving. The Robot Communication node, the same node that sends the commands to the robot, receives this feedback. The feedback velocity is then used by the velocity-to-odometry node, which converts the velocity data to odometry data while updating the transform coordinates.
Figure 4.2.2.f – ROS nodes overview. (On the robot side, the range sensor produces a point cloud and a depth image; the Depth Image to Laser Scan node converts the depth image into a laser scan. On the stationary computer, Octomap consumes the point cloud and transforms for 3D mapping, while Gmapping consumes the laser scan to build the map used, together with user input and a boundary, by the Frontier Exploration node to produce the goal for the Navigation Stack. The Navigation Stack outputs a command velocity to the Robot Communication node, which exchanges commands and feedback velocity with the robot platform; the feedback velocity is converted by the Velocity to Odometry node into feedback odometry and updated transforms.)
4.2.3. Point Cloud Library (PCL)
The goal of the project is not just completing the robot and navigation system, but also allowing users and developers to exchange and extend the devices depending on their specific needs. Thus, even though this project was designed as an indoor robot, it is important that the software allow users/developers to switch out the indoor scanning device (Kinect) for a more powerful outdoor scanning device (LIDAR). (129)
Therefore, this project uses point clouds to store the depth information from the scanning devices. By converting the scanning data into point cloud data, the project abstracts the processing layer away from device specifics; any process that requires scanning data can work independently, with simulated or previously collected data, without an actual scanning device present. (129)
The point cloud abstraction not only allows the scanning devices to be exchanged, but also allows multiple devices to work in conjunction at the same time. The following figure (Figure 4.2.3.a) shows the general idea of point cloud abstraction with multiple functioning scanning devices.
Figure 4.2.3.a – Point cloud application for the project. (Collected point cloud data, an indoor scanning device, and an outdoor scanning device all feed a point cloud multiplexer that merges data from multiple inputs, switches the input source depending on environment conditions, and publishes the point cloud output for use by other processes.)
As the figure above (Figure 4.2.3.a) shows, depending on the configuration, the system should allow different choices of how to handle input from multiple devices. For example, the robot can use a high-power outdoor scanning device while outside and then switch to a lower-power indoor scanning device when inside, or simply reduce the data collection rate of the outdoor scanning device while inside.
Since the robot platform is small, there is a limit to how much processing power and memory it can have, so it is important to note that the point cloud data consumes more and more memory as the explored area grows. The software needs ways to reduce the amount of point cloud data, such as filtering by area of interest (only keeping the point cloud of a certain area or height), flattening the data (setting the height of all points onto the same plane), density reduction (limiting the density of points in an area), point absorption (new points that are close to existing points will not be remembered), and polygon representation (polygons built from the point cloud that represent the surface).
Filtering by area of interest is a basic technique of computer vision. When a robot receives an image feed from a camera, it is a common technique to filter out undesired colors or isolate a specific color. The same concept can be applied to point clouds: by specifying a certain area relative to the 3D sensor (the origin of the point cloud), the number of points can be significantly reduced.
Figure 4.2.3.b demonstrates a simple filter using the height of the points. The technique is simple to implement, computationally fast, and very effective for navigation and mapping. For this project, the robot's height is about one foot, so it is very reasonable to filter out any point higher than one foot. The filtered point cloud is not only small in memory but also provides a side benefit: the robot can now move under overhanging obstacles such as tables, chairs, or horizontal bars.
Also, in this project the robot is meant to work within a building or another place with a flat floor, so it can take advantage of this and also remove the points around the plane of the floor, as displayed in Figure 4.2.3.c. Note that in Figure 4.2.3.c some points lower than the floor plane were not removed. It is important to keep such points because they indicate an area where the ground is lower than the acceptable level; when such a situation is encountered, the points can be treated like any point above the floor plane and marked as obstacles.
Figure 4.2.3.b – Point Cloud Filtering using Height
Figure 4.2.3.c – Point Cloud Filtering using Height, and Floor Removal
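With the Point Cloud Library, the height-of-interest filter above is a single pass-through filter. The sketch below keeps points from below the floor (so holes and drop-offs survive as obstacles) up to the robot's height; both limits are assumed placeholder values, and the floor-plane removal of Figure 4.2.3.c would be a further step, for example via plane segmentation.

    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl/filters/passthrough.h>

    // Keep only points the robot could collide with, assuming z is "up".
    pcl::PointCloud<pcl::PointXYZ>::Ptr
    filterByHeight(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud) {
      pcl::PointCloud<pcl::PointXYZ>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZ>);
      pcl::PassThrough<pcl::PointXYZ> pass;
      pass.setInputCloud(cloud);
      pass.setFilterFieldName("z");
      pass.setFilterLimits(-1.0f, 0.30f); // assumed: keep drop-offs, cut above ~0.3 m
      pass.filter(*filtered);
      return filtered;
    }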
For a simple robot such as this one, there is no value in knowing whether a wall has a hole in it, or whether one object is stacked on another, so there is an option to flatten the point cloud to a single plane so it can easily be used for navigation and mapping. Figure 4.2.3.d shows how a filtered point cloud is flattened onto a plane; call it the obstacle plane. The flattening can be done at the same time as the filtering, so it is relatively cheap computationally. While this technique is good for navigation and mapping, it also reduces the detail of the scanned data; as mentioned before, it is only useful for a simple robot that is not required to interact with the environment.
Figure 4.2.3.d – Filtered Point Cloud Flattened onto the Obstacle Plane
Filtering the point cloud is fast and simple, but when multiple point clouds are stored rather than disposed of, over time they overlap each other. Many point clouds held together can take a lot of memory, resulting in a performance drop, so it is important to resolve this issue. One available option is to reduce the points in an area to a specific amount; in short, density limiting. Figure 4.2.3.e shows how a dense point cloud can be reduced to save memory. The concept of density reduction is that, within a group of points that are too close to each other, only one is kept and the others are removed. This generally retains the integrity of the point cloud while achieving the goal of saving memory. The technique by nature trades the fineness of the point cloud data for memory savings. It is analogous to image resolution: a high-resolution image looks sharp and clear but is very large, while a low-resolution image is smaller but blurry and lacking in detail. Also note that the point cloud represents points in 3D space, so computing the distance between points, and checking for close points around a given point, is not very cheap computationally.
Figure 4.2.3.e – Overlap Point Cloud (Left) Density Reduction (Right)
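In PCL, this kind of density limiting is commonly done with a voxel grid, which collapses all points inside each small cube to a single representative point. A minimal sketch, assuming a 5cm voxel is detailed enough for mapping and navigation:

    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl/filters/voxel_grid.h>

    pcl::PointCloud<pcl::PointXYZ>::Ptr
    reduceDensity(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud) {
      pcl::PointCloud<pcl::PointXYZ>::Ptr reduced(new pcl::PointCloud<pcl::PointXYZ>);
      pcl::VoxelGrid<pcl::PointXYZ> grid;
      grid.setInputCloud(cloud);
      grid.setLeafSize(0.05f, 0.05f, 0.05f); // 5 cm voxels: nearby points collapse to one
      grid.filter(*reduced);
      return reduced;
    }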
While density reduction is good for saving memory, it costs a lot of computation as the point cloud grows. So, rather than reducing the density of the entire point cloud stack, it is more reasonable to perform the reduction when a new set of points is received. For each point in the new point cloud, simply check whether there is already any point nearby; if there is, either do not add the new point to the existing cloud, or remove the detected points and add the new one in their place. Figure 4.2.3.f shows the reduction of the point cloud when a new point cloud is received: a new point is not added if an existing point is detected nearby. In Figure 4.2.3.f, the left side shows the old point cloud (black) and the new point cloud (white). After reduction (right), only points that have no close neighbors are added (grey points).
Figure 4.2.3.f – (Left) New Point Cloud (White) Overlapping Old Point Cloud (Black); (Right) Remaining Points (Grey) that are Added
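One way to implement this point absorption with PCL is a k-d tree radius search against the existing map. A hedged sketch, assuming a 5cm absorption radius; in this simplified version new points are only checked against the old map, not against each other:

    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl/kdtree/kdtree_flann.h>
    #include <vector>

    void absorbNewPoints(pcl::PointCloud<pcl::PointXYZ>::Ptr& map_cloud,
                         const pcl::PointCloud<pcl::PointXYZ>& new_cloud) {
      pcl::KdTreeFLANN<pcl::PointXYZ> tree;
      tree.setInputCloud(map_cloud);       // index the existing map once
      std::vector<int> idx;
      std::vector<float> sq_dist;
      for (const auto& p : new_cloud.points) {
        // keep the new point only if no existing point lies within 5 cm
        if (tree.radiusSearch(p, 0.05, idx, sq_dist, 1) == 0) {
          map_cloud->push_back(p);
        }
      }
    }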
All of the memory-saving methods mentioned share the same problem: as the robot explores, the size of the stored point cloud keeps growing with the explored environment and will eventually limit the robot's area of operation. Moreover, most of the methods are tailored to the robot's navigation frame of reference, so an area the robot can pass through may not be passable for a human, and an area a human can maneuver through could be impossible for the robot to recognize. Such limitations are easily noticed in scout drones or search-and-rescue robots. One potential solution is a concept already used in the video game industry: polygon representation. In a game there exist many objects, and a character can use ray casting to determine the distance to an object. A robot is different from a game character; however, if the points can be clustered together and used to create a polygon that represents the object, a great amount of memory can be saved. The main idea can be understood in two stages: first, create a polygon from the known points; second, determine whether new points belong to the object.
The concept of creating polygons from point clouds is well known in the modeling and simulation field. It often takes a lot of computation, however, and is thus not very practical for a robot operating in real time. A fast and simple way to create a polygon from known points is by connecting the points that are close to each other. Figure 4.2.3.g demonstrates the concept of creating a polygon by connecting close points from a newly received point cloud. On the left of Figure 4.2.3.g is a simple connection scheme where each point is connected to only the two points closest to it; the last point, which cannot find any closer point, reconnects to the starting point. This scheme is less refined and may miss some important points, but it is relatively fast and simpler than the highly connected scheme on the right of Figure 4.2.3.g. Notice that some points are not included in the polygon because they are too far away from the others; they could either be left as they are for a couple of scans, since there is a possibility they will be included in a later scan, or they could simply be regarded as noise and removed. The goal here is that a cluster of points can now be represented as an object using a much smaller group of points, greatly reducing memory usage.
Figure 4.2.3.g – (Left) Simple Connection Polygon
(Right) Highly Connected Polygon
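A minimal sketch of the simple connection scheme is shown below: starting from an arbitrary point, each step greedily chains to the nearest remaining point, and the last vertex implicitly closes the loop back to the start. The 2D point type and the omission of a distance threshold for rejecting far-away (noise) points are simplifications for illustration.

#include <vector>
#include <cstddef>

struct Pt { float x, y; };

// Order the points into a polygon by repeatedly taking the nearest unused point.
std::vector<Pt> chainPolygon(std::vector<Pt> pts)
{
    std::vector<Pt> poly;
    if (pts.empty()) return poly;
    poly.push_back(pts.back());  // arbitrary starting point
    pts.pop_back();
    while (!pts.empty()) {
        const Pt& cur = poly.back();
        std::size_t best = 0;
        float bestD = 1e30f;
        for (std::size_t i = 0; i < pts.size(); ++i) {  // nearest remaining point
            float dx = pts[i].x - cur.x, dy = pts[i].y - cur.y;
            float d = dx * dx + dy * dy;                // squared distance suffices
            if (d < bestD) { bestD = d; best = i; }
        }
        poly.push_back(pts[best]);
        pts.erase(pts.begin() + best);
    }
    // The final vertex reconnects to poly.front() to close the polygon.
    return poly;
}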
Once the objects have been created, every time a new point cloud is received it is tested against the existing objects. Points that collide with an existing object are removed. The remaining points are then connected together to create a new object, as shown in (Figure 4.2.3.h). Notice that some of the remaining points can create a small, insignificant object, like the one made of three grey points in (Figure 4.2.3.h); it is typically reasonable to remove an object that was created from such a small number of points.
Figure 4.2.3.h – New Point Cloud absorbed by Objects
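The collision test itself can reuse the ray-casting idea mentioned earlier. Below is a minimal 2D sketch using the standard even-odd rule: a point whose horizontal ray crosses the polygon's edges an odd number of times lies inside the object and would be absorbed. Names and types are illustrative.

#include <vector>
#include <cstddef>

struct Pt2 { float x, y; };

// Even-odd ray casting: returns true if p lies inside the polygon.
bool insidePolygon(const std::vector<Pt2>& poly, Pt2 p)
{
    bool inside = false;
    for (std::size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        // Does a horizontal ray from p cross the edge (j -> i)?
        bool crosses = ((poly[i].y > p.y) != (poly[j].y > p.y)) &&
                       (p.x < (poly[j].x - poly[i].x) * (p.y - poly[i].y) /
                              (poly[j].y - poly[i].y) + poly[i].x);
        if (crosses) inside = !inside;  // odd number of crossings = inside
    }
    return inside;  // inside points are absorbed by the object and discarded
}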
All the point cloud figures in this section (4.2.3.) are represented as 2D data only as a proof of concept. The actual point cloud data will be stored and processed in 3D space, and is thus much more complex and computationally intensive. Also note that point clouds received from a 3D scanner may contain noise from the device, shock from the movement of the robot, as well as false data due to inaccurate robot orientation. These problems will also need to be resolved for the data to be displayed correctly.
4.2.4. 3D Display
While there is no need to display the point cloud data in order to construct a navigation map or to navigate the environment, the image conveys a great deal of information to a human. It is therefore very important for a human to be able to observe the environment collected by the robot and analyze the information from the robot's point of view.
It is also important to note that I/O operations are slow and computationally intensive; 3D data display will therefore be limited to high-end computers or to the development process rather than being mandatory, especially since a human can make sense of most situations with just the 2D image from the camera.
(Figure 4.2.4.a) shows the accumulated point cloud of the environment over one revolution around the robot. In this image, the point clouds simply overlap each other and slowly disappear over time, so as the robot moves to another point, it cannot remember what it has seen. On the other hand, if the point cloud were allowed to remain forever without any algorithm to resolve the stacking of points on the same spot, system memory would run out and the program would either freeze or crash. A correct 3D display of the environment should filter out the overlapping points so that only a single set of points remains on the surface of each object. With only a single set of points covering an area, more memory is spared for a bigger environment, allowing the robot to remember an object's location even when it can no longer see the object.
Figure 4.2.4.a – One Revolution of Point Cloud around the Robot in
Simulated Environment
(Figure 4.2.4.a) was produced using the configuration parameters from the turtlebot library, an open-source library built for the TurtleBot robot, run in the simulation software Gazebo and displayed using the ROS-integrated viewer rviz. Gazebo is a great tool for simulating the environment, but its learning curve is steep. Rviz is the software for viewing ROS topics and is good for developers; it is, however, unnecessary for end users, who are not supposed to configure the parameters of the ROS system.
4.2.5. Simultaneous Localization and Mapping (SLAM)
As the robot moves through the world, it is important to know the robot's location within the environment, as well as the map of the environment, so that the robot can navigate efficiently. Outdoor robots are often equipped with a high-accuracy Attitude and Heading Reference System (AHRS), typically consisting of GPS, gyroscopes, accelerometers, and a magnetometer; localizing an outdoor robot is therefore relatively easier than an indoor robot, and from there it can perform mapping with fewer problems. Since this project features an indoor robot, localization and mapping is a very difficult problem: for a robot to know where it is within the environment, it needs an accurate map; on the other hand, for a robot to produce an accurate map, it needs to know where it is within the environment. Localization and mapping strongly depend on each other, hence "simultaneous." In this project, the robot will use gMapping, an OpenSLAM package integrated into ROS. gMapping provides a solution for localization and mapping by using a local map and a global map. As the robot moves around the environment, it records a set of data about the environment and stores it in the local map; then, by comparing the local map with the global map, it can estimate the probability of its location and merge the local map into the global map. (130)
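For reference, a minimal way to exercise gMapping under ROS is the slam_gmapping node (e.g. "rosrun gmapping slam_gmapping scan:=scan", assuming a laser-style scan is published on the scan topic and odometry is available through tf); the occupancy grid it publishes on the map topic can then be viewed in rviz.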
(Figure 4.2.5.a) shows the map produced by gMapping in a simulated environment, in which the objects are mostly static. The real world, however, is very dynamic: what was there before may no longer be there the next time the robot looks. The robot not only needs to map objects into the global map, but also needs to know how to remove an object from the map if it is no longer there. It is also important to note that the real world will not produce perfect scanning data, whether because of noise produced by the devices, people who happen to walk by, or dynamic objects in the environment.
Figure 4.2.5.a – gMapping in Simulated Environment.
Another problem with gMapping is that it compares the scanning data against a 2D map, which means it does not make full use of the environment's landmarks and structure. (Figure 4.2.5.b) shows a simple pair of local and global maps. From the local map alone, it is difficult to determine where the local map lies within the global map. The effect is especially severe in environments where multiple places in the global map share the same structure, as in (Figure 4.2.5.c). gMapping then fails to make use of landmarks, colors, signs, and other features that might help identify the location.
Figure 4.2.5.b – A pair of local map and global map
Figure 4.2.5.c – A pair of local map and global map (multiple places with
similar structure)
4.2.6. Navigation
In the modern world, as seen in many video games and simulations, navigation may appear to be a rather simple problem to solve. That is not entirely correct: in a simulation, the world is known, the map is perfectly accurate, the objects are easy to recognize, and there is no noise in the scanning. For a robot, however, navigation remains a challenge. There is no guarantee that the robot's movements will always be perfect; just because the robot is trying to go straight does not mean it will physically travel a perfectly straight line. In addition, the world beyond the robot's line of sight remains dynamic, so the original plan may no longer be correct as the robot executes it. As a result, it is important to make sure the robot is capable of dynamically changing the plan when unexpected obstacles appear within the planned path.
Some well-known path-finding algorithms are A*, D*, Dijkstra, and the wavefront planner. A* quickly finds a solution on a known map and is relatively simple to implement; it is the standard path-finding algorithm of the video game industry, where the environment is always known. D* handles dynamic environments that change often; its incremental replanning is very efficient compared to recomputing the plan from scratch, so it is also used in many robots, though the algorithm is more complex and harder to implement. Dijkstra's algorithm can plan shortest paths not just to a single target but to multiple targets, and it can also compute multiple paths to the same target, so if one plan fails there is always the option of using another. The wavefront planner, a well-known approach, is best for a known map, a static goal, and a less dynamic environment, since it provides the shortest path to the goal from any location; when the area is unknown and the world can change quickly, however, the wavefront planner is expensive. For this project, it would be beneficial to implement different path-finding algorithms for different situations: when the robot first receives the waypoint to a destination, it would use A* to quickly compute a path; if the world is changing, it would then switch to D*.
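To make the choice concrete, the following is a minimal sketch of A* on a 2D occupancy grid, with 4-connected cells, unit step costs, and a Manhattan-distance heuristic; all names and parameters are illustrative, not the project's final planner.

#include <vector>
#include <queue>
#include <cstdlib>

struct Node { int x, y; float f; };
struct ByF { bool operator()(const Node& a, const Node& b) const { return a.f > b.f; } };

// grid[y][x] == true means the cell is blocked.
// Returns the cost of the shortest path, or -1 if the goal is unreachable.
float astar(const std::vector<std::vector<bool>>& grid,
            int sx, int sy, int gx, int gy)
{
    const int H = grid.size(), W = grid[0].size();
    std::vector<std::vector<float>> g(H, std::vector<float>(W, 1e9f));
    auto h = [&](int x, int y) { return std::abs(x - gx) + std::abs(y - gy); };
    std::priority_queue<Node, std::vector<Node>, ByF> open;
    g[sy][sx] = 0.0f;
    open.push({sx, sy, (float)h(sx, sy)});
    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    while (!open.empty()) {
        Node n = open.top(); open.pop();
        if (n.x == gx && n.y == gy) return g[n.y][n.x];  // goal reached
        for (int i = 0; i < 4; ++i) {
            int nx = n.x + dx[i], ny = n.y + dy[i];
            if (nx < 0 || ny < 0 || nx >= W || ny >= H || grid[ny][nx]) continue;
            float ng = g[n.y][n.x] + 1.0f;               // unit step cost
            if (ng < g[ny][nx]) {                        // found a cheaper path
                g[ny][nx] = ng;
                open.push({nx, ny, ng + h(nx, ny)});     // f = g + heuristic
            }
        }
    }
    return -1.0f;  // no path exists
}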
4.2.7. Autonomous System
A robot that can navigate through the environment is nice, but how does it decide on a destination? Whether the robot is in fully autonomous mode, semi-autonomous mode, teleoperation, or under remote control by a human, there is a need to implement special behaviors that prevent the robot from damaging human property or itself. Other mechanisms, such as fail-safes, energy saving, or power reporting, are essential for a robot as well. Still, sometimes a human may judge that the robot needs to perform an action that could damage property or the robot itself, so there is also a need to allow a human to manually override any preset behavior as they see fit. There are also cases where the robot may receive false data due to device error, environmental conditions, or interference from an external source; such situations must also be resolved in some way. Thus, generally, the robot requires some form of autonomous system, or a more complex intelligence system.
There are many ways to implement an autonomous system. One of the most common is the finite state machine, which is used widely in the video game industry, in the robotics field, and in commercial smart products. A finite state machine offers a fixed pattern of behaviors, where a behavior is executed once certain conditions are triggered. The behavior within a finite state machine is predictable and easy to test, debug, and control. For most robots in general, and this project in particular, it is a good idea to implement a finite state machine at the very bottom level, for behaviors such as stopping when an object is detected or sleeping when no command is received. The downside of a finite state machine is that the robot is fixed to a known pattern of behavior and cannot act outside of the preset conditions. Thus, when it encounters unknown conditions, or when the developers fail to account for all possible situations, the robot may start to behave strangely and sometimes dangerously, such as running over glass or falling off a table.
Another approach, currently being studied for robots, is reinforcement learning. Used in a few video games and some research robots, reinforcement learning is much harder to predict and more difficult to test, debug, and reproduce, since the trigger conditions for a certain behavior may be hard to reproduce, or may be nothing more than noise from the sensing devices. Nevertheless, reinforcement learning can produce more flexible, diverse, and seemingly intelligent behaviors, and, to many people, just plain entertaining ones.
For this project, the finite state machine at the bottom level will consist of simple behaviors; this state machine will be called the reactive system. The reactive system is the most basic and fundamental behavior of the robot, which the robot will prioritize over any command other than a manually overriding command from a human. (Figure 4.2.7.a) is the basic finite state machine graph of the reactive system. The "Complete Stop" state is when the robot attempts to shut down the whole system, as if the user had issued a shutdown command; this action prevents the robot from suffering electrical damage or loss of data due to a sudden loss of power. The "Stop" state is an emergency state for when the robot encounters something highly unexpected or when the user wants the robot to stop whatever it is doing. The "Move Back" state is a simple reaction to help the robot avoid damage due to changes in the environment, such as a moving object, or due to an error in the decision or navigation system that might cause the robot to crash into an object; under normal conditions, this state saves the robot from damage such as falling or crashing. The "Wait for Command" state literally takes whatever command arrives from either the decision system or a human and executes it. It is in this state that a human can manually override the reactive system and command the robot to perform tasks it normally would not, including crashing into a wall at full speed or driving off a table.
While the reactive system is simple, it naturally will not do much other than wait for human commands. Thus, there is a need for a higher-level state machine where decisions are made based on sensory data and data processing. (Figure 4.2.7.b) is the basic design of the decision-making system, also called the planning system.
Figure 4.2.7.a – Finite State Machine of Reactive System (states: Complete Stop, Stop, Move Back, Waiting for Command; transitions include Power Online, Low Power, Emergency Stop, Emergency Released, Finished, and Robot tilt forward OR Detected Object too close)
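A minimal sketch of how the reactive system's transition logic could look in C++ is given below; the state and input names are taken from Figure 4.2.7.a, and the code is an illustration rather than the project's final implementation.

enum class ReactiveState { CompleteStop, Stop, MoveBack, WaitForCommand };

struct SensorInput {
    bool lowPower, powerOnline;
    bool emergencyStop, emergencyReleased;
    bool tiltForward, objectTooClose, finishedMoveBack;
};

// One update step of the reactive system; returns the next state.
ReactiveState step(ReactiveState s, const SensorInput& in)
{
    if (in.lowPower) return ReactiveState::CompleteStop;  // highest priority
    switch (s) {
        case ReactiveState::WaitForCommand:
            if (in.emergencyStop) return ReactiveState::Stop;
            if (in.tiltForward || in.objectTooClose) return ReactiveState::MoveBack;
            return s;
        case ReactiveState::MoveBack:  // back away, then resume waiting
            return in.finishedMoveBack ? ReactiveState::WaitForCommand : s;
        case ReactiveState::Stop:      // held until the emergency is released
            return in.emergencyReleased ? ReactiveState::WaitForCommand : s;
        case ReactiveState::CompleteStop:
        default:                       // wait for power to come back online
            return in.powerOnline ? ReactiveState::WaitForCommand : s;
    }
}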
From (Figure 4.2.7.b), the robot will have the basic states "Idle," "Waiting," "Mapping," and "Navigating." "Idle" is the fundamental state, in which the robot conserves its energy while waiting for a command from a human. In this state, the robot may follow basic reactions such as "Move Back" and will not attempt to return to its original position. That behavior can be understood as: when the robot has not been told to remain in place, it is practically free to do whatever is necessary. This project is designed to be as general as possible and to maximize extensibility, so such behavior is very favorable for a general robot, especially one that works in a museum or as a helper. A general robot can then be implemented with other non-essential states, which it could start executing when not occupied with a human command, or it could simply wander the environment, where it may randomly receive commands from humans, rather than having humans go to it to give commands. Similar to the "Idle" state, the "Waiting" state is a state in which the robot also conserves its energy while waiting for a command from a human. The difference is that the robot will not take any command of lower priority until it is released from the "Waiting" state. In addition, if the robot has to move from its position for any reason, it will always attempt to return to its waiting place. This is a standard state in most robot systems, in which the robot is locked to a specific place awaiting further commands. The "Mapping" state is the main function of this project: while in this state, the robot will attempt to map out as much of the environment as possible. Since the field of operation may be large and it may be unnecessary to map the entire area, it is important that the mapping action can be interrupted or aborted. The final state is "Navigating": with a known map, the robot should be able to navigate through the environment. If, however, the destination is not within the boundary of the known map, the robot will change to the "Mapping" state and attempt to map the area to the destination. Since there is no guarantee that the destination can be reached, and the robot may for some reason be unable to map the area to the destination, this state requires a condition for aborting the task.
No matter what state the robot is currently in, it must always be available to receive commands from a human. A command can vary from adding the next command to the queue, to having the robot abort the current task and move to the next one, to simply clearing out the task queue and returning to the "Idle" state. Since one command can interfere with, or even interrupt, the task currently being executed, or at worst the whole chain of tasks in the queue, each command must be marked with a priority level, so that the human can confirm they know exactly what the consequences of the command are.
Figure 4.2.7.b – Basic Finite State Machine of Planning System (states: Idle, Waiting, Mapping, Navigating; transitions include Request to Explore the Area, Receive Destination Waypoint, Path to Destination Mapped OR Receive Waypoint, No more unknown Area OR Interrupted, No Path to Destination, Arrived at Destination OR Interrupted, Requested Stop with Release Condition, and Resume Condition Met)
4.2.8. Accelerometer Communication
The accelerometer's data is raw input; it interfaces only with the MCU and nothing else. The MCU will either perform a double integration to determine the position value, or a single integration to compare the velocity against the target velocity. The master computer will send either a target velocity and time interval or a desired distance, based on which implementation proves the best fit in terms of accuracy, performance, and efficiency.
To initialize the system, a signal is sent straight to the MCU to let it know to start receiving data from the computer. This initial signal sets the "fetch" bit so that the MCU will receive data. The MCU will then sample the accelerometer's data and determine whether it should continue moving or whether it should stop and fetch the next set of coordinates. Once it starts moving, the fetch bit is set low, indicating that the MCU should not receive any data. A similar signal should also be sent to the computer to temporarily disregard the Kinect feed, preventing unnecessary computation. Once the MCU has received the information for getting to the waypoint, it will spin the motors until it has run for the right number of seconds at the target speed, or until it has traveled the desired distance, whichever format the computer sent. Once the condition is met, the fetch signal is set high, allowing the robot to continue navigating. The figure below describes the subsystem.
Figure 4.2.8.A: High-Level Representation of Accelerometer
Communication
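As a sketch of the integration just described (a sample period dt and a gravity-compensated forward acceleration ax are assumed), the MCU-side arithmetic reduces to two running sums:

struct MotionEstimate {
    float v = 0.0f;  // current velocity, m/s
    float x = 0.0f;  // displacement since the last waypoint, m
};

// Called once per accelerometer sample; ax in m/s^2, dt in seconds.
void integrate(MotionEstimate& m, float ax, float dt)
{
    m.v += ax * dt;   // single integration: acceleration -> velocity
    m.x += m.v * dt;  // double integration: velocity -> displacement
}

// The MCU stops the motors once m.x reaches the desired distance, or
// compares m.v against the target velocity over the given time interval.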
4.2.9. Magnetometer Communication
Along with the information for traveling the right distance, the heading is also important information. The computer needs to know the robot's current direction so that it can tell the robot which direction to turn before moving. The magnetometer information is needed by both the MCU and the computer; therefore, the MCU must now transport the magnetometer's information as well as the Kinect feed. The magnetometer information is only sent to the computer when the robot comes to a standstill and the fetch signal is set high.
The data received by the computer is transmitted by the magnetometer to the MSP430, and the MSP430 communicates with the computer. Once the information is in the computer, it is used to calculate the expected location of the robot. The computer then computes the desired heading as an offset from the current heading. Based on the results, the computer sends the offset data back to the MSP430, which passes the calculated data on to the MSP430 on the motor controller, where the information is used as a reference to turn to the proper heading.
Figure 4.2.9.C: High-Level Representation of Magnetometer
Communication
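A minimal sketch of the offset computation (assuming headings in degrees) is the usual wrap-around difference, so the robot always takes the shorter turn:

// Desired turn as an offset from the current compass heading, in degrees.
float headingOffset(float desiredDeg, float currentDeg)
{
    float diff = desiredDeg - currentDeg;
    while (diff > 180.0f)   diff -= 360.0f;  // wrap into (-180, 180]
    while (diff <= -180.0f) diff += 360.0f;
    return diff;  // sign selects the turning direction
}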
4.2.10. Encoder Communication
To account for possible error in the acceleration data, we need the encoder to provide the angular velocity of the wheels, which can be translated into translational velocity with a simple calculation. The encoder information will be fed into the MCU so that it can determine whether the motors need to keep spinning. To use the encoder as the judge, the distance must be provided beforehand and the number of rotations that occurred must be counted; this yields the distance traveled per unit time. The procedure is very similar to the accelerometer's; the only difference is in how the velocity or distance traveled is derived.
Figure 4.2.10.D: High-Level Representation of Encoder
Communication
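A minimal sketch of that derivation is shown below; the encoder resolution and wheel radius are assumed placeholder values, not the project's actual hardware parameters.

const float TICKS_PER_REV = 360.0f;  // assumed encoder resolution
const float WHEEL_RADIUS  = 0.03f;   // wheel radius in meters, assumed
const float PI            = 3.14159265f;

// Distance traveled for a tick count: each tick is a known fraction of the
// wheel circumference (2 * pi * r).
float distanceFromTicks(long ticks)
{
    return (ticks / TICKS_PER_REV) * 2.0f * PI * WHEEL_RADIUS;
}

// Translational velocity from the measured angular velocity: v = omega * r.
float velocityFromAngular(float omegaRadPerSec)
{
    return omegaRadPerSec * WHEEL_RADIUS;
}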
4.2.11. Wireless Communication
All communication between the computer and the MCU will be over the internet. However, since both devices will be connected through the same IP address, it will be difficult for the Wi-Fi modules of the MCU and the computer to differentiate which packets are their own and which come from the other module. So, to avoid a conflict, we need two more IP addresses that will be recognized as a recipient and a transmitter; in other words, we need two separate web servers. The computer will send packets to web server 1 and will actively wait for packets from web server 2. Web server 1 will relay the data packets to the MCU, and the MCU will send the Kinect's visual feed to web server 2. Having two web servers reduces dependencies. Below is a diagram of how the five are connected.
Figure 4.2.11.A: High-Level Wireless Communication Software Architecture
From the computer's standpoint, the wireless communication expects the data to be sent in USB form; after gathering the data, the computer's internal hardware converts it to RS-232 for interpretation.
The communication model works as follows: the Kinect sends data to IBM Bluemix, which acts as the server. The server has an IP address where the data is located. The software on the computer retrieves the data from the server through a port allocated for this communication. ROS then uses the data and carries out the necessary procedures to display continuous real-time video from the frames generated by the Kinect.
4.2.12. Socket Programming
The communication between the MPU9150 and the computer was realized through the use of sockets. The program in charge of handling this communication was written in Python. Python was chosen over C++, the other language supported by the Robot Operating System, for its simple syntax, its ease of understanding, and the many powerful features the language provides.
The communication between the robot and the computer consists of a client-and-server system. The client in this case is the computer requesting data from the MPU; the robot plays the role of a server, which listens for a connection and starts sending data. To initialize the communication, the robot sends a request for a command through the socket. The computer, after receiving this request, subscribes to a topic published by the navigation stack in the Robot Operating System. After the program gathers data from the navigation stack, it interprets the data and sends commands through the established socket connection. Once the robot receives a command and starts executing it, it sends feedback to the computer. The program is in charge of converting the accelerations into velocities and the gyroscope readings into angular velocities, which are then published to a topic; the navigation stack subscribes to this topic and uses the data to update the map.
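For illustration, a minimal sketch of the client side of this exchange is given below, written in C++ for consistency with the other listings in this document (the team's actual program, as noted above, was written in Python). The IP address, port, and command string are placeholders.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstring>

int main()
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);           // TCP client socket
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(5000);                      // assumed port
    inet_pton(AF_INET, "192.168.1.10", &addr.sin_addr); // assumed robot address
    if (connect(fd, (sockaddr*)&addr, sizeof(addr)) < 0) return 1;

    char buf[256];
    ssize_t n = recv(fd, buf, sizeof(buf) - 1, 0);      // robot requests a command
    if (n > 0) {
        const char* cmd = "FORWARD 0.5";                // from the navigation stack
        send(fd, cmd, std::strlen(cmd), 0);             // reply with the command
    }
    close(fd);
    return 0;
}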
4.2.13. Programming Languages
During our project, we found a few languages that meet the requirements to realize the work. The following is the list of possible choices that meet the criteria of the project, given the restrictions of the hardware and software:
• C – A programming language fully supported by the Libfreenect and OpenCV libraries.
• C++ – Supported by Libfreenect, the Robot Operating System, and OpenNI, among other software programs that will play an important role in the overall project.
• Python – Another language supported by both Libfreenect and the Robot Operating System. Python was used to create a node in ROS that publishes the feedback gathered from the CC3100, which sends the acceleration and gyroscope data obtained from the MPU9150.
The programming language our group is going to use is definitely C++. Besides its flexibility and object-oriented features, it is also one of the few languages supported by the Robot Operating System. Since there were only two programming languages to choose from, there was not much decision making in the process. The other option besides C++ was Python, but we do not have as much experience with it as we have with C or C++. Furthermore, in case we decide to add image display later in the design of the project, the optimal library to process our imaging data would be OpenCV; since it is written in C++, this provides consistency in the software. It is preferable to realize all the applications in a single language if possible. This will facilitate future troubleshooting of the software during the testing phase, as opposed to having multiple languages implemented in different parts of the project. With more than one programming language in the project, troubleshooting can become hectic: since some members of the group have more experience than others, time would have to be spent scheduling who is in charge of troubleshooting each specific part of the project.
4.2.14. IDE
To manipulate the data and develop applications for the Kinect in this project, we are going to use various platforms. Visual Studio 2013 was first used to check the Kinect SDK packages. In the preliminary stages of the project, this IDE was used to familiarize ourselves with the code and the different functions already implemented in the Kinect SDK. Visual Studio supports various versions of the Windows operating system, but in this project we are planning to work with the Robot Operating System (ROS), which only supports the Linux Ubuntu platform. For this reason, it is not possible to employ Visual Studio for development purposes, only to test the different SDK functions with the Kinect hardware connected.
Since we are using the Robot Operating System, we have to find alternative IDEs supported by Ubuntu. ROS supports various IDEs, such as:
• Eclipse – This development environment is the most likely to be employed by our group to develop, maintain, and troubleshoot the code necessary to realize the applications we want our robot to perform. At this stage of the project, we have decided on it as our primary IDE for the realization of the project.
• CodeBlocks – This IDE also has the C and C++ languages available for development purposes.
• Emacs – A text editor similar to the gedit text editor built into the Ubuntu platform. This editor lets the developer browse through the ROS package file systems.
• Vim – This IDE is not fully supported by ROS, because the support is provided through a plugin. We are not aware of the capabilities and functions we could create with this environment; it is not one of the possible IDEs for the project.
• NetBeans – An IDE written in Java with support for many languages, among them C++. The group has considered this IDE, but we have no previous experience with it.
• QtCreator – It does not require a setup; the developer may work from the terminal.
4.2.15. Linux
Linux Ubuntu is the operating system the group has chosen for developing the navigation and mapping applications, due to the free access to the Ubuntu operating system. Ubuntu is a user-friendly OS, similar to the operating systems installed on our personal computers, which gives us a familiar interface. Linux comes in other distributions as well; some of these are among the most popular, but the majority are not supported by ROS.
Linux Mint is a distribution of Linux with a well-supported desktop interface; this version of Linux is one of the most popular in use nowadays, along with Ubuntu. This distribution is not officially supported by ROS, but it may serve as an experimental platform in the future as more users become interested in it.
Mageia is also a recognized version, though not as popular as Mint; in recent years, however, the popularity of this distribution has increased rapidly. Mageia is not supported by ROS, but it is an alternative to Ubuntu for the development of other applications, except anything that has to do with the Robot Operating System.
Another of the popular distributions of Linux is Fedora. This version is lower in popularity compared to the other versions discussed above. Fedora uses GNOME 3.x as its desktop environment. (25)
The last version of Linux to be discussed according to its popularity is Debian. This distribution is one of the first versions, developed in the early 90s. Many versions today are based on Debian; the most popular of them is Linux Ubuntu. (25)
4.2.16. Windows
Windows is suitable for development with the Kinect, since the sensor is manufactured by Microsoft. In the design stages of the project, various restrictions were encountered. The first problem we ran into was how to develop applications for the Kinect without the software development kit, as the SDK supports Windows only. After extensive research into what platforms were suitable for getting our robot running, we decided to employ the Robot Operating System. Previous projects on mapping with an autonomous robot used ROS to reduce the number of functions needed to realize navigation and mapping, since the operating system already comes with packages that reduce the time a developer needs to spend creating an application from scratch.
The first release of the Kinect SDK, which allowed developers to create new applications for the Kinect and employ it in areas other than gaming, supported Windows 7.
The following is a list of the operating systems compatible with the software
development kit:
• Windows 7
• Windows 8
• Windows 8.1
• Windows Embedded 8
4.3. Design Summary
4.3.1. Design Overview
The general design of the project consists of the robot platform, on which the 3D sensor, proximity sensor, and motor encoders will be mounted. The robot itself is equipped with a reactive system, which simply uses the proximity sensor to avoid collisions and the motor encoders to maintain the desired speed. The robot will send the 3D data over the network to the control center, which processes the received data and returns an appropriate command. The control center will mainly use the 3D data received from the robot platform to perform mapping and navigation. The robot will use a finite state machine to control whether a human is controlling the robot or it is in autonomous mode; the state machine will also be used to allow the robot to switch to the appropriate behavior depending on the situation. Another task of the control center is to provide a graphical user interface so a human can observe the state of the robot and control it from a distance.
General System Design – block diagram: the Control Center (Point Cloud Data, Mapping System, Navigation System, Graphical User Interface, Finite State Machine) connected to the Robot Platform (Kinect, Reactive System, Sonar, Motors Encoder)
4.3.2. Software System
A simple summary of the software system can be seen in (Figure 4.3.2.a). The control center takes in the raw data from the sensors, which usually depends on the driver software of each individual device, and manipulates or converts the data into a more general form such as a point cloud, an image matrix, etc. Once converted, the sensor data can be used by other processes such as mapping, navigating, filtering, and constructing the 3D display. Using the point cloud data, the robot will attempt to construct a map of the environment; using the map, the robot will be able to navigate within the known world. The autonomous system takes in all the information, including sensory data, the map, and the navigation plan, and converts it into commands, which the robot then executes. It is also the responsibility of the autonomous system to report the current state of the robot. The report to the GUI may include the robot's current velocity, its current action state, sensory data, and the planned path. From the GUI, the user should also be able to give direct commands to the robot; such commands should have higher priority than most commands from the autonomous system.
Figure 4.3.2.a – Software System (blocks: Sensors Data, Input Processing with Point Cloud and Image, Mapping, 3D Manipulating, Image Processing, Navigation, Autonomous, Output Processing with Velocity and Command, Motors Controller, GUI)
4.3.3. Hardware Summary
The Wi-Fi module uses a single antenna to transmit and receive data. It is connected to the Communication Module via the Inter-Integrated Circuit (I2C) interface. Figure 4.3.3.B shows the Communication Module in more detail.
Figure 4.3.3.A High Level Representation
Figure 4.3.3.B Communication Module Flow Chart
The MSP430G2 receives a maximum of two inputs at any given time. The Kinect feed will be continuously sent to the processor; however, it only transmits that data when the computer requests it. The Kinect's feed is sampled when the computer receives a request for coordinates. Once the request is sent, the Kinect feed and current heading are sent over Wi-Fi to the computer. The computer will then process the feed, determine the waypoint, and send the waypoint data to the MSP430G2. The MSP430G2 will process this data and send it to the Locomotion module in a form that locomotion can understand. Then movement occurs. When the robot comes to a stop (i.e., arrives at the waypoint), the Locomotion module will send a coordinate request signal along with the current heading. The MSP430G2 will relay this request and data to the computer, and the whole cycle begins again. The reason the MSP430G2 acts as a relay is that the number of I2C pins is limited: the Wi-Fi interfaces with the same pins as the accelerometer/magnetometer. Therefore, we dedicated one processor to each and split the work between the two. The Kinect interfaces with the MSP430G2 through a USB port wired to the MSP430G2's UART pins (TX and RX). The MSP430G2 and the Locomotion module will be connected via general-purpose pins. The Locomotion module is shown in more detail below:
Figure 4.3.3.C Locomotion Module Flow Chart
The actual Locomotion unit consists of three types of hardware. The first is the second MSP430G2, which is in charge of motor control and motor feedback. The second is the motion sensor, the MPU9150, which combines an accelerometer, magnetometer, and gyroscope. It sends the MSP430G2 the acceleration and compass heading of the robot. This is important since the current compass heading is needed so that the robot knows which direction to turn from the coordinate information provided by the computer. In addition, the computer needs to know the current heading of the robot whenever it computes a waypoint, so that it can tell the processor to move X degrees from any of the compass points; the robot can act on that and turn accordingly. The acceleration information is just as important, since it can provide the velocity at any time step via one integration, or the current displacement via a double integration. The third type of hardware is the H-bridges, which allow bidirectional rotation of the wheels. The MSP430G2 receives the coordinate information from the Communication module and sends PWM signals to the H-bridge. There is one H-bridge for every two motor assemblies, because the left-side wheels rotate at the same speed and the right-side wheels rotate at the same speed. While the wheels are rotating, the motor assembly provides feedback about the rotational speed of the wheel. This information is fed back to the MSP430G2, which will continue to send PWM signals to the motors until it receives the right data from the encoder feedback. There are two conditions for the robot to perform properly: first, it covers the necessary distance; second, the wheels spin at the proper speed. Both of these pieces of information can be obtained using the accelerometer and the encoder. A double integral of the accelerometer information yields the current displacement. The encoders can also provide the displacement by taking the number of tick marks that occurred: since we know the displacement per tick mark, we can determine the overall displacement in real time. Moreover, once the desired displacement is met, we can stop the robot so that it may scan again. Alternatively, the encoder provides the wheel rotation. It is quite possible that each wheel rotates at a different speed, so to keep this in check we must constantly verify that each wheel is rotating properly. However, this is just a fail-safe system. Below is a detailed look at the motor assembly:
Figure 4.3.3.D Motor Assembly Module Flow Chart
Our power supply is a 14.8 V battery. Of course, all voltage values need to be properly applied, so a voltage regulator, or a series of voltage regulators, is needed. The voltage levels required for the project vary: we need 1.9 volts for the encoders, 3.3 volts for the MCUs and the Wi-Fi module, and 12 volts for the motors. Below is a diagram of how the different voltage regulators supply voltage to each module.
5. Construction
5.1. Robot Platform Assembly
The J-Bot v2.0 was easy to assemble. All parts were well labeled, and the parts were accurately manufactured. The materials are lightweight yet strong; it seems the robot could handle some rough situations easily. The instructions for assembly are clear, and all the diagrams are helpful in guiding the construction. Very few tools are needed to put the robot together: only a Phillips head screwdriver and pliers. Figures 5-1 and 5-2 are photos of the robot fully assembled.
Figure 5-1: The J-Bot fully assembled.
Figure 5-2: The inside of the body of the J-Bot.
5.2. 3D Scanner Mounting
The Microsoft Kinect will be mounted on the second tier of the J-Bot. The second tier was chosen because it has the highest vantage point on the rover, it has the most space since there is nothing above it, and nothing else will be on the second tier, so other items will not obstruct the view. A high vantage point is important for the Kinect because it gives the best view of the room being scanned. The second tier has the fewest components and gives the most space to the Kinect. This is advantageous because the Kinect is by far the biggest component of this project and requires space to operate. On the first tier, space is taken up by the motor control board, which lies flat, taking up the maximum space it can. In addition, the pillars on all four corners that hold up the second tier would obstruct the Kinect's view of the room. The first tier is 3 in lower than the second tier; taking this height away from the Kinect's vantage point would be to the project's detriment.
The Kinect's power cable is very long (15 ft), and in the middle there is an AC-to-DC converter that is quite bulky. The power cable is probably so long because a Kinect is meant to be placed above a television, and a long cable facilitates supplying it with power. It was decided that the cable will be cut past the AC-to-DC converter and power will be supplied directly from a separate power supply.
Since the Kinect will be on the second tier, it will feel the movement of the J-Bot the most. Luckily, the Kinect has a rubber pad on its base that does a great job of keeping it in place. To be safe, the Kinect will be held in place by four metal rods similar to the pillars on the first tier. The four rods will be placed around the base of the Kinect. The rods are ~2.25 in long, a good length to catch the Kinect if it were to tip over. The rods will be fastened to the second-tier plate with nuts below.
5.3. Software Installation
For this project to function properly, several pieces of software need to be installed, including the operating system (Linux Ubuntu), the driver for the Kinect (OpenNI), the base system (Robot Operating System), and the Point Cloud Library. The detailed installation processes can be found on the respective websites; the following is a quick summary of the installation of the necessary software.
The operating system for this project is Ubuntu 14.04, a Linux operating system. To install it, the developer needs to download the ".iso" file from the official website (www.ubuntu.com) and burn the file to a DVD disc. Next, restart the computer and make sure it boots from the DVD disc. The computer should display multiple options at the loading screen; choose "Install Ubuntu" and follow the Ubuntu install wizard. (131-132)
The Kinect was designed to work with the Windows operating system; for it to function in Ubuntu, the developer needs to install OpenNI as the driver. In addition, because there are several variations of the Kinect, each may have a slightly different procedure. The following website (http://choorucode.com/2013/07/23/how-to-get-started-with-kinect-for-windows-on-ubuntu-using-openni/) describes the process of installing OpenNI so that it works with the Kinect for Windows.
The Robot Operating System (ROS) is the backbone of the project. The version used for development in this project is "indigo." The detailed procedure can be obtained from the official website (http://wiki.ros.org/indigo/Installation/Ubuntu).
The Point Cloud Library is the main 3D data processing library. The one used for this project is the prebuilt binary for Linux; the procedure can be obtained from the official website (http://pointclouds.org/downloads/linux.html).
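For reference, the ROS step above typically reduces to adding the packages.ros.org apt repository and its key, then running "sudo apt-get install ros-indigo-desktop-full" followed by "sudo rosdep init" and "rosdep update"; the wiki page linked above keeps the exact commands current.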
6. Testing and Evaluation
6.1. Unit testing
6.1.1. Robot Platform Testing
Testing the robot platform will comprise testing the motors, tires, sensors, load capabilities, turning, speed, and movement. Data will be collected from these tests to sharpen the accuracy of the results. Results should be as close to theoretical values as possible; this is not always achievable, but there should not be a great amount of error. For example, the error in movement will be allowed to be no greater than 2%.
Testing the motors will be done in a few steps. First, the motors will be tested with no load. This will confirm that power is reaching the motors and they are in working order. Next, a load will be applied; for this first test, the load will be the weight of the J-Bot rover itself. This will confirm the motors spin with enough torque to carry the weight of the J-Bot rover. This initial load test will also test the tires, confirming that the wheels stay on the J-Bot and work on the surface we put them on. Next, we will do a load test with the full equipment to see if the motors can handle the weight. If this test fails, it means we need to choose a new amount of power to supply to the motors. The test fails if the motors cannot move the rover at an acceptable speed. This value will be recorded to make sure that under no circumstances will the motors be supplied with that much power.
Testing the tires will consist of running the J-Bot at full load at different speeds on different surfaces. A sign to look for is whether the rover's wheels are slipping. The tires included in the J-Bot kit should easily pass these tests. Another factor to consider when testing the J-Bot's tires is which surfaces cause the most wear. Concrete, brick, and other rough surfaces will probably do the most damage, while tile, carpet, and hardwood floors will do the least. This should not be a problem because the wear should be minimal. Wear would be a concern on rough surfaces with the Geekbot wheels, because of the materials they are made from and their width.
To test the sensors, the rover will first be run in an empty setting to make sure the sensors are not misfiring. Next, a simple obstacle will be placed in the rover's path to test whether the sensor reacts to a stimulus. When the rover encounters an obstacle, the sensor should signal the processor to stop the motors. Once the motors have stopped, the processor should begin a sequence that reverses and turns the J-Bot in a different direction and waits for new instructions from the computer. This situation should not happen unless the obstacle was out of range for the Kinect to see; by the time the J-Bot has stopped, reversed, and turned, the Kinect should have identified the obstacle and updated the 2D map.
Turning is one of the more complex movements the J-Bot can perform; it involves all the components of the motor control board. First, turning will be tested by seeing whether all the basic turns can be made: left and right moving forward, and left and right moving backward. If any of these tests fail, the programming must be revised. Next, testing how hard the J-Bot can turn will be important; this data will let us understand its limitations. Once this data is collected, it must be ensured that the J-Bot is never put in a position where it is asked to make a turn it cannot make. Testing will begin with a slight turn, progressively making tighter turns until the J-Bot reaches a limit; this limit might be the lower limit of the voltages allowed for the motors. The next test will determine whether the J-Bot can turn about an axis perpendicular to the floor, which is important for the Kinect to scan a room efficiently. This turn can be accomplished by moving the motors on one side of the J-Bot rover forward while moving the other two backward. In a perfect world, this turn happens when the motors turn at the same rate, and getting the motors to turn at the same rate should mean supplying them with the same amount of power. In the real world, this is not always the case; data will have to be collected to get all motors to turn at the same rate.
The next set of tests will evaluate the speeds the J-Bot can achieve at full load. A few important data points will be the slowest and fastest speeds at which the J-Bot can be operated. These values will be of use because the motors should not be provided with more power than these speeds require. Another important value to determine will be a good average operating speed for the J-Bot, the speed at which it will run most of the time. It should be a speed at which the J-Bot can reach destinations quickly without disturbing the Kinect's point cloud or knocking any components over or off the rover.
6.1.2. Power and Regulation Testing
Testing the power and regulation of the motor control board is an important part of the testing process. If components do not receive enough voltage, they will not operate correctly, or may not operate at all. Testing these components will require a multimeter to measure voltage and current. If the components receive too much voltage, they may burn out and stop operating; this does not have to happen right away. Even if you are getting the correct readings, noise in the circuit can damage parts of the board over time. Oscilloscopes can be used to see accurately what is going on in the circuit. Every single component on all boards needs to be tested for output values and functionality.
6.1.3. Input Output Testing
Objective:
The objective of this test is to make sure the velocity of the autonomous robot matches the desired value. For the robot to maintain its velocity at all times, all the motors have to receive a certain voltage. The group is going to test the velocity of every motor to calculate the pulse width needed to maintain the angular velocity reported by the encoder.
Hardware Components:
• Microcontroller
• Encoder
• Motors
• Wheels
Pre-Test:
The encoder has to be connected to the motors and in communication with the MCU. The group will use a specific velocity to test the velocity each motor provides to its wheel. Every motor is slightly different and may need more or less voltage.
Test:
A team member will initially send a specific velocity to the MCU. The MCU will transmit the information to the motors, and the encoder will measure the angular velocity each motor provides to its wheel.
Since every motor has slightly different requirements to provide the desired angular velocity, the encoder will report the information from the motors back to the MCU. Finally, with the information collected by the encoder, the MCU can calculate the pulse width required for each motor to keep the required number of rotations.
Expected Result:
In theory, the initially given velocity should be returned to the MCU if the motors provide the same number of rotations to the wheels.
6.1.4. Circuit Testing
More testing can be done on the communication board to verify that everything is working appropriately. Testing the board components will be straightforward, since the parts either work or they do not. If the computer cannot establish a communication link with the board, the Wi-Fi card may not be in working order. Once the communication link is established, we can pinpoint which components are not sending information. Testing will consist of checking each component for input and output, then gradually adding more components until all of them are working simultaneously.
6.2. Software Testing
6.2.1. 3D Scanner Testing
The first test for the 3D scanner is to make sure it functions in the environment specified by the manufacturer. The 3D scanner for this project is the Kinect for Windows, so it was tested on the Windows operating system using the demo software that comes with the Microsoft Kinect SDK 1.8. (Figure 6.2.1.a) shows the Kinect for Windows functioning in Windows without any error.
Figure 6.2.1.a – Kinect for Windows tested in Windows Operating System
The next test is to try the 3D scanner in the development environment of the project, the Linux operating system (Ubuntu 14.04). The driver used for the Kinect is OpenNI. (Figure 6.2.1.b) shows the Kinect for Windows functioning in Ubuntu as intended.
Figure 6.2.1.b – Kinect for Windows tested in Linux Operating System
(Ubuntu)
After verifying that the Kinect functions properly, the next test is the range of the 3D scanner. The range of the Kinect specified by the manufacturer is from 800 mm to 4000 mm. (Figure 6.2.1.c and Figure 6.2.1.d) show the Kinect being tested in Ubuntu, with 825 mm as the measured minimum range and 3900 mm as the maximum range with an acceptable noise level.
Figure 6.2.1.c – Kinect Minimum Range tested in Linux Operating System
(Ubuntu)
Figure 6.2.1.d – Kinect Maximum Range tested in Linux Operating System
(Ubuntu)
With the Kinect's range known, the Kinect needs to be tested with the Robot Operating System to ensure compatibility between them. This test also checks the update rate from the Kinect to ROS, as well as the detail of the data. (Figure 6.2.1.e) shows the Kinect functioning in conjunction with ROS.
Figure 6.2.1.e – Kinect Compatibility tested with ROS in Linux Operating
System
6.2.2. Communication System Testing
Objective:
The objective of this test is to check that the data received by the computer matches the data originally sent over the network by the MSP430 microcontroller.
Hardware Components:
• Computer
• MSP430 Microcontroller
• Xbox Kinect
Software Components:
• IBM Bluemix Web server
• Computer's IP address
• Server 1 IP address
• Server 2 IP address
Pre-Test:
For communication purposes, since the computer and the Kinect are going to use the same IP address, there is a confusion problem with the data being sent and received. To solve this issue, the group decided to use the IBM web service called IBM Bluemix.
• Establish a connection with the IBM server
• Set up the communication of the MSP430 with the IBM server
• Connect the computer to the server
Test:
The group will generate data using the Kinect by capturing an image. The Kinect will send out three streams of data, which will be processed by the MSP430, which is in charge of sending the streams to the server. From the server, the computer will gather the data, process the information, and send the data requested by the Kinect back to another server. From that server, the MSP430 will receive the requested data.
Expected Results:
The information received by the computer should be the same uncorrupted data sent by the MSP430.
6.3. Performance Testing
6.3.1. Known Environment
Testing in a known environment will be the most basic test of the full capabilities of the system. It will test the J-Bot's ability to move to a waypoint created by the computer, exercising the Wi-Fi connection card, the communication between the communication board and the motor controller, and the motor controller's ability to interpret instructions. The computer will create a waypoint and send this instruction to the motor control board through the communication board. Once the motor controller receives the instruction, it should operate the motors and take the J-Bot to the desired destination.
These tests will also exercise the communication feedback from the magnetometer, accelerometer, and encoder back to the computer. These are important factors because this feedback keeps track of the robot's position. In addition, the accuracy of this information will determine whether any measures need to be taken to correct errors in traveling. This is where the information collected in unit testing will come into play.
6.3.2. Unknown Environment
Testing the IRIM system in an unknown environment will be more complex than in a known environment, because in addition to all the components of the previous test, it also tests the Kinect's ability to scan, the communication board's ability to receive and transmit the data stream from the Kinect, and the software's ability to display the 3D point cloud and create a 2D map. All these processes are important for the IRIM to be fully functional.
Once scanning begins, the Kinect should create a point cloud. This information will be passed on to the communication board. The Kinect should not have a problem scanning, because it was received in working condition from Microsoft. The communication board will take in the data stream provided by the Kinect via the USB connection. This data will be passed on to the computer via the Wi-Fi connection, where it will then be interpreted.
If the data was correctly passed on to the computer, the software will interpret the data stream and display a point cloud. Additionally, a 2D map of the room will be created to help navigate the J-Bot rover. Waypoints will be created using the 2D map, and instructions for the motor controller board will then be communicated back to the J-Bot.
Figure 6-1: Block diagram of data flow through the IRIM system.
The IRIM system should successfully do all the things previously mentioned, as
they are requirements of the system.
7. Financial and Administrative
7.1. Bill of Materials
Passive Components:
Items | Distributor | Manufacturer | Quantity | Price per unit | Total cost
2.2 pF Capacitor | Jameco | AVX Corp | 1 | $0.89 | $0.89
22 pF Capacitor | Jameco | Jameco Valuepro | 2 | $0.19 | $0.38
10 pF Capacitor | Jameco | Jameco Valuepro | 1 | $0.19 | $0.19
.1 µF Capacitor | Jameco | Panasonic | 5 | $0.25 | $1.25
2.2 nF/50V Capacitor | Farnell | Kemet | 1 | $0.29 | $0.29
.01 µF Capacitor | Jameco | Panasonic | 1 | $0.76 | $0.76
10 nF Capacitor | Digi-Key | TDK Corp | 1 | $0.84 | $0.84
1 µF/6.3V Capacitor | Jameco | AVX Corp | 4 | $0.79 | $3.16
16 pF Capacitor | Newark | Multicomp | 2 | $0.20 | $0.40
2.2 kΩ Resistor | Jameco | Jameco Valuepro | 2 | $0.04 | $0.08
10 kΩ Resistor | Jameco | Jameco Valuepro | 3 | $0.05 | $0.15
100 kΩ Resistor | Jameco | Jameco Valuepro | 2 | $0.05 | $0.10
15 kΩ Resistor | Jameco | Jameco Valuepro | 1 | $0.05 | $0.05
1.5 kΩ Resistor | Jameco | Jameco Valuepro | 3 | $0.05 | $0.15
3.3 kΩ Resistor | Jameco | Jameco Valuepro | 2 | $0.04 | $0.08
33 Ω Resistor | Jameco | Jameco Valuepro | 2 | $0.04 | $0.08
33 kΩ Resistor | Jameco | Jameco Valuepro | 3 | $0.05 | $0.15
100 Ω Resistor | Jameco | Jameco Valuepro | 6 | $0.05 | $0.30
47 kΩ Resistor | Jameco | Jameco Valuepro | 5 | $0.05 | $0.25
2.2 nH Inductor | Mouser | Taiyo Yuden | 1 | $0.10 | $0.10
Total | – | – | – | – | $9.65
Processors, ICs, hardware, and analog components:
Items | Distributor | Manufacturer | Quantity | Price per unit | Total cost
Kinect | Microsoft store | Microsoft | 1 | $149.00 | $149.00
J-Bot 2.0v | Jameco | Jameco Kitpro | 1 | $84.95 | $84.95
MSP430G2 processors | Texas Instruments | Texas Instruments | 2 | $2.80 | $5.60
MSP430F16 | Texas Instruments | Texas Instruments | 1 | $0.00 (sample) | $0.00
MPU9150 | Invensense | Invensense | 1 | $6.62 | $6.62
USB Female Type A SMD | Sparkfun | 4UCON | 1 | $1.25 | $1.25
TUSB3410VF | Texas Instruments | Texas Instruments | 1 | $0.00 (sample) | $0.00
Wi-Fi module (CC3000) | Texas Instruments | Texas Instruments | 1 | $0.00 (sample) | $0.00
AT8010E29HAA | Digi-Key | Johanson Technology Inc. | 1 | $1.28 | $1.28
Switching voltage regulator | Jameco | Motorola | 5 | $0.59 | $2.36
TPS71533 | Texas Instruments | Texas Instruments | 1 | $0.89 | $0.89
TPS71550 | Texas Instruments | Texas Instruments | 1 | $0.89 | $0.89
TPS71519 | Texas Instruments | Texas Instruments | 1 | $0.89 | $0.89
1N4148 Diode | Jameco | Major Brands | 1 | $0.05 | $0.05
12 MHz SMD oscillator | Texas Instruments | Texas Instruments | 1 | $0.00 (sample) | $0.00
IR Emitter/detector | Sparkfun | LITE-ON | 1 | $1.95 | $1.95
Batteries | AA Portable Power Corp | AA Portable Power Corp | 8 | $6.74 | $53.92
Battery holders | AA Portable Power Corp | AA Portable Power Corp | 2 | $15.47 | $30.94
H-Bridge | Texas Instruments | Texas Instruments | 4 | $3.75 | $15.00
Proximity sensors | Jameco | Parallax Inc. | 2 | $29.95 | $59.90
Pins | Sparkfun | Autosplice Inc. | 20 | $0.15 | $3.00
PCB | OSH Park | OSH Park | 2 | $5/sq. in | $60.00
Total | – | – | – | – | $478.49
7.2. Budget

Subsystem             Total cost
Motor Control system  $165.84
Communication system  $84.18
Software              $0.00
Hardware              $233.95
Power system          $84.86

7.3. Facilities and Equipment
Companies and labs will be used to facilitate the creation of IRIM. Facilities will be
used during building, testing, and manufacturing. One of the requirements for the
senior design project is to make a circuit board instead of buying one, and making a
PCB requires experience. Since no one in the group has experience making circuit
boards, we decided to outsource the fabrication. Eagle CAD is free circuit-design
software; using a design file created in Eagle CAD, OSH Park will fabricate the
PCB for us.
Facilities:
OSH Park:
About OSH Park: OSH Park is a community PCB-ordering service. It started out
from another community PCB order and has been growing ever since. Orders are
placed directly on their website by uploading the file containing the board design;
OSH Park accepts Gerber CAM files. The design must follow certain rules, which
are described in the following section. Turnaround time for designs is 12 business
days.
Design rules:
 6 mil minimum trace width
 6 mil minimum spacing
 At least 15 mil clearance from traces to the edge of the board
 13 mil minimum drill size
 7 mil minimum annular ring
Pricing:
Standard 2-layer order: $5 per sq. inch. This includes three copies of your
board; additional copies can be ordered as long as the count is a multiple of three.
Standard 4-layer order: $10 per sq. inch. This includes three copies of your
board; additional copies can be ordered as long as the count is a multiple of three.
Orders are sent weekly, and turnaround time is about two weeks.
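As a worked example (an illustrative board size, not necessarily ours): a 2 in × 3 in
two-layer board is 6 sq. inches, so the order would cost 6 × $5 = $30 and yield three
copies, i.e. about $10 per board; the same board as a 4-layer order would cost
6 × $10 = $60.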
OSH Park is the company we decided to use to get our PCBs printed. More than
one person has recommended this company to us, and after some research we feel
comfortable with them.
UCF Senior Design Lab:
The UCF senior design lab will be used to build and conduct tests on our systems.
The lab provides all the machines we will need to collect data. The room is also a
good neutral location to keep our project if anyone wants to work on it outside of
scheduled hours.
Resources used:
 Tektronix MSO 4034B Digital Mixed Signal Oscilloscope, 350 MHz, 4 Channel
 Tektronix DMM 4050 6 ½ Digit Precision Multimeter
 Tektronix AFG 3022 Dual Channel Arbitrary Function Generator, 25 MHz
 Agilent E3630A Triple Output DC Power Supply
 Resistors
 Capacitors
 Inductors
 Breadboard
 Dell Optiplex 960 Computer
 Lockers
 Multi-sim SPICE
Lab Hours:
Luckily, the lab is open 24 hours a day, 7 days a week. This is definitely an
advantage for I.R.I.M.
8. Conclusion
This project is the final test that we set for ourselves in order to measure the
knowledge we have learned, the abilities of each individual, the spirit of teamwork,
and the determination for success. The project’s complexity is not just a challenge
but also an entertainment factor: we want these last two semesters to be an
unforgettable and “enjoyable” experience, a memory that will accompany us for
the rest of our careers.
This project was not meant to be a one-time attempt; instead, we hope that this
project will be a starting point that we can expand further in the future, a
project that we can share with the computer science and electrical engineering
communities. The electrical component of the project was designed to be very basic
so that any engineer can expand on it. The software in this project will be
extensible and easy to install by any computer enthusiast or computer-major
student alike.
9. Standards and Realistic Design Constraints
9.1 Realistic Design Constraints
A few design constraints on I.R.I.M are space, battery life, speed, and network
integrity. These factors limit the project in various ways, and they must be
considered throughout the build to make the process easier, cheaper, and more
efficient.
I.R.I.M has limited space. The J-Bot is very flexible and can be customized, but
eventually it will reach its limit for weight or real estate on the bot. This
must be considered when designing the PCB, choosing battery packs, selecting
components, and so on. If too much is sitting on the bot, it will make moving more
complicated.
Battery life also constrains I.R.I.M because, with only on-board power supplies,
I.R.I.M can run for only so long before running out of power. This is extremely
important when choosing components for the project. A great example is going with
the Texas Instruments MSP430F5529: its ultra-low power consumption is exactly
what we need. The ODROID XU3, by contrast, is not low power at all, requiring 5 V
at a maximum of 4 A. Battery life is directly affected by the space constraint:
it would be easy to power everything with a big battery, but the problem is that
the batteries must fit on the bot and not affect movement.
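As a rough illustration (assumed numbers, not measured values): a single 2600 mAh
Li-ion cell stores about 2.6 Ah, so an average load of 1.3 A would drain it in
roughly 2.6 Ah / 1.3 A = 2 hours, while a 4 A worst-case load like the ODROID XU3’s
would empty the same cell in under 40 minutes (2.6 Ah / 4 A ≈ 0.65 h).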
Speed is another constraint that must be taken into consideration for this project.
Moving too fast may consume too much power, while moving too slowly may make
I.R.I.M inefficient and boring. Moving too quickly can also directly cause the
mapping to break down. This cannot be allowed to happen, as mapping is the
project’s main function.
Network integrity can limit I.R.I.M greatly because, without a network to connect to,
I.R.I.M is rendered useless. I.R.I.M requires a network to map its environment
because it communicates over Wi-Fi. Even if I.R.I.M runs off of its own router, the
router still requires power. This is why I.R.I.M is an interior mapping project.
9.2 Standards
10. References
[1]- http://www.rapidform.com/3d-scanners/
[2]- http://www.i3du.gr/pdf/primesense.pdf
[3]- http://www.forbes.com/sites/sharifsakr/2013/05/22/xbox-one-wins-first-round-of-console-war/
[4]- http://www.scanner.imagefact.de/gb/depthcam.html
[5]- http://cubify.com/en/Products/SenseTechSpecs
[6]- http://www.youtube.com/watch?v=Ot7lQ2GplaI
[7]- http://www.kinectingforwindows.com/2012/09/07/what-is-the-difference-between-kinect-for-windows-kinect-for-xbox360/
[8]- http://blogs.msdn.com/b/kinectforwindows/archive/2013/03/06/kinect-fusion-demonstrated-at-microsoft-research-techfest-coming-soon-to-sdk.aspx
[9]- http://www2.engr.arizona.edu/~arsl/lidar.html
[10]- http://oceanservice.noaa.gov/facts/lidar.html
[11]- http://velodynelidar.com/lidar/products/brochure/HDL64E%20Data%20Sheet.pdf
[12]- http://www.ptgrey.com/stereo-vision-cameras-systems
[13]- 2D mapping solutions thesis paper (PDF thesis paper)
[14]- http://www.parallax.com/product/28015
[15]- https://learn.adafruit.com/ir-sensor/overview
[16]- http://www.adafruit.com/products/157?&main_page=product_info&cPath=35&products_id=157
[17]- http://www.digikey.com/en/articles/techzone/2012/feb/using-infrared-technology-for-sensing
[18]- http://www.adafruit.com/datasheets/tsop382.pdf
[19]- http://www.rssc.org/content/pros-and-cons-range-sensors
(14-1)- http://www.parallax.com/sites/default/files/downloads/28015-PING-Sensor-Product-Guide-v2.0.pdf
[20]- http://pointclouds.org/about/
[21]- http://openkinect.org/wiki/Main_Page
[22]- http://openkinect.org/wiki/Roadmap
[23]- http://msdn.microsoft.com/en-us/library/hh855348.aspx
[24]- http://msdn.microsoft.com/en-us/library/ms973872.aspx#manunman_rcw
[25]- http://www.jameco.com/webapp/wcs/stores/servlet/Product_10001_10001_2146345_-1
[26]- http://www.trossenrobotics.com/robogeek-geekbot-barebones
[27]- http://www.robotshop.com/en/lynxmotion-tri-track-chassis-kit.html
[28]- http://www.lynxmotion.com/images/html/build115.htm
[29]- http://www.lynxmotion.com/images/html/build103.htm
[30]- http://en.wikipedia.org/wiki/Brushed_DC_electric_motor
[31]- http://en.wikipedia.org/wiki/Brushless_DC_electric_motor#Radio_controlled_cars
[32]- http://www.dfrobot.com/index.php?route=product/product&path=47&product_id=100
[33]- http://www.robotshop.com/blog/en/how-do-i-interpret-dc-motor-specifications-3657
[34]- http://electronics.stackexchange.com/questions/97477/difference-between-a-dc-motor-and-gear-motor
[35]- http://boards.straightdope.com/sdmb/archive/index.php/t-313597.html
[36]- https://www.clear.rice.edu/elec201/Book/motors.html
[37]- http://en.wikipedia.org/wiki/Torque
[38]- http://www.modularcircuits.com/blog/articles/h-bridge-secrets/h-bridges-the-basics/
[39]- http://www.precisionmicrodrives.com/uploads/media_items/h-bridge-configuration.690.595.s.png
[40]- http://www.ti.com/product/l293
[41]- http://www.ti.com/lit/ug/spruhj1f/spruhj1f.pdf
[42]- http://www.societyofrobots.com/sensors_encoder.shtml
[43]- http://www.ti.com/lit/ug/spruhj1f/spruhj1f.pdf
[44]- http://www.ti.com/lit/ug/spruhj1f/spruhj1f.pdf
[45]- http://www.ti.com/lit/ug/spruhj1f/spruhj1f.pdf
[46]- http://www.ti.com/lit/ug/spruhj1f/spruhj1f.pdf
[47]- http://www.societyofrobots.com/sensors_encoder.shtml
[48]- http://www.x-io.co.uk/oscillatory-motion-tracking-with-x-imu/
[49]- http://cache.freescale.com/files/sensors/doc/app_note/AN3397.pdf
[50]- https://stackoverflow.com/questions/17572769/calculating-distances-using-accelerometer
[51]- http://invensense.com/mems/gyro/documents/PS-MPU-9150A-00v4_3.pdf
[52]- http://www.invensense.com/mems/gyro/mpu9150.html
[53]- http://www.dimensionengineering.com/info/accelerometers
[54]- https://en.wikipedia.org/wiki/Accelerometer#Structure
[55]- http://pdf1.alldatasheet.com/datasheet-pdf/view/535562/AKM/AK8975.html
[56]- http://wikid.eu/index.php/Hall_effect_Magnetometer
[57]- http://www.ni.com/white-paper/8175/en/
[58]- http://www.memsic.com/userfiles/files/publications/Articles/Electronic_Products_Feb_%202012_Magnetometer.pdf
[59]- http://www.sensorsmag.com/sensors-mag/electronic-gimbaling-6301
[60]- http://www.spectronsensors.com/articles/Magnetics2005.pdf
[61]- http://www.electronics-tutorials.ws/electromagnetism/mag26.gif
[62]- http://www.ti.com/tool/ek-tm4c123gxl
[63]- http://www.ti.com/lit/sg/spmt285d/spmt285d.pdf
[64]- http://www.arm.com/products/processors/cortex-m/cortex-m4-processor.php
[65]- http://e2e.ti.com/?DCMP=Community&HQS=e2e
[66]- http://www.ti.com/lit/ug/spmu296/spmu296.pdf
[67]- http://www.ti.com/tool/sw-tm4c
[68]- http://www.ti.com/ww/en/launchpad/launchpads-msp430.html?DCMP=mcu-launchpad&HQS=msp430-launchpad
[69]- http://en.wikipedia.org/wiki/TI_MSP430#MSP430_CPU
[70]- http://www.ti.com/tool/ccstudio?dcmp=PPC_Google_TI&k_clickid=cc67264f-e57d-48a2-9ed0-5fdef0927369
[71]- http://www.ti.com/lit/ds/symlink/msp430g2213.pdf
[72]- http://www.ti.com/lit/ds/slvsad5/slvsad5.pdf
[73]- http://www.edaboard.com/thread34173.html
[74]- http://www.ti.com/lit/sg/slya020a/slya020a.pdf
[75]- http://www.anaren.com/sites/default/files/user-manuals/A110LR09x_Users_Manual.pdf
[76]- http://www.extremetech.com/extreme/187190-full-duplex-a-fundamental-radio-tech-breakthrough-that-could-double-throughput-alleviate-the-spectrum-crunch
[77]- https://commons.wikimedia.org/wiki/File:HalfDuplex.JPG
[78]- http://www.ti.com/lit/wp/swry007/swry007.pdf
[79]- https://learn.sparkfun.com/tutorials/bluetooth-basics
[80]- https://learn.sparkfun.com/tutorials/bluetooth-basics
[81]- https://developer.bluetooth.org/TechnologyOverview/Pages/SPP.aspx
[82]- https://en.wikipedia.org/wiki/List_of_Bluetooth_protocols#Radio_frequency_communication_.28RFCOMM.29
[83]- https://developer.bluetooth.org/TechnologyOverview/Pages/Baseband.aspx
[84]- http://www.anotherurl.com/library/bluetooth_research.htm
[85]- http://www.hp.com/rnd/library/pdf/WiFi_Bluetooth_coexistance.pdf
[86]- http://www.campusnikalo.com/2013/02/multiplexing-types-with-advantages.html
[87]- https://en.wikipedia.org/wiki/Frequency-hopping_spread_spectrum
[88]- http://www.ti.com/lit/ds/symlink/cc3000.pdf
[89]- http://techcrunch.com/2013/05/25/making-sense-of-the-internet-of-things/
[90]- http://www-01.ibm.com/software/info/internet-of-things/
[91]- http://www.ti.com/product/cc3200
[92]- http://www.ti.com/lit/ug/swru331a/swru331a.pdf
[93]- http://processors.wiki.ti.com/index.php/CC3000_Basic_Wi-Fi_example_application_for_Launchpad
[94]- http://www.ti.com/lit/ds/symlink/cc3000.pdf
[95]- http://en.wikipedia.org/wiki/Nickel%E2%80%93metal_hydride_battery
[96]- http://en.wikipedia.org/wiki/Lithium_iron_phosphate_battery#Advantages_and_disadvantages
[97]- http://en.wikipedia.org/wiki/Lithium-ion_battery
[98]- http://www.robotshop.com/blog/en/how-do-i-choose-a-battery-8-3585
[99]- http://www.batteryspace.com/Battery-holder-Li-Ion-18650-Battery-Holder-4S1P-With-2.6-long-20AWG.aspx
[100]- http://www.batteryspace.com/lg-lithium-nmc-18650-rechargeable-cell-3-7v-2600mah-9-62wh---icr18650b4-un38-3-passed.aspx
[101]- http://www.ti.com/product/TPS71501
[102]- http://www.pololu.com/product/2572/specs
[103]- http://www.jameco.com/1/1/703-mc34063ap1-1-5a-step-down-inverting-switch-voltage-regulator.html
[104]- http://www.analog.com/en/content/ta_fundamentals_of_voltage_regulators/fca.html
[105]- https://www.dimensionengineering.com/info/switching-regulators
[106]- http://www.digikey.com/en/articles/techzone/2012/may/understanding-the-advantages-and-disadvantages-of-linear-regulators
[107]- http://www.rason.org/Projects/swregdes/swregdes.htm
[108]- http://www.ti.com/general/docs/datasheetdiagram.tsp?genericPartNumber=TPS71501&diagramId=SLVS338Q
[109]- http://www.jameco.com/Jameco/Products/ProdDS/316945.pdf
[110]- https://www.jameco.com/webapp/wcs/stores/servlet/ProductDisplay?storeId=10001&productId=316945&langId=1&catalogId=10001&ddkey=https:CookieLogon
[111]- https://www.sparkfun.com/products/12820?gclid=CIWw2eifoMICFc1i7AodpCcApQ
[112]- http://energia.nu/Serial.html
[113]- http://www.ti.com/lit/ds/symlink/cc3000.pdf
[114]- http://www.ti.com/lit/ug/slau318e/slau318e.pdf
[115]- https://www.sparkfun.com/products/12700?gclid=CJvp46vEqMICFSwV7AodhRIAtg
[116]- http://www.ti.com/graphics/folders/partimages/MSP430F1612.jpg
[117]- http://www.ti.com/product/tusb3410
[118]- http://www.ti.com/lit/ds/symlink/tusb3410.pdf
[119]- http://www.ti.com/lit/ug/slau318e/slau318e.pdf
[120]- https://www.sparkfun.com/products/11486
[121]- http://www.ti.com/lit/ug/spmu357b/spmu357b.pdf
[122]- http://www.ti.com/lit/ug/slau318e/slau318e.pdf
[123]- http://www.ti.com/lit/ug/slau318e/slau318e.pdf
[124]- http://www.societyofrobots.com/schematics_infraredemitdet.shtml
[125]- https://www.sparkfun.com/products/241
[126]- http://www.ti.com/product/tps71501
[127]- http://www.jameco.com/1/1/703-mc34063ap1-1-5a-step-down-inverting-switch-voltage-regulator.htm
[128]- http://www.ros.org/
[129]- http://pointclouds.org/
[130]- https://www.openslam.org/
[131]- http://www.microsoft.com
[132]- http://www.ubuntu.com
[133]- http://choorucode.com/2013/07/23/how-to-get-started-with-kinect-for-windows-on-ubuntu-using-openni/
11. Copyright Permissions
Texas Instruments:
http://www.ti.com/corp/docs/legal/copyright.shtml
SparkFun:
https://www.sparkfun.com/static/contact
Energia:
http://energia.nu/faqs/
Creative Commons Licensing:
https://creativecommons.org/licenses/by-sa/3.0/
Figure 3.8.1.A
https://commons.wikimedia.org/wiki/File:HalfDuplex.JPG
Hall-Effect Sensor image:
Permission Pending: Encoder Images and Circuit
2D mapping solutions for low cost mobile robot
(Figures 3.3 & 3.4 in the original paper; Figures 3.1 and 3.1.1 in this paper)
Building Mobile Robot and Creating Applications for 2D Map Building and
Trajectory Control (Figure 3 in the original paper; Figure 3.2 in this paper)
ROS-based Mapping, Localization and Autonomous Navigation using a
Pioneer 3-DX Robot and their Relevant Issues
(Figures 4 & 5 in the original paper; Figures 3.3 & 3.3.1 in this paper)
Development of a 3D mapping using 2D/3D Sensors for Mobile Robot
Locomotion (Figure 7 in the original article; Figure 3.4 in this paper)
HDL-64E S2: permission to reproduce the image from the datasheet (Figure 3.5)
Permission request to reproduce Figure 3.6 in this paper