Final Report of BE (Industrial Electronics) 2019
PR–IIEE – 27B – 42708 – 2019
AUTONOMOUS 2D MAPPING OF INDOOR
ENVIRONMENT USING MULTIPLE LIDAR
AND ROS
Muhammad Mohsin Ali
(15024)
Muhammad Nomair Ishaque
(15027)
Muhammad Taha Siddiqui
(15030)
Syed Saad Ali
(15049)
Project Supervisors
Engr. Asif Ahmad
Engr. Sajid Hussain
Submitted to In-Charge Project Work
Engr. Agha Muhammad Shakir
Assistant Professor, IIEE
Institute of Industrial Electronics Engineering, PCSIR
www.iiee.edu.pk
CERTIFICATE
This is to certify that the project report entitled
“AUTONOMOUS 2D MAPPING OF INDOOR ENVIRONMENT USING
MULTIPLE LIDAR AND ROS”
Submitted by
Muhammad Mohsin Ali
(15024)
Muhammad Nomair Ishaque
(15027)
Muhammad Taha Siddiqui
(15030)
Syed Saad Ali
(15049)
is a bona fide work carried out by them under the supervision of Engr. Agha Muhammad Shakir
and Engr. Asif Ahmad, and is approved in partial fulfilment of the requirements of the
Institute of Industrial Electronics Engineering for the award of the degree of Bachelor of
Engineering in Industrial Electronics Engineering during the academic year 2019.
This project report has not previously been submitted to any other institute or university for the
award of any degree.
Members FYP-Committee
1. Dr. Farah Haroon
2. Engr. Sajid Hussain
3. Engr. Agha M. Shakir
Associate Professor
Assistant Professor
Assistant Professor
4. Engr. Asif Ahmad
5. Engr. Nauman Pirzada
Lecturer
External Advisor
Engr. Ashab Mirza
Principal, IIEE
The Project Group Detail
Mr. Muhammad Taha Siddiqui (Group Leader)
IIEE-1152/2015-16
Address: A-2451, Gulshan-e-Hadeed, Phase 2, Bin Qasim Town Karachi.
Cell: 03152053997
Email: tahasiddiqui.27@iiee.edu.pk
Mr. Muhammad Nomair Ishaque
IIEE-1149/2015-16
Address: A-2456, Gulshan-e-Hadeed, Phase 2, Bin Qasim Town Karachi.
Cell: 03130298938
Email: nomairaryan.27@iiee.edu.pk
Mr. Muhammad Mohsin Ali
IIEE-1146/2015-16
Address: A-810, Gulshan-e-Hadeed, Phase 1, Bin Qasim Town Karachi.
Cell: 03002500747
Email: mohsinali.27@iiee.edu.pk
Mr. Syed Saad Ali
IIEE-1117/2014-15
Address:
Cell: 03243346511
Email: saadi.26@iiee.edu.pk
ACKNOWLEDGEMENTS
First of all, we are very thankful to ALLAH ALMIGHTY, who gave us the knowledge, will,
and ability to accomplish our goals. Without His kind help, we would not have been able to do this.
The primary acknowledgement for this project goes to our supervisors, Engr. Asif
Ahmed Memon and Engr. Sajid Hussain, who gave us the golden opportunity to do this
wonderful project, "Autonomous 2D Mapping of Indoor Environment Using Multiple
LIDAR and ROS", which also led us to do a great deal of research and learn about
many new things. Both supervisors helped us at every step of our work and helped us
find solutions to the problems we faced during the development of this project. We are also
thankful to our external supervisor, who helped us not only in the selection of this project but
also at every step of its development.
We also thank our classmates for their useful suggestions. Their technical support and
encouragement helped us to finalize our project. We would also like to express our gratitude
towards the Institute for providing us with the best facilities and a proper environment to work
on our project.
ABSTRACT
The project consists of multiple vehicles, each with a laser sensor (RPLIDAR) mounted on
top, which move around an unknown environment autonomously and produce maps of the
surroundings indicating walls and obstacles. The map is stored in memory (on the
Raspberry Pi) and can be retrieved later. To make the vehicles autonomous, the data coming
from the LIDAR is used. All the work in this project is done in the ROS (Robot Operating
System) environment.
Key Words:
ROS (Robot Operating System)
Raspberry Pi
RP-LIDAR
Autonomous
TABLE OF CONTENTS

Title Page
Detail of Supervisors
Detail of Group Members
Acknowledgements
Abstract
Table of Contents
List of Figures
List of Tables
List of Abbreviations
List of Symbols

1 Introduction
  1.1 Background
  1.2 Motivation
  1.3 Problems and Challenges
  1.4 Possible Solutions
    1.4.1 Digital Image Processing
    1.4.2 Microsoft Kinect Sensor
    1.4.3 LIDAR
  1.5 Proposed Solution
  1.6 Objectives

2 Literature Review
  2.1 Introduction to Robotics
  2.2 SLAM
    2.2.1 Mapping
    2.2.2 Sensing
      2.2.2.1 By using LIDAR
      2.2.2.2 By using Radar
      2.2.2.3 By using Camera
  2.3 Advantages of using LIDAR
  2.4 Software Environment
  2.5 Robot Operating System (ROS)
  2.6 Build System
  2.7 ROS File System
    2.7.1 Packages
    2.7.2 Metapackages
    2.7.3 Package Manifests
    2.7.4 Repositories
    2.7.5 Message (msg) Types
    2.7.6 Service (srv) Types
  2.8 ROS Node
  2.9 ROS Core
  2.10 ROS Topic
    2.10.1 rostopic Command-Line Tool
  2.11 ROS Messages
    2.11.1 Msg Files
    2.11.2 Msg Types
    2.11.3 Building
    2.11.4 Header
  2.12 RVIZ
  2.13 Kinematics
    2.13.1 Kinematical Analogy of Skid-Steering with Differential Drive
  2.14 LIDAR
    2.14.1 Basic Principle
  2.15 Summary

3 Methodology
  3.1 Procedure
  3.2 Detail of Hardware
    3.2.1 Raspberry Pi
    3.2.2 Arduino Mega
    3.2.3 RPLIDAR A1
    3.2.4 Motor Driver
  3.3 Detail of Software
    3.3.1 Ubuntu Linux
    3.3.2 ROS
    3.3.3 Python 3
  3.4 Communication between Vehicles and PC
  3.5 Autonomous Controlling of Vehicle's Movement
  3.6 Motor Driver
  3.7 Laser Sensor
  3.8 Mapping and Localization
  3.9 Map Merging

4 Results and Discussion
  4.1 LIDAR Scanning Reading Test
  4.2 Scan Data of LIDAR
  4.3 Hector Mapping Algorithm
  4.4 Map Merging
    4.4.1 Using Merger Package
    4.4.2 Using Stitch Command

5 Conclusion and Recommendations
  5.1 Conclusion
  5.2 Future Recommendations

6 References

7 Appendices
LIST OF FIGURES

2.1 Communication of nodes in ROS network
2.2 The kinematics schematic of skid-steering mobile robot
2.3 Geometric equivalence between the wheeled skid-steering robot and the ideal differential drive robot
2.4 RPLIDAR A1
2.5 Description of LASER sensor's reference frame
2.6 A visualization of laser scanner reading in an actual room (rviz plot)
3.1 Communication between PC and Vehicles in ROS network
3.2 Model of motor driver L293D
3.3 Internal block diagram of vehicle
3.4 Example of a laser scan visualization
3.5 Example of mapping of the floor
3.6 Complete map of the floor
3.7 Final output after map merging
3.8 Hardware model of the SLAM vehicles
4.1 The laser scan data obtained from RPLIDAR
4.2 Map of the environment after implementing the Hector Mapping algorithm
4.3 rqt_graph after using the multimerger package
4.4 Result after using the multimerger package
4.5 Result after processing the images through a Python script using the stitch command
LIST OF TABLES

4.1 Standard deviation for various distances at a 90-degree angle
LIST OF ABBREVIATIONS

SLAM    Simultaneous Localization and Mapping
ROS     Robot Operating System
Pi      Raspberry Pi
LIST OF SYMBOLS

λ         Ratio of sum and difference of left- and right-side wheel linear velocities
ωi        Angular velocity of wheel i
vx, vy    Translational velocity components
CHAPTER 1
INTRODUCTION
1.1 Background/Rationale for the Project
Mapping and navigation are widely needed in many applications, from autonomous vehicles to
space missions.
Our project is focused on the concepts of mapping and navigation. By mapping we mean
generating or calculating a map of an unknown real-world environment by taking data from the
real world in different forms.
To generate or calculate the map, we need input data from our moving vehicle. This data
may consist of many things, such as odometry data from the wheels of the vehicle or
data from a LIDAR mounted on the vehicle. All the needed data is fed to appropriate
algorithms to generate a map. Different algorithms need different sorts of data, but our primary
data source is the LIDAR. This data consists of distances to the surroundings over all
360 degrees, i.e. it is omnidirectional.
1.2 Motivation of the Project
The future is automation. Robotics and automation is one of the fastest-growing industries in the
world right now. Autonomous vehicles will be roaming all around in the future. The basic
principle behind an autonomous vehicle is SLAM algorithms. Understanding the vehicles of the
future is our basic motivation behind this project; moreover, to make the mapping quicker,
we have used multiple SLAM vehicles at the same time on a single network.
1.3 Problems and Challenges
• Mapping and localization for navigation is a challenge in a real-world environment.
  There may be many disturbances in the real environment that can alter our result, such
  as the ground surface not being level, or dust covering the lens of the LIDAR. We need
  to condition the environment to avoid getting inaccurate results.
• Deciding how to make the vehicle autonomous is also very important: what approach
  should we use to make it self-driven in an unknown environment?
• The selection of a proper algorithm is also a problem. It depends on factors such as the
  environment of the system in which the robot works.

1.4 Possible Solutions
There are many possible solutions for building a system capable of implementing SLAM. Some of
them are given below:
1.4.1 Digital Image Processing (our project does not implement it)
In this solution, we mount cameras on the vehicle body, continuously take images of the
surroundings, and then process these images using digital image processing algorithms. Then, by
comparing the images at different time intervals, we can calculate the change in position of the
vehicle and build a map of the surroundings.
1.4.2 Microsoft Kinect Sensors
Kinect is a line of motion sensing input devices that was produced by Microsoft for
Xbox 360 and Xbox One video game consoles and Microsoft Windows PCs. They can
be used in SLAM applications.
1.4.3 LIDAR
LIDAR (also written lidar, LiDAR, or LADAR) is an acronym for light detection
and ranging. It is a surveying method that measures distance to a target by illuminating
the target with pulsed laser light and measuring the reflected pulses with a sensor.
Differences in laser return times and wavelengths can then be used to make digital 3-D
representations of the target.
1.5 Proposed Solution
We will use the LIDAR method because it is quite accurate, and it can also work in dark
environments. In some cases, it is relatively less expensive, faster, and more accurate than
conventional methods of topographic mapping using photogrammetry. In addition, data
derived using LIDAR is digital and can be manipulated easily in computer software. The
software environment used is ROS.
1.6 Objectives
• Understanding the Linux and ROS environments
• Mapping of an unknown environment
• Localization in the map
• Making the vehicle autonomous
• Running multiple vehicles on the same network
• Merging the final output maps obtained from both vehicles
CHAPTER 2
LITERATURE REVIEW
2.1 Introduction to Robotics
Robotics is a branch of engineering that involves the conception, design, manufacture, and
operation of robots. This field overlaps with electronics, computer science, artificial
intelligence, mechatronics, nanotechnology and bioengineering.[1]
There are many types of robots; they are used in many different environments and for many
different uses, although being very diverse in application and form they all share three basic
similarities when it comes to their construction.
1. Robots all have mechanical construction, a frame, form or shape designed to achieve a
task. For example, a robot designed to travel across heavy dirt or mud, might use
caterpillar tracks. The mechanical aspect is mostly the creator's solution to completing the
assigned task and dealing with the physics of the environment around it.
2. Robots have electrical components which power and control the machinery. For example,
the robot with caterpillar tracks would need some kind of power to move the tracker
treads. That power comes in the form of electricity, which will have to travel through a
wire and originate from a battery, a basic electrical circuit. Even petrol-powered machines
that get their power mainly from petrol still require an electric current to start the
combustion process which is why most petrol-powered machines like cars, have batteries.
The electrical aspect of robots is used for movement (through motors), sensing (where
electrical signals are used to measure things like heat, sound, position, and energy status)
and operation (robots need some level of electrical energy supplied to their motors and
sensors to activate and perform basic operations).
3. All robots contain some level of computer programming code. A program is how a robot
decides when or how to do something. In the caterpillar track example, a robot that needs
to move across a muddy road may have the correct mechanical construction and receive
the correct amount of power from its battery but would not go anywhere without a
program telling it to move. Programs are the core essence of a robot: it could have
excellent mechanical and electrical construction, but if its program is poorly constructed
its performance will be very poor (or it may not perform at all). There are three different
types of robotic programs: remote control, artificial intelligence and hybrid.
a. A robot with remote control programming has a pre-existing set of commands
that it will only perform if and when it receives a signal from a control source,
typically a human being with a remote control. It is perhaps more appropriate to
view devices controlled primarily by human commands as falling in the discipline
of automation rather than robotics.
b. Robots that use artificial intelligence interact with their environment on their
own without a control source and can determine reactions to objects and
problems they encounter using their pre-existing programming.
c. Hybrid is a form of programming that incorporates both Artificial Intelligence
and Remote-Control functions.
We are working on a hybrid form of robot that takes both manual input and that can also use given
data to process information and output meaningful commands based on a set of algorithms.
The robotics project we are working on basically involves SLAM.
2.2 SLAM
In navigation, robotic mapping, and odometry for virtual or augmented reality, SLAM
stands for Simultaneous Localization and Mapping. It means generating a map of a vehicle's
surroundings and locating the vehicle in that map at the same time. A SLAM system uses
a depth sensor to gather a series of views (something like 3D snapshots of its environment),
with approximate position and distance, and stores these views in memory. Modern SLAM
systems increasingly make use of machine learning and deep learning, and can be further
enhanced with the help of artificial intelligence. SLAM is the computational problem of
constructing or updating a map of an unknown environment while simultaneously keeping
track of an agent's location within it. While this initially appears to be a chicken-and-egg
problem, there are several algorithms known for solving it, at least approximately, in tractable
time for certain environments.
SLAM consists of two things:
i. Mapping the environment and/or updating the map continuously.
ii. Localizing the position of the robot or vehicle in the generated map. This should be updated
continuously so as to keep track of the location of the robot.
2.2.1 Mapping
Mapping of the real-world environment can be of two types.
Topological maps are a method of environment representation which capture the connectivity
(i.e., topology) of the environment rather than creating a geometrically accurate map.
Topological SLAM approaches have been used to enforce global consistency in metric SLAM
algorithms.
Grid maps, on the other hand, use arrays (typically square or hexagonal) of discretized cells to
represent a topological world and make inferences about which cells are occupied. Typically, the
cells are assumed to be statistically independent in order to simplify computation. One way to
make a 2D map is to generate an x-y plane with discrete cells and then mark each cell either 1
or 0, depending on whether it is an occupied area in the real environment or an unoccupied area
where the robot can move freely. The cells which are marked unoccupied can then be
converted into pathways where the robot can move. This set of pathways can collectively be
called a 2D map of the environment. This is the type of map we plan to generate.[2]
Currently the most common map is the occupancy grid map. In a grid map, the environment is
discretized into squares of arbitrary resolution, e.g. 1 cm x 1 cm, on which obstacles are marked.
In a probabilistic occupancy grid, grid cells can also be marked with the probability that they
contain an obstacle. This is particularly important when the position of the robot that senses an
obstacle is uncertain. Disadvantages of grid maps are their large memory requirements as well
as the computational time needed to traverse data structures with large numbers of vertices. A
solution to the latter problem is topological maps that encode entire rooms as vertices and use
edges to indicate navigable connections between them. Each application might require a different
solution that could be a combination of different map types.
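As a minimal illustration of the occupancy-grid idea described above (a sketch only; the cell size, map dimensions, and helper function are our own assumptions, not part of any ROS package):

import numpy as np

# Hypothetical occupancy grid: -1 = unknown, 0 = free, 1 = occupied
RESOLUTION = 0.01            # cell size in metres (1 cm x 1 cm)
WIDTH, HEIGHT = 1000, 1000   # 10 m x 10 m area

grid = -np.ones((HEIGHT, WIDTH), dtype=np.int8)

def mark_obstacle(x_m, y_m):
    # Mark the cell containing the point (x_m, y_m), given in metres
    # relative to the map origin, as occupied.
    col = int(x_m / RESOLUTION)
    row = int(y_m / RESOLUTION)
    if 0 <= row < HEIGHT and 0 <= col < WIDTH:
        grid[row, col] = 1

# Example: an obstacle detected 1.25 m ahead and 0.40 m to the left
mark_obstacle(1.25, 0.40)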
2.2.2 Sensing
To generate a map, we need to collect some sensing data from the environment. There may be
different ways to collect data from the environment. A system can use one or more systems for
collecting data at the same time. Some of the systems are:
2.2.2.1 By using LIDAR
LIDAR is a technology that measures the distance using laser light. The technology can scan
more than 100 meters in all directions, generating a precise 2D or 3D map of the robot's
surroundings. This information is then used by the robot to make intelligent decisions about
what to do next.
2.2.2.2 By using Radar
Radar is the master of motion measurement. Radar, short for radio detection and ranging, is a
sensor system that uses radio waves to determine the velocity, range and angle of objects Radar
is computationally lighter than a camera and uses far less data than a LIDAR. While less
angularly accurate than LIDAR, radar can work in every condition and even use reflection to
see behind obstacles. Modern self-driving prototypes rely on radar and LIDAR to "cross
validate” what they're seeing and to predict motion.
2.2.2.3 By using Camera
Cameras are by far the cheapest and most available sensor (though not the cheapest to process).
Cameras produce massive amounts of data (full high definition means millions of pixels, or
megabytes, per frame), making processing a computationally intense and algorithmically
complex job. Unlike both LIDAR and radar, cameras can see color, making them the best for
scene interpretation.
In our application, we are only concerned with using LIDAR as a way of sensing the
environment by collecting distance data. If the LIDAR method alone doesn't give an accurate
description of our given environment, then we may resort to using some other systems or
methods in addition to LIDAR. These may include taking rotary data from the wheels of the
vehicle or using an ultrasonic sensor to detect the very near surroundings of the vehicle.
SLAM systems generally use several different types of sensors, and the powers and limits of
various sensor types have been a major driver of new algorithms. Statistical independence is the
mandatory requirement to cope with metric bias and with noise in measurements. Different types
of sensors give rise to different SLAM algorithms, whose assumptions are matched to whichever
sensors are most appropriate. At one extreme, laser scans or visual features provide details of a
great many points within an area, sometimes rendering SLAM inference unnecessary because
shapes in these point clouds can be easily and unambiguously aligned at each step via image
registration. At the opposite extreme, tactile sensors are extremely sparse as they contain only
information about points very close to the agent, so they require strong prior models to
compensate in purely tactile SLAM. Most practical SLAM tasks fall somewhere between these
visual and tactile extremes.
A very common commercially available LIDAR is the RPLIDAR. RPLIDAR is a low-cost LIDAR
sensor suitable for indoor robotic SLAM applications. It provides a 360-degree scan field and a
5.5 Hz/10 Hz rotating frequency with a guaranteed 8-meter range. By means of the high-speed
image processing engine designed by RoboPeak, the overall cost is reduced greatly, making
RPLIDAR an ideal sensor in cost-sensitive areas such as consumer robotics and hardware
hobbyist projects. We chose the RPLIDAR model A1.
2.3 Advantages of using LIDAR
Data can be collected quickly and with high accuracy: LIDAR is an airborne sensing
technology which makes data collection fast and comes with extremely high accuracy as a
result of the positional advantage.[3]
Surface data has a higher sample density. LIDAR gives a much higher surface density as
compared to other methods of data collection such as photogrammetry. This improves results
for some kinds of applications such as flood plain delineation.
Capable of collecting elevation data in a dense forest: LIDAR technology can collect elevation
data from a densely populated forest thanks to the high penetrative abilities. This means it can
map even the densely forested areas.
Can be used day and night: LIDAR technology can be used day and night thanks to the active
illumination sensor. It is not affected by light variations such as darkness and light. This
improves its efficiency.
Does not have any geometry distortions: LIDAR sensors are not affected by any geometrical
distortions such as angular landscapes unlike other forms of data collection.
It can be integrated with other data sources: LIDAR technology is a versatile technology that
can be integrated with other data sources which makes it easier to analyze complex data
automatically.
It has minimum human dependence: LIDAR technology, unlike photogrammetry and
surveying has minimum human dependence since most of the processes are automated. This
also ensures valuable time is saved especially during the data collection and data analysis
phase.
It is not affected by extreme weather: LIDAR technology is independent of extreme weather
conditions such as extreme sunlight and other weather scenarios. This means that data can still
be collected under these conditions and sent for analysis.
Can be used to map inaccessible and featureless areas: LIDAR technology can be used to map
inaccessible featureless areas such as high mountains and thick snow areas.
It is cheap: LIDAR technology is a cheaper method of remote sensing in several applications
especially when dealing with vast areas of land considering the fact that it is fast and extremely
accurate.
2.4 Software Environment
The software environment used to implement the project is ROS (Robot Operating System). It
is not an actual operating system like Linux, but rather a collection of software frameworks
for robot software development. It provides services designed for heterogeneous computer
clusters such as hardware abstraction, low-level device control, implementation of commonly
used functionality, message-passing between processes, and package management. ROS runs
on the Linux operating system. Linux has many distributions, also called distros. The most
suitable distro for our work is Ubuntu, because it is user-friendly, is easy to learn compared
to other distros, and has wide support available.
In the ROS ecosystem, there are many packages available that can be tuned to work for our
specific cases. The two main packages we use for SLAM are hector_slam and gmapping.
2.5 Robot Operating System (ROS)
[1] Robot Operating System (ROS) is robotics middleware (i.e. collection of software
frameworks for robot software development). Although ROS is not an operating system, it
provides services designed for a heterogeneous computer cluster such as hardware abstraction,
low-level device control, implementation of commonly used functionality, message-passing
between processes, and package management. Running sets of ROS-based processes are
represented in a graph architecture where processing takes place in nodes that may receive,
post and multiplex sensor, control, state, planning, actuator and other messages.
Software in the ROS Ecosystem can be separated into three groups:
1.
language and platform-independent tools used for building and distributing ROS-based
software.
2. ROS client library implementations such as roscpp, rospy, and roslisp.
3. packages containing application-related code which uses one or more ROS client libraries.
Both the language-independent tools and the main client libraries (C++, Python, and Lisp) are
released under the terms of the BSD license, and as such are open source software and free for
both commercial and research use.
In simple terms, ROS can be described as a set of conventions and standards that allows us to
port our programs and implementations of algorithms across different platforms. Take an
example, if you develop an algorithm that takes a laser scan data from a LIDAR and process it
to give a processed output. Let's say you have developed this algorithm for a specific model of
a commercial LIDAR. You want to port your program, so it can be used by any LIDAR
available. You will have to specify some rules or conventions or protocols or standards. It
would be better to follow an already established set of standards (say ROS). ROS standards
specify that an input data coming from LIDAR into the algorithm should be of a specific type
and that the algorithm should produce data in a certain format and so on. You can now make
your algorithm or program to accept that specific type of data as laser scan and produce output
results in a specific type. Now you can share your code with the world by specifying standards
for input and output data formats. Anyone can use the developed algorithm if they follow the
standards of input data format and output data format. There are many other formats and
standards in ROS that are to be followed if you are to utilize a ROS-based system.
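For instance, a node that consumes laser data only needs to agree on the standard message type (sensor_msgs/LaserScan). A minimal rospy subscriber sketch is shown below (the topic name /scan is an assumption here, matching the convention used later in this report):

#!/usr/bin/env python
import rospy
from sensor_msgs.msg import LaserScan

def scan_callback(msg):
    # msg.ranges holds one distance (in metres) per beam, starting at
    # msg.angle_min and spaced by msg.angle_increment
    rospy.loginfo("Received %d range readings", len(msg.ranges))

if __name__ == '__main__':
    rospy.init_node('scan_listener')
    rospy.Subscriber('/scan', LaserScan, scan_callback)
    rospy.spin()

Any driver publishing sensor_msgs/LaserScan on /scan, from any LIDAR vendor, can feed this subscriber without changes.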
2.6 Build System
A build system is responsible for generating ‘targets' from raw source code that can be used by
an end user. These targets may be in the form of libraries, executable programs, generated
scripts, exported interfaces (e.g. C++ header files) or anything else that is not static code. In
ROS terminology, source code is organized into 'packages' where each package typically
consists of one or more targets when built.
Popular build systems that are used widely in software development are GNU Make, GNU
Autotools, CMake, and Apache Ant (used mainly for Java). In addition, virtually all integrated
development environments (IDEs) such as Qt Creator, Microsoft Visual Studio, and Eclipse
add their own build system configuration tools for the respective languages they support. Often
the build systems in these IDEs are just front ends for console-based build systems such as
Autotools or CMake.
To build targets, the build system needs information such as the locations of tool chain
components (e.g. C++ compiler), source code locations, code dependencies, external
dependencies, where those dependencies are located, which targets should be built, where
targets should be built, and where they should be installed. This is typically expressed in some
set of configuration files read by the build system. In an IDE, this information is typically stored
as part of the workspace/project meta-information (e.g. Visual C++ project file). With CMake,
it is specified in a file typically called 'CMakeLists.txt' and with GNU Make it is within a file
typically called 'Makefile'. The build system utilizes this information to process and build
source code in the appropriate order to generate targets.
ROS utilizes a custom build system, catkin, that extends CMake to manage dependencies
between packages.
catkin is the official build system of ROS. catkin combines CMake macros and Python scripts
to provide some functionality on top of CMake's normal workflow. catkin's workflow is very
similar to CMake's, but adds support for automatic 'find package' infrastructure and for building
multiple, dependent projects at the same time.
To start creating and testing packages in ROS, you need to first create and build a catkin
workspace:
$ mkdir -p ~/catkin_ws/src
$ cd ~/catkin_ws/
$ catkin_make
The catkin_make command is a convenience tool for working with catkin workspaces. Running
it the first time in your workspace, it will create a CMakeLists.txt link in your 'src' folder.
Additionally, if you look in your current directory you should now have a 'build' and a 'devel'
folder. Inside the 'devel' folder you can see that there are now several setup.*sh files. Sourcing
any of these files will overlay this workspace on top of your environment. Before continuing,
source your new setup.*sh file:
$ source devel/setup.bash
To make sure your workspace is properly overlayed by the setup script, make sure the
ROS_PACKAGE_PATH environment variable includes the directory you're in.
$ echo $ROS_PACKAGE_PATH
/home/youruser/catkin_ws/src:/opt/ros/kinetic/share
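A new package can then be created inside the workspace and built in the usual way (the package name my_robot and its dependency list below are only examples):

$ cd ~/catkin_ws/src
$ catkin_create_pkg my_robot rospy std_msgs sensor_msgs
$ cd ~/catkin_ws
$ catkin_make
$ source devel/setup.bash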
2.7 ROS File System
Files in a ROS system are arranged in a specific way.[4]
2.7.1 Packages: Packages are the software organization unit of ROS code. Each package can
contain libraries, executables, scripts, or other objects. For example, roscpp is a standard
package in ROS. It is a C++ implementation of ROS. It provides a client library that enables
C++ programmers to quickly interface with ROS Topics, Services, and Parameters. We will
discuss these in detail.
2.7.2 Metapackages: Metapackages are specialized Packages which only serve to represent a
group of related other packages. Most commonly metapackages are used as a backwards
compatible place holder for converted rosbuild Stacks.
2.7.3 Package Manifests: Manifests (package.xml) provide metadata about a package,
including its name, version, description, license information, dependencies, and other meta
information like exported packages. The package.xml package manifest is defined in REP 127.
2.7.4 Repositories: A collection of packages which share a common VCS system. Packages
which share a VCS share the same version and can be released together using the catkin
release automation tool bloom. Often these repositories will map to converted rosbuild
Stacks. Repositories can also contain only one package.
2.7.5 Message (msg) types: Message descriptions, stored in
my_package/msg/MyMessageType.msg, define the data structures for messages sent in ROS.
2.7.6 Service (srv) types: Service descriptions, stored in
my_package/srv/MyServiceType.srv, define the request and response data structures for
services in ROS.
2.8 ROS Node
A ROS node is basically one executable that represents a subprogram inside your ROS
application.
Some of the great benefits you get from ROS are:
• Code reusability
• Easy separation of all the layers in your application (motor driver, camera, joystick, controller, …)
• You can write code in any language you want (most supported ones: Python/C++) because ROS is language-agnostic
Those benefits are what you get when using nodes.
In other words, a node is an independent part of your application, that is responsible for one
thing, and which can communicate with other nodes through ROS communication tools
(topics, services, actions).
When you build a ROS application, first you create some packages. Packages are big (or not)
independent units that can contain nodes.
For example, let’s say you want to develop a part of your application with ROS for a specific
camera. You will create a package for that camera.
Inside this package, maybe you’ll have a node for the driver, a node to handle some image
processing, another node which will provide a high-level interface to modify the settings, etc.
Figure 2.1 Communication of nodes in ROS network
In this picture, you can see an example of 3 packages inside a robotic application with ROS.
The blue boxes are nodes, and the red arrows show the communication between the nodes.
2.9 ROS Core
roscore is a collection of nodes and programs that are pre-requisites of a ROS-based system.
You must have a roscore running in order for ROS nodes to communicate. It is launched
using the roscore command.
NOTE: If you use roslaunch, it will automatically start roscore if it detects that it is not
already running (unless the --wait argument is supplied).
roscore will start up:
• a ROS Master
• a ROS Parameter Server
• a rosout logging node
There are currently no plans to add to roscore.
2.10 ROS Topic
Topics are named buses over which nodes exchange messages. Topics have anonymous
publish/subscribe semantics, which decouples the production of information from its
consumption. In general, nodes are not aware of who they are communicating with. Instead,
nodes that are interested in data subscribe to the relevant topic; nodes that generate data
publish to the relevant topic. There can be multiple publishers and subscribers to a topic.
Topics are intended for unidirectional, streaming communication. Nodes that need to perform
remote procedure calls, i.e. receive a response to a request, should use services instead. There
is also the Parameter Server for maintaining small amounts of state.
2.10.1 Rostopic Command-Line Tool
The rostopic command-line tool displays information about ROS topics. Currently, it can
display a list of active topics, the publishers and subscribers of a specific topic, the publishing
rate of a topic, the bandwidth of a topic, and messages published to a topic. The display of
messages is configurable to output in a plotting-friendly format.
rostopic, like several other ROS tools, uses YAML syntax at the command line for
representing the contents of a message. This is the current list of supported commands:

rostopic bw      display bandwidth used by topic
rostopic delay   display delay for topic which has header
rostopic echo    print messages to screen
rostopic find    find topics by type
rostopic hz      display publishing rate of topic
rostopic info    print information about active topic
rostopic list    print information about active topics
rostopic pub     publish data to topic
rostopic type    print topic type

2.11 ROS Messages
Nodes communicate with each other by publishing messages to topics. A message is a simple
data structure, comprising typed fields. Standard primitive types (integer, floating point,
Boolean, etc.) are supported, as are arrays of primitive types. Messages can include arbitrarily
nested structures and arrays (much like C structs).
Nodes can also exchange a request and response message as part of a ROS service call.
These request and response messages are defined in srv files.
2.11.1 Msg Files
Msg files are simple text files for specifying the data structure of a message. These files are
stored in the msg subdirectory of a package. For more information about these files, including
a type specification, see the msg format.
2.11.2 Msg Types
Message types use standard ROS naming conventions: the name of the package + / + name of
the .msg file. For example, std_msgs/msg/String.msg has the message type std_msgs/String.
In addition to the message type, messages are versioned by an MD5 sum of the .msg file. Nodes
can only communicate messages for which both the message type and MD5 sum match.
2.11.3 Building
The ROS Client Libraries implement message generators that translate .msg files into source
code. These message generators must be invoked from your build script, though most of the
gory details are taken care of by including some common build rules. By convention, all msg
files are stored in a directory within your package called "msg." If you have msgs defined there,
you simply have to add the line rosbuild_genmsg() to your CMakeLists.txt file.
2.11.4 Header
A message may include a special message type called 'Header', which includes some
common metadata fields such as a timestamp and a frame ID. The ROS Client Libraries will
automatically set some of these fields for you if you wish, so their use is highly encouraged.
There are three fields in the header message shown below. The seq field corresponds to an id
that automatically increases as messages are sent from a given publisher. The stamp field
stores time information that should be associated with data in a message. In the case of a laser
scan, for example, the stamp might correspond to the time at which the scan was taken. The
frame_id field stores frame information that should be associated with data in a message. In
the case of a laser scan, this would be set to the frame in which the scan was taken.
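For reference, the standard ROS Header message (std_msgs/Header) contains exactly these three fields:

uint32 seq       # consecutive message id, filled in automatically by the publisher
time stamp       # time associated with the data (e.g. when the scan was taken)
string frame_id  # coordinate frame the data is associated with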
2.12 RVIZ
RVIZ is a ROS graphical interface that allows you to visualize a lot of information, using
plugins for many kinds of available topics.
The zed_display_rviz package provides two launch files (display.launch and
display_zedm.launch) that run two preconfigured RVIZ sessions for the ZED and ZED-M
cameras respectively. The two sessions load the default RVIZ plugins preconfigured to show
the most used data from the ZED ROS infrastructure.
2.13 Kinematics
Skid-steering motion is widely used for wheeled and tracked mobile robots. Steering in this
way is based on controlling the relative velocities of the left and right-side drives. The robot
turning requires slippage of the wheels for wheeled vehicles. Due to their identical steering
mechanisms, wheeled and tracked skid-steering vehicles share many properties. Like
differential steering, skid steering leads to high maneuverability, and has a simple and robust
mechanical structure, leaving more room in the vehicle for the mission equipment. In addition,
it has good mobility on a variety of terrains, which makes it suitable for all-terrain missions.[5]
2.13.1 Kinematical Analogy of Skid-Steering with Differential Drive
Figure 2.2 shows the kinematics schematic of a skid-steering robot. We consider the following
model assumptions:
1. The mass center of the robot is located at the geometric center of the body frame.
2. The two wheels of each side rotate at the same speed.
3. The robot is running on a firm ground surface, and four wheels are always in contact
with the ground surface.
Figure 2.2 The kinematics schematic of skid-steering mobile robot
We define an inertial frame (X, Y) and a local (robot body) frame (x, y), as shown in Figure 2.2.
Suppose that the robot moves on a plane with a linear velocity expressed in the local frame as
v = (vx, vy, 0)^T and rotates with an angular velocity vector ω = (0, 0, ωz)^T. If q = (X, Y, θ)^T is
the state vector describing the generalized coordinates of the robot (i.e., the COM position, X and
Y, and the orientation θ of the local coordinate frame with respect to the inertial frame), then
q̇ = (Ẋ, Ẏ, θ̇)^T denotes the vector of generalized velocities. It is straightforward to calculate
the relationship of the robot velocities in both frames as follows:
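The equation image from the source is not reproduced here; the relation is the standard planar rotation between body-frame and inertial-frame velocities:

\dot{X} = v_x \cos\theta - v_y \sin\theta, \qquad
\dot{Y} = v_x \sin\theta + v_y \cos\theta, \qquad
\dot{\theta} = \omega_z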
Let 𝜔𝑖 , 𝑖 = 1,2,3,4 denote the wheel angular velocities for front-left, rear-left, front-right and
rear-right wheels, respectively. From assumption (2), we have:
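In symbols (restating assumption (2), since the original equation image is missing):

\omega_L = \omega_1 = \omega_2, \qquad \omega_R = \omega_3 = \omega_4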
Then the direct kinematics on the plane can be stated as follows:
where v = (vx, vy) is the vehicle's translational velocity with respect to its local frame, ωz
is its angular velocity, and r is the radius of the wheel.
When the mobile robot moves, we denote the instantaneous centers of rotation (ICR) of the
left-side tread, right-side tread, and the robot body as ICRl, ICRr, and ICRG respectively. It is
known that ICRl, ICRr, and ICRG lie on a line parallel to the x-axis.
From Equations (4)–(7), the kinematics relation (3) can be represented as:
Where the elements of matrix Jw depend on the tread ICR coordinates:
If the mobile robot is symmetrical, we can get a symmetrical kinematics model (i.e., the ICRs
lie symmetrically on the x-axis and xG=0), so matrix Jw can be written as the following form:
where y0 = yl = -yr is the instantaneous tread ICR value. Note that vl = ωl·r and vr = ωr·r; for the
symmetrical model, the following equations can be obtained:
Note that vy = 0, so that vG = vx. We can get the instantaneous radius of the path curvature:
A non-dimensional path curvature variable λ is introduced as the ratio of sum and difference
of left- and right-side’s wheel linear velocities, namely:
and we can rewrite Equation (12) as:
We use a similar index as in Mandow’s work, then an ICR coefficient χ can be defined as:
where B denotes the lateral wheel base, as illustrated in Figure 2.2. The ICR coefficient χ is
equal to 1 when no slippage occurs (ideal differential drive). Note that the locomotion system
introduces a non-holonomic restriction in the motion plane because the non-square matrix Jw
has no inverse. It is noted that the above expressions also present the kinematics for ideal
wheeled differential drive vehicles, as illustrated in Figure 2.2.
Therefore, for instantaneous motion, kinematic equivalences can be considered between
skid-steering and ideal wheeled vehicles. The difference between the two traction schemes is that
whereas the ICR values for single ideal wheels are constant and coincident with the ground
contact points, tread ICR values are dynamics-dependent and always lie outside of the tread
centerlines because of slippage; thus, less slippage results in tread ICRs that lie closer to the
vehicle. [5]
The major consequence of the study above is that the effect of vehicle dynamics is introduced
in the kinematics model. Although the model does not consider the forces directly, it provides
an accurate model of the underlying dynamics using lumped parameters: ICRl and ICRr.
Furthermore, from assumptions (1) and (3), we get a symmetrical kinematics model, and an
ICR coefficient χ from Equation (15) is defined to describe the model. The relationship
between ICR coefficient and the vehicle motion path and velocity will be studied.
Figure 2.3 Geometric equivalence between the wheeled skid-steering robot and the ideal
differential drive robot.
2.14 LIDAR
Lidar is a surveying method that measures distance to a target by illuminating the target with
laser light and measuring the reflected light with a sensor. Differences in laser return times and
wavelengths can then be used to make digital 2-D representations of the target. The name lidar,
now used as an acronym for light detection and ranging (sometimes, light imaging, detection,
and ranging), was originally a portmanteau of light and radar. Lidar is sometimes called 3D
laser scanning, a special combination of 3D scanning and laser scanning. It has terrestrial,
airborne, and mobile applications.
There are many commercial versions of laser scanners available; the basic principle of all of
them is the same: emit a laser beam and then sense the reflected pulse.
2.14.1 Basic Principle
Light Detection and Ranging (LiDAR) is a technology similar to radar, using laser light instead of
radio waves. The LIDAR principle is easy to understand:
• Emit a laser pulse onto a surface.
• Catch the reflected laser pulse back at the LiDAR source with a sensor.
• Measure the time the laser travelled.
• Calculate the distance from the source with the formula:

d = (c × t) / 2        (1)

Here,
d = distance between sensor and object
c = speed of light
t = time difference between emitting the light and sensing the reflection
By using this method, we can take a fixed place as the origin or center of our system and
then measure distances in all 360 degrees around our origin by rotating the laser equipment about
its axis. If we take the step size to be 1 degree, then we will have to calculate the distances 360
times, at 1 degree, 2 degrees, and so on. Similarly, if the step size is 0.5 degrees, then we will
have to measure the distances 360 x 2 = 720 times.
The laser sensor we used is the RPLIDAR A1. It is an inexpensive laser sensor, currently available
with good accuracy, that can give ranges in 360 degrees. The laser equipment is mounted on
a motor that rotates and takes readings in all 360 degrees. The hardware drivers are provided
by the manufacturer. The readings are then used to calculate distances; the provided driver can
calculate these readings for us.
Figure 2.4 RPLIDAR A1
The calculated distances come in the form of two numbers, r and θ, together with a time
stamp. Here r represents the distance in meters and θ represents the angle in degrees. In the
figure below, we have specified an axis system with the positive X axis pointed
downwards and the positive Y axis pointed rightwards. The green box represents an arbitrary
object that is placed in front of the sensor. Now we can calculate the distance d by using the
above principle and equation.
Figure 2.5 Depiction of LASER sensor’s reference frame
When we have r and θ, we can convert them to the cartesian coordinate system:

x = r cos θ        (2)
y = r sin θ        (3)
When we have x and y coordinates for all 360° view, we can plot these points in a 2D cartesian
coordinate plane system. This plot can be considered an initial map of the system because it
shows obstacles in 360° view of the current position of the laser sensor.[6]
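A short sketch of this conversion for one full scan (plain Python/NumPy; the input is assumed to be a list of (r, θ) pairs as produced by the driver, with θ in degrees):

import numpy as np

def scan_to_points(scan):
    # Convert a list of (r, theta_deg) readings into (x, y) points
    # in the sensor's own cartesian frame, using equations (2) and (3).
    points = []
    for r, theta_deg in scan:
        theta = np.radians(theta_deg)
        points.append((r * np.cos(theta), r * np.sin(theta)))
    return points

# Example: three readings at 0, 90 and 180 degrees
print(scan_to_points([(1.0, 0.0), (2.0, 90.0), (1.5, 180.0)]))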
Figure 2.6 A visualization of laser scanner reading in an actual room (rviz plot)
2.15 Summary
The following selections have been made from our literature review:
• We will use LIDAR for SLAM, for the many benefits stated above.
• The commercially available model we use will be the RPLIDAR.
• The software environment we work in is ROS.
• ROS works on Linux.
• The ROS packages we implement are hector_slam, gmapping, and some other packages that
  we have to make for our required task.
• Kinematics is used to control the direction and motion of the vehicles in a ROS network.
CHAPTER 3
METHODOLOGY
3.1 Procedure
We have multiple mobile vehicles, each equipped with a Raspberry Pi controller with
ROS installed on it. Another controller (an Arduino Mega) is also mounted on each
vehicle to drive the motors, which control the motion of the robot. A LIDAR sensor is
mounted on top of both vehicles and is connected to the Pi controller; the Arduino Mega
is also connected to the Raspberry Pi and receives the motor control decisions from the
master. There is a PC on the network which organizes all the bots; they transmit their data
to the ROS master, through which we obtain the mapped data as our output.
The Raspberry Pi takes range data from the LIDAR, processes it, and sends it on the ROS
network, where it can be further utilized for mapping the unknown area; this processed data is
also used for the purpose of making the vehicles autonomous.
3.2 Detail of Hardware
Following are the key components of our project:
• A PC to organize network and gather data from SLAM bots and to show the mapped
data.
• Multiple robots consisting of Raspberry Pi 3B, to connect vehicles from ROS
network.
• RPLIDAR A1 sensor to gather ranges of obstacles in an unknown environment.
• A microcontroller (i.e. Arduino Mega) to convey the motion control commands to the
motor drivers.
• A DC motor driver to implement the controlled data to the motors.
• DC motors to move the vehicles.
• Battery to power up the robots.
3.2.1 Raspberry Pi
The Raspberry Pi is a series of small single-board computers. It has a built-in Wi-Fi module
and some other built-in capabilities which make it very versatile. It is used here to get input from
sensors, process the data, and then finally send that data to the ROS network, where it is further
processed by the PC and then visualized on the screen.
3.2.2 Arduino Mega
The Arduino Mega 2560 is a microcontroller board based on the ATmega2560. It has 54 digital
input/output pins (of which 15 can be used as PWM outputs), 16 analog inputs, 4 UARTs
(hardware serial ports), a 16 MHz crystal oscillator, a USB connection, a power jack, an ICSP
header, and a reset button. It contains everything needed to support the microcontroller; simply
connect it to a computer with a USB cable or power it with an AC-to-DC adapter or battery to
get started. The Mega 2560 board is compatible with most shields designed for the Uno and
the former boards Duemilanove or Diecimila.
3.2.3 RPLIDAR A1
RPLIDAR A1 is based on the laser triangulation ranging principle and uses high-speed vision
acquisition and processing hardware developed by Slamtec. The system measures distance
more than 8,000 times per second.
The core of the RPLIDAR A1 rotates clockwise to perform a 360° omnidirectional laser range
scan of its surrounding environment and then generates an outline map of the environment.
The sample rate of a LIDAR directly decides whether the robot can map quickly and
accurately. RPLIDAR improves the internal optical design and algorithm to push the sample
rate up to 8,000 samples per second, which is the highest in the current economical LIDAR
industry.
3.2.4 Motor Driver
The L293D is a dual-channel H-bridge motor driver capable of driving a pair of DC motors or
a single stepper motor.
As the shield comes with two L293D motor driver chips, it can individually drive
up to four DC motors, making it ideal for building four-wheel robot platforms.
The shield offers a total of 4 H-bridges, and each H-bridge can deliver up to 0.6 A to the motor.
The shield also comes with a 74HC595 shift register that extends 4 digital pins of the Arduino
to the 8 direction control pins of the two L293D chips.
3.3 Detail of Software
The software used in our project is listed below:
3.3.1 Ubuntu Linux
We are using the Ubuntu Linux distro 16.04 (Xenial Xerus) LTS on our PC, and the lightweight
variant, Ubuntu MATE 16.04, on the Raspberry Pi attached to each robot.
3.3.2 ROS
The software environment we utilize is ROS (Robotic Operating System). It runs on Linux OS.
The ROS version is Kinetic Kame. Many packages available with ROS are utilized.
3.3.3 Python 3
We decided to write the program for the hardware in Python. Python is a widely used
high-level programming language for general-purpose programming. An interpreted language,
Python has a design philosophy which emphasizes code readability (notably using whitespace
indentation to delimit code blocks rather than curly brackets or keywords), and a syntax which
allows programmers to express concepts in fewer lines of code than is possible in languages such
as C++ or Java.
3.4 Communication between Vehicles and PC
The communication between the PC and the Raspberry Pi is established through Wi-Fi. The Pi is
set up to be a Wi-Fi access point (AP) and the PC is then connected to it as a client. The Pi and
the PC are both issued an IP address on the network. Since both devices run ROS, we can
establish a ROS network on top of the underlying IP network. In a ROS network, one device
must act as the ROS master, where roscore runs. All the devices connected to the network rely
on the master node. We made the PC our master node. When the Pi comes online, it sets up the
network that we connect our PC to. Once both devices are connected to the ROS network, they
can communicate over the network by publishing on and subscribing to ROS topics.
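In practice this only requires pointing every machine at the same ROS master through the standard ROS_MASTER_URI and ROS_IP environment variables (the IP addresses below are placeholders, not the project's actual addresses):

# On the PC (ROS master):
$ export ROS_MASTER_URI=http://192.168.1.10:11311
$ export ROS_IP=192.168.1.10
$ roscore

# On each Raspberry Pi:
$ export ROS_MASTER_URI=http://192.168.1.10:11311
$ export ROS_IP=192.168.1.11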
Figure 3.1 Communication between PC and Vehicles in ROS network
To access the Pi and run commands on it, we need to open terminals on the Pi. One way to do
this is to physically connect a keyboard and a monitor to the Pi and type commands as needed.
That is not feasible here, since the vehicle is mobile. So, we utilize the SSH program. SSH
stands for Secure Shell. It can be used to remotely access shells on different computers in the
same network or over the wider internet. We open a terminal on the Pi from the PC, knowing
the IP addresses of both devices in the network. The IP addresses of both devices are set to be
static in the network.
3.5 Autonomous Controlling of Vehicle's Movement
The ROS node that runs on the Pi subscribes to the topic /cmd_vel and expects to get velocity
commands on this topic. Whenever we want the robot to move in any direction, a node
publishes a specific velocity on this topic in the form of two numbers: one number represents
linear velocity and the other represents angular velocity. To move the robot in a straight line
with a velocity of 0.3 m/s for 1 second, we have to publish the two numbers 0.3 and 0 (zero
angular speed).[7]
At the same time, the node for autonomous driving of the vehicle also runs, so that it can
publish motion commands for the vehicle to /cmd_vel. These commands are obtained by
processing the /scan data gathered from the LIDAR sensor mounted on each vehicle. The
decision about the motion of the vehicles is based on an obstacle-avoidance rule, in which the
logic for the robot is that it should move towards the area where it finds the most open space
and avoid areas where it detects obstacles.
Out of the total data coming from the LIDAR, the data we focus on for autonomous driving
comes from five angles: -90°, -45°, 0°, 45°, and 90°. The movement of the vehicles is based on
the data coming from these five angles.
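A simplified sketch of this obstacle-avoidance logic is shown below (it assumes one range reading per degree with 0° straight ahead, and the distance threshold and speeds are illustrative values; the actual node differs in detail):

#!/usr/bin/env python
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

ANGLES = [-90, -45, 0, 45, 90]   # the five directions considered

def scan_callback(msg):
    # pick one range reading per chosen angle (1-degree resolution assumed)
    readings = {a: msg.ranges[a % 360] for a in ANGLES}
    cmd = Twist()
    if readings[0] > 0.5:            # enough open space straight ahead
        cmd.linear.x = 0.3
    else:                            # turn towards the more open side
        cmd.angular.z = 0.5 if readings[90] > readings[-90] else -0.5
    pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('autonomous_drive')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
    rospy.Subscriber('/scan', LaserScan, scan_callback)
    rospy.spin()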
3.6 Motor Driver
The motor driver we use is the L293D. It takes a constant 12 V DC input from the battery. It can
control both the speed and the direction of the motors. To control the direction, we apply 5 V
from the controller in two different polarities: one polarity for the forward direction and the
other for the reverse direction. To control the speed, we apply a PWM signal of varying duty
cycle on the appropriate pin of the driver. The speed of the motor, or the output voltage of the
driver, is proportional to the duty cycle of the PWM applied to the driver. The controller takes
linear and angular velocity commands and converts them into an appropriate RPM for each of
the four motors (by using the kinematics equations); then, for that specific RPM, an appropriate
PWM signal is generated for each motor.
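The conversion performed by the controller can be sketched as follows (the wheel radius, track width, and PWM scaling below are illustrative values, not the project's actual parameters):

import math

WHEEL_RADIUS = 0.03   # metres (assumed)
TRACK_WIDTH  = 0.15   # left-right wheel separation in metres (assumed)

def velocities_to_rpm(v, w):
    # Differential-drive style conversion from linear velocity v (m/s)
    # and angular velocity w (rad/s) to left/right wheel speeds in RPM.
    v_left  = v - w * TRACK_WIDTH / 2.0
    v_right = v + w * TRACK_WIDTH / 2.0
    to_rpm = 60.0 / (2.0 * math.pi * WHEEL_RADIUS)
    return v_left * to_rpm, v_right * to_rpm

def rpm_to_pwm(rpm, max_rpm=200.0):
    # Map an RPM value to an 8-bit PWM duty cycle and a direction flag.
    duty = max(-1.0, min(1.0, rpm / max_rpm))
    return int(abs(duty) * 255), duty >= 0

print(velocities_to_rpm(0.3, 0.0))   # straight ahead at 0.3 m/s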
Figure 3.2 Model of motor driver L293D
Figure 3.3 Internal block diagram of Vehicle
3.7 Laser Sensor
The laser sensor used is the RPLIDAR. It is provided with an SDK that converts the data coming
from the LIDAR into distances at specific angles at specific times. The distances represent how
far an arbitrary obstacle is at a specific angle at a specific time. The two numbers representing
distance and angle can be converted by calculation into distances in cartesian x and y
coordinates. The x and y coordinates are then used to make a local map, or the initial map, of
the current environment the vehicle is in. We have a driver provided by the LIDAR scanner's
manufacturer that publishes laser scan data in the form of a standard ROS message
(LaserScan.msg). The name of the topic it publishes to is /scan. The scan data can be
visualized in rviz.
Figure 3.4 Example of a laser scan visualization
3.8 Mapping and Localization
Once we have live laser scan readings, we can perform SLAM. For mapping and localization,
we can use various algorithms that have been packaged for use in the ROS environment. We
have tried using hector_slam and gmapping. hector_mapping is a SLAM approach that can be
used without odometry, as well as on platforms that exhibit roll/pitch motion (of the sensor, the
platform, or both). It leverages the high update rate of modern LIDAR systems and provides 2D
pose estimates at the scan rate of the sensors. It provides sufficiently accurate results for many
real-world scenarios. The Hector SLAM system relies completely on laser scan readings and
does not need any odometry. Therefore, this system has its limitations. For example, if we move
in a long parallel hallway for a long time, the system has no way to determine whether it is
moving or not, and it will lose its localization. The other problem is that if we take our vehicle
to a location with a large open environment where the obstacles are out of reach of the scanner's
rated range, it can also lose its localization.
The system works by merging the small local maps into a larger global map. When it starts its
process, it makes an initial local map of the location (as can be seen in the figure below), by the
method stated in the section on the laser sensor. Once we have an initial map, the algorithm can
update it as the vehicle moves forward. The system continuously makes maps and merges them
into one global map.[8]
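As a rough outline, running the LIDAR driver and the Hector mapping node on a vehicle looks like the following (the package and launch-file names come from the public rplidar_ros and hector_slam packages; the project's own launch files may differ):

# terminal 1: start the RPLIDAR driver, which publishes sensor_msgs/LaserScan on /scan
$ roslaunch rplidar_ros rplidar.launch

# terminal 2: start Hector SLAM, which subscribes to /scan and builds the map
$ roslaunch hector_slam_launch tutorial.launch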
Figure 3.5 Example of mapping of a floor (rviz plot)
The image below depicts the map after movement around the floor. The red arrow indicates
the position of the vehicle.
Figure 3.6 Complete map of the floor (rviz plot)
The other usable package for mapping in the ROS environment is gmapping, which we have
also tried. It is a system that relies on odometry data to work, and it does not have the
limitations stated above. In this system, the localization of the vehicle is done based on
odometry data derived from the LIDAR mounted on the vehicles.
3.9 Map Merging
As we have used multiple vehicles for mapping the unknown environment, after the mapping of
that area we have to merge the final maps obtained from the vehicles; we tried to merge them by
using their common points on the map. Attached below are the results of the approach we
tried:[9]
Figure 3.7 Final output after map merging
To merge the final maps obtained from the multiple vehicles, we developed a Python script
using a stitching approach (sketched below), which produces this kind of output.
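A minimal version of such a script, using OpenCV's high-level Stitcher interface (the file names are placeholders, and the real map images may need cropping or thresholding before stitching works reliably):

import cv2

# saved map images from the two vehicles (placeholder file names)
images = [cv2.imread('map_vehicle1.png'), cv2.imread('map_vehicle2.png')]

stitcher = cv2.Stitcher_create()          # cv2.createStitcher() on OpenCV 3.x
status, merged = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite('merged_map.png', merged)
else:
    print('Stitching failed with status', status)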
Figure 3.8 Hardware model of the SLAM vehicles
CHAPTER 4
RESULTS AND DISCUSSION
4.1 LIDAR Scanning Reading Test
We tested the LIDAR scanner readings at different angles for different distances and got results
very close to the actual distances. The standard deviations for different distances at an angle of
90 degrees with respect to the LIDAR's frame of reference are shown in the table below. It can
be seen that for short distances the results are very good (the standard deviation is smallest), but
as the distance increases the accuracy goes down a little. At some distances the standard
deviation is very large; this large inaccuracy can be corrected by taking multiple readings at
those distances.
Table 4.1 Standard deviation for various distances at a 90-degree angle

Actual Distance (cm)    Standard Deviation (of sensor's readings w.r.t. actual distance)
15        0.4926
20        0.2508
25        0.1482
30        7.3554
35        0.2303
40        0.2862
45        0.3294
50        0.4322
55        0.5574
60        0.4983
65        0.4067
70        0.4181
75        0.6874
80        0.5524
85        0.3645
90        0.5073
95        0.8996
100       48.1082
105       0.867
110       0.6404
115       0.9081
120       0.8767
125       1.7954
130       1.1421
135       1.7342
140       1.5778
145       479.253
150       376.9708
155       1.8305
165       0.8095
175       1.875
180       1.4071
185       2.0827
190       3.9184
195       1.1238
200       0.8287
205       3.4013
210       22.0152
215       3.364
220       3.5978
225       345.5956
230       3.7745
235       1.5957
240       4.8501
245       1.3579
255       17.8396
260       5.2674
265       1.7799
270       1.7954
275       1.7693
285       3.3554
290       3.7561
295       2.4505
300       2.3231
305       2.3371
310       2.5156
315       2.4276
320       2.4058
325       1.9941
330       2.8073
335       2.6088
340       3.9061
345       5.9736

4.2 Scan Data of LIDAR
The laser scan data obtained from the LIDAR is shown below; the Hector mapping algorithm is
then applied to this laser scan data to develop the map of the environment.
Figure 4.1 The laser scan data obtained from RPLIDAR
4.3 Hector Mapping Algorithm
After getting the scan data from the LIDAR, the Hector mapping algorithm is applied; the
resulting mapped data is shown below.
Figure 4.2 Map of the environment after implementing Hector mapping algorithm
4.4 Map Merging
To merge the maps obtained by the multiple SLAM vehicles, we applied several approaches,
some of which are given below.
4.4.1 Using Merger Package
For map merging, we first tried the approach of using the multi-merger package, in which the
data is combined at the /scan level. With this approach we encountered a difficulty with the
initial positions of the SLAM vehicles which could not be resolved, and we got garbage data.
Figure 4.3 rqt_graph after using the multimerger package
Figure 4.4 Result after using the multimerger package
4.4.2 Using Stitch Command
After the multimerger approach, we applied another method for combining the map images,
which is to process them through a Python script that uses a stitching approach. The output we
got from this approach is attached below.
Figure 4.5 Result after processing the images through python script using stitch
command
CHAPTER 5
CONCLUSION AND RECOMMENDATIONS
5.1 Conclusion
It is noted that every technique, and every sensor and algorithm, has its own advantages and
disadvantages. Many advancements are currently being made in SLAM robots; we used one of
them and completed it for the application of mapping.
Using these SLAM vehicles, with LIDARs mounted on top, we can map an unknown
environment, with all the work done in the ROS environment and with the vehicles moving
autonomously in the environment. To make the mapping faster and save time, we used multiple
SLAM vehicles, which can cover the area in less time; after getting the mapped data, we merged
the images to obtain a complete, combined map.
5.2 Future Recommendations
The idea of mapping can be extended to 3D mapping: data from 2D LIDARs placed in vertical
and horizontal orientations can be combined to form a 3D image.
Furthermore, path planning could be introduced into the SLAM vehicles, so that after the
mapping is complete, the vehicle moves towards a selected point on the map by following the
shortest path, with the path also generated by the SLAM system.
REFERENCES
[1] https://en.wikipedia.org/wiki/Robot_Operating_System
[2] https://www.researchgate.net/publication/323871295_Autonomous_2D_Mapping_of_an_Unknown_Environment_using_Single_1D_LIDAR_and_ROS
[3] https://desktop.arcgis.com/en/arcmap/10.3/manage-data/las-dataset/advantages-of-using-lidar-in-gis.htm
[4] http://wiki.ros.org/ROS/Concepts
[5] Tianmiao Wang, Yao Wu, Jianhong Liang, Chenhao Han, Jiao Chen, Qiteng Zhao (2015). "Analysis and Experimental Kinematics of a Skid-Steering Wheeled Robot Based on a Laser Scanner Sensor". Sensors 2015, 15, 9681-9702; doi:10.3390/s150509681
[6] http://lidarradar.com/info/advantages-and-disadvantages-of-lidar
[7] Autonomous git link
[8] Magnabosco, M., Breckon, T. P. (February 2013). "Cross-Spectral Visual Simultaneous Localization and Mapping (SLAM) with Sensor Handover" (PDF). Robotics and Autonomous Systems. 63 (2): 195-208. doi:10.1016/j.robot.2012.09.023.
[9] https://github.com/iralabdisco/ira_laser_tools
APPENDICES
Number or letter appendices and give each a title as if it were a chapter.
Example:
Appendix A: Derivation of Equations
Appendix B: Detail of Standards