UNDERGRADUATE FINAL YEAR PROJECT REPORT
Industrial Electronics Engineering
Institute of Industrial Electronics Engineering, PCSIR, Karachi
Localization and 3D Mapping for
Indoor Environment
Group Number: 43011
Batch: 2018
Group Member Names:
Muhammad Hammad Khan
18021
Syed Mazhar Hussain Rizvi
18043
Syed Murtaza Hussain
18045
Arsal Abbasi
17008
Approved by
………………………………………………………………………………………
Engr. Agha Muhammad Shakir
Assistant Professor
Project Advisor
© Institute of Industrial Electronics Engineering. All Rights Reserved – November, 2022
Author’s Declaration
We declare that we are the sole authors of this project. This is a true copy of the project, including any necessary revisions, as accepted by our advisor(s). We also grant the Institute of Industrial Electronics Engineering permission to reproduce and distribute electronic or paper copies of this project.
M. Hammad Khan (18021), hammad.30@iiee.edu.pk
Signature and Date: ……………………
S. M. Hussain Rizvi (18043), mazharrizvi.30@iiee.edu.pk
Signature and Date: ……………………
S. Murtaza Hussain (18045), murtazahussain.30@iiee.edu.pk
Signature and Date: ……………………
Arsal Abbasi (17008), Arsalabbasi.29@iiee.edu.pk
Signature and Date: ……………………
Statement of Contribution
The work in this project is divided into four basic parts, so the contribution is likewise divided into four parts. Muhammad Hammad Khan looked after all the mechanical and hardware work of the rover (the rover's acrylic hardware design, the scanning system's acrylic hardware design, and the soldering of the electronic circuitry). Syed Mazhar Hussain Rizvi looked after the calibration of the scanner and sensors according to the real-world requirements of the rover. Syed Murtaza Hussain looked after the entire software section (ROS) required in this project, the design of the electronic circuitry, and other troubleshooting, and led the team. Arsal Abbasi looked after all the reporting (which types of sensors and motors are best for this project) and maintained the weekly task file of the project.
Executive Summary
The primary technology used by mobile robots is simultaneous localization and mapping (SLAM), which is based on lidar. Lidar is frequently employed in high-precision map generation, map-based matching, and positioning because it can accurately gather distance and intensity information from the environment. The 3D point cloud map created by 3D laser SLAM reflects the 3D environment more accurately and offers more usable information for construction-related navigation and improvement.
In recent years researchers have developed several solutions for simultaneous localization and mapping (SLAM); however, real-time 3D map development and scanning of a large area is still a big challenge and requires system autonomy by involving multiple robotic units. Increasing the number of robotic units for large-area mapping creates several challenges, such as power consumption, data transfer, and the determination of the exact lidar transformations and their exact joining for real-time 3D mapping. The overall system therefore exceeds the acceptable cost and becomes a high-power-consumption solution for 3D surveying and real-time mapping. The methodology comprises the design and development of a rover-based 3D scanning system; simulated and real robot transformations (the lidars' joints, plugin, and their transformations in real time); checking visualization in Rviz (the debugging tool of ROS); checking tf-tree broadcasting to verify that data is being published and subscribed correctly; the lidar zero-degree transformation; implementation of a custom real-time 3D map development package on ROS, named IRA laser tools; rover instrumentation and control; real-time 3D map development using the rover; and hardware and software details. The basic objective of the lidar's zero-degree transformation is to create flexibility and reduce the height of the rover.
This project presents the efficient working of a scanning and mapping system comprising a low-cost rover. The system provides ease of operation at affordable rates. The real-time map development technique has been used to demonstrate the rover's suitability for surveying comparatively large vicinities.
Acknowledgment
First of all, we are very thankful to Allah Almighty, who gave us the opportunity, determination, and strength to accomplish our goal.
We have made huge efforts in this project. However, it would not have been possible without the kind support and help of several individuals, and we would like to extend our sincere thanks to all of them. We are highly indebted and thankful to our final year project supervisor Dr. Engr. Asif Ahmed Memon and co-supervisor Engr. Agha Muhammad Shakir for their guidance and constant supervision, for providing necessary information regarding the project, and for their support in completing it. Besides that, we can never thank enough our project in-charge Engr. Agha Muhammad Shakir for his inestimable guidance, mentorship, and constant support throughout the journey. We are highly indebted and will always remember our teachers for their kind support and knowledge sharing.
We would like to express our gratitude to our parents and the members of the FYP Committee for their kind cooperation and encouragement, which helped us complete this project efficiently.
Dedication
This project is dedicated to our parents, especially Kaniz Fatima Rizvi and Syed Mujahid Hussain (the parents of Murtaza, the team leader) and Mr. & Mrs. Mushtaque Ahmed Khan (Hammad's parents). It is also dedicated to Mazhar's and Arsal's parents.
Table of Contents
Author’s Declaration ......................................................................................................ii
Statement of Contribution ............................................................................................ iii
Executive Summary ...................................................................................................... iv
Acknowledgment ........................................................................................................... v
Dedication ..................................................................................................................... vi
Table of Contents .........................................................................................................vii
List of Figures ............................................................................................................... xi
List of Tables ............................................................................................................. xiii
List of Abbreviations ................................................................................... xiv
List of Symbols ............................................................................................................ xv
United Nations Sustainable Development Goals ........................................................ xvi
Similarity Index Report..............................................................................................xvii
Chapter 1 Introduction ................................................................................................... 1
1.1 Background Introduction ............................................................................. 1
1.2 Motivation of the Project ............................................................................. 3
1.3 Problems and Challenges ............................................................................. 4
1.4 Possible Solutions ........................................................................................ 4
1.4.1 Robotic Mapping ...................................................................................... 5
1.4.2 RTAB Mapping ........................................................................................ 5
1.4.3 3D Scanner Mapping and Point Cloud Generation................................... 6
1.4.4 Servo Motor and RP-Lidar-based 3D Scanner Mapping and Point Cloud
Generation .......................................................................................................... 6
1.4.5 Combination of Two or More Lidars for 3D Point-Cloud ......................... 7
1.5 Mapping Technique Packages ..................................................................... 7
1.5.1 G-Mapping ................................................................................................ 8
1.5.2 Hector Mapping ........................................................................................ 8
1.6 Localization.................................................................................................. 9
1.7 3D Point Cloud and Implementation of Real-Time Mapping ..................... 9
1.8 Kinematical Analogy of Skid-Steering with Differential Drive .................. 9
1.9 Skid Steer / Differential Drive ................................................................... 11
1.9.1 Arc Based Commands............................................................................. 11
1.9.2 Linear & Angular Velocity Commands for ROS ................................... 11
1.9.3 Forward Kinematics ................................................................ 12
1.9.4 Inverse Kinematics.................................................................. 12
1.10 Proposed Solution .................................................................................... 13
1.11 Objectives ................................................................................................ 14
1.12 Methodology ............................................................................................ 14
1.13 Organization of the Report....................................................................... 15
Chapter 2 Literature Review ........................................................................................ 16
2.1 Introduction ................................................................................................ 16
2.2 Building a 3D Map with the Rover Team Using Low-Cost 2D Laser
Scanners ........................................................................................................... 16
2.2.1 Introduction ............................................................................................. 16
2.2.2 Methodology ........................................................................................... 17
2.2.3 Multi Rover Based 3d Map Merging ...................................................... 18
2.2.4 Summary ................................................................................................. 19
2.3 HectorSLAM 2D Mapping for Simultaneous Localization and Mapping
(SLAM)............................................................................................... 20
2.3.1 Introduction ............................................................................................. 20
2.3.2 Methodology ........................................................................................... 20
2.3.3 Summary ................................................................................................. 21
2.4 Mapping with Photogrammetric Approach ............................................... 21
2.5 Lidar-Based Mapping ................................................................................ 22
2.6 Simultaneous Localization And Mapping (SLAM) Strategy .................... 23
2.7 Summary .................................................................................................... 24
Chapter 3 Methodology ............................................................................................... 25
3.1 Introduction ................................................................................................ 25
3.2 System Model ............................................................................................ 25
3.2.1 Designing and Development of Rover Based 3D Scanning System ...... 25
3.2.2 The CAD Model ..................................................................................... 26
3.2.3 Selection of Rover................................................................................... 27
3.2.4 Modifications in Rover ........................................................................... 27
3.2.5 Real Rover .............................................................................................. 28
3.3 Software Details and Control ..................................................................... 29
3.3.1 ROS (Robot Operating System) ............................................... 29
3.3.1.1 Build System ........................................................................................ 30
3.3.1.2 ROS File System .................................................................................. 31
3.3.1.3 ROS Node ............................................................................................ 32
3.3.1.4 ROSCORE ........................................................................................... 33
3.3.1.5 ROS Topic ........................................................................... 33
3.3.1.6 ROS Messages ..................................................................................... 34
3.3.1.7 Msg Files .............................................................................................. 34
3.3.1.8 Msg Types ............................................................................................ 34
3.3.1.9 Building................................................................................................ 34
3.3.1.10 Header ................................................................................................ 34
3.3.1.11 GAZEBO And RVIZ ......................................................................... 35
3.3.2 Python ..................................................................................................... 35
3.3.3 C++ ......................................................................................................... 36
3.3.4 Ubuntu (OS in Raspberry PI) .................................................................. 36
3.3.5 ROSSERIAL and Arduino IDE .............................................. 36
3.3.6 Simulated and Real Rover Transformation............................................. 37
3.3.7 TF-Tree and Topics Publishing by Frames ............................................. 37
3.3.8 World Modeling for Indoor Environments ............................... 38
3.3.8.1 Hector Mapping ................................................................................... 38
3.3.9 Implementation of IRA Laser Tools (The Real Time) Customize
Package ............................................................................................................ 39
3.3.9.1 Lidar’s Pose Update ............................................................................. 39
3.4 Hardware Details and Control ................................................................... 40
3.4.1 Rover Instrumentation and Control ........................................................ 41
3.4.2 Rover Connectivity and Control ............................................................. 41
3.4.3 Connectivity and Control ........................................................................ 41
3.4.4 Rp-Lidar And Its Working ...................................................................... 42
3.4.4.1 System Connection .............................................................................. 42
3.4.5 Raspberry pi ............................................................................................ 43
3.4.5.1 Specifications for the Raspberry Pi Model 3B .................................... 44
3.4.6 Arduino Uno ........................................................................................... 44
3.4.7 Motor Driver Module L298N.................................................................. 45
3.4.8 DC Encoder Motor .................................................................................. 46
3.4.8.1 Features: ............................................................................................... 46
3.4.8.2 Encoder Specifications: ....................................................................... 47
3.4.8.3 Encoder Connection ............................................................. 47
3.5 System Analysis And Specifications ......................................................... 47
3.5.1 Lidar Zero-Degree Transformation......................................................... 47
3.5.2 Performance Evaluation and Selection of 2D Laser Scanners................ 48
Chapter 4 Results And Discussion............................................................................... 49
4.1 Introduction ................................................................................................ 49
4.2 Rover Based Localization and Mapping .................................................... 49
4.3 Real-Time Point Cloud Generation ........................................................... 49
4.4 Simulated Results....................................................................................... 49
4.4.1 Simulated Map Generation ..................................................................... 50
4.4.2 2D Grid Map Generation ........................................................................ 50
4.4.3 Real-Time Simulated 3D Point-Cloud Generation ................................. 51
4.5 Hardware Results ....................................................................................... 52
4.5.1 Real Image of Corridor ........................................................................... 52
4.5.2 2D Grid Map Generation of Corridor ..................................................... 52
4.5.3 Real-Time 3D Point-Cloud of Corridor .................................................. 53
4.5.4 Some Scanned Areas in Map .................................................................. 54
4.5.4.1 The Desk .............................................................................................. 55
4.5.4.2 The Triangular Table ........................................................................... 55
4.5.4.3 The Window......................................................................................... 55
4.5.4.4 The Beam ........................................................................................... 56
4.5.5 Some Other Experimental Result............................................................ 56
4.5.5.1 The Chair ............................................................................................. 56
4.5.5.2 The Table ............................................................................................. 57
4.5.6 Comparison Between Simulation and Hardware Results ....................... 57
Chapter 5 Conclusion And Recommendations ............................................................ 58
5.1 Conclusion ................................................................................................. 58
5.2 Limitations ................................................................................................. 58
5.3 Recommendations for Future Work........................................................... 58
APPENDICES............................................................................................... 59
References .................................................................................................................... 71
Glossary ....................................................................................................................... 76
List of Figures
Figure 1.1 Example of 3D Mapping [1] ....................................................................... 2
Figure 1.2 Example of 2D Mapping [1] ....................................................................... 2
Figure 1.3 Example of 3D Mapping - RTAB Mapping [2] .......................................... 5
Figure 1.4 Example of 3D Mapping Using a 3D Velodyne Scanner on a Rover [2] ...... 6
Figure 1.5 Example of servo motor with lidar and its tf-tree [1] ........................... 6
Figure 1.6 Example of rover with servo-lidar combination and its 3D point-cloud [1]
........................................................................................................................................7
Figure 1.7 Example of combination of 3 lidars for 3D point-cloud [1] ....................... 7
Figure 1.8: Example of 2D Mapping – Gmapping [1] ................................................. 8
Figure 1.9: Example of flow control of Hector Mapping and its 2D map [3] .............. 8
Figure 1.10 The kinematics schematic of skid-steering mobile robot [25] .................. 9
Figure 1.11 (a) and (b): (a) Simulated model, (b) Real model ...................... 13
Figure 3.1 Simulated and Real model of scanning system ......................................... 26
Figure 3.2 Simulated Rover CAD model setup in Rviz ............................................. 26
Figure 3.3 Selected Rover Hardware ......................................................................... 27
Figure 3.4 Rover with supported RPLidars ................................................................ 27
Figure 3.5 Rover Odometry from Encoders and 1000 Rpm 12V Dc gear motor with
encoder ....................................................................................................................... 28
Figure 3.6 (a, b) Real Rover built using acrylic sheets .................................. 28
Figure 3.6 (c) Real Rover built using acrylic sheets ...................................... 29
Figure 3.7: Communication of nodes in ROS network .............................................. 33
Figure 3.8: Rover Transformation .............................................................................. 37
Figure 3.9: TF-tree (transformation tree) of Rover .................................................... 38
Figure 3.10: Topics Used in ROS For Communication ............................................. 38
Figure 3.11: Flow Diagram and 2D Map ................................................................... 39
Figure 3.12: 3D Point-Cloud using IRA Laser Tools (The Real Time) Package ...... 39
Figure 3.13: Lidar Pose Update Through Odometry of Rover .................................. 40
Figure 3.14: Rover Instrumentation and control ........................................................ 41
Figure 3.15: Rover Connectivity and control ............................................................. 41
Figure 3.16: RP-Lidar Description ............................................................................. 42
Figure 3.17: Raspberry Pi 3 Model B ........................................................................ 43
Figure 3.18: Arduino Uno .......................................................................................... 45
Figure 3.19: Motor Driver Module L298N ................................................................ 46
Figure 3.20: DC Encoder motor ................................................................................. 47
Figure 3.21: Lidar Zero-Degree Transformation. ...................................................... 47
Figure 4.1: Simulated Map in Gazebo. ...................................................................... 50
Figure 4.2: 2D Grid Map of Simulated Map in Rviz and its PNG ............................. 50
Figure 4.3 (a): 3D Point-Cloud of Simulated Corridor. ............................................. 51
Figure 4.3 (b): 3D Point-Cloud of Simulated Corridor. ............................................. 51
Figure 4.3 (c): 3D Point-Cloud of Simulated Corridor .............................................. 51
Figure 4.4: Real Image of Corridor. ........................................................................... 52
Figure 4.5: 2D Map of Corridor. ................................................................................ 52
Figure 4.6: 2D Map of Corridor with 3D Point-Cloud Generation ............................ 53
Figure 4.7 (a): 3D Real-Time Point-Cloud of Real Corridor. .................................... 53
Figure 4.7 (b): 3D Real-Time Point-Cloud of Real Corridor. .................................... 53
Figure 4.7 (c): 3D Real-Time Point-Cloud of Real Corridor. .................................... 54
Figure 4.7 (d): 3D Real-Time Point-Cloud of Real Corridor. .................................... 54
Figure 4.8: The Point-Cloud of Desk and its Real Image. ......................................... 55
Figure 4.9: The Point-Cloud of Triangular Table and its Real Image. ...................... 55
Figure 4.10: The Point-Cloud of Window and its Real Image. .................................. 55
Figure 4.11: The Point-Cloud of Beam and its Real Image. ...................................... 56
Figure 4.12: The Point-Cloud of Chair and its Real Image. ....................................... 56
Figure 4.13: The Point-Cloud of Table and its Real Image ....................................... 57
List of Tables
Table 3.1: Specifications of 2D Laser Scanner .......................................................... 47
List of Abbreviations
LIDAR   Light Detection and Ranging
ROS   Robot Operating System
SLAM   Simultaneous Localization and Mapping
2D/3D   2-Dimension/3-Dimension
G-Mapping   Occupancy Grid-based Mapping
RTAB-Map   Real-Time Appearance-Based Mapping
BIM   Building Information Model
Rviz   Robot Visualization Tool
List of Symbols
Symbols
$   Dollar
V   Voltage
A   Ampere
Greek
ω   Omega
θ   Theta
United Nations Sustainable Development Goals
The Sustainable Development Goals (SDGs) are the blueprint to achieve a better and
more sustainable future for all. They address the global challenges we face, including
poverty, inequality, climate change, environmental degradation, peace and justice.
There is a total of 17 SDGs, as listed below. The appropriate SDGs related to the project are to be checked.
□ No Poverty
□ Zero Hunger
□ Good Health and Well-being
□ Quality Education
□ Gender Equality
□ Clean Water and Sanitation
□ Affordable and Clean Energy
□ Decent Work and Economic Growth
□ Industry, Innovation and Infrastructure
□ Reduced Inequalities
□ Sustainable Cities and Communities
□ Responsible Consumption and Production
□ Climate Action
□ Life Below Water
□ Life on Land
□ Peace, Justice and Strong Institutions
□ Partnerships to Achieve the Goals
Similarity Index Report
The following students have compiled the final year report on the topic given below for partial fulfillment of the requirement for the Bachelor's degree in Industrial Electronics Engineering.
Project Title: Localization and 3D Mapping for Indoor Environment
S. No   Student Name                  Seat Number
1.      Muhammad Hammad Khan          18021
2.      Syed Mazhar Hussain Rizvi     18043
3.      Syed Murtaza Hussain          18045
4.      Arsal Abbasi                  17008
This is to certify that a plagiarism test was conducted on the complete report, and the overall similarity index was found to be less than 20%, with a maximum of 5% from a single source, as required.
Signature and Date
…………………………...
Engr. Dr. Asif Ahmed Memon
Assistant Professor
Chapter 1
Introduction
1.1 Background Introduction
The primary technology used by mobile robots is simultaneous localization and mapping (SLAM), which is based on lidar. Lidar is frequently employed in high-precision map generation, map-based matching, and positioning because it can accurately gather distance and intensity information from the environment. The 3D point cloud map created by 3D laser SLAM reflects the 3D environment more accurately and offers more usable information for construction-related navigation and improvement.
The development of sensor technology and the availability of affordable, efficient SLAM-based robots have broadened SLAM's role. SLAM's capacity to adapt to different situations by employing heterogeneous sensors is the primary factor that has elevated it to one of the most popular fields for household and industrial applications. SLAM has evolved into a crucial component of many industrial processes and is being used in cleaning, instructional, and assistive applications. The most difficult duty of any SLAM-based robot when carrying out a task is to locate itself in the area while simultaneously sensing its surroundings.
Wheeled mobile robots have been evolving since the turn of the past century, although the most well-known ones came into being only recently. Due to their straightforward mechanism of motion, they are the obvious choice for many researchers when building particular solutions. Below is a list of some products that are extremely popular.
1) Neobotix Rovers.
2) Pioneer 3-DX.
3) FLOBOT.
4) KUKA Omni Robot.
5) Ackerman Rovers.
To this end, we want to build a real-time 3D-map-based SLAM vehicle that can drive smoothly from its starting point to its destination (the surveyed zone), scan any items that are in the way as it travels through the area, and then produce a 3D point cloud of those things. Another objective is to create a cheap 3D-map-based SLAM vehicle utilising a 2D laser scanner (RPLidar A1). To scan a wide area, we first need to create a 2D map of the area, and then we need to employ a combination of lidars to create a 3D map of the whole scanned area in real time. Here are some examples of 2D and 3D maps:
Figure 1.1 Example of 3D Mapping [1]
Figure 1.2 Example of 2D Mapping [1]
1.2 Motivation of the Project
Every student in the last year of the graduating programme for any engineering subject is required to complete a project, either alone or in a group. The goal of the final year project is to demonstrate how knowledge acquired in an engineering school can be applied in the real world. The fact that this requirement is part of the final year timetable gives students a chance to showcase their abilities to a wider audience. Working on a novel and genuine idea can finally give one the assurance of being prepared to enter the professional world of work.
The final year project is the chance for an imaginative mind at the beginning of a career. Many individuals discover the inventions and discoveries they may work on while completing their engineering degrees. The final year project is the ideal time to showcase these abilities.
The project gives us the opportunity to go deep into the world of technologies and unearth innovative gems. For example, in our project we developed a real-time 3D map utilising two inexpensive RPLidars and ROS. Being able to work with such cutting-edge technology as recent graduates gives us a strong start to our careers.
The following are some more project motivational factors:
• The current solutions are expensive and not viable for the local market in the nation.
• As a result, the primary driving force behind our project is to provide a cost-effective, locally marketable autonomous mapping solution.
• The primary goal is to use ROS to create a real-time 3D map of the corridor.
The whole task has been broken down into a number of objectives in order to achieve the required project goals, as follows:
• Construction of a compact mapping rover.
• Classifying and choosing 2D scanners.
• Real-time and simulated robot transformations using the joints, plugin, and transformations of the lidars.
• Rover-created environment-specific real-time maps.
Human ease and safety, however, is what appealed to us the most and inspired us to start this project. In contrast to every other goal in the world, human life safety and convenience are prioritized. Human lives are in severe danger due to the natural disasters and variations in the weather cycle that pose an ever-growing threat.
1.3 Problems and Challenges
The exploration of new environments and the construction of their 3D representations is one of the most concentrated multi-disciplinary research fields and has developed through the use of simultaneous localization and mapping (SLAM) technology. Surveying and mapping the environment has been one of the most active topics for robotics researchers, progressing steadily over the previous ten years. There are several types of mapping robots that can conduct surveys both inside and outside of buildings. An indoor surveying robot can be built from a laser scanner and odometry setup backed by a single-board computer; an outdoor surveying robot, however, turns into a complicated sensor array supported by a fast computer.
The following are the issues encountered while using the SLAM approach and creating real-time 3D maps. While various simultaneous localization and mapping (SLAM) solutions have been developed in recent years, creating 3D real-time maps and scanning huge areas remain difficult tasks that need system autonomy through the use of several robotic units. Using more robotic units to survey a broad region presents a variety of issues, including power consumption, data transport, and the determination of the precise lidar transformations and their exact joining for real-time 3D mapping. As a result, the total system exceeds the acceptable cost and becomes a high-power-consumption solution for 3D surveying and real-time mapping.
1.4 Possible Solutions
Researchers have developed a substantial body of literature by addressing the aforementioned problems in a variety of ways. Real-time custom Python package implementation, environment scanning, 2D mapping, robot control algorithms, and 3D map generation of the entire environment are only a few of the strategies needed for real-time 3D SLAM. To make the project easier to grasp, it is split into the following primary approaches:
• RTAB-Map robotic mapping.
• Using a 3D scanner to create 3D maps.
• 3D mapping using the servo-lidar combination.
• A combination of two or more inexpensive lidars that produce a 3D map of the surroundings using the G-Mapping and Hector SLAM software.
• Localization and use of the real-time package.
1.4.1 Robotic Mapping
Robot mapping aims to make it possible for a robot to build (or utilise) a map (for outdoor usage) or a floor plan (for indoor use) and locate itself on it. Robotic mapping is the area of study and application concerned with a robot's capacity to locate itself on a map or plan and, on occasion, to create the map or floor plan itself.
1.4.2 RTAB Mapping
Real-Time Appearance-Based Mapping, or RTAB-Map, is a graph-based SLAM strategy. Appearance-based SLAM refers to an algorithm that locates the robot and maps the surroundings using information gathered from vision sensors. The robot uses a method known as loop closure to identify whether it has visited a particular spot before. The map grows as the robot explores new regions of its surroundings, so there are more stored images for each new image to be compared against; loop-closure detection therefore takes longer, with linearly increasing complexity. RTAB-Map uses a variety of algorithms to enable real-time loop closure, making it well suited for large-scale and long-term SLAM.
Figure 1.3 Example of 3D Mapping - RTAB Mapping [2]
1.4.3 3D Scanner Mapping and Point Cloud Generation
A Velodyne VLP-16 LiDAR was utilised to collect the 3D point clouds for the mapping process. The VLP-16 is a portable 3D scanner with 16 laser/detector pairs and a 360° horizontal and 30° vertical field of view. The divergence of the laser beams is 3 mrad. At a frequency of 2 Hz, complete 360° scans of 300,000 points are collected. The scanner typically functions with 3 cm accuracy. Depending on how reflective the target surface is, the VLP-16's maximum range can reach up to 100 m.
Figure 1.4 Example of 3D Mapping Using a 3D Velodyne Scanner on a Rover [2]
1.4.4 Servo Motor and RP-Lidar-Based 3D Scanner Mapping and Point Cloud
Generation
Despite the fact that lidars are becoming more affordable, 3D lidars like the VLP-16 can still be rather expensive. An alternative strategy is to rotate a 2D lidar; one such design uses a Hokuyo 2D lidar, which is still fairly pricey at around a thousand dollars. A very cheap 2D lidar, a servo motor, and a few 3D-printed components have instead been used to create a 3D lidar with a final cost of around $100. In this technique the servo motor is programmed to sweep from 0 to 180 degrees; by doing this, the lidar can capture points along all of the x, y, and z axes.
Figure 1.5 Example of servo motor with lidar and its tf-tree [1]
Figure 1.6 Example of rover with servo-lidar combination and its 3D point-cloud [1]
1.4.5 Combination of Two or More Lidars for 3D Point-Cloud
We can create a 3D point cloud of the environment using this sort of method and the
G-Mapping and Hector SLAM software packages.
Figure 1.7 Example of combination of 3 lidars for 3D point-cloud [1]
Three RP-lidars are mounted on a single surface in the image above. By installing all lidars in
accordance with the figure, we can cover all the axes of the real world and create a 3D point
cloud of the whole environment. We may utilise the packages G-Mapping and Hector SLAM
to do this.
1.5 Mapping Technique Packages
The following are the mapping technique packages:
1.5.1 G-Mapping
In the G-mapping The ROS package is an implementation of the G-mapping SLAM
method. Localization and mapping simultaneously (SLAM). The robotic challenge of
creating a map of an uncharted region while simultaneously maintaining track of the
robot's position on the map is known by this moniker.
Figure 1.8: Example of 2D Mapping – Gmapping [1]
1.5.2 Hector Mapping
Hector mapping is a SLAM technique that may be used to platforms that experience
roll and pitch motion as well as without odometry (of the sensor, the platform or both).
It is a mapping algorithm that solely extracts the map of the environment using laser
scan data.
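Both G-Mapping and Hector mapping publish the resulting 2D grid as a nav_msgs/OccupancyGrid message on the /map topic, so the growth of the map can be watched with a small rospy node (a minimal sketch; the node name and the 65-percent occupancy threshold are our own choices):

#!/usr/bin/env python
import rospy
from nav_msgs.msg import OccupancyGrid

def on_map(grid):
    # Cells hold -1 (unknown) or an occupancy probability from 0 to 100
    occupied = sum(1 for c in grid.data if c >= 65)
    free = sum(1 for c in grid.data if 0 <= c < 65)
    rospy.loginfo("map %dx%d at %.3f m/cell: %d occupied, %d free",
                  grid.info.width, grid.info.height,
                  grid.info.resolution, occupied, free)

rospy.init_node("map_listener")
rospy.Subscriber("/map", OccupancyGrid, on_map, queue_size=1)
rospy.spin()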
Figure 1.9: Example of flow control of Hector Mapping and its 2D map [3]
1.6 Localization
Robot localization is the process of situating a mobile robot with respect to its environment. Localization is one of the most crucial abilities required by robots, as determining one's position is a key step in deciding what to do next. In a typical robot localization scenario, the area is mapped out and the robot is equipped with sensors that observe both its surroundings and its own movement. The localization problem then consists of figuring out the robot's position and orientation within the map using the information gathered from these sensors. Robot localization methods need to be able to deal with noisy data and generate not just an estimate of the robot's position but also a measure of that estimate's uncertainty.
1.7 3D Point Cloud and Implementation of Real-Time Mapping
A point cloud is a collection of data points in space; the points depict a 3D object or shape, and each point has its own X, Y, and Z coordinates. Installing the real-time package in ROS allows objects on the map to be viewed in real time, and a 3D map in the form of a 3D point cloud can be constructed.
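In ROS such a cloud travels as a sensor_msgs/PointCloud2 message. The sketch below (topic and frame names are our own choices) builds and publishes a three-point cloud that Rviz can display:

#!/usr/bin/env python
import rospy
from std_msgs.msg import Header
from sensor_msgs.msg import PointCloud2
import sensor_msgs.point_cloud2 as pc2

rospy.init_node("cloud_demo")
pub = rospy.Publisher("demo_cloud", PointCloud2, queue_size=1)

# Three hand-made (x, y, z) points, in metres
points = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.5), (1.0, 1.0, 1.0)]

rate = rospy.Rate(1)
while not rospy.is_shutdown():
    header = Header(stamp=rospy.Time.now(), frame_id="base_link")
    pub.publish(pc2.create_cloud_xyz32(header, points))
    rate.sleep()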
1.8 Kinematical Analogy of Skid-Steering with Differential Drive
Figure 1.10 The kinematics schematic of skid-steering mobile robot [25]
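Skid steering has no steered wheels; turning is produced by driving the left and right wheel groups at different speeds, which makes its command interface identical to that of a differential-drive robot. In the usual formulation (summarized here in generic symbols, since the full derivation of [25] is beyond the scope of this section), the body velocities are

$$v = \frac{v_R + v_L}{2}, \qquad \omega = \frac{v_R - v_L}{\chi\,W}, \qquad \chi \ge 1,$$

where $v_L$ and $v_R$ are the left- and right-side wheel speeds, $W$ is the track width, and $\chi$ is a slip-dependent correction factor. Setting $\chi = 1$ recovers the ideal differential-drive model, so differential-drive control and odometry can be reused for the skid-steered rover once $\chi$ has been identified experimentally.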
1.9 Skid Steer / Differential Drive
The drive model involves long mathematical computations, which are handled through arc-based commands, linear and angular velocity commands, forward kinematics, and inverse kinematics, as detailed below.
1.9.1 Arc Based Commands
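An arc-based command asks the rover to follow a circular arc of radius $R$ at linear speed $v$. For a differential-drive platform with track width $W$, the standard relations (stated here in the same generic symbols as above) fix both side speeds:

$$\omega = \frac{v}{R}, \qquad v_R = \omega\left(R + \frac{W}{2}\right), \qquad v_L = \omega\left(R - \frac{W}{2}\right).$$

A straight-line command corresponds to $R \to \infty$ (equal side speeds) and turning in place to $R = 0$ (equal and opposite side speeds).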
1.9.2 Linear & Angular Velocity Commands for ROS
It is common practice in ROS to set the linear velocity in the linear.x field and the angular velocity in the angular.z field when utilising the Twist topic, which is the default for drive messages.
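A minimal rospy sketch of this practice (the /cmd_vel topic name is the common convention and an assumption here, as is the node name) publishes a constant drive command:

#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist

rospy.init_node("drive_demo")
pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)

cmd = Twist()
cmd.linear.x = 0.2   # forward speed in m/s
cmd.angular.z = 0.5  # yaw rate in rad/s

rate = rospy.Rate(10)          # re-publish at 10 Hz
while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()

The base controller (here, the Arduino via rosserial) subscribes to this topic and converts each Twist message into wheel commands through the inverse kinematics of Section 1.9.4.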
1.9.3 Forward Kinematics
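For a differential-drive (or equivalent skid-steer) rover with wheel radius $r$, track width $W$, and encoder-measured wheel angular velocities $\omega_L$ and $\omega_R$, the standard forward kinematics map wheel speeds to body velocities:

$$v = \frac{r\,(\omega_R + \omega_L)}{2}, \qquad \omega = \frac{r\,(\omega_R - \omega_L)}{W}.$$

These are the relations used when integrating the wheel-encoder readings into the rover's odometry.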
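1.9.4 Inverse Kinematics
Inverse kinematics runs the same model in the opposite direction: given a commanded body velocity $(v, \omega)$, for example from a ROS Twist message, the required wheel angular velocities follow from the standard relations (same symbols as above):

$$\omega_R = \frac{2v + \omega W}{2r}, \qquad \omega_L = \frac{2v - \omega W}{2r}.$$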
1.10 Proposed Solution
The proposed solution has different steps, which are given as follows:
• Our proposed solution is to design a "SLAM BASED 3D MAP BUILDING USING 2 LOW COST 2D LASER SCANNERS (RPLIDAR A1)" and perform motion in real time from the initial position to the final destination (surveyed region).
• The main theme of "SLAM BASED 3D MAP BUILDING USING 2 LOW COST 2D LASER SCANNERS (RPLIDAR A1)" is an orthogonal combination of two RP-Lidar 2D laser scanners, which we will use on the SLAM vehicle to build a real-time 3D point cloud map of the explored region.
• The interfacing of the scanners, the transformation joints, the zero-degree transformation, and the processing of the simultaneous localization and mapping (SLAM) technique will be achieved using the Robot Operating System (ROS).
• The developed 3D point cloud map will be further utilized to establish a Building Information Model (BIM) to validate the surveyed region.
Figure 1.11 (a) and (b): (a) Simulated model, (b) Real model
1.11 Objectives
The major objectives of our project are given as follows:
• The main objective is the real-time 3D map development of the corridor using ROS.
• The existing solutions are costly and not economically feasible for the local market of the country.
• Therefore, the main motivation of our project is to develop an economical autonomous mapping solution acceptable to the local market.
To accomplish the desired project goals, the overall work has been divided into multiple objectives as follows:
• Fabrication of a small mapping rover.
• Characterization and selection of 2D scanners.
• Simulated and real robot transformations: the lidars' joints, plugin, and their transformations in real time.
• Real-time map development of the environment by the rover.
1.12 Methodology
The methodology of our project describes how we designed the rover in simulation, how we built the simulated map, and the software and hardware control, in the following manner:
• Designing and development of the rover-based 3D scanning system.
• Simulated and real robot transformations: the lidars' joints, plugin, and their transformations in real time.
• Checking visualization in Rviz (the debugging tool of ROS).
• Checking tf-tree broadcasting to verify that data is being published and subscribed correctly.
• Lidar zero-degree transformation.
• Implementation of a custom real-time 3D map development package on ROS, named IRA laser tools.
• Rover instrumentation and control.
• Real-time 3D map development using the rover.
• Hardware component details.
• Software details.
1.13 Organization of the Report
The rest of the report is organized as follows:
The next chapter of the report presents the literature review of the project, covering its theoretical and practical aspects as well as the approaches that helped pave the way for its success. Chapter 3 discusses the methodology used to complete the project, including the hardware, components, assembly, and algorithms of the rover-based scanning and mapping system. Chapter 4 discusses the results obtained from the project, including the simulation and experimental results along with detailed comparisons between them, followed by Chapter 5, which presents the conclusions and recommendations. The report ends with the references that helped us in our journey.
Chapter 2
Literature Review
2.1 Introduction
This portion of the literature review lists the prior research that is pertinent to our study. Many researchers employed a variety of techniques to produce the 3D point-cloud of the environment, and they discussed the challenges associated with mapping
the environment and possible alternatives. For example, some researchers combined
lidar with other technologies to produce an environment map and its 3D point-cloud.
A research group stops two rovers at a rendezvous location, merges the data from both
rovers' installed horizontal and vertical lidar, and then plots a 3D point cloud offline in
MATLAB using the recorded laser readings. This technique is used to create a 3D map
of the environment.
2.2 Building a 3D Map with the Rover Team Using Low-Cost 2D Laser Scanners
2.2.1 Introduction
This research project offers a creative, low-cost method for creating a 3D map representation of an explored location utilising a pair of rovers. The solution has been developed for indoor surveying applications; the rovers can be deployed in any structured interior area with the goal of precisely modelling the scanned location. In this study, a new method for scanning interior rooms and producing Building Information Models (BIM) is given. It makes use of two small, economical rovers; each rover carries a unique orthogonal integration of two economical RPLidar 2D laser scanners together with the electronic and computational components required to carry out 3D scanning and mapping of the environment using the Robot Operating System (ROS). The surveying task was accomplished by situating each rover at a distinct remote location nearby and operating them independently to scan their individual areas and capture all sensory data. Once the surveying is finished, a face-to-face rendezvous between the two rovers is arranged to establish the mutual transformation through sensor fusion using Kalman filtering (KF). A complete 3D point cloud map of the analyzed space was created using the sensory data and transformations, and this map was afterwards processed to provide a BIM for the area.
2.2.2 Methodology
Several scanners, sensors, and electronic boards are wirelessly connected to the instrumentation circuits of the experimental rover configuration, which are managed by ROS running on a laptop. ROS collects data from the various sensors and stores it for offline analysis. The Raspberry Pi (Model 3B) serves as the main controller and directly connects with the camera, the two RPLidar A1 scanners, and the UWB DWM1000 sensor. One scanner is positioned vertically and the other horizontally on an acrylic pedestal. An Arduino Nano is used as a slave controller to communicate with the UWB sensor and update the main Raspberry Pi controller. A second Arduino UNO controller drives the rover according to the commands issued by the primary controller; it controls the wheels through an L298N dual H-bridge driver unit and performs one additional task, collecting feedback from the optical encoders built into the front wheels. The main controller interfaces with the attached slave controllers and other components, and the complete data set from the on-board sensors is wirelessly transferred to the ROS PC (laptop) for offline processing in MATLAB, which combines the horizontal and vertical scans, carries out 3D point cloud data segmentation, and merges the different maps. A precise 2D pose estimation is a key component of the technique for creating a 3D map of the explored area. The fundamental idea behind SLAM is to factorize the whole SLAM posterior $p(x_{1:t}, m_{1:q} \mid z_{1:t}, u_{0:t-1})$ in order to estimate the most likely pose $x$ and map $m$.
The vertical scanner's translational $(x_2, y_2, z_2)$ and rotational $(\theta_x, \theta_y, \theta_z)$ values in relation to the reference scanner have previously been determined during the building of the right-angled support, where both scanners have been mounted at the centre of their respective planes. Each scan point $P_2$ of the vertical scanner has therefore been converted using the standard rigid-body transformation

$$P_2^{T} = R(\theta_x, \theta_y, \theta_z)\,P_2 + [\,x_2\ \ y_2\ \ z_2\,]^{T}.$$
Here $S_2^{T}$ is the vertical scanner's scan transformed into the horizontal frame, and it has been concatenated with the reference scan $S_1$. There are 720 total scan points in the aggregated scan $S_C$, since each scanner contributes 360 scan points per unit scan. The aggregated scan $S_C$ may now be translated into the SLAM reference frame utilising the distinct recorded pose $(x_n, y_n, \theta_n)$ at the appropriate moment, which completes the conversion of the scan $S_C$ into SLAM-coordinated frames and eventually produces a reconcilable 3D point cloud map. Through the offline combination of all recorded scans with unique time stamps and the accompanying rover pose data, a complete 3D point cloud map of the studied region has been produced.
2.2.3 Multi-Rover-Based 3D Map Merging
The UWB sensor data is utilised in the framework's initial stage to roughly locate the second rover as it moves towards the first one. When there are more than two rovers in a team, the UWB sensor's distinctive data tags can quickly identify which two rovers will be in close proximity to one another; the UWB sensor thus confirms the identity of the approaching rover when it is close enough to be seen by another team member. As the distance between the rovers shrinks, they can be identified visually in the on-board camera images merely by observing their circular tags. The first rover's horizontal scans are then used to detect the oncoming rover during the final stage. The state vector $X = [t_h;\ \theta_h]$ and the covariance matrix $P$ have been predicted using the standard KF equations, as shown in (5) and (6).
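In their standard form, with $Q$ denoting the process noise covariance, these prediction equations are:

$$X_P = A\,X + B\,U_n \qquad (5)$$
$$P_P = A\,P\,A^{T} + Q \qquad (6)$$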
Here $X_P$ and $P_P$ are the newly predicted values of the state vector and covariance matrix, $B$ is the control coefficient matrix (all of whose elements are zero for this purpose), and $U_n$ is the control vector, which is not used in this calculation. The matrix $A$ accounts for any departure from the identity condition. By applying the set of KF correction equations to the newly observed values of the transformation parameters $t_h$ and $\theta_h$ between rovers from the present scan, fresh estimates of $X$ and $P$ have been created, as shown in (7) and (8).
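In their usual form, with $Z$ the newly measured transformation and $R_m$ its measurement noise covariance, these correction equations are:

$$Y = Z - H\,X_P, \qquad K = P_P\,H^{T}\,(H\,P_P\,H^{T} + R_m)^{-1} \qquad (7)$$
$$X_n = X_P + K\,Y, \qquad P_n = (I - K\,H)\,P_P \qquad (8)$$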
The new estimates $X_n$ and $P_n$ appear here, along with $H$, the measurement Jacobian with unity elements, and $K$ and $Y$, the standard gain and innovation matrices created during KF processing. The final estimated values of $\theta_h$ and $t_h$ were used to carry out the map merging, with the translational parameter $t_h$ separated into its X and Y components. The iterative KF procedure was terminated once the approaching rover was instructed to halt a few centimetres from the first rover. Finally, using the decomposed translational and rotational parameters, each mapping point $P_{G2}$ of the second rover has been transformed into the coordinate system of the first rover, as in (9). By adding the transformed map $S_{G2}^{T}$ to the first rover's map $S_{G1}$, the combined world map $S_W$ of the region traversed by both rovers is produced, as in (10).
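Consistent with this description, the merging relations take the standard forms (our reconstruction in generic notation):

$$P_{G2}^{T} = R(\theta_h)\,P_{G2} + t_h \qquad (9)$$
$$S_W = S_{G1} \cup S_{G2}^{T} \qquad (10)$$

where $R(\theta_h)$ is the 2D rotation matrix for the estimated heading offset between the two map frames.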
2.2.4 Summary
This suggested study has shown how well a scanning and mapping system composed
of several economical rovers operates. When compared to physical dimensions of the
regions that have been investigated, the system's established mapping findings have
been shown to be 98% accurate. The overall time spent producing the surveying outputs
has been cut by 85% or more when compared to the regional market's present manual
surveying procedures. The system's functioning offers simple use at reasonable prices.
The use of an efficient map merging technique has been utilized to show how a group
of rovers may be used to scan relatively vast areas. Additionally, the BIM development
has been validated for multiple testing zones utilizing the point cloud maps that were
generated. The important outcome of the research effort is highlighted in order to assist professionals working in the structural and architectural industries as soon as possible. To improve the rover's localization and the visualization of the examined regions, additional vision and inertial sensors must be incorporated in further testing and analysis of the system, and this will be taken into account in subsequent improvements to the rover system.
2.3 HectorSLAM 2D Mapping for Simultaneous Localization and Mapping (SLAM)
Performance of HectorSLAM has been assessed in this study.
2.3.1 Introduction
This project makes use of ROS (Robot Operating System), a program that allows computers to operate robotic parts. The ROS system is made up of a number of independent nodes that communicate with one another through a publish/subscribe mechanism. ROS is a customizable framework for creating robot software: a collection of protocols, tools, and libraries designed to make it simpler to build dependable robot behaviour across various robot platforms, constructed from the ground up to encourage collaborative robotics software development. The performance of HectorSLAM, a 2D SLAM system built around robust scan matching, has been assessed in this study; the experiment examined several scanning-rate settings of the LiDAR sensor as well as real-time robot movement calculation. The RPLIDAR A1, a 360-degree 2D laser scanner, has been utilized in this project. It uses a low-cost laser triangulation technology created by SLAMTEC and, as a result, performs well in all kinds of enclosed situations as well as in outdoor settings without direct sunlight. Hector SLAM is a state-of-the-art scan-matching-based mapping technique; it can be used on platforms with sensor roll or pitch motion and without an odometer.
2.3.2 Methodology
In order to operate the system, this project used ROS, which combines data from several nodes. A laser-scanner node handles the connection with the RPLidar and provides the scan data that lets the vehicle perceive its heading. The laptop displays the map and uses data from the laser-scanner and SLAM nodes to locate the vehicle. For this project, hardware is involved: a simple ground vehicle is driven across a predetermined site in order to evaluate the performance of HectorSLAM-based map development and localization in ROS. The success of the experiment was evaluated by analyzing the mapping process in a known environment while modifying several parameters, such as the LiDAR sensor's scanning frequency. It has also been noted that the system is able to pinpoint its position after moving. The HectorSLAM algorithm was used in ROS throughout this entire experiment.
2.3.3 Summary
Hector SLAM is capable of creating high-quality 2D mapping and localization. However, a significant amount of work can still be done to enhance the system's capability for localization and mapping by integrating it with other types of sensors, such as an inertial measurement unit (IMU). Other mapping methods, such as Gmapping, can be tested in a following experiment. Practically speaking, the robot can only infer its surroundings from the map while trying to match new scans against it. Although the features on the map were constructed with their positions in mind, they are frequently read incorrectly or mistakenly registered as two independent features.
2.4 Mapping with Photogrammetric Approach
Utilizing photogrammetry is one way to create 3D maps. Construction progress can be
seen in great detail in photographs, which can be automatically analyzed and
transformed to 3D formats using Structure from Motion [5-7]. Golparvar-Fard [8] first proposed an image-based as-built modelling approach based on computing, from daily progress photos, the locations and orientations of the camera and a sparse 3D geometric representation of the as-built landscape. Bae et al. [9] expanded upon these photogrammetry frameworks by proposing a quick and scalable approach for 3D modelling utilizing cameras on mobile devices. The availability of
texture information, which enables material recognition [10] and CAD model-based
object recognition [11], is a benefit of image-based modelling techniques. However, it
is challenging to apply time-lapse photography for performing consistent picture
analysis at obscured and dynamic site situations [8,12,13] because to variable lighting
(for example, inside vs. outdoor) and weather conditions. Furthermore, if similar
21
characteristics from numerous photos cannot be identified because of plain or repeating
textured surfaces, the geometry of the area will be incorrect [14]. Finding common
feature points in the gathered photographs would be difficult if there had been
significant building progress during that time for which photo images were not obtained
or if some things (such as equipment or scaffoldings) had been moved. Additionally,
the discontinuity of spatial information could not entirely be avoided in photographs
that were manually captured [8,12]. Bhatla and co. [15] shown that the technology is
unsuitable for simulating infrastructure projects in its current condition.
2.5 Lidar-Based Mapping
Unlike photography, which is limited by the operational environment, laser scanning
allows for wide-range measurements with improved resolution and precision [16].
Laser scanning can rapidly and fully gather geometric data while simultaneously
resolving all of the stated inefficiencies linked to the current practice of progress
monitoring, as compared to alternative 3D remote sensing approaches [12]. Numerous
studies in the disciplines of construction and facility management have focused on the
use of laser scanners for a range of applications, including quick workspace modelling
for equipment operations [2,17–22], construction progress monitoring [23–26], defect
identification [27–29], as-built modelling [30–34], deflection evaluations of bridges
[35–38], and pavement thickness assessments [36]. However, a thorough laser scan
requires a lot of time to complete; depending on the size of the building site, a crew of
two people may need several days to do this. To produce as-built models of construction
projects, scans obtained at various locations must be registered due to their constrained
data collection ranges and on-site occlusions [16,39]. The act of integrating and
coordinating several scans obtained at diverse locations into a world coordinate system,
or into a full model, is known as point cloud registration. Numerous planar or spherical
targets or marks must be placed in advance in overlapping scanning zones, which is
expensive, labor-intensive, and time-consuming, in order to register numerous scans in
one coordinate system. The ability to produce thorough as-built models during the
building phase is still not fully enabled, despite major advancements in automating
point cloud data collection activities. This is because, even when employing
state-of-the-art data gathering equipment, engineers still need to manually adjust data
collection parameters and pick where to capture data. Given that it depends on the
engineers' knowledge and experience in handling sensors, such manual correction is
prone to human error. On the other hand, an automated robotic scanning system would
increase the frequency, accuracy, and breadth of data collection and modelling for the
as-built conditions of construction sites.
2.6 Simultaneous Localization and Mapping (SLAM) Strategy
Robotic mapping techniques like SLAM allow a robot to estimate its location,
orientation, and a map of its surroundings. Although extensively researched in the
literature, SLAM approaches still have a number of drawbacks. For instance, Davison
et al. [40] used a monocular camera in an unfamiliar environment to retrieve the 3D
trajectory. This technique, known as "Visual SLAM," has problems with abrupt
movements and had trouble building a precise map of the area. With the use of
vision-based approaches, Lebel et al. [41] created a photogrammetric 3D process over the
directly measured laser point clouds. The density of surface points was greater while
using this approach, but real-time SLAM needed many cameras and much gear. Roca et al.
[42] created a 3D model of a structure using the Kinect sensor, and the outcome was
compared to a model made using a laser scanner. They discovered that, despite the
vision-based system's ability to detect discontinuity sites, the measurement outcomes
are significantly influenced by the material and lighting circumstances; as a result, the
resulting point cloud is uneven and noisy. Cameras are often less costly, lighter, and
use less power per instrument than laser scanners, which is advantageous for visual
SLAM. Additionally, visual SLAM has the ability to recognize colors and other visual
characteristics of things that are challenging to identify using laser scanners in an
uncharted area. Despite its benefits, visual SLAM has significant drawbacks, such as
the fact that it makes mistakes at long distances, is sensitive to lighting, and is
challenging to employ in dynamic environments [43]. The use of laser scanners in
uncharted locations to create 3D maps using SLAM has been the subject of several
research projects. Using data from 2D Light Detection and Ranging (LiDAR) and
odometry, Chong et al. [44] investigated exact localization in 3D urban contexts. The
3D map, however, is insufficient for accurate as-built site modelling because the effort
was solely localization-focused. A mobile platform was created by Chen and Cho [4]
to create a 3D point cloud map for interior spaces utilizing an orthogonal pair of LiDARs.
Although the approach produced a 3D map with good precision, visual data was not
included. Using numerous robots and merging their pose estimates and individual
maps to produce a more accurate picture of the environment is another method for
resolving the SLAM problem [45]. The approach did not, however, enable the operator
to examine a real-time map prior to the completion of the exploration job and the
optimization procedure.
2.7 Summary
First, we gain an understanding of point clouds, ROS, and how to use them. Next, we
identify which parts are needed for this project and how to mount lidars on a rover to
get a 3D point cloud of the surroundings. How the parts of the project are connected,
how we build robot models, which ROS nodes, messages, and instructions are used for
a differential-drive rover, how we control the motors, how we get lidar data from the
Raspberry Pi, how we record our real-time measurements, and how we display point
clouds in Rviz are all covered in this report.
In general, the main goal is to make apparent what the project is, what it does, how it
may be used, and how it is made.
Chapter 3
Methodology
3.1 Introduction
In this chapter we describe the methodology of our project: the procedure used to
design our rover in simulation, the platform used to make the simulated map, and the
software flow control and hardware control. The procedure covers the designing and
development of the rover-based 3D scanning system; the simulated and real robot
transformations, the lidars' joints, plugins, and their transformations on a real-time
basis; checking visualization and correcting errors in Rviz (the debugging tool of ROS);
checking tf-tree broadcasting to verify that data is being published and subscribed
correctly; the lidar zero-degree transformation for hardware flexibility; the
implementation of a customized real-time 3D map development package on ROS,
named IRA Laser Tools; rover instrumentation and control; and, lastly, real-time 3D
map development using the rover.
3.2 System Model
The procedure of the system model is as follows.
3.2.1 Designing and Development of Rover Based 3D Scanning System
The motive of our project is to scan the environment using two 2D scanners and
develop a 3D model of the scanned vicinity. The experimental setup consists of:
• A rover, equipped with an appropriate control system and instrumentation as
discussed later.
• A communication system.
• A computer.
• We are using a vehicle whose fabrication is discussed later; this vehicle is installed
with a control system and sensors, with the support of additional mechanical
attachments.
• The communication system we are using is a simple Wi-Fi network between our
rover and the PC.
• Our PC, a laptop, is installed with the Robot Operating System (ROS) and the
packages we require for data acquisition and control.
Figure 3.1 Simulated and Real model of scanning system
3.2.2 The CAD Model
The CAD model of the experimental setup is shown in Figure 3.2. The model was
developed in Autodesk Fusion 360, a cloud-enabled CAD model development software,
and was further used for simulation in the ROS environment. The model indicates the
desired connection of the two orthogonal RPLIDARs.
Figure 3.2 Simulated Rover CAD model setup in Rviz
3.2.3 Selection of Rover
• The rover we selected is a customized 4WD mobile platform model.
• There are two sections where we can mount additional hardware; these may be
called the lower section and the upper section.
Figure 3.3 Selected Rover Hardware
3.2.4 Modifications in Rover
• Additionally, for the support of the LIDARs, an acrylic stand of orthogonal shape
has been designed and fabricated. Two mounting surfaces have been developed on
the stand.
Figure 3.4 Rover with supported RPLidars
• We use 1000 RPM encoder motors to get feedback from the encoders for correct
odometry of the rover in the environment.
Figure 3.5 Rover Odometry from Encoders and 1000 Rpm 12V Dc gear motor
with encoder.
3.2.5 Real Rover
• Pictures of the real rover, fabricated using acrylic sheet, are shown below.
(A)
(B)
Figure 3.6 Real Rover fabricated using acrylic sheet (A, B)
(C)
Figure 3.6 Real Rover fabricated using acrylic sheet (C)
3.3 Software Details and Control
The software environment utilised to carry out the project is ROS (Robot Operating
System). It is a set of software frameworks for the creation of robot software, not a real
operating system like Linux. It offers features intended for heterogeneous computer
clusters, including hardware abstraction, low-level device control, the implementation
of frequently used functionality, message-passing between processes, and package
management. ROS runs on Linux, of which there are several distributions, often known
as distros. Ubuntu 18.04 is the most suitable distribution for our job because, compared
to other distributions, it is user-friendly, simple to learn, and has widespread support.
3.3.1 Robot Operating System (ROS)
ROS is a frequent acronym for Robot Operating System. Even though it isn't an
operating system, ROS offers tools for a heterogeneous computer cluster, including
hardware abstraction, low-level device control, the implementation of frequently used
functionality, message-passing between processes, and package management. The ROS
Melodic flavour was used in this project.
3.3.1.1 Build System
A build system creates, from raw source code, the "targets" that an end user can use.
These targets might be exported interfaces (like C++ header files), libraries, executable
programs, generated scripts, or anything else that isn't static code. In ROS, source code
is organized into "packages," and each package, when built, generally produces one or
more targets.
The build systems GNU Make, GNU Autotools, CMake, and Apache Ant (used mainly
for Java) are well-known and often used in the software development industry.
Additionally, almost every integrated development environment (IDE), such as Qt
Creator, Microsoft Visual Studio, and Eclipse, adds its own build system configuration
tools for the specific languages it supports. The build systems in these IDEs are
frequently little more than front ends for console-based build systems like Autotools or
CMake.
In order to construct targets, the build system requires information such as the locations
of the source code, code dependencies, external dependencies, and where those
dependencies are located. It also requires information about which targets should be
built, where they should be built, and where they should be installed. The build system
reads a certain collection of configuration files that express this behaviour. In an IDE,
this data is typically saved as part of the workspace/project metadata (such as the
Visual C++ project file). With CMake, it is described in a file known as
"CMakeLists.txt," while with GNU Make, it is located inside a file known as
"Makefile." The build system uses this data to process and build the source code in the
proper sequence to produce targets. ROS employs its own build system, catkin.
Catkin extends CMake to handle dependencies between packages and is the official
build system of the Robot Operating System. Catkin combines Python scripts and
CMake macros to add functionality to the standard CMake workflow. Although catkin
adds support for automatic package discovery and the concurrent building of numerous
dependent projects, its workflow is fairly similar to that of CMake.
You must first create and build a catkin workspace before you can begin creating
and testing packages in ROS:
$ mkdir -p ~/catkin_ws/src
$ cd ~/catkin_ws/
$ catkin_make
The catkin_make command is the standard way of working with catkin workspaces.
When it is first run in the workspace, a CMakeLists.txt link is created in the 'src'
folder, and a 'build' and a 'devel' folder appear in the workspace directory. Inside the
devel folder there are now several setup.*sh files; sourcing any of these files overlays
this workspace on top of your environment. Source your new setup file before
proceeding:
$ source devel/setup.bash
To make sure the setup script has successfully overlaid your workspace, check that the
ROS_PACKAGE_PATH environment variable contains the directory you are in:
$ echo $ROS_PACKAGE_PATH
/home/youruser/catkin_ws/src:/opt/ros/melodic/share
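With the workspace sourced, a new package can then be created inside it and built
using the standard catkin_create_pkg tool; the package name and dependency list
below are illustrative choices of our own:
$ cd ~/catkin_ws/src
$ catkin_create_pkg my_scanner_pkg rospy roscpp std_msgs
$ cd ~/catkin_ws
$ catkin_make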
3.3.1.2 ROS File System
• In a ROS system, files are arranged in a specific way:
• Packages: The ROS code is organized into packages. Each package might include
libraries, executables, scripts, and other items. One typical package in the Robot
Operating System is roscpp (ROS C++), the ROS C++ client library, which lets
C++ programmers quickly interact with ROS services, topics, and parameters.
• Metapackages: These special packages solely serve to identify a collection of
related packages. Most typically, converted rosbuild stacks are placed in
backwards-compatible metapackages.
• Package manifests: Package manifests (package.xml) include metadata about a
package, such as its name, version, description, licensing details, dependencies,
and a few more meta details like exported packages.
• Repositories: A collection of packages that share the same VCS system. By using
the catkin release automation tool bloom, packages that share a VCS may be
released concurrently under the same version. These repositories frequently map
to converted rosbuild stacks. A repository could also contain just one package.
• Message (msg) types: Message descriptions define the data format of messages
transmitted in the Robot Operating System and are saved in
my_package/msg/MyMessageType.msg.
• Service (srv) types: Service descriptions define the request and response data
format for services in ROS and are saved in my_package/srv/MyServiceType.srv.
3.3.1.3 ROS Node
• A ROS node is fundamentally one executable that represents a subprogram inside
your ROS application. Some of the great advantages you get from ROS nodes are:
• Code reusability.
• Simple layer separation for the application you are creating (motor driver,
camera, joystick, controller).
Since ROS is language-agnostic, you may write nodes in whatever language you
choose (the most popular ones are Python and C++). When employing nodes, you gain
these benefits.
In other terms, a ROS node is a standalone component of the program that handles a
single task and may interact with other nodes using the Robot Operating System's
communication capabilities (topics, services, actions).
A Robot Operating System program is built by first creating packages. Packages are
standalone units that may contain nodes. As an illustration, suppose someone wants to
use ROS to design the part of an application for a certain camera. A package would be
made specifically for that camera. Inside the package you could find a ROS node for
the driver, a ROS node to do some image processing, another ROS node that offers a
high-level interface for changing the settings, and so on.
The diagram below shows an illustration of three packages inside a ROS-based
application. The red arrows indicate communication between the nodes, which are
represented by the blue boxes.
Figure 3.7: Communication of nodes in ROS network
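As a minimal sketch of such a node (the node name, topic name, and message contents
here are illustrative choices of our own, not fixed by the project), a Python publisher
can be written with rospy as follows:

#!/usr/bin/env python
# Minimal rospy node: publishes a greeting on the 'chatter' topic once per second.
import rospy
from std_msgs.msg import String

def talker():
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rospy.init_node('talker')   # registers this executable with the ROS Master
    rate = rospy.Rate(1)        # 1 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data='hello from the rover'))
        rate.sleep()

if __name__ == '__main__':
    try:
        talker()
    except rospy.ROSInterruptException:
        pass

A matching subscriber only needs rospy.Subscriber('chatter', String, callback)
followed by rospy.spin().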
3.3.1.4 ROSCORE
A ROS-based system requires a number of core nodes and programs, collectively
known as roscore. roscore must be running for nodes to communicate with one another.
It is started using the "roscore" command. NOTE: if you use the roslaunch command,
roscore will be launched automatically if it isn't already running in another terminal
(unless the --wait argument is supplied). When roscore launches, it starts the ROS
Master, the Parameter Server, and the rosout logging node.
3.3.1.5 ROS Topic
The named buses over which ROS nodes communicate are called topics. Topics have
anonymous publish/subscribe semantics: information is produced and consumed
separately, and in the majority of ROS applications the nodes are not aware of their
communication partners. Instead, ROS nodes that are interested in data subscribe to
the relevant topic, and ROS nodes that produce data publish to the connected topic. A
topic may have several publishers and subscribers.
Communication on topics is one-way and streaming. Nodes that need to execute a
remote procedure call, that is, to get a response to a request, should use services
instead. The Parameter Server is another option for keeping small amounts of state.
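The topics active on a running system can be inspected from the command line with
the standard ROS tools; /scan below is the topic our lidar driver publishes:
$ rostopic list          # enumerate all active topics
$ rostopic echo /scan    # print incoming messages on a topic
$ rostopic hz /scan      # measure the publishing rate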
3.3.1.6 ROS Messages
Publishing messages to various topics allows ROS nodes to exchange messages with
one another. A message is just a straightforward data structure with typed fields. Both
arrays of basic types and common primitive types like integer, floating point, and
boolean are supported. Messages may include arrays and structures with arbitrary
nesting levels (much like C structs).
As part of a Robot Operating System service call, nodes may also exchange a request
and response message. In "srv" files, certain request and response messages are defined.
3.3.1.7 Msg Files
Msg files are straightforward text files used to define a message's data structure. These
files are kept in a package's msg subfolder. The msg format specification has further
details about these files, such as the available field types.
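For instance, a simple message definition for wheel-encoder feedback (an illustrative
file of our own, my_package/msg/WheelTicks.msg) could look like:
Header header
int32 left_ticks
int32 right_ticks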
3.3.1.8 Msg Types
The typical ROS naming rule for message types is the name of the package plus the
name of the message's .msg file. For instance, the file std_msgs/msg/String.msg defines
the std_msgs/String message type.
Additionally, each .msg file's MD5 sum serves as a version number for the message.
Messages may only be exchanged between nodes if both the message type and the
MD5 sum match.
3.3.1.9 Building
The message generators that convert .msg files into source code are implemented in the
ROS client libraries. These message generators must be called from the build script,
although most of the grisly details are handled by including the standard build rules.
By convention, all of a package's msg files are housed in the package's "msg"
directory. If msgs are placed there, adding the line rosbuild_genmsg() to the
CMakeLists.txt file is all that is necessary.
3.3.1.10 Header
There is a special message type called "Header" that may be included in a message and
contains certain common metadata fields, such as a timestamp and a frame ID. Using
these fields is strongly recommended, since the ROS client libraries can configure some
of them for you automatically.
There are three fields in the header. The "seq" field holds an id that increases as
successive messages are issued from a given publisher. The "stamp" field stores the
time information that should be associated with the contents of a message; in the case
of a laser scan, for instance, the stamp may correspond to the moment the scan was
taken. The "frame_id" field holds the frame information that must be associated with
the message's contents; in the case of a laser scan, this would be set to the scanning
frame.
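A short sketch of filling these fields from rospy, inside a running node (the frame name
'laser' is an illustrative choice):
import rospy
from std_msgs.msg import Header

h = Header()
h.stamp = rospy.Time.now()   # time to associate with the data
h.frame_id = 'laser'         # frame in which the data is expressed
The "seq" field is typically filled in by the client library when the message is published.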
3.3.1.11 GAZEBO And RVIZ
RVIZ, the Robot Operating System's graphical interface, lets you visualize a lot of
information with the help of several plugins covering a wide range of available topics.
RVIZ is a three-dimensional visualization tool for ROS applications: it offers a view of
the robot model, records data from the robot's sensors, and plays back the recorded
data. It can show images and point cloud data as well as data from lasers, cameras, and
other 3D and 2D devices. Gazebo, a 3D simulator, is employed alongside it, while ROS
serves as the robot's interface; combining the two results in a powerful robot simulator.
Using Gazebo, you may create a 3D scene on your computer that includes obstacles,
robots, and other objects. Gazebo also makes use of a physics engine to model light,
gravity, inertia, and so on. During assessment or testing in difficult or dangerous
settings, your robot won't sustain any harm, and it usually takes less time to run a
simulation than to set up the complete scenario on your actual robot.
3.3.2 Python
Python is a well-liked programming language. It was created by Guido van Rossum
and released in 1991. It is employed in server-side web development, software
development, mathematics, and system scripting. On a server, Python may be used to
build web applications; it can be connected to database systems and used alongside
other applications to create workflows; and it can read and modify files. Python may
also be used to produce production-ready software or for rapid prototyping.
Python was created with readability in mind and shares several characteristics with the
English language, with influence from mathematics. In addition, it uses new lines to
complete a command, rather than the semicolons or parentheses used by other
programming languages, and it relies on indentation and whitespace to define scope,
including the scope of loops, functions, and classes.
3.3.3 C++
Bjarne Stroustrup started working on C++, a middle-level programming language, in
1979 at Bell Labs. Several operating systems, including Windows, Mac OS, and
various UNIX variants, support the C++ programming language. Its concepts can be
explained in a straightforward and useful way to software engineers at all levels.
Because C++ is so close to the hardware, programmers have the opportunity to work at
a low level, giving them more control over memory management, improved
performance, and resilient software development.
3.3.4 Ubuntu (OS in Raspberry PI)
Ubuntu is a Linux-based operating system. It may be used on desktop computers,
mobile phones, and network servers. The system was developed by Canonical Ltd., a
company with its main office in the UK. Open-source software development is the
cornerstone of all the guiding principles used to construct the Ubuntu software. The
Linux-based OS was modelled after the Unix operating system. The Linux kernel
provides multitasking capabilities in multiuser systems, and GNU/Linux is open source
and free to use, which gives the user complete control over the system; this makes
Linux well suited to computer enthusiasts and developers. Tech pioneers use Ubuntu
to create highly secure, software-defined robots, and Ubuntu is among the most widely
used Linux distributions for embedded devices. Innovative IT firms look to Ubuntu to
achieve their vision of a robotic future as autonomous robots become more
sophisticated. Long-term support (LTS) editions of Ubuntu offer support for a period of
up to five years. These reasons have caused the ROS developers to stay with Ubuntu,
and ROS only fully supports Ubuntu. Robot programming works best when done with
Ubuntu and ROS.
3.3.5 ROSSERIAL and Arduino IDE
There is also the Arduino Software (IDE), sometimes known as the Arduino Integrated
Development Environment. It has a text editor for writing code, several menus, a
toolbar with buttons for frequently used tasks, a message area, and a text console, and
it connects to the Arduino hardware to upload programs and communicate with it.
Computer programs known as sketches are written using the Arduino Software (IDE).
These sketches are created in the text editor and saved as .ino files. The editor has tools
for text substitution and text searching. The message area gives feedback and also
displays errors while exporting and saving. The console displays text produced by the
Arduino Software (IDE), including complete error messages. The lower right-hand
corner of the window shows the configured board and serial port. Using the buttons on
the toolbar, you may create, open, and save sketches, verify and upload programs, open
the serial monitor, and more.
The purpose of rosserial, a point-to-point implementation of the ROS communication
protocol over a serial link, is to integrate inexpensive microcontrollers like the Arduino
into ROS. A general peer-to-peer protocol, libraries for use with Arduino, and nodes for
the PC/tablet side (currently in both Python and Java) make up rosserial.
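On the PC or Raspberry Pi side, the rosserial client node is then started against the
serial port on which the Arduino enumerates (the port name below is an assumption
and varies per system):
$ rosrun rosserial_python serial_node.py /dev/ttyACM0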
3.3.6 Simulated and Real Rover Transformation
To visualize the movement of the robot in Rviz, we must broadcast the transformations
of the robot's parts and define joints between them so that the rover's movement can be
visualized.
Figure 3.8: Rover Transformation
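A minimal sketch of such a broadcast in Python (the frame names and the 0.20 m offset
are illustrative placeholders; the actual frames follow the TF-tree in the next section):
import rospy
import tf

rospy.init_node('frame_broadcaster')
br = tf.TransformBroadcaster()
rate = rospy.Rate(10)
while not rospy.is_shutdown():
    # broadcast a fixed offset of the horizontal lidar relative to the rover base
    br.sendTransform((0.0, 0.0, 0.20),                              # x, y, z in metres
                     tf.transformations.quaternion_from_euler(0, 0, 0),
                     rospy.Time.now(),
                     'laser_horizontal',                            # child frame
                     'base_link')                                   # parent frame
    rate.sleep()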
3.3.7 TF-Tree and Topics Publishing by Frames
To check that the broadcasting of the frames' transformations is correct, we generate
the TF-tree of our rover.
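The tree can be generated with the standard tf tools:
$ rosrun rqt_tf_tree rqt_tf_tree   # live view of the transformation tree
$ rosrun tf view_frames            # writes the current tree to frames.pdf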
Figure 3.9: TF-tree (transformation tree) of Rover
Figure 3.10: Topics Used in ROS For Communication
3.3.8 World Modeling for Indoor Environments
The following world-modeling method is used for indoor environments.
3.3.8.1 Hector Mapping
Hector mapping is a SLAM technique that can be used on platforms that experience
roll and pitch motion, and it works without odometry (of the sensor, the platform, or
both). It is a mapping algorithm that extracts the map of the environment from laser
scan data alone.
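The algorithm is available as the hector_mapping ROS package; a typical bring-up
(launch-file names vary per setup, so this is indicative rather than our exact
configuration) is:
$ roslaunch hector_slam_launch tutorial.launch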
Figure 3.11: Flow Diagram and 2D Map
3.3.9 Implementation of the IRA Laser Tools (Real-Time) Customized Package
• IRA Laser Tools is a customized package which we use here to merge both
(vertical and horizontal) lidar readings and publish them as a single point cloud; a
sketch of how it is invoked follows below.
• It is used to hold the real samples (readings) coming from both lidars in order to
build a real-time 3D map of the whole environment.
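A hedged sketch of invoking the merger node (the node and parameter names follow
our reading of the ira_laser_tools package and should be verified against its
documentation; the topic and frame names are our own choices):
$ rosrun ira_laser_tools laserscan_multi_merger _destination_frame:=base_link _laserscan_topics:="/scan_horizontal /scan_vertical"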
Figure 3.12: 3D Point-Cloud using IRA Laser Tools (The Real Time) Package
3.3.9.1 Lidar’s Pose Update
IRA Laser Tools works by merging both lidars (vertical and horizontal) and producing
a single point cloud. Another important point is that its transformation is used to
update the pose of the lidars with respect to odometry: the map-to-odom transformation
is used to update the map using odometry. If we do not do this, visualization issues
appear in Rviz.
Figure 3.13: Lidar Pose Update Through Odometry of Rover
3.4 Hardware Details and Control
Hardware is the physical part of the robot. Details of each component are described
below, starting from the Raspberry Pi all the way to the motors. Each component has
been chosen after complete research into its working and compatibility with the
required task. Though other similar components could also have been used, the
following were selected on the basis of price, compatibility, ruggedness, and flexibility
of working under different conditions.
3.4.1 Rover Instrumentation and Control
Figure 3.14: Rover Instrumentation and control
3.4.2 Rover Connectivity and Control
Figure 3.15: Rover Connectivity and control
3.4.3 Connectivity and Control
In the figure given above, you can see that both lidars are connected to our Raspberry
Pi, and the Arduino is also connected to it. The Arduino is used to control the rover's
feedback motors: it reads the encoders and sends the feedback to the Raspberry Pi
through rosserial for the trajectory of the rover, while motor drivers are used to drive
the motors from the Arduino. The Raspberry Pi is connected wirelessly to the ROS PC.
The communication between the PC and the Pi is of two types: SSH and the exported
ROS_MASTER_URI. The lidar data is subscribed to by the PC from the Pi, making
this a master-slave communication. SSH is used for accessing the Raspberry Pi to
allow the communication of the Arduino with the Raspberry Pi.
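A sketch of the network configuration (the IP addresses are placeholders; whichever
machine runs roscore acts as the master, assumed here to be the PC):
# on the ROS PC (master)
$ export ROS_MASTER_URI=http://192.168.1.10:11311
$ export ROS_IP=192.168.1.10
$ roscore
# on the Raspberry Pi
$ export ROS_MASTER_URI=http://192.168.1.10:11311
$ export ROS_IP=192.168.1.11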
3.4.4 Rp-Lidar and Its Working
The RPLIDAR A1 360-degree 2D laser scanning system is produced by SLAMTEC.
The device can perform a 360-degree scan within a 6-metre radius. The 2D point cloud
data that is produced may be used for mapping, localization, and modelling of objects
and environments. Every round, the RPLIDAR A1 samples 360 points at a 5.5 Hz
scanning frequency; it can be configured up to a maximum of 10 Hz. At its most basic
level, the RPLIDAR A1 is a laser triangulation measurement apparatus. It may be used
without a hitch in any indoor environment, and outdoors away from direct sunlight.
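In ROS the scanner is exposed through the rplidar_ros driver package; a typical
bring-up (the serial port name is an assumption and depends on the wiring) is:
$ rosrun rplidar_ros rplidarNode _serial_port:=/dev/ttyUSB0
or simply:
$ roslaunch rplidar_ros rplidar.launch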
3.4.4.1 System Connection
A motor system and a range scanning system are both present in RPLIDAR A1. Before
starting to rotate and scan in a clockwise orientation, RPLIDAR A1 powers up each
subsystem. The user can get range scan data by using the serial port or USB
communication link.
Figure 3.16: RP-Lidar Description
A speed detection and adaptation system is included with the RPLIDAR A1: the
system automatically adjusts the laser scanner's frequency according to the speed of
the motor, and the host system may obtain the true speed of the RPLIDAR A1 over the
communication interface. The RPLIDAR A1 is simpler to operate and has a lower
BOM cost thanks to its straightforward power supply design. The next sections include
detailed specifications of the power and communication interface.
3.4.5 Raspberry Pi
The Raspberry Pi Foundation, in the UK, developed a range of small single-board
computers. Early on, the Raspberry Pi movement aided in advancing the introduction
of basic computer science education in classrooms and developing countries. Later, the
original Raspberry Pi model was marketed outside of its intended market for
applications like robotics and the Internet of Things, where it was far more popular
than anticipated. It is presently widely used in a number of industries, including
weather monitoring, thanks to its low cost and high portability.
Figure 3.17: Raspberry Pi 3 Model B
The Raspberry Pi Foundation makes the Debian-based (32-bit) Raspbian Linux
distribution available for download, along with third-party Ubuntu, Windows 10 IoT
Core, RISC OS, and LibreELEC (a specialized media-center distribution). In addition
to promoting Python and Scratch as the two main programming languages, it also
supports a large number of other languages. Although the original firmware is closed
source, there is an unofficial open-source substitute. A number of other operating
systems also work on a Raspberry Pi; third-party operating systems, including
Windows 10 IoT and Ubuntu MATE, are available through the official website.
3.4.5.1 Specifications for the Raspberry Pi Model 3B
3.4.6 Arduino Uno
• Because Arduino is an open-source platform, anybody may customize and
optimize the board based on the instructions and the goals they have in mind.
• The board has an integrated regulation function that maintains voltage control
when an external device is attached. The following basic information about the
Arduino is provided:
• The Arduino Uno development board, created by Arduino.cc, is based on the
ATmega328 microcontroller and is referred to as the original Arduino board.
• The operating voltage of the Arduino Uno is 5V, while the input voltage ranges
from 7V to 12V.
• As long as the load doesn't exceed a pin's maximum current rating of 40 mA, no
damage will be done.
• The operating frequency of the Arduino Uno is 16 MHz, and a crystal oscillator is
included on the board.
• The Arduino Uno pinout comprises 6 analogue pins, A0 to A5, as well as 14
digital pins, D0 to D13.
• The board may be reset programmatically using the one reset pin that is present;
we must set this pin LOW in order to reset the board.
• Additionally, it includes six power pins that offer various voltage levels.
• Six of the 14 digital pins can be used to create PWM pulses with 8-bit resolution.
The Arduino Uno's PWM pins are D3, D5, D6, D9, D10, and D11.
• The Arduino Uno includes three types of memory: SRAM (2 KB), EEPROM
(1 KB), and flash memory (32 KB).
• The Arduino Uno offers three communication protocols for interacting with
external devices: the serial protocol, the I2C protocol, and the SPI protocol.
Figure 3.18: Arduino Uno
3.4.7 Motor Driver Module L298N
The L298N Motor Driver Module, sometimes referred to as a high-voltage dual
H-bridge, is manufactured by STMicroelectronics. It is made to operate with common
TTL voltage levels. H-bridge drivers power inductive loads such as DC motors and
stepper motors, with the ability to adjust speed and to drive forward and in reverse.
This dual H-bridge driver can drive voltages up to 46V and continuous currents up to
2A per channel. Some of its specifications are as follows:
Figure 3.19: Motor Driver Module L298N
3.4.8 DC Encoder Motor
The Rhino GB37 is a 12V side-shaft DC gear motor. The motor is small and made
entirely of metal, and its low current consumption means it may be used continuously
in consumer electronics and industrial machines. The compact spur gearbox features a
D-type output shaft and a 37 mm diameter that enable excellent coupling. The Rhino
GB37 motor is available with several gearbox ratios, giving it an output speed range of
20 to 1600 RPM that is appropriate for a variety of applications. Rhino GB37 motors
are ideal for this application, since numerous small machinery applications call for a
compact and dependable industrial motor, including vending machines, coffee makers,
slot machines, various kinds of pumps, tissue dispensers, and gaming-zone equipment.
The GB37's encoder can aid in precise motor position control, and the encoder
feedback also drives the rover's odometry, as sketched below.
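As a hedged sketch of how the encoder ticks turn into odometry (the wheel radius,
track width, and ticks-per-revolution below are placeholder values, not our measured
ones):

import math

TICKS_PER_REV = 1000.0   # assumed encoder counts per wheel revolution
WHEEL_RADIUS  = 0.05     # wheel radius in metres (placeholder)
TRACK_WIDTH   = 0.25     # distance between the wheels in metres (placeholder)

x, y, theta = 0.0, 0.0, 0.0

def update_odometry(d_left_ticks, d_right_ticks):
    """Integrate one step of differential-drive odometry from tick deltas."""
    global x, y, theta
    d_left  = 2 * math.pi * WHEEL_RADIUS * d_left_ticks  / TICKS_PER_REV
    d_right = 2 * math.pi * WHEEL_RADIUS * d_right_ticks / TICKS_PER_REV
    d_center = (d_left + d_right) / 2.0           # forward motion of the base
    d_theta  = (d_right - d_left) / TRACK_WIDTH   # change in heading (radians)
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta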
3.4.8.1 Features:
3.4.8.2 Encoder Specifications:
3.4.8.3 Encoder Connection
Figure 3.20: DC Encoder motor
3.5 System Analysis and Specifications
The system analysis and specifications are as follows.
3.5.1 Lidar Zero-Degree Transformation
• The goal of the zero-degree transformation is a decrease in the length of the lidar's
frame (a reduction in the lidar's frame, making it compact in size).
• A further goal is to make the lidar mounting more adaptable.
Figure 3.21: Lidar Zero-Degree Transformation.
3.5.2 Performance Evaluation and Selection of 2D Laser Scanners
There are different 2D laser scanners on the market that offer the advantage of
non-contact sensing of the examined region and can function in a variety of situations.
They were compared based on general features of scanners such as:
• compactness
• interfacing
• cost
The specifications of the following three 2D laser scanners are as follows:
Table 3.1: Specifications of 2D Laser Scanners.
Chapter 4
Results And Discussion
4.1 Introduction
In this results and discussion chapter, we divide our results into three sections. In the
simulated section we show the simulated map, its 2D map, and the 3D point cloud of
the simulated map. In the second section we show the real image of the mapped area,
its 2D map, and the real-time 3D point cloud of the map; further, we show the results
of some scanned places in the map and some other experimental results. In the last
section we show the comparison of the simulated and hardware results.
4.2 Rover Based Localization and Mapping
• We are using two RPLIDARs: one for horizontal scanning and the other for
vertical scanning.
• The LIDAR scanning horizontally is being used for 2D SLAM.
• The 2D map has been developed using Hector SLAM, which has no requirement
for odometry data from the wheels.
• The package matches the current scan data with previous data and estimates the
pose update.
• As a result, an occupancy grid map of the environment is generated, along with
the related pose of the rover.
4.3 Real-Time Point Cloud Generation
• For taking the data and generating the real-time point cloud, an experiment has
been performed with the rover in the 3rd-floor corridor of the IIEE.
• The environment has different pillars and walls, and the path is E-shaped. The
rover was controlled over a Wi-Fi network using the teleop_twist_keyboard
package, as shown below.
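The standard keyboard teleoperation node publishes velocity commands on the
cmd_vel topic:
$ rosrun teleop_twist_keyboard teleop_twist_keyboard.py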
4.4 Simulated Results
These are the simulated results.
4.4.1 Simulated Map Generation
• In ROS there are two tools which are widely used:
• Gazebo (a simulation platform).
• Rviz (the debugging tool).
Figure 4.1: Simulated Map in Gazebo.
4.4.2 2D Grid Map Generation
• The data of both scanners was transmitted using the same network. The SLAM
package used the horizontally mounted scanner and estimated the 2D map and
pose in the simulated corridor. The result of these two values is shown in the
figure below.
• Red arrows show the odometry of the simulated rover at every movement.
Figure 4.2: 2D Grid Map of Simulated Map in Rviz and its PNG
4.4.3 Real-Time Simulated 3D Point-Cloud Generation
• Real-time point cloud generation of the simulated corridor is shown in the figures
below:
Figure 4.3 (a): 3D Point-Cloud of Simulated Corridor.
Figure 4.3 (b): 3D Point-Cloud of Simulated Corridor.
Figure 4.3 (c): 3D Point-Cloud of Simulated Corridor
4.5 Hardware Results
These are the results of our hardware.
4.5.1 Real Image of Corridor
Figure 4.4: Real Image of Corridor.
4.5.2 2D Grid Map Generation of Corridor
• Similarly, in hardware, the data of both scanners was transmitted using the same
network. The SLAM package used the horizontally mounted scanner and
estimated the 2D map and pose in the real corridor.
• In the given 2D grid map of the corridor, the little bar of extra scan shows that a
room is open.
Figure 4.5: 2D Map of Corridor.
Figure 4.6: 2D Map of Corridor with 3D Point-Cloud Generation.
4.5.3 Real-Time 3D Point-Cloud of Corridor
Figure 4.7 (a): 3D Real-Time Point-Cloud of Real Corridor.
Figure 4.7 (b): 3D Real-Time Point-Cloud of Real Corridor.
Figure 4.7 (c): 3D Real-Time Point-Cloud of Real Corridor.
Figure 4.7 (d): 3D Real-Time Point-Cloud of Real Corridor
4.5.4 Some Scanned Areas in Map
These are also results of our hardware.
4.5.4.1 The Desk
Figure 4.8: The Point-Cloud of Desk and its Real Image.
4.5.4.2 The Triangular Table
Figure 4.9: The Point-Cloud of Triangular Table and its Real Image.
4.5.4.3 The Window
Figure 4.10: The Point-Cloud of Window and its Real Image.
4.5.4.4 The Beam
Figure 4.11: The Point-Cloud of Beam and its Real Image.
4.5.5 Some Other Experimental Result
These are the results of object detection.
4.5.5.1 The Chair
Figure 4.12: The Point-Cloud of Chair and its Real Image.
4.5.5.2 The Table
Figure 4.13: The Point-Cloud of Table and its Real Image
4.5.6 Comparison Between Simulation and Hardware Results
Simulation
• The total volume of the simulated corridor developed by the rover is 27.7 x 2 x 3.2
metres.
• In the simulated 2D map there is no bad scanning present.
• The simulated rover's weight is about 1.5 kg, so the linear and angular motion of
the motors is perfect due to the low load.
• The total operation was completed by the simulated rover in about 5 minutes.
Experiment
• The total volume of the real corridor developed by the rover is 27.37 x 1.92 x 3.04
metres.
• In the real 2D map there is some bad scanning present at one place.
• The real rover's weight is about 4.5 kg, so the linear and angular motion of the
motors is good but not perfect due to the higher load.
• The total operation was completed by the real rover in about 10 minutes.
Chapter 5
Conclusion And Recommendations
5.1 Conclusion
This study demonstrated the effective operation of a scanning and mapping system that
uses a cheap rover. When compared to the physical dimensions of the regions that were
investigated, the system's mapping results have been shown to be 88% accurate. When
compared to the current manual surveying methodologies of the regional market, the
overall time spent creating the surveying outputs has been reduced by 85% or more.
The system offers simple use at a reasonable price. The effective real-time 3D mapping
method has been used to demonstrate how a rover may be used to survey relatively
vast areas. In order to improve the rover's localization and the visualization of the
examined regions, more vision and inertial sensors must be used in further testing and
analysis of the system, and this will be taken into account in subsequent improvements
to the rover system.
5.2 Limitations
• Power consumption is still very high, and high-mAh batteries are required for this
work.
• Because our Raspberry Pi is wirelessly connected to our ROS PC, the wireless
network must be very strong; if the network disconnects, all the collected map
data will vanish.
• In the customized scanning system, the lidars should be connected at
approximately 90 degrees to each other. A small error is negligible, but a large
error in the angle is not, because the resulting point cloud will be tilted.
• The system can easily scan the static map and the objects present in it, but
problems occur when there are dynamic changes in the environment.
5.3 Recommendations for Future Work
For future work, we first suggest the implementation of autonomous navigation of the
robot. We then suggest applying artificial intelligence for feature recognition in the
map, such as detecting the height, length, and width of the objects present in it. Finally,
we suggest the implementation of different path-planning algorithms, such as RRT and
A*, so that the robot chooses the shortest path to reach its destination in minimum
time.
APPENDICES
ROS Programming Codes, Arduino Code for Differential Drive
References
[1] J. Weingarten, R. Siegwart, EKF-based 3D SLAM for structured environment reconstruction. https://ieeexplore.ieee.org/document/1545285
[2] P. Zhang, E. Millos, J. Gu, General Concept of 3D SLAM. https://www.researchgate.net/publication/221786534_General_Concept_of_3D_SLAM
[3] A. Memon, S. Riaz un Nabi Jafri, S.M. Usman Ali, A Rover Team based 3D Map Building using Low Cost 2D Laser Scanners. https://www.researchgate.net/publication/357363038_A_Rover_Team_based_3D_Map_Building_using_Low_Cost_2D_Laser_Scanners
[4] P. Kim, J. Chen, Y.K. Cho, SLAM-driven robotic mapping and registration of 3D point clouds. https://www.sciencedirect.com/science/article/abs/pii/S0926580517303990
[5] R. Navon, Research in automated measurement of project performance indicators, Automation in Construction. 16 (2007) 176–188. doi:10.1016/j.autcon.2006.03.003.
[6] I.K. Brilakis, L. Soibelman, Shape-based retrieval of construction site photographs, ASCE Journal of Computing in Civil Engineering. 22 (2008) 14–20. doi:10.1061/(ASCE)0887-3801(2008)22:1(14).
[7] M. Golparvar-Fard, F. Peña-Mora, Application of visualization techniques for construction progress monitoring, ASCE International Workshop on Computing in Civil Engineering, Pittsburgh, PA, July 24-27. (2007) 216–223. doi:10.1061/40937(261)27.
[8] M. Golparvar-Fard, F. Peña-Mora, S. Savarese, A 4-Dimensional augmented reality model for automating construction progress monitoring data collection, processing and communication, Journal of Information Technology in Construction. 14 (2009) 129–153. http://itcon.org/paper/2009/13.
[9] H. Bae, J. White, M. Golparvar-Fard, Y. Pan, Y. Sun, Fast and scalable 3D cyber-physical modeling for high-precision mobile augmented reality systems, Personal and Ubiquitous Computing. 19 (2015) 1275–1294. doi:10.1007/s00779-015-0892-6.
[10] A. Dimitrov, M. Golparvar-Fard, Vision-based material recognition for automated monitoring of construction progress and generating building information modeling from unordered site image collections, Advanced Engineering Informatics. 28 (2014) 37–49. doi:10.1016/j.aei.2013.11.002.
[11] C. Kim, B. Kim, H. Kim, 4D CAD model updating using image processing-based construction progress monitoring, Automation in Construction. 35 (2013) 44–52. doi:10.1016/j.autcon.2013.03.005.
[12] M. Golparvar-Fard, J. Bohn, J. Teizer, S. Savarese, F. Peña-Mora, Evaluation of image-based modeling and laser scanning accuracy for emerging automated performance monitoring techniques, Automation in Construction. 20 (2011) 1143–1155. doi:10.1016/j.autcon.2011.04.016.
[13] J.S. Bohn, J. Teizer, Benefits and barriers of construction project monitoring using high-resolution automated cameras, ASCE Journal of Construction Engineering and Management. 136 (2010) 632–640. doi:10.1061/(ASCE)CO.1943-7862.0000164.
[14] H. Fathi, I. Brilakis, Machine vision-based infrastructure as-built documentation using edge points, ASCE Construction Research Congress (CRC), West Lafayette, IN, May 21-23. (2012) 757–766. doi:10.1061/9780784412329.077.
[15] A. Bhatla, S.Y. Choe, O. Fierro, F. Leite, Evaluation of accuracy of as-built 3D modeling from photos taken by handheld digital cameras, Automation in Construction. 28 (2012) 116–127. doi:10.1016/j.autcon.2012.06.003.
[16] E.J. Jaselskis, Z. Gao, A. Welch, Pilot study on laser scanning technology for transportation projects, Proceedings of the 2003 Mid-Continent Transportation Research Symposium, Ames, IA, August 21-22. (2003).
[17] Y.K. Cho, C.T. Haas, K. Liapi, S.V. Sreenivasan, A framework for rapid local area modeling for construction automation, Automation in Construction. 11 (2002) 629–641. doi:10.1016/S0926-5805(02)00004-3.
[18] Y.K. Cho, M. Gai, Projection-recognition-projection method for automatic object recognition and registration for dynamic heavy equipment operations, ASCE Journal of Computing in Civil Engineering. 28 (2013). doi:10.1061/(ASCE)CP.1943-5487.0000332.
[19] C. Wang, Y.K. Cho, Performance test for rapid surface modeling of dynamic construction equipment from laser scanner data, IAARC 31st International Symposium on Automation and Robotics in Construction (ISARC), Sydney, Australia, July 9-11. (2014) 134–141. doi:10.22260/ISARC2014/0018.
[20] S. Kwon, F. Bosche, C. Kim, C.T. Haas, K.A. Liapi, Fitting range data to primitives for rapid local 3D modeling using sparse range point clouds, Automation in Construction. 13 (2004) 67–81. doi:10.1016/j.autcon.2003.08.007.
[21] J. Chen, Y. Fang, Y.K. Cho, C. Kim, Principal axes descriptor for automated construction equipment classification from point clouds, ASCE Journal of Computing in Civil Engineering. (2016) 1–12. doi:10.1061/(ASCE)CP.1943-5487.0000628.
[22] Y. Fang, Y.K. Cho, J. Chen, A framework for real-time pro-active safety assistance for mobile crane lifting operations, Automation in Construction. 72 (2016) 367–379. doi:10.1016/j.autcon.2016.08.025.
[23] F. Bosche, C.T. Haas, Automated retrieval of 3D CAD model objects in construction range images, Automation in Construction. 17 (2008) 499–512. doi:10.1016/j.autcon.2007.09.001.
[24] F. Bosche, C.T. Haas, Towards automated retrieval of 3D designed data in 3D sensed data, ASCE International Workshop on Computing in Civil Engineering, Pittsburgh, PA, July 24-27. (2007) 648–656. doi:10.1061/40937(261)77.
[25] S. El-Omari, O. Moselhi, Integrating 3D laser scanning and photogrammetry for progress measurement of construction work, Automation in Construction. 18 (2008) 1–9. doi:10.1016/j.autcon.2008.05.006.
[26] D. Rebolj, N. Čuš Babič, A. Magdič, P. Podbreznik, M. Pšunder, Automated construction activity monitoring system, Advanced Engineering Informatics. 22 (2008) 493–503. doi:10.1016/j.aei.2008.06.002.
[27] B. Akinci, F. Boukamp, C. Gordon, D. Huber, C. Lyons, K. Park, A formalism for utilization of sensor systems and integrated project models for active construction quality control, Automation in Construction. 15 (2006) 124–138. doi:10.1016/j.autcon.2005.01.008.
[28] C. Gordon, B. Akinci, Technology and process assessment of using LADAR and embedded sensing for construction quality control, ASCE Construction Research Congress, San Diego, CA, April 5-7. (2005) 1–10. doi:10.1061/40754(183)109.
[29] F. Bosche, E. Guenet, Automating surface flatness control using terrestrial laser scanning and building information models, Automation in Construction. 44 (2014) 212–226. doi:10.1016/j.autcon.2014.03.028.
[30] C. Wang, Y.K. Cho, C. Kim, Automatic BIM component extraction from point clouds of existing buildings for sustainability applications, Automation in Construction. 56 (2015) 1–13. doi:10.1016/j.autcon.2015.04.001.
[31] G.S. Cheok, W.C. Stone, R.R. Lipman, C. Witzgall, Ladars for construction assessment and update, Automation in Construction. 9 (2000) 463–477. doi:10.1016/S0926-5805(00)00058-3.
[32] C. Kim, C.T. Haas, K.A. Liapi, Rapid on-site spatial information acquisition and its use for infrastructure operation and maintenance, Automation in Construction. 14 (2005) 666–684. doi:10.1016/j.autcon.2005.02.002.
[33] A. Adan, D. Huber, 3D reconstruction of interior wall surfaces under occlusion and clutter, IEEE International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission, Hangzhou, China, May 16-19. (2011). doi:10.1109/3DIMPVT.2011.42.
[34] E.J. Jaselskis, Z. Gao, R.C. Walters, Improving transportation projects using laser scanning, ASCE Journal of Construction Engineering and Management. 131 (2005) 377–384. doi:10.1061/(ASCE)0733-9364(2005)131:3(377).
[35] R.C. Walters, E.J. Jaselskis, Using scanning lasers for real-time pavement thickness measurement, ASCE International Conference on Computing in Civil Engineering, Cancun, Mexico, July 12-15. (2005). doi:10.1061/40794(179)36.
[36] X. Xiong, A. Adan, B. Akinci, D. Huber, Automatic creation of semantically rich 3D building models from laser scanner data, Automation in Construction. 31 (2013) 325–337. doi:10.1016/j.autcon.2012.10.006.
[37] P. Tang, B. Akinci, D. Huber, Quantification of edge loss of laser scanned data at spatial discontinuities, Automation in Construction. 18 (2009) 1070–1083. doi:10.1016/j.autcon.2009.07.001.
[38] A.J. Davison, I.D. Reid, N.D. Molton, O. Stasse, MonoSLAM: Real-time single camera SLAM, IEEE Transactions on Pattern Analysis and Machine Intelligence. 29 (2007) 1052–1067. doi:10.1109/TPAMI.2007.1049.
[39] F. Leberl, A. Irschara, T. Pock, P. Meixner, M. Gruber, S. Scholz, A. Wiechert, Point clouds: Lidar versus 3D vision, Photogrammetric Engineering and Remote Sensing. 76 (2010) 1123–1134. doi:10.14358/PERS.76.10.1123.
[40] D. Roca, S. Lagüela, J. Armesto, P. Arias, Low-cost aerial unit for outdoor inspection of building façades, Automation in Construction. 36 (2013) 128–135. doi:10.1016/j.autcon.2013.08.020.
[41] J. Fuentes-Pacheco, J. Ruiz-Ascencio, J.M. Rendón-Mancha, Visual simultaneous localization and mapping: a survey, Artificial Intelligence Review. 43 (2012) 55–81. doi:10.1007/s10462-012-9365-8.
[42] Z.J. Chong, B. Qin, T. Bandyopadhyay, M.H. Ang Jr., E. Frazzoli, D. Rus, Synthetic 2D LIDAR for precise vehicle localization in 3D urban environment, IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, May 6-10. (2013) 1554–1559. doi:10.1109/ICRA.2013.6630777.
[43] H.J. Chang, C.S.G. Lee, Y.C. Hu, Y. Lu, Multi-robot SLAM with topological/metric maps, IEEE International Conference on Intelligent Robots and Systems, San Diego, CA, Oct 29-Nov 2. (2007) 1467–1472. doi:10.1109/IROS.2007.4399142.
[44] M. Bosse, R. Zlot, Continuous 3D scan-matching with a spinning 2D laser, IEEE International Conference on Robotics and Automation, Kobe, Japan, May 12-17. (2009) 4312–4319. doi:10.1109/ROBOT.2009.5152851.
[45] J. Zhang, S. Singh, LOAM: Lidar odometry and mapping in real-time, Proceedings of Robotics: Science and Systems Conference, Berkeley, CA, July 12-16. (2014). doi:10.15607/RSS.2014.X.007.
Glossary
Term: Odometry
Definition: Odometry is the use of data from motion sensors to estimate change in position over time.
First appeared: Pg. 4

Term: Localization
Definition: Localization is integrated in ROS by emitting a transform from the map frame to the odom frame that "corrects" the odometry.
First appeared: Pg. 5