A
Mini project Report
On
AUTONOMOUS VEHICLE DESIGN
Submitted in partial fulfillment of the
Requirement for the award of the degree of
BACHELOR OF TECHNOLOGY
In
ELECTRICAL AND ELECTRONICS ENGINEERING
From
JAWAHARLAL NEHRU TECHNOLOGICAL
UNIVERSITY HYDERABAD
By
M.SANDEEP (21C11A0209)
B.SAIKUMAR (21C11A0207)
P.CHANDRA SHEKAR (22C15A0201)
K.TEJA KIRAN (21C11A0213)
Under the Esteemed guidance of
Mr. BOILLA SRINU, M.Tech
Assistant Professor
DEPARTMENT OF ELECTRICAL & ELECTRONICS ENGINEERING
ANURAG ENGINEERING COLLEGE
AN AUTONOMOUS INSTITUTION
(Accredited by NAAC with A+ Grade, Approved by AICTE, New Delhi & Affiliated to JNTUH, Hyderabad)
ANANTHAGIRI (V&M), SURYAPET (DT), T.S, INDIA -508206
2024-2025
ANURAG ENGINEERING COLLEGE
AN AUTONOMOUS INSTITUTION
(Accredited by NAAC with A+ Grade, Approved by AICTE, New Delhi & Affiliated to JNTUH, Hyderabad)
ANANTHAGIRI (V&M), SURYAPET (DT), T.S, INDIA -508206
www.anurag.ac.in
CERTIFICATE
This is to certify that the Mini Project report entitled “AUTONOMOUS VEHICLE DESIGN” is a
bona fide work done by M.SANDEEP (21C11A0209), B.SAIKUMAR (21C11A0207), P.CHANDRA
SHEKAR (22C15A0202), K.TEJA KIRAN (21C11A0213) in partial fulfilment of the requirement
for the award of the degree of Bachelor of Technology in Electrical and Electronics Engineering
from JNTUH, Hyderabad, during the academic year 2024-2025.
Project Guide
Mr. BOILLA SRINU, M.Tech
Assistant Professor

Head of the Department
Mr. T.RAGHU, M.Tech.
Associate Professor

EXTERNAL EXAMINER
ACKNOWLEDGEMENT
We would like to express our sincere acknowledgment to our Mini Project guide Mr.
BOILLA SRINU, Assistant Professor, Department of Electrical and Electronics Engineering,
for his kind co-operation and encouragement, which helped us complete this mini project.
It would have been very difficult to complete this project without the enthusiastic support,
encouragement and insightful advice given by him.
We would also like to express our sincere thanks to Mr. T.RAGHU, Associate Professor
& Head of the Department, Electrical and Electronics Engineering, for his support and
valuable suggestions towards the completion of our mini project.
We would like to express our sincere thanks to our Principal, Dr. T.SURESH KUMAR, for
giving permission and for his constant motivation towards higher education and research.
It is our privilege to thank all Mini Project Review Committee members for their
continuous guidance and monitoring.
We are indebted to the computer support group and the library of our college for giving us
access to the software and books required for this mini project.
Finally, we wish to thank our parents and friends for their help and encouragement during
the mini project work.
Thank you
M.SANDEEP (21C11A0209)
B.SAI KUMAR (21C11A0207)
P.CHANDRA SHEKAR (22C15A0201)
K.TEJA KIRAN (21C11A0213)
DECLARATION
We hereby declare that the mini project work entitled
“AUTONOMOUS VEHICLE DESIGN”, submitted towards the partial fulfilment of
the requirements for the award of the degree of Bachelor of Technology in Electrical and
Electronics Engineering from Anurag Engineering College, Ananthagiri (V&M), affiliated
to Jawaharlal Nehru Technological University Hyderabad, is an authentic record of our own
work carried out during the academic year 2024-2025. This work has not been submitted to
any other university.
M.SANDEEP
(21C11A0209)
B.SAI KUMAR
(21C11A0207)
P.CHANDRA SHEKAR(22C15A0201)
K.TEJA KIRAN
(21C11A0213)
S.NO      LIST OF CONTENTS                                      PAGE NO

          LIST OF FIGURES                                       iii
          ABSTRACT                                              iv

          CHAPTER - I
1.1       INTRODUCTION                                          1
1.1.1     MICRO-CONTROL UNIT                                    1
1.1.2     THE SENSING UNIT                                      1
1.1.3     THE COMMUNICATION UNIT                                2
1.1.4     THE MOTOR DRIVE UNIT                                  2
1.1.5     THE DISPLAY UNIT                                      2
1.2       COMPUTER VISION                                       3
1.3       SENSOR FUSION                                         4
1.4       CIRCUIT DIAGRAM OF AUTONOMOUS VEHICLE                 4
1.5       DATA FLOW DIAGRAM OF AUTONOMOUS VEHICLE               5

          CHAPTER - II
2         COMPONENTS OF AUTONOMOUS VEHICLE
2.1       THE RADAR                                             6
2.2       THE LIDAR                                             6
2.3       ULTRASONIC SENSORS                                    7
2.4       GPS/IMU                                               7
2.5       VIDEO CAMERAS                                         8

          CHAPTER - III
3         LOCALIZATION & PATH PLANNING
3.1       LOCALIZATION                                          9
3.2       PATH PLANNING                                         10

          CHAPTER - IV
4         CONTROL AND MANEUVERING & LONGITUDINAL CONTROL
4.1       CONTROL AND MANEUVERING                               14
4.2       LONGITUDINAL CONTROL                                  14
4.3       LEVELS OF AUTONOMOUS VEHICLES                         17
4.3.1     LEVEL 0 (NO DRIVING AUTOMATION)                       17
4.3.2     LEVEL 1 (DRIVER ASSISTANCE)                           17
4.3.3     LEVEL 2 (PARTIAL DRIVING AUTOMATION)                  17
4.3.4     LEVEL 3 (CONDITIONAL DRIVING AUTOMATION)              17
4.3.5     LEVEL 4 (HIGH DRIVING AUTOMATION)                     18
4.3.6     LEVEL 5 (FULL DRIVING AUTOMATION)                     18
4.4       BENEFITS OF AUTONOMOUS VEHICLES                       18
4.4.1     SAFETY                                                19
4.4.2     CONGESTION AND TRAFFIC MANAGEMENT                     19
4.4.3     HUMAN BEHAVIOR                                        19
4.4.4     ADOPTION BY SPECIFIC SECTORS                          20
4.5       IMPLEMENTATION ISSUES                                 21
4.5.1     TECHNOLOGY ISSUES                                     21
4.5.2     VEHICLE COSTS                                         21
4.5.3     LIABILITY AND LAW ISSUES                              21
4.5.4     SECURITY AND PRIVACY ISSUES                           21
          CONCLUSION                                            23
          REFERENCES                                            24
S.NO        LIST OF FIGURES                                     PAGE NO

Fig.1.1     Block Diagram                                       1
Fig.1.1.5   How Autonomous Car Works                            2
Fig.1.2     Computer Vision                                     3
Fig.1.2.1   Position of Vehicle                                 3
Fig.1.3     Sensor Fusion                                       4
Fig.1.4     Circuit Diagram of Autonomous Vehicles              4
Fig.1.5     Data Flow Diagram of Autonomous Vehicle             5
Fig.2.1     Radar                                               6
Fig.2.2     Lidar                                               6
Fig.2.3     Ultrasonic Sensors                                  7
Fig.2.4     GPS/IMU                                             7
Fig.2.5     Video Cameras                                       8
Fig.3.1     Localization Algorithms                             10
Fig.3.2     Path Planning                                       10
Fig.3.2.1   Behavior Planner                                    11
Fig.3.2.2   Jerk Minimized Trajectory (JMT)                     12
Fig.3.2.3   Trajectory                                          12
Fig.3.2.4   Path Converter                                      13
Fig.4.1     Control and Maneuvering - Longitudinal and
            Latitudinal Control                                 14
Fig.4.2     Longitudinal Control                                15
Fig.4.2.1   PID Controller                                      15
Fig.4.2.2   PID Controller                                      16
Fig.4.2.3   Servo Motor                                         16
Fig.4.2.4   CAN Bus                                             16
Fig.4.3     Levels of Autonomous Vehicles                       17
ABSTRACT
Autonomous vehicles, also commonly known as driverless or self-driving vehicles, are
automobiles that require no human involvement for operating or controlling them. In recent
years, work on automated vehicle concepts has progressed, but some human input is still
required, depending upon the level of automation. Experts anticipate that automobiles will
be capable of driving themselves within 3-7 years. This report describes the current status,
recent trends and research on self-driving vehicles in the automobile industry.
A detailed analysis of the technologies used by automated vehicles to sense their
environment and of the levels of automation in such vehicles is also included. The expected
short-term and long-term, positive and negative, beneficial and harmful impacts of driverless
technology, such as greenhouse gas emissions and energy consumption, are assessed. As
widespread adoption of self-driving vehicles is considered inevitable, certain technical and
legal guidelines will be essential for safe and tension-free travel. The potential concerns
regarding autonomous vehicles must be addressed with safe policies and technologies, as
discussed in this report.
1.1. INTRODUCTION
The concept of self-driving cars has always intrigued many people. However, within the
last decade, the idea has rapidly turned into reality. With IoT, programs are interconnected
via a network of networks and they have the ability to analyze and process the data
according to the user requirement.
The functioning of these autonomous cars involves an enormous quantity of data
collection and real-time processing of the collected data which it does with the help of
sensors. These sensors, namely, Lidar, GNSS/IMU acts as an input source for the processing
system (ECU) of the car. As the running car is a dynamic environment, ECU gets
multiple requests simultaneously to execute various tasks at once, it uses dynamic scheduling
system to prioritize the importance of the request and sequence in which the tasks will be
executed. As the driverless cars attain increasing levels of autonomy, the importance of
memory technologies, both from a safety and performance point of view makes it
significantly important.
Every self-driving car engages the five core components mentioned below: computer
systems and vision, sensor fusion, localization, path planning, and control and maneuvering.
The intention of this project is to explore the aforementioned components used in
designing this complex system.
Figure 1.1.Block diagram
1.1.1. MICRO-CONTROL UNIT
The microprocessor control unit of our system is implemented with a standard
Arduino Uno powered by an ATmega328 processor. The system board has 14 digital
input/output pins, 6 analog inputs, a 16 MHz ceramic resonator, a USB connection, a power
jack, an ICSP header, and a reset button. Among the 14 digital I/O pins, six can be used as
PWM outputs.
Furthermore, it can be programmed by a computer via the USB connection and powered by an
AC-to-DC adapter or a battery unit.
1.1.2. THE SENSING UNIT
In our system, two types of sensing devices, a sonic range finder and a set of
micro-switches, are used for collision detection and ball counting, respectively. To prevent
collisions, a sonic range finder, the HC-SR04, is deployed for range finding and obstacle
detection. To determine the distance between our vehicle and an obstacle, we measure the
elapsed time between the generation of the ultrasonic wave and the detection of the
corresponding echo. However, various environmental factors, such as noise, light, dust and
surface material, may severely interfere with the detected results. According to its
specification, the HC-SR04 ultrasonic sensor module is capable of measuring distances
between 2 and 400 centimeters with a range accuracy of 3 millimeters, a sensing angle no
greater than 15 degrees, and an effective measuring range of no less than 50 cm. The
micro-switches located at the top of the ejection arms of each collector channel are used for
counting the collected tennis balls. Meanwhile, the number of collected balls is
displayed concurrently on the LCD screens of the vehicle and the smartphone.
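The distance computation described above can be sketched as follows; the speed of sound and the example echo time are illustrative values, not taken from the report.

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # ~343 m/s at room temperature

def echo_to_distance_cm(echo_us: float) -> float:
    """Convert an HC-SR04 echo pulse width (microseconds) to distance (cm).

    The pulse covers the round trip to the obstacle and back,
    so the one-way distance is half the travelled length.
    """
    return (echo_us * SPEED_OF_SOUND_CM_PER_US) / 2

# An echo of ~1166 us corresponds to roughly 20 cm.
print(round(echo_to_distance_cm(1166), 1))
```

In a real sketch the microcontroller would time the echo pin with a hardware timer; only the final arithmetic is shown here.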
1.1.3. THE COMMUNICATION UNIT
In our system, the user can remotely control the autonomous vehicle by sending motor
control signals through the Bluetooth wireless communication channel between the
Communication Unit and the user's Android-based smartphone after their pairing is established.
To bridge the gap between Arduino and Android, we have adopted Amarino in developing
applications for Bluetooth communication between the Android-based smartphone and the
communication unit of the autonomous vehicle.
1.1.4. THE MOTOR DRIVE UNIT
The motor drive unit, denoted as MDU, contains a dedicated motor driver (L298), a
voltage regulator (78M05), and two H-bridges driving the two DC motors. Master Enable is
mainly used for enabling or disabling the MDU regardless of the logic inputs; it accepts input
from the sensing unit (SU) to either halt or start the movement of our vehicle. With the four
logic inputs, we may control the direction of vehicle movements.
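The four logic inputs and Master Enable can be illustrated with a small truth-table sketch; the command names and input patterns below are hypothetical, chosen only to show the idea of differential drive through two H-bridges.

```python
# Hypothetical direction table for an L298 driving two DC motors.
# Each motor takes two logic inputs (IN1, IN2): (1, 0) = forward,
# (0, 1) = reverse, (0, 0) = stop. The tuples are (left IN1, left IN2,
# right IN1, right IN2); assignments are illustrative only.
DIRECTIONS = {
    "forward":  (1, 0, 1, 0),  # both motors forward
    "backward": (0, 1, 0, 1),  # both motors reverse
    "left":     (0, 1, 1, 0),  # left motor reverse, right forward
    "right":    (1, 0, 0, 1),  # left forward, right reverse
    "stop":     (0, 0, 0, 0),
}

def drive(command: str, master_enable: bool = True):
    """Return the four logic inputs for the L298; Master Enable
    overrides everything and halts the vehicle when False."""
    if not master_enable:
        return (0, 0, 0, 0)
    return DIRECTIONS[command]

print(drive("left"))         # (0, 1, 1, 0)
print(drive("left", False))  # (0, 0, 0, 0)
```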
1.1.5. THE DISPLAY UNIT
To display the collected ball count on the vehicle, we have adopted the Hitachi
HD44780 as the display unit (DU) of our smart vehicle. It comprises an LCD controller, an
LCD driver, and an LCD screen supporting screen clearing, display shifting, cursor flashing,
and a set of 160 built-in 5x7 dot-matrix fonts.
Recent Examples:
An example of an autonomous vehicle today is the cars made by Waymo (Fig. 1.1.5).
Waymo develops self-driving cars that pick people up and drop them at their destination,
similar to Uber, but using autonomous vehicle technology. Waymo cars utilize lidar
sensors which can detect objects 360 degrees around the vehicle, as well as 3D objects, to
make quick decisions. These sensors are more precise than the human eye because they can
detect objects that would otherwise be hidden from a human.
Figure.1.1.5.How autonomous car works
1.2. COMPUTER VISION
Self-driving vehicles need sensors and input devices like cameras, radar and lasers, which
enable the cars to “see” the world around them and create a digital map. Self-driving cars have
multiple cameras installed. For example, Tesla equips its cars with 8 surround cameras that
provide 360 degrees of visibility around the car up to 250 m of range [14]. Cameras are
responsible for many essential tasks including lane finding, road curvature estimation, obstacle
detection, traffic sign detection, traffic sign classification, traffic light detection, traffic light
classification, etc. Some cars have special types of cameras installed, like a fish-eye camera for
a panoramic view or a stereo camera which helps in depth perception of the surrounding
area [10].
Figure.1.2. Computer vision
One of the critical functions of self-driving cars is the detection and tracking of stationary and
moving objects, which is done by combining signals from several sensors including cameras,
radar and lidar sensors. This helps to estimate the position, velocity, trajectory and class of an
object, i.e., whether it is another vehicle or a pedestrian [9]. Object detection is done in two
parts: image classification and image localization. Image classification is the detection and
differentiation of the type of object in the image, like a car, a person or a traffic sign. For image
classification, a neural network is trained to recognize various objects. It does this by
performing convolution operations on images in order to classify them [15]. Image
localization provides the specific location of the detected objects [15].
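As a toy illustration of the convolution operation such a network performs, here is a minimal pure-Python 2D convolution; the image and kernel values are made-up examples, not from the report.

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (really cross-correlation, as in most
    CNN libraries) of a grayscale image with a small kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(
                image[i + m][j + n] * kernel[m][n]
                for m in range(kh) for n in range(kw)
            )
    return out

# A vertical-edge kernel responds strongly where intensity changes from
# left to right -- the kind of low-level feature a CNN's first layer learns.
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
print(conv2d(image, sobel_x))
```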
Figure 1.2.1. Position of vehicle
1.3. SENSOR FUSION

Figure.1.3. Sensor fusion

LiDAR is important for calculating the position of the vehicle, and radar is important for
estimating speed. Sensor fusion is the combination of these and other autonomous driving
applications which, when smartly bundled and set up, gives autonomous vehicles an
all-encompassing and thorough 360-degree view of the environment. It merges all the
information from the sensors and cameras together and makes decisions from there. This
merged data is then fed into a high-performance, centralized AI computer. An example of this
is the NVIDIA DRIVE AGX platform.
1.4. CIRCUIT DIAGRAM OF AUTONOMOUS VEHICLE

Figure.1.4. Circuit diagram of autonomous vehicles
Many companies including Google are involved in the development of these
autonomous cars, and several of the Google cars have completed around 2 million miles across
various cities in the United States (US). Other companies like Uber Technologies and Tesla
Motors are fast catching up and introducing their own autonomous vehicles on the roads.
This report will look at the market penetration of autonomous vehicles and the security
issues related to their adoption. Both technological and non-technological issues related to the
security of autonomous vehicle implementation will be examined. The first section addresses
the benefits of autonomous vehicles, and the later part covers the implementation issues with a
specific look at security attacks against autonomous vehicles. The report finishes with policy
recommendations to address these issues related to the security and general adoptability of
autonomous vehicles.
1.5 DATA FLOW DIAGRAM OF AUTONOMOUS VEHICLE

Figure.1.5. Data flow diagram of autonomous vehicle
2.1. THE RADAR

Figure.2.1. Radar

Radar technology has been around for quite some time. Radar sensors complement
camera vision in times of low visibility, like night driving, fog, or when the camera is
covered. Radars work by transmitting radio waves in pulses; when these waves hit an object
they bounce back to the sensor, giving data about the speed and location of the object. Just like
cameras, radars surround the car so that the car can see at every angle. However, they cannot
distinguish between different types of objects, which is why there are also cameras. One of the
limitations of radar is that the object must be large enough to be detected.
Sensor fusion is about merging the data from cameras and other sensors to help the
car make decisions. The idea is to give the vehicle a mix of the best data available
from every one of its systems while overlooking the rest. The sensors are used
in combination because each of the radars, cameras and other sensors used by self-driving
cars has its own limitations, which is why the individual systems must be combined in order to
contribute to functions like cross-traffic assistance or obstacle avoidance. For example,
camera systems are ideal for identifying roads, reading traffic signs, detecting traffic
signals, and recognizing other vehicles and obstacles.
2.2. THE LIDAR

Figure.2.2. Lidar

Lidar, or Light Detection and Ranging, is a sensor that allows autonomous vehicles to have 3D
vision. LiDAR works well in low-light conditions, or when the camera is covered (just like
radar). LiDAR does not use radio waves; it uses light waves that are reflected by the objects in
the surroundings and detected by the LiDAR sensor, and these reflections are then brought
together as a point cloud, creating a three-dimensional image of the environment. It allows the
vehicle to sense everything in its environment, be it vehicles, buildings, pedestrians or animals.
That is why so many development vehicles feature a large 360-degree rotating LiDAR sensor
on the roof, providing a complete view of their surroundings [10] [12].
2.3. ULTRASONIC SENSORS

Figure.2.3. Ultrasonic sensors

Ultrasonic sensors have been commonly used in cars as parking sensors since the 1990s and
are very inexpensive. Their range is limited to just a few metres in most applications, but they
are ideal for providing additional sensing capabilities for low-speed use cases [12].
2.4. GPS/IMU

Figure.2.4. GPS/IMU

The GPS/IMU system helps the self-driving vehicle locate itself by reporting both
inertial updates and a global position estimate at a high rate. GPS gives a fairly accurate
location, but its update rate is slow (10 Hz) and it is therefore not capable of providing
real-time updates. The IMU provides location updates more frequently, at a minimum rate of
200 Hz, but IMU errors accumulate over time, which leads to a corresponding degradation in
position estimation. Thus, self-driving cars use a combination of both GPS and IMU for
accurate and real-time location updates [13].
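One common way to combine a slow GPS with a fast but drifting IMU is a complementary-filter blend; this 1-D sketch, its gain value, and the example inputs are illustrative assumptions, not the report's method.

```python
# Illustrative 1-D complementary filter: the IMU integrates acceleration
# at a high rate, and each (slower) GPS fix gently corrects the drift.
# The gain ALPHA is a made-up tuning value.
ALPHA = 0.02  # how strongly a GPS fix pulls the estimate

class PositionEstimator:
    def __init__(self, x0=0.0, v0=0.0):
        self.x, self.v = x0, v0

    def imu_step(self, accel, dt):
        """Dead-reckoning update, e.g. at 200 Hz (dt = 0.005 s)."""
        self.v += accel * dt
        self.x += self.v * dt

    def gps_fix(self, x_gps):
        """Blend in a GPS position, e.g. at 10 Hz; corrects IMU drift."""
        self.x += ALPHA * (x_gps - self.x)

est = PositionEstimator()
for _ in range(20):       # 20 IMU steps at a constant 1 m/s^2
    est.imu_step(1.0, 0.005)
est.gps_fix(0.006)        # GPS nudges the estimate toward its fix
print(est.x)
```

A production system would typically use a Kalman filter (as discussed in Section 3.1) rather than a fixed blend gain; the structure of "fast predict, slow correct" is the same.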
2.5. VIDEO CAMERAS

Figure.2.5. Video cameras

From photos to video, cameras are the most accurate way to create a visual representation of
the world, especially when it comes to self-driving cars. Autonomous vehicles rely on
cameras placed on every side (front, rear, left and right) to stitch together a 360-degree view of
their environment. Some have a wide field of view, as much as 120 degrees, and a shorter
range. Others focus on a narrower view to provide long-range visuals. Some cars even
integrate fish-eye cameras, which contain super-wide lenses that provide a panoramic view, to
give a full picture of what's behind the vehicle so it can park itself. Though they provide
accurate visuals, cameras have their limitations. They can distinguish details of the
surrounding environment; however, the distances of those objects need to be calculated to
know exactly where they are. It is also more difficult for camera-based sensors to detect
objects in low-visibility conditions, like fog, rain or nighttime.
3.1. LOCALIZATION

The goal of localization is to pinpoint a specific location in the world, specifically
where the autonomous car is currently geographically located. Localization uses a series of
algorithms, techniques and sensors to locate the autonomous vehicle precisely and help it make
decisions for the user. The Global Positioning System (GPS), which uses satellites, can have an
error of 1 to 10 meters, which can be fatal to the car's passengers as well as the people near the
car. GPS also loses its signal in tunnels. Localization tries to solve the precision and
signal-loss problems of GPS by implementing algorithms that estimate the vehicle's position
with an error of less than 10 cm.
The overall algorithm follows these four steps in chronological order: perception,
localization, path planning, and then control. Localization algorithms also use information
about the spatial location of surrounding objects from the data collected in sensor fusion.
Autonomous cars need to make decisions about vehicle position based on data about what is
around them (sensor fusion) and where the vehicle currently is in the world (localization).
Localization uses several techniques to locate the vehicle precisely, namely odometry, the
Kalman filter, the particle filter, and Simultaneous Localization And Mapping (SLAM).
Odometry uses a starting position and vehicle displacement calculations to estimate
position with respect to time; however, odometry measurements are not accurate.
The Kalman filter estimates the states of nearby vehicles to define the state of the current
vehicle. The Kalman filter uses an implementation of the Bayes filter with prediction and
update phases. The particle filter compares vehicle observations by creating particles with a
map of the environment; it uses the GPS to initialize the particles. Each particle is then
assigned a weight, and the weight of the particle is the probability that the vehicle is at the
location of that particle. Particle filters are effective and can locate a vehicle in seconds, but
they also need a map of the environment. Simultaneous Localization And Mapping (SLAM)
estimates the position of objects in a map using landmarks, for example traffic lights.
The sensors used for localization include the Inertial Measurement Unit (IMU) and the GPS.
The IMU defines the movement of the vehicle along the yaw, pitch, and roll axes; it uses the
X, Y, and Z axes to calculate orientation, inclination, and altitude. GPS uses data from
satellites to position the vehicle. The localization algorithm has four stages: initialization,
prediction, update, and then resampling. This algorithm uses the data collected from the GPS,
the IMU, vehicle speeds, and measurements of the landmarks around the vehicle.
In the first step of the localization algorithm, initialization uses the initial GPS
estimate and adds noise as it initializes the chosen number of particles. Each particle has an
(x, y) position and an angle theta. All of the particles together give a particle distribution
around the GPS estimate, all with equal weights. The prediction stage of the localization
algorithm uses the speeds and rotations of the car. The update stage matches the measurements
against the vehicle position on the map; it also uses the data from sensor fusion to update the
weights according to the surrounding objects near the vehicle. In the last stage of the
localization algorithm, resampling is used to select the highest-weight particles and destroy the
less likely ones. A higher weight indicates a higher likelihood that the particle will last a long
time and survive.
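The four stages above can be sketched as a toy 1-D particle filter; the motion and sensor models (and their noise values) are invented for illustration, and a real vehicle would use odometry/IMU data for prediction and map landmarks for the update.

```python
import random
from math import exp, pi, sqrt

def gaussian(x, mu, sigma):
    """Likelihood of observing x given a true value mu with noise sigma."""
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

def localize(gps_init, moves, measurements, n=500, seed=1):
    random.seed(seed)
    # 1) Initialization: particles spread around the (noisy) GPS estimate.
    particles = [gps_init + random.gauss(0, 1.0) for _ in range(n)]
    for move, z in zip(moves, measurements):
        # 2) Prediction: apply the vehicle's motion plus process noise.
        particles = [p + move + random.gauss(0, 0.1) for p in particles]
        # 3) Update: weight each particle by the measurement likelihood.
        weights = [gaussian(z, p, 0.5) for p in particles]
        # 4) Resampling: keep high-weight particles, drop unlikely ones.
        particles = random.choices(particles, weights=weights, k=n)
    return sum(particles) / n  # position estimate = particle mean

# Vehicle starts near 0 and moves +1.0 per step; the sensor reads the
# true position, so the estimate should converge toward 3.0.
print(localize(gps_init=0.0, moves=[1.0, 1.0, 1.0], measurements=[1.0, 2.0, 3.0]))
```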
Figure.3.1.Localization algorithms
3.2 PATH PLANNING

Path planning is the most crucial part of designing the car. The planned path should not only
abide by the rules but should smartly, safely and comfortably navigate the car around a
highway with other traffic. On a broader level, the predicted path should satisfy the criteria
below:
• Feasibility: should not lead to a collision.
• Safety: maintain a safe distance from other objects.
• Legality: should follow the traffic rules, speed limits, etc.
• Comfort: should be jerk-free.
• Efficiency: should take the shortest path.
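These criteria could, for example, be folded into a single weighted cost function that a planner minimizes over candidate trajectories; all weights, thresholds and trajectory values below are illustrative assumptions, not from the report.

```python
# Hypothetical weighted cost combining the five criteria above; a planner
# would pick the candidate trajectory with the lowest total cost.
WEIGHTS = {"collision": 1000.0, "buffer": 10.0, "speed_limit": 100.0,
           "jerk": 5.0, "length": 1.0}

def trajectory_cost(traj):
    """traj: dict with min_gap_m, max_speed_mps, max_jerk, length_m."""
    cost = 0.0
    if traj["min_gap_m"] <= 0:                                # feasibility
        cost += WEIGHTS["collision"]
    cost += WEIGHTS["buffer"] / max(traj["min_gap_m"], 0.1)   # safety
    if traj["max_speed_mps"] > 27.8:                          # legality (~100 km/h)
        cost += WEIGHTS["speed_limit"]
    cost += WEIGHTS["jerk"] * traj["max_jerk"]                # comfort
    cost += WEIGHTS["length"] * traj["length_m"]              # efficiency
    return cost

keep_lane = {"min_gap_m": 8.0, "max_speed_mps": 25.0, "max_jerk": 0.5, "length_m": 120.0}
overtake  = {"min_gap_m": 2.0, "max_speed_mps": 29.0, "max_jerk": 2.0, "length_m": 115.0}
print(trajectory_cost(keep_lane) < trajectory_cost(overtake))  # True: keep lane is cheaper
```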
Figure.3.2.Path planning
A. Overview of Pipeline

The goal is to create a comfortable navigation path on the highway. The input to this
pipeline is sensor fusion data and localization data for our car as well as nearby cars and
objects. The output is a set of points (x, y) on the map that a controller will execute in
every fixed time interval. For initialization, start the engine and drive to a sufficient speed.
Then, every time the vehicle's controller is about to run out of things to do:
1) Fetch the latest analysed data from sensors about the car and nearby cars (Where am I? How
fast am I going? How fast are nearby cars going? Where are they?).
2) Based on the data received, predict a behaviour (Should I go left? Go right? Stay in my
lane?).
3) Generate an actual trajectory that provides a smooth, comfortable and safe ride to
accomplish the goal behaviour.
4) Finally, convert this path into map coordinates for the controller to understand.
To build this pipeline, the following structures can be used in general:
1. Vehicle:
This object is responsible for making sense of and storing data about a vehicle, both
ours and other vehicles nearby. In general, it can answer: Where is it? (This information
comes from localization data.) How fast is it? (This information comes from sensor data.) Is
there a lane to the right or left? (This information comes from cameras.) How close are we
to the vehicle in front of us, and how fast has it been going since we last checked? (This
information comes from LIDAR and RADAR.)
2. Behaviour Planner:
This object is responsible for suggesting which behaviour to perform based on the data
we have about our vehicle and other vehicles. The output is a maneuver state like go left, go
right, slow down, or change lanes.
Figure.3.2.1.Behavior planner
3. Jerk Minimized Trajectory (JMT):
This changes the start state to the final state without jerks. Jerk is the instantaneous
change in acceleration over time, just as acceleration is the instantaneous change in speed over
time. To make the drive comfortable, we want to minimize the jerk of our path. So if we have
the position of a vehicle, we can calculate its velocity, from which we can determine
acceleration and eventually the jerk parameter.
This can be done using a quintic polynomial function which takes six coefficients. The
JMT gets these coefficients from the Vehicle object, maps and route data. The output of these
equations is a jerk-free trajectory with longitudinal and lateral positions.
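For illustration, the rest-to-rest special case of the quintic has a well-known closed form; the general JMT instead solves for all six coefficients from arbitrary start and end states (position, velocity, acceleration).

```python
# Minimum-jerk (quintic) trajectory for the common rest-to-rest case:
# start and end with zero velocity and zero acceleration. The shape
# 10*tau^3 - 15*tau^4 + 6*tau^5 is the standard minimum-jerk profile.
def jmt_rest_to_rest(x0, x1, T):
    dx = x1 - x0
    def position(t):
        tau = t / T  # normalized time in [0, 1]
        return x0 + dx * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
    return position

x = jmt_rest_to_rest(0.0, 1.0, 2.0)  # move 1 m in 2 s
print(x(0.0), x(1.0), x(2.0))        # starts at 0, passes midpoint, ends at 1
```

The same polynomial is applied separately to the longitudinal and lateral coordinates to produce the jerk-free trajectory described above.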
Figure.3.2.2. Jerk Minimized Trajectory(JMT)
4. Trajectory:
This object is responsible for outputting the actual path along the road that the vehicle
should take, given the desired behavior and the current predicted state (latency considered) of
the vehicle. The inputs to this object are two JMT objects, one with respect to the cars moving
ahead of the vehicle and the other for the ones moving sideways. The trajectory object then
selects the suitable target states.
Figure.3.2.3.Trajectory
5. Path Converter:
This object has a representation of the global map of the highway. The controller
understands discrete points along the map in Cartesian coordinates. The Path Converter
changes the path received from the Trajectory object into discrete points when we give it the
distance between points along the path and the number of points.
Figure 3.2.4.Path converter
4.1. CONTROL AND MANEUVERING

An autonomous car has many small control units which act as mini computers running
embedded software. For moving from point A to point B, lateral and longitudinal maneuvers
are considered. A control unit is divided into two aspects: longitudinal control and lateral
control [3]. The goal of the control system is to take the upcoming trajectory output by the
planner and generate system inputs (throttle/brake and steering) in order to follow the
trajectory.
Figure 4.1 Control and maneuvering - longitudinal and latitudinal control
4.2. LONGITUDINAL CONTROL

The basic functions of longitudinal control are to keep the vehicle at a safe distance
behind another vehicle, to maintain a relatively constant speed with the least brake use,
and to apply the brake as fast as possible in emergency situations. Three types of information
are usually considered for longitudinal control:
(1) the speed and acceleration of the host vehicle; (2) the distance to the preceding
vehicle; (3) the speed and acceleration of the preceding vehicle [3]. This information is
used by a PID controller to calculate the required speed [4]. This controller uses a feedback
approach: it considers both the current speed and the desired speed, and with the help of the
algorithm it determines that if the desired speed is less than the current speed, the vehicle
needs to apply its brake to decrease the speed, and if the desired speed is greater than the
current speed, it needs to press the throttle actuator to increase the speed and reach the desired
speed. Once the decision is made, this output is passed on to the DC motor. With the help of a
PWM signal, the DC motor controls the throttle actuator and brake [5].
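The feedback loop described above can be sketched as a minimal PID speed controller; the gains are illustrative, not tuned values from the report.

```python
# Minimal PID speed controller: positive output -> throttle,
# negative output -> brake. Gains are made-up illustration values.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, desired, current, dt):
        error = desired - current
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.5, ki=0.1, kd=0.05)
u = pid.step(desired=20.0, current=15.0, dt=0.1)  # below target speed
print(u > 0)  # True: press the throttle
u = pid.step(desired=20.0, current=25.0, dt=0.1)  # above target speed
print(u < 0)  # True: apply the brake
```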
PID Controller:
The term e(t) stands for the current tracking error between the desired pose variable and the
actual pose variable. The variable tracks the longitudinal difference along a trajectory, the
angle/curvature difference at various trajectory points, or even a comprehensive
combination of these vehicle pose variables. The P controller stands for the feedback of the
current tracking error, whose coefficient is represented by Kp; the I and D controllers stand
for the integral and differential parts, whose coefficients are separately represented by KI and
KD [4].
Figure.4.2.Longitudinal control
Figure.4.2.1.PID Controller
A. Lateral Control

Figure.4.2.2. PID Controller

Lateral control includes the steering of the vehicle. It is mainly concerned with lane keeping,
turning, lane changing and avoiding objects that might appear in front of the vehicle. This
whole lateral maneuver is done by the steering of the car [4]. So to change lanes, stay in the
same lane, or take a turn, the steering needs to turn by an angle [6]. That angle is computed
using the trajectory information, and the calculated output is passed to the servo motor
attached to the steering wheel.
Figure.4.2.3.Servo motor
Servo motor takes PWM signal as an input. The PWM signal sent to the motor determines the
position of the shaft, and based on the duration of the pulse sent via the control wire, the rotor
will turn to the desired position.
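The pulse-width-to-angle mapping can be sketched as below; the 1.0-2.0 ms range and 180-degree travel are the usual hobby-servo convention, not values stated in the report, and actual limits vary by servo.

```python
# Typical hobby-servo mapping: a 1.0 ms pulse is one end of travel,
# 1.5 ms is centre, 2.0 ms is the other end.
def pulse_to_angle(pulse_ms, min_ms=1.0, max_ms=2.0, travel_deg=180.0):
    """Map PWM pulse width (ms) to shaft angle (degrees, 0..travel)."""
    pulse_ms = min(max(pulse_ms, min_ms), max_ms)  # clamp to valid range
    return (pulse_ms - min_ms) / (max_ms - min_ms) * travel_deg

print(pulse_to_angle(1.5))  # 90.0 -> centred steering
```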
Figure.4.2.4.CAN bus
The Controller Area Network (CAN) is used in vehicles for communication between ECUs,
sensors and actuators [7]. It is a message-based protocol. Every device transmits data packets
sequentially, but in such a way that if more than one device transmits at the same time, the
highest-priority device is able to continue while the others back off. Packets are received by
all the devices, including the transmitting device [7].
Advantages of the CAN bus:
- Low cost: ECUs communicate via the CAN bus rather than direct analogue signals, which
reduces errors and costs.
- Centralized: the CAN bus allows for central error diagnosis and configuration across all
ECUs.
- Robust: the system is resilient to electrical disturbances and electromagnetic interference,
making it well suited for vehicles.
- Efficient: CAN frames are prioritized by ID; the top priority gets bus access.
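The ID-based prioritization can be illustrated with a one-line arbitration sketch; in CAN, a lower identifier means higher priority, and the IDs and payloads below are made up.

```python
# CAN arbitration sketch: when several nodes start transmitting at once,
# the frame with the lowest identifier (highest priority) wins the bus.
def arbitrate(frames):
    """frames: list of (can_id, payload); returns the winning frame."""
    return min(frames, key=lambda f: f[0])

pending = [(0x244, "wheel speed"), (0x0F0, "brake command"), (0x620, "cabin temp")]
print(arbitrate(pending))  # (0x0F0, 'brake command') -- lowest ID wins
```

On real hardware this happens bit by bit: a node transmitting a recessive bit while sensing a dominant bit on the bus backs off, which yields exactly this lowest-ID-wins outcome.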
4.3. LEVELS OF AUTONOMOUS VEHICLES
Figure.4.3. levels of autonomous vehicle
4.3.1. Level 0 (No Driving Automation)
Most vehicles on the road today are Level 0: manually controlled. The human performs the
"dynamic driving task", although there may be systems in place to help the driver. An example
would be the emergency braking system: since it technically doesn't "drive" the vehicle, it
does not qualify as automation.
4.3.2. Level 1 (Driver Assistance)
This is the lowest level of automation. The vehicle features a single automated system for
driver assistance, such as steering or accelerating (cruise control). Adaptive cruise control,
where the vehicle can be kept at a safe distance behind the next car, qualifies as Level 1
because the human driver monitors the other aspects of driving such as steering and braking.
4.3.3. Level 2 (Partial Driving Automation)
This means advanced driver assistance systems or ADAS. The vehicle can control both
steering and accelerating/decelerating. Here the automation falls short of self-driving because
a human sits in the driver’s seat and can take control of the car at any time. Tesla Autopilot
and Cadillac (General Motors) Super Cruise systems both qualify as Level 2.
4.3.4. Level 3 (Conditional Driving Automation)
The jump from Level 2 to Level 3 is substantial from a technological perspective, but subtle if
not negligible from a human perspective.
Level 3 vehicles have "environmental detection" capabilities and can make informed decisions for themselves, such as accelerating past a slow-moving vehicle. But they still require a human override: the driver must remain alert and ready to take control if the system is unable to execute the task.
Almost two years ago, Audi (Volkswagen) announced that the next generation of the A8, their flagship sedan, would be the world's first production Level 3 vehicle. And they delivered: the 2019 Audi A8L arrives in commercial dealerships this fall. It features Traffic Jam Pilot, which combines a lidar scanner with advanced sensor fusion and processing power (plus built-in redundancies should a component fail).
However, while Audi was developing their marvel of engineering, the regulatory process in
the U.S. shifted from federal guidance to state-by-state mandates for autonomous vehicles. So
for the time being, the A8L is still classified as a Level 2 vehicle in the United States and will
ship without key hardware and software required to achieve Level 3 functionality. In Europe,
however, Audi will roll out the full Level 3 A8L with Traffic Jam Pilot (in Germany first).
4.3.5. Level 4 (High Driving Automation)
The key difference between Level 3 and Level 4 automation is that Level 4 vehicles can
intervene if things go wrong or there is a system failure. In this sense, these cars do not
require human interaction in most circumstances. However, a human still has the option to
manually override.
Level 4 vehicles can operate in self-driving mode. But until legislation and infrastructure evolve, they can only do so within a limited area (usually an urban environment where top speeds average 30 mph). This is known as geofencing. As such, most Level 4 vehicles in existence are geared toward ridesharing. For example:

- NAVYA, a French company, is already building and selling Level 4 shuttles and cabs in the U.S. that run fully on electric power and can reach a top speed of 55 mph.
- Alphabet's Waymo recently unveiled a Level 4 self-driving taxi service in Arizona, where they had been testing driverless cars, without a safety driver in the seat, for more than a year and over 10 million miles.
- Canadian automotive supplier Magna has developed technology (MAX4) to enable Level 4 capabilities in both urban and highway environments. They are working with Lyft to supply high-tech kits that turn vehicles into self-driving cars.
- Just a few months ago, Volvo and Baidu announced a strategic partnership to jointly develop Level 4 electric vehicles that will serve the robotaxi market in China.
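The geofencing constraint described above can be sketched as a simple containment check: self-driving mode is enabled only while the vehicle remains inside an approved operating zone. The zone center and radius below are hypothetical examples, and a real system would use detailed zone polygons rather than a circle.

```python
import math

# Illustrative geofence check for a Level 4 operating zone.
# The coordinates and radius are hypothetical examples.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in km."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def self_driving_allowed(lat, lon, zone_center, zone_radius_km):
    """Permit self-driving mode only inside the geofenced zone."""
    return haversine_km(lat, lon, *zone_center) <= zone_radius_km

# Hypothetical urban zone: 5 km around a city center.
center = (33.4484, -112.0740)
print(self_driving_allowed(33.45, -112.07, center, 5.0))   # inside -> True
print(self_driving_allowed(34.00, -112.07, center, 5.0))   # ~60 km away -> False
```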
4.3.6. Level 5 (Full Driving Automation)
Level 5 vehicles do not require human attention: the "dynamic driving task" is eliminated. Level 5 cars won't even have steering wheels or acceleration/braking pedals. They will be free from geofencing, able to go anywhere and do anything that an experienced human driver can do. Fully autonomous cars are undergoing testing in several pockets of the world, but none are yet available to the general public.
4.4. BENEFITS OF AUTONOMOUS VEHICLES
The automation system of automated vehicles follows the three phases depicted in Figure 2 (i.e. 'Sense', 'Understand', 'Act'). The automotive sector is going to be revolutionized by the adoption of autonomous vehicles, though it remains to be seen whether their benefits will outweigh the negatives associated with them. There are several benefits associated with the use of autonomous vehicles in transportation. The technology can be used in different types of vehicles, such as buses, which can co-exist with a smart city to offer adaptive routes based on low- and high-demand routes.
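The three-phase 'Sense', 'Understand', 'Act' pipeline mentioned above can be sketched as a minimal control loop. The sensor fields, threshold, and commands below are illustrative assumptions, not taken from any production driving stack.

```python
# Minimal sketch of the Sense -> Understand -> Act pipeline.
# Field names, the 10 m threshold, and commands are illustrative.

def sense(raw_frame):
    """Sense: collect raw measurements from onboard sensors."""
    return {"obstacle_distance_m": raw_frame["lidar_min_m"],
            "speed_mps": raw_frame["wheel_speed_mps"]}

def understand(perception):
    """Understand: interpret measurements into a driving situation."""
    if perception["obstacle_distance_m"] < 10.0:
        return "obstacle_ahead"
    return "clear_road"

def act(situation):
    """Act: map the situation to a driving command."""
    return {"obstacle_ahead": "brake", "clear_road": "maintain_speed"}[situation]

frame = {"lidar_min_m": 6.5, "wheel_speed_mps": 12.0}
print(act(understand(sense(frame))))  # brake
```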
4.4.1. Safety
These vehicles can be used to reduce the number of crashes on the road. It has been found that drivers are getting increasingly distracted behind the wheel, with drunk driving, speeding, and increased smartphone usage all contributing. This in effect leads to large losses in terms of money, human lives, and injuries. Autonomous vehicles are programmed to avoid accidents and cause fewer disruptions to road traffic. However, having said that, a driverless car from Uber was recently found to have run a red light in San Francisco without stopping. Autonomous vehicles are designed to navigate through any road infrastructure and have so far been successful in doing so; advancements in the technologies used in these vehicles have made them much more dependable and safer, even though maturing this technology has been challenging.
Comprehending objects and traffic while driving is easier for humans than for autonomous vehicles: the latter have to be explicitly programmed, and their evasive behavior depends on the object the vehicle encounters. The vehicle has to correctly understand the situation at hand and come up with a suitable counter-measure. If loss of human life is inevitable, an important question to ask is whether the safety of the vehicle occupants is more important than the safety of pedestrians. These types of liabilities can be a huge hindrance to successful adoption. Another aspect of safety can be brought about by the creation of an ecosystem of cooperative vehicles on the roads. For autonomous vehicles to be successful, an ecosystem consisting of road-side units (RSUs) and vehicle-to-vehicle (V2V) communication infrastructure needs to be developed. There needs to be a high level of interaction between these devices, which will give rise to cooperative driving and increase the safety and functional benefits of autonomous vehicles.
4.4.2. Congestion and traffic management
The current road infrastructure faces a huge amount of traffic congestion and tailbacks during peak times. These are primarily due to humans not anticipating slow upcoming traffic and the resulting stop-and-go periods. Another contributing factor is the low patience of drivers, who change lanes frequently. The use of autonomous vehicles can help coordinate traffic on highways and reduce tailbacks.
Autonomous vehicles can reduce the distances between themselves to fit more vehicles on the road, and stop-and-go durations are considerably reduced. This, in effect, produces more efficient use of the road infrastructure. Another benefit is the reduction in fuel consumption, as vehicle slowdowns are reduced, and carbon emissions fall with them. Parking issues can also be considerably reduced in highly commercial areas. Autonomous cars can drop off their passengers where needed and then self-park at a parking area farther away without human intervention. These vehicles can then be summoned to pick up their passengers when needed, which can give rise to parking savings on a large scale.
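The claim above, that shorter inter-vehicle distances allow more vehicles on the road, can be made concrete with a back-of-the-envelope flow calculation. The headway values below are illustrative assumptions, not measured data.

```python
# Per-lane road capacity: flow = speed / (vehicle length + gap).
# The speed, vehicle length, and gaps are illustrative assumptions.

def vehicles_per_hour(speed_kmh, vehicle_length_m, gap_m):
    """Steady-state vehicles per hour per lane at a fixed gap."""
    speed_m_per_h = speed_kmh * 1000.0
    return speed_m_per_h / (vehicle_length_m + gap_m)

human_flow = vehicles_per_hour(100.0, 4.5, 50.0)       # roughly a 2 s human headway
autonomous_flow = vehicles_per_hour(100.0, 4.5, 15.0)  # tighter automated gap
print(round(human_flow))       # 1835 vehicles/hour/lane
print(round(autonomous_flow))  # 5128 vehicles/hour/lane
```

Under these assumptions, halving-plus the gap nearly triples lane throughput, which is the mechanism behind the congestion benefit described above.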
4.4.3. Human behavior
Autonomous cars can be a huge advantage for people too young to drive, the elderly, and the physically disabled. These populations can be effectively transported from one location to another without any external person or aid. The cars can be used as mobile offices for working professionals and can provide entertainment on long-distance commutes. Benefits include reduced driver fatigue and a smoother travel experience. A recent survey found that more and more young people are opting to use public transportation rather than driving, as it gives them more time to engage in other activities such as browsing the internet or reading books.
These people could be early adopters of autonomous vehicles. If a city lacks public transportation, shared autonomous vehicles can serve the needs of its residents. Dependence on older mass-transportation technologies such as trains can thus be reduced. Another potential benefit is avoiding the need for private chauffeurs or driving licenses. Human error, which can cause accidents, will be efficiently negated by the use of autonomous vehicles.
4.4.4. Adoption by specific sectors
Autonomous vehicle technology is increasingly being adopted by sectors such as mining, freight transportation, and the military, among other industries. The trucking industry is adopting autonomous vehicle technologies to aid transportation over large distances. The adoption has reduced the need for drivers and increased the fuel economy of these trucks. A behavior called platooning is being used by these trucking companies, in which a large number of autonomous vehicles travel in tandem. In the mining industry, large autonomous earth movers are being used highly effectively.
These earth movers follow a fixed route from a source to a destination and have proven very effective. Autonomous vehicles help improve human safety, as fewer people work around heavy equipment. With autonomous mining vehicles, specialized equipment operators are not needed, which in turn improves productivity. The military sector is a major adopter of autonomous vehicle technology and considers autonomous vehicles a huge enabler for soldier protection. Driverless military trucks can be deployed in sensitive areas to deliver essential supplies.
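Platooning as described above relies on each following truck regulating its gap to the vehicle ahead. A minimal gap controller can be sketched as follows; the gains, gaps, speeds, and time step are illustrative assumptions, not values from any real platooning system.

```python
# Minimal gap controller for a two-truck platoon (illustrative values).
# The follower closes the gap error, damped by its relative speed.

def follower_accel(gap_m, desired_gap_m, rel_speed_mps, kp=0.2, kd=0.8):
    """Accelerate when the gap is too large, brake when too small."""
    return kp * (gap_m - desired_gap_m) + kd * rel_speed_mps

def simulate(steps=300, dt=0.1, desired_gap=12.0):
    leader_v, follower_v = 20.0, 20.0   # m/s, constant-speed leader
    gap = 30.0                          # follower starts too far back
    for _ in range(steps):
        a = follower_accel(gap, desired_gap, leader_v - follower_v)
        follower_v += a * dt
        gap += (leader_v - follower_v) * dt
    return gap

print(round(simulate(), 1))  # converges toward the 12 m desired gap
```

The damping term on relative speed is what keeps the follower from oscillating around the desired gap; a pure proportional controller on gap alone would not settle.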
4.5. IMPLEMENTATION ISSUES
Even though there are many advantages to implementing autonomous vehicles, their adoptability on a large scale is still in question. Studies of large-scale implementations and their ramifications are still being conducted in real-life environments, and continued research is needed. The success of autonomous vehicles depends highly on the costs associated with manufacturing, liability when accidents occur, and licensing issues. Furthermore, as these vehicles operate using computing technologies, their security and privacy aspects are very important fields of study. The following sections look at some of the major inhibitors to adopting autonomous vehicles on a large scale.
4.5.1. Technology issues
Autonomous vehicles depend solely on their onboard sensors, GPS, light detection and ranging (LIDAR), and camera systems. Any failure of these devices can have serious consequences. Sensor failures need to be efficiently communicated and should be easy to detect and repair. Not just sensors, but computer and systems malfunctions can bring about disastrous consequences. These could be a major failure or even a minor glitch in an essential subsystem of an autonomous vehicle, such as its braking system. All technologies on autonomous vehicles have responded well under optimal conditions, but during extreme weather, such as hail storms, heavy rain, or snow, the system will not operate perfectly, as the weather interferes with the sensors and camera systems. Sufficient research in these areas still needs to be done to ascertain the reaction of these vehicles. Even if an autonomous vehicle can self-drive from one location to another, human intervention would still be needed to operate it safely.
This could be an issue, as future drivers could become heavily reliant on the technology and may have forgotten the skills needed to drive a vehicle themselves. Autonomous vehicles are pre-programmed, and artificial intelligence technologies help the vehicles get accustomed to newer environments and situations on the road. A known limiting factor is that the vehicles are not programmed to interpret hand signals given by other drivers, or situations where traffic is controlled manually by a police officer using hand signals. Another issue is the reliability of, and dependency on, GPS systems. There have been various examples where people have been navigated to non-existent roads and bridges due to inaccuracies in the technology.
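The requirement above, that sensor failures "should be easy to detect," can be sketched as a simple staleness watchdog: if a sensor stops reporting within its deadline, the system flags it so the vehicle can degrade gracefully. The sensor names and timeout below are hypothetical examples.

```python
# Illustrative sensor-staleness watchdog. The sensor names and the
# 0.5 s timeout threshold are hypothetical examples.

def stale_sensors(last_seen_s, now_s, timeout_s=0.5):
    """Return, sorted, the sensors whose last report is older than the timeout."""
    return sorted(name for name, t in last_seen_s.items()
                  if now_s - t > timeout_s)

# Timestamps (seconds) of the most recent report from each sensor.
last_seen = {"lidar": 10.00, "camera": 10.02, "gps": 8.70}
print(stale_sensors(last_seen, now_s=10.05))  # ['gps'] -- no GPS fix for 1.35 s
```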
4.5.2. Vehicle costs
Large-scale adoption of autonomous vehicles will be seriously impeded by their manufacturing costs. Various technologies, including sensors, global positioning systems, vehicle communications, and artificial intelligence software, are needed for each vehicle. An essential piece of equipment, LIDAR, is currently very expensive, and so is the required processing equipment. These prices put the cost of autonomous vehicles beyond the purchasing power of the average car owner. Autonomous vehicles can only be successful if adopted on a large scale, which would cascade into lower prices for the same vehicles in the future. As with any technological adoption, be it computers or electric cars, future prices can be reduced only through technological advances and large-scale production. Even though early adopters may pay higher prices, costs can be brought down over time. This will remain a significant implementation challenge for the foreseeable future.
4.5.3. Liability and law issues
Even though autonomous cars have been tested over many thousands of kilometers on public roads, there is still a chance of accidents happening. This raises the concern of liability in such accidents: who is at fault? Is it the person in the driver's seat of the vehicle, the manufacturer, or the algorithm developer? Another aspect is insurance for such accidents [15]. In most cases, humans are exempted from penalty when an accident occurs beyond the driver's control. Legal precedents also do not exist in most countries where autonomous vehicles may be deployed, and there are no existing laws around accidents involving autonomous cars. There are no central regulatory bodies existing country-wide to regulate the use of autonomous vehicles, and laws and rules applicable in one state may differ from those in another.
4.5.4. Security and privacy issues
The biggest issue with autonomous vehicles is security, as there are numerous computing devices and communications occurring from the vehicle to other vehicles and among components within the vehicle itself. Hackers can get into the system and manipulate the operation of the vehicle. This can be extremely dangerous, as the vehicle could be controlled remotely for nefarious activities. Terrorists could use a vehicle loaded with bombs to target key establishments, or use vehicles as rolling missiles to create chaos on the roads. As vehicles are interconnected and communicate with each other, malware can spread quickly through the entire vehicular network and penetrate a large number of vehicles. Such malware can be dangerous and can be used to carry out controlled and coordinated attacks.
A security breach of an autonomous vehicle would allow a hacker to do anything from simple attacks, such as relaying false information from the sensors, to taking complete control over all the operations of the vehicle. Security attacks against autonomous vehicles are looked at in more detail in the next section. For the successful operation of an autonomous vehicle, a lot of personal data is collected and stored. These data are shared among other vehicles and RSUs, and the ecosystem of cooperative vehicles is built on the principle of sharing data. This is a major concern for privacy advocates. Questions have been raised about what type of data will be stored, what will be shared, with whom it will be shared, and what the data will be used for. Most people do not want to share their vehicular data, as it may be used against them in the case of an accident and a court case. Also, driving mannerisms could be used by insurance providers to increase insurance costs if erratic driving behavior is observed. Location data is also tracked and shared by autonomous vehicles.
CONCLUSION
All in all, autonomous cars are a technology that could revolutionize the way people travel to their destinations. With the use of computer vision, sensor fusion, localization, path planning, and control/maneuvering technologies, they have the potential to become a part of everyday life very soon, as seen especially in the work of companies such as Waymo and Tesla.
REFERENCES
[1] Waymo launches its first commercial self-driving car service. (2018, December 5). Retrieved from https://www.engadget.com/2018/12/05/waymo-one-launches/
[2] Cohen, J. (2019, November 25). Self-Driving Cars Localization. Retrieved from https://towardsdatascience.com/self-driving-car-localization-f800d4d8da49
[3] Babak, S., Hussain, S. A., Karakas, B., & Cetin, S. (2017). Control of autonomous ground vehicles: a brief technical review. IOP Conference Series: Materials Science and Engineering, 224, 012029. doi:10.1088/1757-899x/224/1/012029
[4] Liu, S., Li, L., & Tang, J. (2017). Creating Autonomous Vehicle Systems. Synthesis Lectures on Computer Science.
[5] Waltermann, P. (1996). Modelling and Control of the Longitudinal and Lateral Dynamics of a Series Hybrid Vehicle. Proceedings of the 1996 IEEE International Conference on Control Applications, Dearborn, MI.
[6] Fenton, R., & Selim, I. (1988). On the Optimal Design of an Automotive Lateral Controller. IEEE Transactions on Vehicular Technology, 37(2).
[7] CSS Electronics. CAN Bus.
[8] de Souza, W. (2017, June 16). Sensor Fusion Algorithms For Autonomous Driving: Part 1, The Kalman Filter and Extended Kalman. Retrieved from https://medium.com/@wilburdes/sensor-fusion-algorithms-for-autonomous-driving-part-1-the-kalman-filter-and-extended-kalman-a4eab8a833dd
[9] Sensor Fusion: Technical challenges for Level 4-5 self-driving vehicles. (2019, October 21). Retrieved from https://www.automotive-iq.com/autonomous-drive/articles/sensor-fusion-technical-challenges-for-level-4-5-self-driving-vehicles
[10] Mehta, R. (2019, October 26). Sensors in Autonomous Vehicles. Retrieved from https://medium.com/swlh/sensors-in-autonomous-vehicles-5c8929d460e2
[11] How Does a Self-Driving Car See? (2019, April 16). Retrieved from https://blogs.nvidia.com/blog/2019/04/15/how-does-a-self-driving-car-see/
[12] Sensors Used In Autonomous Vehicles. (2019, January 14). Retrieved from https://levelfivesupplies.com/sensors-used-in-autonomous-vehicles/
[13] Silver, D. How Computer Vision Works for Self-Driving Cars. Retrieved from https://www.linkedin.com/pulse/how-computer-vision-works-self-driving-cars-david-silver/
[14] Lai, A. How do Self-Driving Cars See? Retrieved from https://towardsdatascience.com/how-do-self-driving-cars-see-13054aee2503