USING AUTONOMOUS VEHICLE TECHNOLOGY TO IMPROVE OUR INFRASTRUCTURE
Zachary Romitz (zrr4@pitt.edu), Brice Very (bev18@pitt.edu)
University of Pittsburgh, Swanson School of Engineering, March 1, 2012
Abstract— This paper describes and analyzes innovations in the field of autonomous automobile technology and discusses how robotic vehicle technology will influence our future. It begins with the current state of the art; for instance, Google Inc., a leader in this field, builds its prototypes on the Toyota Prius, and this paper analyzes what makes those vehicles safe and reliable. Autonomous vehicles require a variety of sensors and databases to function properly and safely on our road networks. Almost all successful implementations of this technology use Global Positioning System (GPS) receivers, video cameras, light detection and ranging (LIDAR) sensors, and radio detection and ranging (RADAR) sensors, which must work together flawlessly. We then explain how the vehicle's computer analyzes the acquired information and determines the best way for the vehicle to travel through its surrounding environment. In most implementations, the computer needs more than the information the sensors provide: it must use GPS and other technologies to acquire local road rules, traffic patterns, and the state of the surrounding environment in real time. Finally, this paper examines the impact autonomous cars would have on society and infrastructure. Many different aspects of society would benefit from the adoption of autonomous vehicles, and this paper looks into both the positive and negative aspects of self-driving vehicles. Autonomous vehicle technology is the future.
Key Words— Autonomous Vehicle Technology, Camera, Computer, GPS, LIDAR, Robotic Car, Self-Driving

INTRODUCTION

This paper looks into autonomous (robotic) vehicle technology and describes the technology behind it. Autonomous vehicles are a new technology with the potential to change the way society functions, specifically the transportation systems that move society every day. The purpose of autonomous vehicles is to make our transportation networks much safer. Autonomous vehicles use a variety of technologies to navigate through their environment; they are computer-controlled vehicles that operate safely and effectively without human input.

Autonomous vehicles operate with a variety of systems that must work together to allow the vehicle to traverse its environment. Those systems include a variety of sensors: GPS receivers, video cameras, LIDAR, and RADAR. The vehicle must also use a central computer to process all of the data the sensors provide. That computer runs different algorithms to determine the path the vehicle must take. Finally, the central computer transfers its commands to the vehicle computer, which carries out the corresponding actions. The technology and concepts introduced here are explained in more detail later in the paper.
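To make the sense-plan-act loop described above concrete, the following minimal Python sketch shows how such a pipeline could be organized. It is purely illustrative: the class, the function names, and the numeric values are hypothetical and are not taken from any actual vehicle's software.

```python
class AutonomousVehiclePipeline:
    """Illustrative sense-plan-act loop; every component here is a hypothetical stub."""

    def sense(self):
        # Gather readings from GPS, cameras, LIDAR, and RADAR (stubbed out here).
        return {"gps": (40.444, -79.953), "lidar": [], "radar": [], "camera": None}

    def plan(self, sensor_data, road_rules):
        # Fuse sensor data with map and road-rule information to choose a command.
        target = min(13.4, road_rules["speed_limit_mps"])  # cap at the posted limit
        if sensor_data["lidar"]:  # any LIDAR return is treated as an obstacle in this stub
            target = 0.0
        return {"steering_angle_deg": 0.0, "target_speed_mps": target}

    def act(self, command):
        # Forward the planned command to the drive-by-wire vehicle computer.
        print(f"steer={command['steering_angle_deg']:.1f} deg, "
              f"speed={command['target_speed_mps']:.1f} m/s")

    def run_once(self):
        data = self.sense()
        command = self.plan(data, road_rules={"speed_limit_mps": 11.2})  # e.g. a 25 mph zone
        self.act(command)


if __name__ == "__main__":
    AutonomousVehiclePipeline().run_once()
```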
BACKGROUND AND MOTIVATION

Autonomous vehicle technology has a rather long history. The first working prototype of an autonomous car appeared in the 1980s and used cameras to navigate its way through 100 kilometers of empty road. With the success of this initial prototype, there were many more projects throughout the 1980s and 1990s that used similar systems to navigate highways with light or no traffic. With the advent of this new technology, the United States military became very interested in it. Robotic vehicles would drastically change the ways in which members of the military are put in harm's way: forces could send a robot into combat, have it complete a mission, and return it to where it came from. Missions could be run with no risk to a soldier's life. To speed up the rate at which technological breakthroughs for autonomous vehicles were being developed, the Department of Defense (DOD) created a contest in which innovators across the nation could compete. The Defense Advanced Research Projects Agency (DARPA), part of the DOD, created the Grand Challenge. In 2002, DARPA director Dr. Tony Tether decided that autonomous ground vehicles were the next best step to protecting our men and women in uniform [1]. His plan was that the DOD would host an event to which anyone could respond: an autonomous vehicle had to travel 140 miles from Barstow, California, to Primm, Nevada [1]. The vehicles would traverse miles of dirt, trails, lakebeds, rocky terrain, and gullies in 10 hours, and the fastest vehicle to complete the course would receive one million dollars. In the first event, none of the 15 vehicles completed more than 7.4 miles.
After the first event's relative success, the DOD held another event, this time with a two-million-dollar prize. There were 195 applicants, and 23 of them made the finals [1]. This time 22 vehicles made it further than 7.4 miles and 5 vehicles successfully completed the course [1]. This was excellent for the military, which now had a successful test of the technology needed to radically change the battlefield, but the technology still had many more purposes. To move this technology to a level that would benefit the most people, DARPA created a new contest: the Urban Challenge, in which vehicles would face situations that human drivers deal with every day. The Urban Challenge, held in 2007, once again invited anyone interested in competing. This time the grand prize was two million dollars and second place was one million dollars. The Urban Challenge was the first time that completely autonomous vehicles drove among both manned and unmanned vehicles in an urban environment [2]. Eighty-nine teams entered the event and were put through three different tests, with each vehicle's performance assessed to see whether it could compete in the finals [2]. The final test involved the vehicles driving through an urban course and passing through specific checkpoints, which were not given until immediately before the race. The vehicles had to maneuver through the checkpoints to the finish line while following all the rules of the road and maneuvering safely around other vehicles, manned or unmanned. Six competitors successfully completed the Urban Challenge, and an important milestone was achieved that day: robotic vehicles had proved they could successfully navigate urban environments.
Google

As the success of the technology increased, more companies started looking into autonomous vehicle technology. While there are a few different implementations, Google Inc. has a very successful fleet of prototypes driving on public roads in the United States. Google uses the Toyota Prius as the base for its autonomous vehicle. There are many automobiles Google could have chosen, but a couple of reasons made the Prius an excellent choice. The smaller of the two is that the Prius offers good fuel economy; while improved fuel economy is not a necessity for an autonomous vehicle, Google chose to help reduce its carbon impact in creating its autonomous car. The second and more important reason is that the Prius uses drive-by-wire technology. Drive-by-wire technology has a unique advantage over conventional car systems: the driver has no mechanical linkage to the systems he or she is controlling. The car's computer controls the steering, braking, and throttle systems using electric motors and actuators, and the accelerator, brake pedal, and steering wheel are connected only to sensors that feed information to the in-car computer. This makes the Toyota Prius an excellent choice because it does not need to be heavily modified; Google did not need to recreate a human driver with motors and actuators, as all of the controls were already installed in the vehicle. The main people behind Google's venture into autonomous vehicle technology are Sebastian Thrun and Chris Urmson [3]. Thrun and Urmson came from the world of the DARPA Challenges and bring a lot of experience with this technology. Google's fleet of autonomous vehicles has traveled more than 190,000 miles in city traffic, on busy highways, and on mountainous roads [3]. Google's vehicles have a good track record and have thus far proved they can be safe and effective at getting people from point A to point B. The Google vehicle uses LIDAR, RADAR, cameras, GPS, and road network information [3], and it uses all of those systems in conjunction with one another to successfully navigate the road network. One major resource it relies on is recorded road information: the Google car uses that information to know the speed limit and road rules for a given area [3]. This helps make the car safer and more effective.
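As an illustration of the drive-by-wire idea described above, the short Python sketch below maps a pedal sensor reading to an actuator command and shows how an autonomous planner could substitute its own request. The functions and the throttle curve are hypothetical assumptions, not Toyota's or Google's implementation.

```python
def pedal_to_throttle_command(pedal_position: float) -> float:
    """Map a normalized accelerator-pedal sensor reading (0.0-1.0) to a throttle
    actuator command.  In a drive-by-wire car the pedal is only a sensor; the
    computer decides what the actuator does.  The curve below is hypothetical."""
    pedal_position = max(0.0, min(1.0, pedal_position))  # clamp noisy readings
    return pedal_position ** 1.5  # progressive throttle map (illustrative)


def throttle_request(planned_throttle: float, driver_pedal: float, autonomous_mode: bool) -> float:
    """In autonomous mode the planner's throttle request is used; otherwise the
    driver's pedal reading is passed through to the actuator."""
    return planned_throttle if autonomous_mode else pedal_to_throttle_command(driver_pedal)


print(throttle_request(planned_throttle=0.25, driver_pedal=0.6, autonomous_mode=True))
print(throttle_request(planned_throttle=0.25, driver_pedal=0.6, autonomous_mode=False))
```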
Pros and Cons

The autonomous vehicle has many different pros and cons. The largest thing the vehicles have going against them is that the technology is still in its infancy. Even so, the gains the technology has made since the first Grand Challenge are immense, and public perception remains its greatest obstacle. In every state in the nation except Nevada, autonomous vehicles are illegal because a human must be at the controls of the vehicle. People do not want a technology they believe to be unsafe and unreliable traveling the road alongside other unsafe and unreliable drivers. Yet the gains this technology has the chance to make are very large. In the United States alone, roughly 42,000 people are killed and over 2.7 million are injured in automobile-related accidents every year [4]. Autonomous vehicle technology would have the chance to dramatically reduce, or even eliminate, those numbers. The best way to keep people safe in their automobiles is to avoid the collision in the first place: an autonomous vehicle constantly monitors all of its surroundings, far better than any human could, and that fact alone could prevent many of the automobile accidents in the country. Next, autonomous vehicles would improve the efficiency of our road networks. They would be able to drive much closer to one another and reduce the amount of empty space on the highway. Finally, speed limits on the highways could safely increase because the autonomous vehicle's computer can react to threats and dangers much faster than any human could. The technology that makes this all possible is explained in the next section of the paper.
TECHNOLOGY

In this section, we discuss the vehicle tracking technologies contained within one of the most promising functioning autonomous vehicles. More specifically, we detail the types of hardware and algorithms used by the vehicle to observe its surroundings. We also detail two systems that have had promising test results but have not yet been outfitted in a vehicle and tested in real traffic. These technologies could greatly improve the performance and reliability of an autonomous vehicle.

Prototype Vehicle

The current, functioning prototype of the autonomous vehicle we discuss uses laser-based vehicle tracking. The tracking method employed in this vehicle is superior to other typical methods because it reduces the calculations needed to accurately track another vehicle relative to the autonomous vehicle. The computation time for this technology is about 25 milliseconds per frame, which is extremely fast compared with other autonomous vehicle technology. The vehicle's surrounding environment is modeled in two dimensions, and the shapes of other vehicles are represented by rectangles. A two-dimensional model is sufficient because the height of other vehicles is not important for the purpose of navigating traffic. When detecting a vehicle, the observed center of that vehicle depends on perspective. For example, as the autonomous vehicle approaches another vehicle, the perspective changes as it gets closer, and the observed center of the other vehicle shifts along with its perceived shape. To circumvent this, a set of axes is placed with the first perceived center as the origin. As the perspective changes, each new perceived center is assigned coordinates relative to those axes, and the real center of the vehicle can then be found by varying the length and width of the rectangle that represents it. The length and width are adjusted using a Bayesian filter [6]. A Bayesian filter uses statistical analysis to determine the likelihood of an observed parameter, in this case the vehicle's size and center point [6]. The velocity of the other car can then be determined from the rate of change of perspective compared with the autonomous vehicle's speedometer. The two pieces of hardware used in this implementation are ground-scanning laser sensors and a rotating three-dimensional laser sensor.
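The following minimal Python sketch illustrates the kind of Bayesian update described above, maintaining a discrete belief over candidate rectangle sizes and refining it with noisy measurements. The grid of sizes, the noise model, and the measurements are illustrative assumptions, not values from reference [6] or from the prototype vehicle.

```python
import math
from itertools import product


def gaussian(x, mu, sigma):
    """Unnormalized Gaussian likelihood of observing x given a true value mu."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)


def bayes_update(prior, measurement, sigma=0.5):
    """One Bayesian update over a discrete grid of (length, width) hypotheses.
    prior: dict mapping (length_m, width_m) -> probability.
    measurement: observed (length_m, width_m) of the bounding rectangle, noisy
    because perspective hides part of the tracked car."""
    posterior = {}
    meas_l, meas_w = measurement
    for (l, w), p in prior.items():
        likelihood = gaussian(meas_l, l, sigma) * gaussian(meas_w, w, sigma)
        posterior[(l, w)] = p * likelihood
    total = sum(posterior.values())
    return {k: v / total for k, v in posterior.items()}


# Hypothetical example: uniform prior over plausible car sizes, two noisy scans.
lengths = [3.5, 4.0, 4.5, 5.0, 5.5]
widths = [1.6, 1.8, 2.0]
belief = {lw: 1.0 / (len(lengths) * len(widths)) for lw in product(lengths, widths)}
for scan in [(4.4, 1.9), (4.6, 1.8)]:
    belief = bayes_update(belief, scan)
print(max(belief, key=belief.get))  # most probable (length, width) hypothesis
```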
The ground-scanning sensors project parallel horizontal lines to detect the ground and other vehicles. The rotating laser sensor takes a three-dimensional scan of the surrounding environment at a frame rate of 10 Hz, which produces about one million readings per second. Data coming from the lasers must be filtered to remove information that is useless for vehicle tracking; for example, treetops, underpasses, and other things located above the car do not need to be considered, because only two-dimensional data is needed for tracking. To analyze the data coming from the ground-scanning laser, a virtual sensor is created. The virtual sensor builds a 360° polar coordinate grid and divides it into cells by projecting virtual rays emanating from the center of the vehicle. It then uses the laser data to determine the free space within each cell, the distance to the nearest occupied space in the cell, and the space that cannot be seen behind occupied regions. The virtual scan provides a simpler way to access the data because all grid cells originate from one point, which allows the vehicle to determine where an object is at any time. Resolution is very important to the virtual scan: the finer the resolution, the better the long-range detection. In the vehicle discussed here, the rays are spaced one-half degree apart. The origin of the virtual rays moves with the vehicle, so changes in the surroundings are found by comparing the current scan with the previous scan, taking the distance traveled into account.

The 3-D sensor provides much more data than the 2-D sensor, so another virtual scanner is created to detect obstacles within the 3-D data. For the purposes of vehicle tracking, an obstacle is defined as anything the autonomous vehicle cannot drive under, even if it is not touching the ground. This new virtual sensor creates a grid similar to that of the 2-D sensor, but spherical rather than circular. It detects the ground by looking at a point at a very low vertical angle and then creating two more points by raising the vertical angle. If the slope between the first and second points and the slope between the second and third points are both zero, those points can be treated as lying on the ground. It does this all the way around the vehicle. Simultaneously, the virtual sensor detects and classifies obstacles as low, medium, or high relative to the ground, and it projects the medium-height obstacles onto the 2-D scanner's plane.

For every vehicle detected, one Bayesian filter is used to track it; the filter determines the probability that a given point is part of the detected vehicle. To initialize a vehicle for tracking, it must be present for three frames, and detecting new vehicles is the most resource-intensive task for the internal computer. Once a vehicle leaves the sensor range or moves far enough away from the road, tracking of that vehicle is discontinued. Because three frames are needed to detect a vehicle, the minimum detection time with a 10 Hz sensor is 0.3 seconds. Vehicle detection has three stages: Stage I locates an object that is moving relative to the ground, and Stage II uses the tracking method to determine its motion by comparing the object's movement over the three frames. With the 10 Hz sensor, detection only works for vehicles moving between 5 and 35 mi/h (20 to 150 cm per frame).
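The slope test used by the spherical virtual sensor described above can be sketched in a few lines of Python. The slope tolerance and the low/medium/high height cut-offs below are illustrative assumptions, not parameters from the prototype vehicle.

```python
def classify_ray(points, slope_tolerance=0.05, low_max=0.3, medium_max=2.0):
    """Classify the returns along one bearing of the spherical virtual sensor.
    points: list of (horizontal_distance_m, height_m) ordered from the lowest
    vertical angle upward.  Thresholds are illustrative, not from the paper."""
    labels = []
    for i, (dist, height) in enumerate(points):
        # Seed assumption for the first returns: near-zero height counts as ground.
        is_ground = abs(height) < 0.05
        if i >= 2:
            (d1, h1), (d2, h2) = points[i - 2], points[i - 1]
            slope_a = (h2 - h1) / (d2 - d1) if d2 != d1 else float("inf")
            slope_b = (height - h2) / (dist - d2) if dist != d2 else float("inf")
            # Two consecutive near-zero slopes -> the point lies on the ground.
            is_ground = abs(slope_a) < slope_tolerance and abs(slope_b) < slope_tolerance
        if is_ground:
            labels.append("ground")
        elif height <= low_max:
            labels.append("low")
        elif height <= medium_max:
            labels.append("medium")  # medium obstacles get projected onto the 2-D plane
        else:
            labels.append("high")    # e.g. treetops and underpasses, ignored for tracking
    return labels


# Flat road out to 12 m, then something car-height starting at 14 m:
print(classify_ray([(4.0, 0.0), (8.0, 0.01), (12.0, 0.02), (14.0, 1.3), (14.5, 1.4)]))
```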
The detection algorithm focuses on the movement of the front and back ends of the detected vehicle. Since both ends cannot always be seen, due to positioning and perspective, a 25% error threshold is used to prevent the tracker from discontinuing tracking of a vehicle. One major drawback of laser rangefinders is the difficulty of seeing black objects: when the laser hits a black object, very little data is returned, so the object is not seen. To overcome this, the absence of data must itself be analyzed. If readings are not obtained over a range of vertical angles in one direction, the space can be considered occupied by a black vehicle, and the distance to the last data point received is used as the distance between the autonomous vehicle and the black vehicle. This method only works for distances of less than 30 meters; black objects are undetectable beyond that range. Another issue with this vehicle is that the algorithms used do not detect motorcycles, bicycles, or pedestrians, which requires installing more hardware to correct the problem [5].
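A hedged sketch of the "missing returns" heuristic for black vehicles is shown below; the number of consecutive empty readings required to declare a gap occupied is an assumption made for illustration.

```python
def infer_black_vehicle(returns_by_angle, min_missing=4, max_range_m=30.0):
    """Flag a possible low-reflectivity ("black") vehicle along one bearing.
    returns_by_angle: list of (vertical_angle_deg, range_m_or_None), ordered by
    angle; None means the laser got no usable return at that angle.  If several
    consecutive angles return nothing and the last real return was closer than
    ~30 m, the gap is treated as occupied.  min_missing is an illustrative value."""
    missing_run = 0
    last_range = None
    for _, rng in returns_by_angle:
        if rng is None:
            missing_run += 1
            if missing_run >= min_missing and last_range is not None and last_range < max_range_m:
                return {"occupied": True, "estimated_range_m": last_range}
        else:
            missing_run = 0
            last_range = rng
    return {"occupied": False, "estimated_range_m": None}


# Returns vanish above the angle that last hit something 18 m away:
scan = [(-5.0, 22.0), (-4.0, 18.0), (-3.0, None), (-2.0, None), (-1.0, None), (0.0, None)]
print(infer_black_vehicle(scan))  # {'occupied': True, 'estimated_range_m': 18.0}
```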
Fuzzy PID Controller

A fuzzy PID controller is used to improve the smoothness and precision of steering control in the autonomous vehicle. This system uses a traditional proportional-integral-derivative (PID) controller together with fuzzy control links to control the vehicle's steering. In laboratory testing of this technology, infrared light sensors were used to acquire data. The digital input to the light sensors, compared with the digital output, is used to create what is called a deviation signal. The deviation signal is then divided three ways: the first copy goes straight to the PID controller; the second is differentiated and then goes to the fuzzy control links; the third is split in half, with one half going straight to the fuzzy control links while the other is differentiated before going to the fuzzy control links. The signals going into the fuzzy control links are used to correct the deviation signal going into the PID controller in real time. The fuzzy controller uses fuzzy logic to process two inputs, the deviation signal and its rate of change, into three outputs that correct the PID controller and automatically center the vehicle over the intended path [8]. Fuzzy logic is a method in which a statement does not have to be absolutely true or absolutely false but can vary to any degree in between [9]. The use of fuzzy logic in this type of control system improves stability and response time and reduces oversteering. Before the outputs from the fuzzy controller go to the PID controller, they must go through the process of defuzzification, which translates the set of data from each fuzzy output into a quantity the PID controller can use. Through testing and computer simulation, it has been found that this method of steering control has a faster response and smoother steering than a standard PID controller [8]. It has yet to be put to use in a roadworthy vehicle.
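To make the idea of a fuzzy-corrected PID controller concrete, the sketch below adds a tiny fuzzy inference step (triangular memberships, a three-rule table, and weighted-average defuzzification) on top of a plain PID loop. The membership ranges, rules, and gains are illustrative assumptions and do not reproduce the controller of reference [8].

```python
def triangular(x, left, peak, right):
    """Triangular fuzzy membership function."""
    if x <= left or x >= right:
        return 0.0
    return (x - left) / (peak - left) if x < peak else (right - x) / (right - peak)


def fuzzy_gain_correction(error, error_rate):
    """Small fuzzy inference step: memberships in 'small'/'large' error and
    error-rate sets drive corrections to the PID gains, then a weighted average
    defuzzifies them.  The rule table and scaling are illustrative assumptions."""
    small_e = triangular(abs(error), -0.5, 0.0, 0.5)
    large_e = triangular(abs(error), 0.2, 1.0, 2.0)
    small_r = triangular(abs(error_rate), -0.5, 0.0, 0.5)
    large_r = triangular(abs(error_rate), 0.2, 1.0, 2.0)
    # Rules: large error -> raise Kp; large error rate -> raise Kd; both small -> raise Ki slightly.
    rules = [
        (large_e, (0.4, 0.0, 0.0)),
        (large_r, (0.0, 0.0, 0.2)),
        (min(small_e, small_r), (0.0, 0.05, 0.0)),
    ]
    weight = sum(w for w, _ in rules) or 1.0
    dkp = sum(w * c[0] for w, c in rules) / weight
    dki = sum(w * c[1] for w, c in rules) / weight
    dkd = sum(w * c[2] for w, c in rules) / weight
    return dkp, dki, dkd


class FuzzyPID:
    def __init__(self, kp=1.0, ki=0.1, kd=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_error = 0.0, 0.0

    def step(self, deviation, dt=0.02):
        """deviation: lateral offset from the intended path (the 'deviation signal')."""
        rate = (deviation - self.prev_error) / dt
        dkp, dki, dkd = fuzzy_gain_correction(deviation, rate)
        self.integral += deviation * dt
        steering = ((self.kp + dkp) * deviation
                    + (self.ki + dki) * self.integral
                    + (self.kd + dkd) * rate)
        self.prev_error = deviation
        return steering


controller = FuzzyPID()
print(round(controller.step(0.3), 3))  # steering command for a 0.3 m offset
```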
Monocular Vision Based Detection

This method of tracking uses a single omnidirectional camera, which consists of a high-resolution color camera and a hyperbolic mirror. The use of a camera simplifies the input of data into vision-based algorithms, similar to the approach outlined above, without the need for virtual sensors or other types of calibration. Unlike other methods, this method detects vehicles, pedestrians, and other obstacles. Camera-based tracking has not been widely used because of problems with previous technologies, but with recent advancements it has the potential to be superior to other methods in use. Tracking pedestrians and vehicles requires three things. The first is an appearance detector, which analyzes the camera feed. The second is a two-dimensional laser sensor that detects the structure and outline of objects. The third is a tracking module, which analyzes data from the appearance detector and the laser sensor and then tracks motion. To detect appearance, implicit shape models (ISM) are used. An ISM has a detailed list of shapes and features to look for, called a codebook, and for each shape or feature in the codebook it has a list of displacements and scale factors called votes. To detect a shape, the detector checks through the codebook and votes for a match; once a match is found, it is passed to the tracking module. The laser sensor is used to refine the edges of detected shapes and to find their distance from the autonomous vehicle. This technology is a great step in the development of autonomous vehicles: the camera-based approach greatly reduces the need for complicated algorithms and expensive hardware within a self-driving car. In a test using prerecorded camera data, this technology performed excellently, processing frames at up to 400 Hz [7].
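The codebook-and-votes idea behind an implicit shape model can be illustrated with the small Python sketch below, which casts displacement votes into a coarse grid of candidate object centers. The codebook entries and coordinates are made up for illustration and are far simpler than a trained ISM.

```python
from collections import defaultdict

# Tiny, hypothetical codebook: each visual "word" stores offsets (votes) from the
# feature to the object's center, as would be learned from training images.
CODEBOOK = {
    "wheel_arch": [(+40, -10), (-40, -10)],
    "tail_light": [(+30, +5), (-30, +5)],
    "head_shoulder": [(0, +60)],
}


def ism_vote(detected_features, cell=20):
    """Cast ISM votes into a coarse grid of candidate object centers.
    detected_features: list of (codebook_word, x, y) found in the image.
    Returns the grid cell with the most votes and its vote count."""
    votes = defaultdict(int)
    for word, x, y in detected_features:
        for dx, dy in CODEBOOK.get(word, []):
            center_cell = ((x + dx) // cell, (y + dy) // cell)
            votes[center_cell] += 1
    if not votes:
        return None, 0
    best = max(votes, key=votes.get)
    return best, votes[best]


# Features that agree on one car center near pixel (200, 120):
features = [("wheel_arch", 160, 130), ("wheel_arch", 240, 130), ("tail_light", 170, 115)]
print(ism_vote(features))  # winning grid cell and its vote count
```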
[Figure: The Google Prius with labeled sensors]
THE FUTURE

Currently, we are seeing some of the technology discussed in this paper being used every day by drivers. The most well-publicized example of robotic vehicle technology is the self-parking car, a car that can parallel park itself. Many drivers find it difficult to parallel park their vehicle, and technology has now found an answer. The car uses either cameras or RADAR to guide itself into the parking space. The driver still needs to control the speed of the vehicle using the brake, but all of the steering is handled by the car; the car signals the driver for speed and direction changes and controls everything else. This is just one example of how autonomous vehicles have begun to introduce themselves to the public. The next best example of autonomous vehicle technology already in use is the collision avoidance system found on some cars. Using a RADAR sensor mounted on the front of the vehicle, the car's computer can tell if the car is going to hit something and automatically apply the brakes if necessary. Finally, vehicles have started adopting RADAR sensors on their sides that monitor the blind spots and alert the driver if an object is there. These are exactly the kinds of systems a fully autonomous vehicle would have. They are just the current uses of autonomous vehicle technology in today's vehicles, but with the public's acceptance of these systems there is much hope that a fully autonomous vehicle would receive a positive public response.
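As a rough illustration of how a forward-RADAR collision avoidance system might decide to brake, the sketch below compares the time to collision against the time needed to stop. The deceleration limit and safety margin are assumptions made for illustration, not figures from any production system.

```python
def should_auto_brake(gap_m, closing_speed_mps, reaction_margin_s=0.5, max_decel_mps2=7.0):
    """Decide whether an emergency braking system should fire.  It fires when
    the time to collision is shorter than the time needed to stop plus a small
    margin.  The thresholds are illustrative assumptions."""
    if closing_speed_mps <= 0:
        return False  # not closing on the object ahead
    time_to_collision = gap_m / closing_speed_mps
    time_to_stop = closing_speed_mps / max_decel_mps2
    return time_to_collision < time_to_stop + reaction_margin_s


print(should_auto_brake(gap_m=12.0, closing_speed_mps=15.0))  # True: brake now
print(should_auto_brake(gap_m=60.0, closing_speed_mps=15.0))  # False: no action yet
```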
Changes

Autonomous vehicles will one day come to market, but they will probably not operate autonomously all of the time. The initial appearance of the technology may not be on the highways but in parking structures. Google has received a patent for a way to switch vehicles from human-controlled mode to autonomous mode [10]. Google envisions using sensors in the ground to provide the vehicle with information about where it is and where it needs to go [10]. This would allow people to simply pull up to a parking structure and let the car worry about parking; when they are ready to leave, the car would simply need to be summoned. Once the technology becomes more mainstream and accepted, it would probably move to the highways. Freeway on-ramps and off-ramps could act as the transfer points between autonomous and human control. This would make for a safer and faster highway system, as the risks associated with human drivers could be eliminated. Finally, the switch would be made to allow fully autonomous vehicles on any road. While this would greatly reduce the number of traffic accidents, it would also greatly change the way society treats the automobile. Right now the average American family has a vehicle for every licensed driver; with fully autonomous vehicles, vehicle sharing would be much easier. For example, a family of four might have two vehicles, with the parents using one for commuting to work while the children have the other car drive them to their extracurricular activities. In another example, a company could host a network of autonomous vehicles that customers rent: a customer calls a vehicle from a nearby garage, the vehicle picks them up and takes them to their destination, and that vehicle can then be used by another person. The idea is that multiple people could use one vehicle every day simply because it does not have to sit in a parking lot. Autonomous vehicle technology has the potential to greatly change society.
Impacts

Autonomous vehicle technology still has a while to go before it is used as described in this paper, but the effects it could have on society would be drastic. As stated earlier, autonomous vehicles would improve the efficiency of our current road network. The problem with humans is that, relative to computers, we require a lot of time to react to stimuli. To allow for that reaction time, we must leave a lot of space between ourselves and the drivers in front of and behind our vehicles; with autonomous vehicles we could close that gap. Humans also have only two eyes and can watch only a small area outside the vehicle at any one time, so we must again leave a lot of space on both the left and right sides of our vehicles. With the RADAR and LIDAR technology utilized by an autonomous vehicle, the vehicle knows where every object around it is at any given second. As a result, more vehicles can occupy a given section of roadway at a much greater rate of speed, and commuters can enjoy much faster and smoother commutes.
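A simple back-of-the-envelope model shows why shorter reaction times translate into higher road capacity; the speeds, reaction times, and vehicle length below are illustrative assumptions, not results from the paper.

```python
def lane_capacity_vph(speed_mps, reaction_time_s, vehicle_length_m=4.5):
    """Vehicles per hour one lane can carry if every vehicle keeps a gap of
    (reaction time x speed) plus one car length.  A simple illustrative model."""
    headway_m = vehicle_length_m + speed_mps * reaction_time_s
    return 3600 * speed_mps / headway_m


human = lane_capacity_vph(speed_mps=29.0, reaction_time_s=1.5)       # ~65 mph, human reflexes
autonomous = lane_capacity_vph(speed_mps=29.0, reaction_time_s=0.2)  # same speed, sensor reflexes
print(f"human ~{human:.0f} veh/h, autonomous ~{autonomous:.0f} veh/h")
```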
REFERENCES
[1] K. Iagnemma, M. Buehler. (2006). "Special Issue on the DARPA Grand Challenge." Journal of Field Robotics, Wiley Periodicals Inc.
[2] "DARPA Urban Challenge." Defense Advanced Research Projects Agency. [Online: Web Site]. Available: http://archive.darpa.mil/grandchallenge/
[3] (2011). "How Google's Self-Driving Car Works." IEEE Spectrum. [Online: Web Site]. Available: spectrum.ieee.org/automaton/robotics/
[4] N. Kaempchen, B. Schiele, K. Dietmayer. (2009, December). "Situation Assessment of an Autonomous Emergency Brake for Arbitrary Vehicle-to-Vehicle Collision Scenarios." IEEE Transactions on Intelligent Transportation Systems, Vol. 10, Issue 4.
[5] A. Petrovskaya, S. Thrun. (2009, April 9). "Model based vehicle detection and tracking for autonomous urban driving." Autonomous Robots, Vol. 26, No. 2.
[6] E. Weisstein. "Bayesian Analysis." MathWorld. [Online: Web Site]. Available: http://mathworld.wolfram.com/bayesiananalysis.html
[7] D. Scaramuzza, L. Spinello, R. Triebel, R. Siegwart. (2010, July 4-7). "Key technologies for intelligent and safer cars - From motion estimation to predictive collision avoidance." Industrial Electronics (ISIE), 2010 IEEE.
[8] L. Guangrui, B. Jingkai, H. Zhen. (2011, August 8-10). "Design of Fuzzy Self-adaptive PID Servo Control System." Artificial Intelligence, Management Science and Electronic Commerce (AIMSEC), 2011 2nd International Conference on.
[9] E. Weisstein. "Fuzzy Logic." MathWorld. [Online: Web Site]. Available: http://mathworld.wolfram.com/fuzzylogic.html
[10] (2012). "Driverless car: Google awarded US patent for technology." BBC News. [Online: Web Site]. Available: www.bbc.co.uk/news/technology-16197664
[11] J. Markoff. (2010, October 9). "Google Cars Drive Themselves, in Traffic." New York Times.
ADDITIONAL RESOURCES
E. Rosén, J. Källhammer, D. Eriksson, M. Nentwich, R. Fredriksson, K.
Smith. (2010, November). “Pedestrian injury mitigation by autonomous
braking,” Accident Analysis and Prevention, Vol. 42, Issue 6
ACKNOWLEDGEMENTS
We would like to acknowledge Beth Newborg and Luis Bon.
They have provided critical information necessary to
complete this project. We also would like to acknowledge
our parents who have ensured we would make it this far.