Application of Edge Detection in
Automated Driving Systems
ECE 533 Final Project Report
Tom Kratzke
Introduction
Perhaps one of the more interesting applications of image processing is its use in
the field of automated driving. Automated driving is the idea that human input is not
required to drive a car. Instead, a computer drives the vehicle based on sensor data,
camera data, and position information, processing all of this information and making the
driving decisions. If a system could be built that decides how fast to go, what direction to
steer, and how to avoid obstacles, the driver could conceivably relax and let the computer
drive. A few systems are currently available, but nothing approaching a fully automated
one. Some vehicles today come equipped with a collision detection system: if the vehicle
decides that a collision is imminent, it automatically brakes for the driver. There is still a
lot of work to be done before fully automated systems are available.
One of the approaches currently being researched in the area of automated driving is
computer vision. Based on sensor and camera data collected from the vehicle, the
vehicle can be directed where to go without human input. There are two subfields of
automated driving where this could be very beneficial: automated steering and
bumper to bumper driving.
Automated steering is exactly what it sounds like: a computer decides how to adjust
the steering wheel angle based on external data such as camera images and distance
sensors. Most systems currently being researched are learning systems, in which a human
driver drives the vehicle for a period of time while the computer studies which steering
positions the driver uses in which situations. The driver then enables the automated
steering system, and the computer uses what it learned from the driver to steer the
vehicle by itself.
Bumper to bumper driving is different in that no learning is required. In a gridlock
situation, the driver can simply switch on the system and it starts to function. This
system is based on the idea that bumper to bumper driving is simple and tedious,
something a computer should have no problem with. In one system, the vehicle takes
pictures of the vehicle in front of it and processes each picture to extract the centerline
and width of the vehicle image. From this data, it stays directly behind the other car and
estimates the following distance from the extracted width.
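As a rough illustration of that last step, the following fragment estimates following
distance from the extracted pixel width using a simple pinhole-camera relation. The
focal length and assumed real-world vehicle width are hypothetical example values, not
parameters taken from the systems described above.
% Sketch only: distance from apparent width, assuming a pinhole camera model.
% All numbers are assumed example values, not measured parameters.
f_pixels    = 800;    % assumed focal length expressed in pixels
realWidth_m = 1.8;    % assumed real-world width of the lead vehicle (meters)
pixelWidth  = 240;    % width of the vehicle outline extracted from the edge image
distance_m  = f_pixels * realWidth_m / pixelWidth;   % distance = f * W / w
fprintf('Estimated following distance: %.1f m\n', distance_m);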
Approach
The idea investigated in this project is whether edge detection methods would aid in
either of these two subfields of automated driving.
Automated steering as described above would require an extremely large set of
learning data if no image processing were done. Edge detection could greatly reduce the
amount of learning data required by simplifying the image considerably. If the outline of
a road can be extracted from the image, that training data could be applied to all cases
where the road looks similar, instead of taking into account non-essential information
such as the surrounding terrain. I will apply edge detection to all kinds of road
conditions, starting with very simple cases and progressing to more complicated ones.
Empty straight roads, empty curving roads, roads in bad repair, crowded roads, and
combinations thereof will all be tested.
Bumper to bumper driving could also benefit from edge detection. If the outline of a
vehicle can be extracted, its width and centerline would be easy to calculate. I will test
many different types of vehicles from a rear perspective to see whether a single operator
can be used in all situations.
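As a sketch of how the width and centerline could be pulled out of an edge map, the
following MATLAB fragment assumes an intensity image A that has already been
cropped to the region around the lead vehicle; the cropping step and the choice of
operator are assumptions, not part of the tests described here.
% Minimal sketch: width and centerline from a binary edge map.
% Assumes A is an intensity image cropped to the lead-vehicle region.
BW        = edge(A, 'prewitt');        % or any of the other three operators
cols      = find(any(BW, 1));          % columns containing at least one edge pixel
leftCol   = min(cols);
rightCol  = max(cols);
widthPx   = rightCol - leftCol + 1;    % apparent vehicle width in pixels
centerCol = (leftCol + rightCol) / 2;  % horizontal centerline (column index)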
For all of my tests, I will apply the following four operators; a short sketch of how
kernels like these are applied appears after the list.
Sobel:
Finds edges using the Sobel approximation of the derivative, with one kernel per
gradient direction:
[-1 -2 -1; 0 0 0; 1 2 1]   and   [-1 0 1; -2 0 2; -1 0 1]
Prewitt:
Finds edges using the Prewitt derivative approximation:
[-1 -1 -1; 0 0 0; 1 1 1]   and   [-1 0 1; -1 0 1; -1 0 1]
Laplacian of Gaussian:
Finds edges by looking for zero crossings after filtering the image with a Laplacian of
Gaussian filter.
Canny:
Finds edges by looking for local maxima of the gradient of the image.
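To make the connection between these kernels and the resulting edge maps concrete,
the fragment below applies the two Sobel kernels directly with conv2 and thresholds the
gradient magnitude. The file name and the threshold value are only example
assumptions; in the actual tests, MATLAB's edge function performs this step internally
with its own threshold selection.
% Sketch: apply the Sobel kernels by convolution and threshold the magnitude.
A  = im2double(rgb2gray(imread('road.jpg')));       % 'road.jpg' is a placeholder
Gy = conv2(A, [-1 -2 -1; 0 0 0; 1 2 1], 'same');    % gradient in the vertical direction
Gx = conv2(A, [-1 0 1; -2 0 2; -1 0 1], 'same');    % gradient in the horizontal direction
Gmag  = sqrt(Gx.^2 + Gy.^2);                        % gradient magnitude
edges = Gmag > 0.3;                                 % example threshold; edge() picks one automatically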
Work Done
Automated Steering
To test the usefulness of edge detection for automated steering, I applied the four
edge detection techniques to an array of images. I started with the simplest situations,
where there are no turns and no obstacles on the road, and worked my way up to more
complicated situations, adding turns to the road, noise outside the road (trees, bushes,
etc.), and obstacles to avoid on the road. Finally, I took all of these cases to the extreme
by using pictures of roads in bad repair, crowded highways, and roads with a high
amount of tree cover. I am looking for an outline of the road that can be reasonably
extracted by a computer program. This outline would represent the boundary between
drivable road and the surrounding terrain and obstacles in the road. The pictures below
are a sample set of my data.
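Before the sample images: each test image went through essentially the same steps as
the M-file listed at the end of this report. In batch form the procedure looks roughly like
the following, where the folder and file names are placeholders and RGB input images
are assumed.
% Sketch of the per-image procedure run over a folder of road test images.
files = dir('roads/*.jpg');                        % placeholder folder name
for k = 1:numel(files)
    I = imread(fullfile('roads', files(k).name)); % assumes RGB images
    C = rgb2hsv(I);                                % same intensity extraction as the M-file
    A = C(:,:,3);
    E1 = edge(A, 'prewitt');
    E2 = edge(A, 'sobel');
    E3 = edge(A, 'log');
    E4 = edge(A, 'canny');
    imwrite(double([E1 E2; E3 E4]), fullfile('roads', ['ed_' files(k).name]));
end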
Note: All edge detection result images for automated steering are shown in the following
order: Prewitt (top), Sobel, LoG, Canny (bottom).
In simple cases with no noise or obstacles, Sobel and Prewitt perform very well.
With the introduction of noise and obstacles, Canny and LoG still extract a reasonable road outline.
With obstacles but no surrounding noise, the Prewitt and Sobel operators function correctly.
The Canny operator still extracts a reasonable road outline from a high amount of noise.
With many road obstacles, Canny still extracts the road edges.
Bumper to Bumper Driving
To test the usefulness of edge detection for bumper to bumper driving, I applied the
four edge detection algorithms to a series of rear-view pictures of vehicles. First,
passenger vehicles were used, including cars, trucks, vans, and SUVs. Next, commercial
vehicles were tested to see whether the same edge detection algorithm would work for all
vehicles. Finally, I looked for situations where background noise would disrupt the
algorithm's effectiveness, to see how useful this method is in real-world scenarios. I am
looking for the outline of the car with little surrounding noise. This outline would be
used by a computer program to extract the centerline and width of the vehicle. The
following is a sample set of my results.
Note: All edge detection result images for bumper to bumper driving are shown in the
following order: Prewitt (top left), Sobel (top right), LoG (bottom left), Canny (bottom
right).
Shadows and similarity with the background affect the images considerably.
Note that for Sobel and Prewitt, the outline is distinguishable from adjacent vehicles.
Outlines are affected by low light and shadows.
Conclusion
After testing both of these situations, a few issues affected both of my tests. First, it
is difficult to pick one edge detection operator for every situation. As images became
more complicated, the LoG and Canny operators performed better than Sobel or Prewitt,
but in simpler images the Sobel and Prewitt operators performed better. Because of this,
a system would have to be implemented that chooses which operator is best for each
image; one rough way to do that is sketched below.
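The selection rule below is only one hypothetical possibility: it uses the density of Sobel
edge pixels as a crude measure of scene complexity and falls back to Canny when the
scene looks busy. The cutoff value is an arbitrary example, not something tuned on my
test data.
% Hypothetical operator selector. A is the intensity image prepared as in the
% M-file at the end of the report; 0.08 is an assumed example cutoff.
density = nnz(edge(A, 'sobel')) / numel(A);   % fraction of pixels marked as edges
if density > 0.08
    E = edge(A, 'canny');    % busy scene: Canny handled high-noise images best
else
    E = edge(A, 'sobel');    % simple scene: Sobel/Prewitt were cleaner here
end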
Second, shadows and low-light situations affected my results considerably. The
desired outlines did not appear as they should when shadows were present or when the
entire image was dark. One solution could be to simply boost the intensity of darkened
pictures before edge detection. I was unable to test this idea due to time constraints, and
it still would not solve the problem of shadows.
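A simple version of that brightening idea, which I did not get to test, would stretch or
equalize the HSV value channel before running the edge operators; imadjust and histeq
are the standard MATLAB tools for this, and the file name below is a placeholder.
% Untested idea: brighten a dark frame before edge detection via the value channel.
C = rgb2hsv(imread('dark_road.jpg'));   % 'dark_road.jpg' is a placeholder filename
V = C(:,:,3);
Vboost = imadjust(V);                   % stretch intensities to the full [0,1] range
% Vboost = histeq(V);                   % alternative: histogram equalization
E = edge(Vboost, 'canny');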
Automated Steering
Overall, I believe that edge detection is a viable option for simplifying input images
for a computer to use in an automated steering system. However, there are some
problems that need to be addressed.
In the simplest test cases, the Sobel and Prewitt operators did very well at extracting
the outline of the road. In more complicated test cases, these operators failed badly,
especially where there was a lot of surrounding noise (trees around the road). The
opposite is true for the LoG and Canny operators: in the simplest cases they detected
noise in the road that should have been passed over as irrelevant, while in the complicated
cases, where Sobel and Prewitt failed, LoG and Canny performed very well. Also, in the
case of very high surrounding noise, Canny was still able to extract a good outline of the
road. Because of this, I believe that the Canny operator is the best choice for automated
steering systems.
That being said, there is still room for improvement. None of the four operators
produced an adequate outline for roads in disrepair. This is an area that would require
additional image processing to recover a suitable image.
Bumper to Bumper Driving
In bumper to bumper driving situations, I believe that the Prewitt operator could be
used to find the outline of a vehicle. The other three operators also extracted the outline
of the vehicle, but they included a lot of noise that is unnecessary for measuring the
centerline and width of the vehicle. The Prewitt result still contained background noise
that would have to be filtered out or ignored by the system, but much less than the
others. This filtering could be done either after extraction, as sketched below, or through
another image processing technique.
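One possible post-extraction cleanup, again only a sketch rather than something I tested,
removes small connected groups of edge pixels so that the large vehicle outline survives;
the 50-pixel size cutoff is an assumed example value.
% Hypothetical cleanup: drop small connected components of edge pixels.
% A is the intensity image prepared as in the M-file; 50 is an example cutoff.
E      = edge(A, 'prewitt');
Eclean = bwareaopen(E, 50);    % remove components smaller than 50 pixels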
The only problems encountered involved low light and shadows. In these cases,
outlines were much more difficult to extract from the image. Because of this, I don’t
believe this system would work at night.
Matlab M-file code
% Read the test image. GIF files are stored as indexed images, so convert to RGB
% before moving to HSV and taking the value (intensity) channel.
[X, map] = imread('car18.gif');
B = ind2rgb(X, map);
C = rgb2hsv(B);
A = C(:,:,3);

% Apply the four edge detection operators
MH1 = edge(A,'log');
MH2 = edge(A,'sobel');
MH3 = edge(A,'prewitt');
MH4 = edge(A,'canny');

% Tile the results (Prewitt top left, Sobel top right, LoG bottom left, Canny bottom right)
something = [MH3 MH2; MH1 MH4];
imwrite(something,'car18ed.jpg','jpg');