ECE 533 Course Project
(Digital Image Processing)
Vision-based Lane Detection using Hough Transform
By Zhaozheng Yin
Instructor: Prof. Yu Hen Hu
Dec. 12, 2003
Preface
Purpose
The purpose of the project component of this course is to demonstrate your ability to
apply the knowledge and techniques learned during this course.
Types of Projects
Applications -- Applications of image processing to a specific research area. An
application project should contain the following: (a) an explanation of the nature of
the application and why image processing is needed, (b) the image processing
techniques that can be applied to the problem at hand, (c) preliminary results. You
should have results ready to report; merely proposing to apply image processing
without any results is not acceptable.
Final report content
Introduction
Related work
Approach
Results
Summary
Reference
Appendix
Vision-based Lane Detection using Hough Transform
1. Introduction
Lane detection is an important enabling or enhancing technology in a number of
intelligent vehicle applications, including lane excursion detection and warning,
intelligent cruise control and autonomous driving.
Various lane detection methods have been proposed. They can be classified into
infrastructure-based and vision-based approaches. While infrastructure-based
approaches achieve high robustness, the construction cost of laying leaky coaxial
cables or embedding magnetic markers in the road surface is high. Vision-based
approaches with a camera on the vehicle have the advantages of using the existing lane
markings in the road environment and of sensing the road curvature in the front view.
Vision-based location of lane boundaries can be divided into two tasks: lane
detection and lane tracking. Lane detection is the problem of locating lane boundaries
without prior knowledge of the road geometry. Lane tracking is the problem of
tracking the lane edges from frame to frame given an existing model of road geometry.
Lane tracking is an easier problem than lane detection, as prior knowledge of the road
geometry permits lane tracking algorithms to put fairly strong constraints on the likely
location and orientation of the lane edges in a new image. Lane detection algorithms,
on the other hand, have to locate the lane edges without a strong model of the road
geometry, and do so in situations where there may be a great deal of clutter in the
image. This clutter can be due to shadows, puddles, oil stains, tire skid marks, etc.
This poses a challenge for edge-based lane detection schemes, as it is often
impossible to select a gradient-magnitude threshold that neither removes edges of
interest corresponding to road markings nor includes edges corresponding to
irrelevant clutter.
Detection of long thick lines, such as highway lane markings, from input images is
usually performed by local edge extraction followed by straight-line approximation.
In this conventional method, many edge elements other than lane markings are detected
when the edge-magnitude threshold is low; conversely, edge elements that should be
detected are fragmented when it is high. This makes it difficult to trace the edge
elements and to fit approximating lines to them.
2. Related work
In recent years, lane detection has been broadly studied, and many state-of-the-art
systems for detecting and tracking lane/pavement boundaries have emerged.
Most lane detection algorithms are edge-based. They rely on thresholding the
image intensity to detect potential lane edges, followed by a perceptual grouping of
the edge points to detect the lane markers of interest. In many road scenes it is not
possible to select a threshold that eliminates noise edges without also eliminating
many of the lane edge points of interest.
The University of Michigan's AI lab uses a test-bed vehicle as a platform for data
collection and testing. The approach used by Karl Kluge fits a deformable-template
model of lane structure to locate lane boundaries without thresholding the intensity
gradient information. Chris Kreucher introduced an algorithm based on
frequency-domain features that capture relevant information concerning the strength
and orientation of spatial edges.
Ohio State University's Center for Intelligent Transportation Research has also
developed two systems: one performs curve identification in the image plane and uses
heuristic optimization techniques to construct the road geometry; the other uses a
general image segmentation algorithm to isolate lane-marker segments and a voting
scheme to unite individual features into lane boundary contours.
Toyota Central R&D Labs described a lane detection method using a real-time voting
processor for a driver-assist system. For robust lane detection in various
environments, they proposed a method based on a complete search of a parameter
space.
3. Approach
Lane detection is a complicated problem under varying light and weather conditions.
In this class project we analyze the easy case first: the images are captured from a
crossover above the road, and we assume the lanes to be detected are straight, in
daytime, and under good weather conditions. The lane markings can be solid or dashed
lines. In addition to detecting the lane markers, the mid-line of each lane is
calculated to identify the position of the vehicle with respect to the lane markings,
which is useful for autonomous driving.
Fig.1 The lane detection algorithm
Fig. 1 is the flowchart of the lane detection algorithm, which is based on edge
detection and the Hough Transform. First the RGB road image is read in and converted
into a grayscale image. Then we use the global histogram to find the road background
gray and subtract it from the grayscale image to get img1. An edge operation is
executed on img1, and the lane-marking features are preserved in img2. The key
technique here is using the Hough Transform to convert the pixels in img2 from the
image coordinates (x, y) to the parameter space (ρ, θ), and then searching the Hough
space to find the long straight lines, which are lane-marking candidates. The
candidate lines are post-processed: delete the fake ones, and select one line from
each cluster of close lines as a lane marking. Finally, the lane markings are sorted
by their position in the road from left to right, and the mid-line of each lane is
computed to localize the lane.
Details of the algorithm are described below:
(1) Find the road background and subtract it from the original image
The ideal road scene looks like Fig. 2: the solid white line is the lane boundary,
the white dashed line is the lane separator, and the double yellow solid line
separates the two driving directions. Due to the perspective transform, parallel
lines in the road scene converge to the vanishing point in the image.
Fig.2 Ideal road geometry
However, the road geometry is not so ideal in the real world. For example, in
Fig. 3(a) the lane boundaries are broken; there are vertical/horizontal scratches and
other clutter on the road surface; and the vehicles on the road also affect the
detection accuracy of the road geometry.
Lane edges are the objects of interest in this work. The features of interest are
those that discriminate between lane markings and extraneous (non-lane) edges.
Fig. 3(b) is the edge image of Fig. 3(a), produced directly by Matlab's edge
function; most features of the lane markings are preserved as edges. But for
Fig. 4(a), if we use the edge function directly on the original image, much of the
lane-marking edge information is lost (Fig. 4(b)). We therefore consider other
preprocessing methods to preserve the lane-marking information before the edge
operation. Background subtraction is a solution.
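As a concrete illustration of the edge operation, the following is a minimal
gradient-magnitude sketch using a Sobel operator. This is not the report's Matlab
code: Matlab's edge function additionally performs automatic threshold selection and
edge thinning, and the fixed threshold here is an assumption for illustration only.

```python
# Minimal Sobel edge-magnitude sketch. The grayscale image is a list of
# lists of 0-255 values; output is a binary edge map of the same size.
def sobel_edges(img, threshold):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal and vertical Sobel responses.
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            # Keep pixels whose gradient magnitude exceeds the threshold.
            if (gx * gx + gy * gy) ** 0.5 >= threshold:
                out[y][x] = 1
    return out
```

Choosing the threshold is exactly the difficulty described above: a low value keeps
clutter edges, a high value fragments the lane markings.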
Fig.3 (a) One image of University Ave
Fig.4 (a) One image of Loop 4 in Beijing
Fig.3 (b) Edge image of the left image
Fig.4 (b) Edge image of the left image
We assume that most of the pixels in the image belong to the road background, so we
use the global histogram to find the gray level of the road-surface background, as
shown in Fig. 5. The gray level at the histogram maximum is taken as the background
gray.
Fig.5 Global histogram
The image obtained by subtracting the background gray from the original image is
shown in Fig. 6(a). From the edge image in Fig. 6(b) we can see that the
lane-marking edge information is preserved.
Fig.6 (a) Subtract background from original image
Fig.6 (b) Edge image of the left image
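The histogram-based background subtraction described above can be sketched as
follows (a minimal illustration; clamping negative differences to zero is an
assumption about how the subtraction is handled):

```python
def subtract_background(img):
    # Build a 256-bin global histogram of the grayscale image.
    hist = [0] * 256
    for row in img:
        for g in row:
            hist[g] += 1
    # The most frequent gray level is taken as the road background gray.
    bg = max(range(256), key=lambda g: hist[g])
    # Subtract it from every pixel, clamping negatives to zero (img1).
    return [[max(g - bg, 0) for g in row] for row in img]
```

After this step the road surface is driven toward zero while the brighter lane
markings retain a large residual, so the subsequent edge operation preserves them.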
(2) Hough Transform
The Hough transform is used in a variety of related methods for shape detection.
These methods are fairly important in applied computer vision; in fact, Hough
published his transform in a patent application, and various later patents are also
associated with the technique. Here we use it to detect straight lines. Fig. 7
illustrates the fundamental idea of converting each pixel in the image to the
parameter space. We define the origin of the image coordinates as the upper-left
point.
Fig.7 Hough transform for detecting straight lines
A count array A[ρ][θ] is constructed for the candidate lines, and some other arrays
are constructed to record each line's start/end position. Since the lane markings are
not close to the origin and are not horizontal in the image (for the autonomous
driving application, the camera is mounted on the vehicle with a front view), we only
detect straight lines under the restriction ρ ≥ 10, 30° ≤ θ ≤ 150°, which also
reduces the calculation time.
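The voting step can be sketched as follows. This is an illustration, not the
report's Matlab code: θ is treated here as the line's slope angle in degrees (so
30°-150° excludes near-horizontal lines), the normal-form parameter is then
ρ = x·cos(θ-90°) + y·sin(θ-90°), and the 1-pixel/1-degree bin sizes are assumptions.

```python
import math

# Hough accumulator sketch: each edge pixel votes for every admissible
# (rho, theta) bin; lines close to the origin (rho < rho_min) are skipped.
def hough_accumulate(edge_pixels, theta_min=30, theta_max=150, rho_min=10):
    acc = {}  # (rho, theta) -> vote count
    for (x, y) in edge_pixels:
        for theta in range(theta_min, theta_max + 1):
            n = math.radians(theta - 90)  # angle of the line's normal
            rho = int(round(x * math.cos(n) + y * math.sin(n)))
            if rho < rho_min:             # restriction: rho >= rho_min
                continue
            acc[(rho, theta)] = acc.get((rho, theta), 0) + 1
    return acc
```

Restricting θ before voting also shrinks the accumulator, which is where the
reduction in calculation time comes from.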
(3) Search in the Hough space for the long straight lines
There are many straight lines detected by Hough Transform, now we search in the
Hough space to find the long straight lines, which are lane marking candidates (shown
as red lines in Fig.8). Fig.8 (a) and (b) show the first 20 straight lines of each road,
which have the biggest count number, i.e. the lane markings include more edge pixels
than other lines in the image.
Fig.8 (a) Candidate lines
Fig.8 (b) Candidate lines & fake lines
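The search for the long lines reduces to picking the accumulator cells with the
largest vote counts. A minimal sketch (the default of 20 candidates follows the
text; the dictionary accumulator matches the sketch style used here, not any
particular implementation):

```python
# Return the n (rho, theta) bins with the largest vote counts,
# strongest first; these are the lane-marking candidates.
def top_lines(acc, n=20):
    return sorted(acc, key=acc.get, reverse=True)[:n]
```

Because a thick or slightly curved marking spreads votes over several adjacent
bins, several of these top cells typically cluster around each true lane marking,
which motivates the grouping step that follows.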
(4) Decide the lane markings and mid-line of each lane
In Fig. 8 there are many lines detected by the Hough Transform around each lane
marking; in Fig. 8(b), some lines caused by the edges of the vehicle queues are also
counted as straight lines. We need to group each line cluster into one lane marking
and delete the other fake lines.
First, we sort the lines according to their position in the image from left to
right. Second, for each group of close straight lines, we select the most probable
line as the lane marking and delete the other fake lines (the distance between two
lines and their count numbers are used as criteria to judge whether a line is a fake
lane marking). Finally, the mid-line of each lane is calculated from the sorted lane
markings. Fig. 9 shows the detection result for the lane markings (shown as red
lines) and the mid-line of each lane (shown as green lines). In Fig. 9(b), the fake
lines caused by the vehicle queue on the road have been deleted.
Fig.9 (a) Decide the lane marking
Fig.9 (b) Decide the lane marking
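The post-processing steps above can be sketched as follows. The 15-pixel merge
distance and the simple averaging of (ρ, θ) between adjacent markings to form a
mid-line are illustrative assumptions, not values from the report.

```python
# Post-processing sketch: sort candidate lines left to right by rho,
# merge each cluster of nearby lines (keeping the line with the most
# votes), then take the midpoint between adjacent markings as the
# mid-line of each lane.
def postprocess(lines, counts, merge_dist=15):
    lines = sorted(lines)          # (rho, theta) tuples, left to right
    markings = []
    for line in lines:
        if markings and abs(line[0] - markings[-1][0]) < merge_dist:
            # Same cluster: keep whichever line has more votes.
            if counts[line] > counts[markings[-1]]:
                markings[-1] = line
        else:
            markings.append(line)
    midlines = [((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
                for a, b in zip(markings, markings[1:])]
    return markings, midlines
```

Sorting by ρ before clustering is what makes a single left-to-right pass
sufficient: each line only needs to be compared against the cluster to its left.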
4. Results
Fig. 10(a) and (b) are the final detection results for Fig. 3(a) and Fig. 4(a). In
Fig. 10(a), the lane markings detected by the algorithm almost match the lane
markers in the real world. Because the lane markers in this image are not completely
straight and some noise edge information exists around the lane markings, there is a
small offset between the detected lines and the lane markings in the world. In
Fig. 10(b) the detected lane markings match the lane markers in the scene very well,
except for the leftmost lane, which is not detected by the edge function.
Fig.10 (a) Detection result of Fig. 3(a)
Fig.10 (b) Detection result of Fig. 4(a)
We have shown in Fig. 6(b) that the edge information is preserved by subtracting the
road background gray from the original image. The following is an example showing
that restricting the parameter space to a range (e.g., ρ ≥ 10, 30° ≤ θ ≤ 150°) can
reduce the bad effect of road-surface scratches. As shown in Fig. 11(a), there are
many vertical/horizontal scratches on the road surface, and these scratches are
evident in the edge image of Fig. 11(b). When there is no limit on the angle θ, the
scratches are counted as lane-marking candidates (Fig. 11(c)). If instead we
restrict θ to a range, the Hough Transform detects the most probable lane-marking
candidates (Fig. 11(d)). Fig. 11(e) and (f) show the final result.
(a) One image on East University Ave
(b) Edge image of left image
(c) Without restriction on θ
(d) With θ restricted
(e) Final result in the edge image
(f) Final result in the original image
Fig. 11 The effect of the scratches
Compared with the above example, Fig. 12 is another example with the opposite
driving direction, and there are also many long horizontal/vertical scratches on the
road surface. Through background subtraction and the Hough Transform, we can still
detect the lane markings without the bad effect of the horizontal/vertical
scratches. For the left lane marking in Fig. 12(d), the detected line is a little
far from the lane marking in the scene, because much of this lane marking's edge
information is lost in Fig. 12(b) and Fig. 12(c).
Fig.12 (a) One image on University Ave.
Fig.12 (b) Hough Transform
Fig.12 (c) Final result
Fig.12 (d) Final result
5. Summary
From the above results, we find that the algorithm works well for these cases. The
key steps include: finding the background gray range, background subtraction, edge
detection, the Hough Transform, finding the long lane-marking candidates, sorting
the lane-marking candidates, grouping each cluster of lines into one line, deleting
fake lines, and calculating the mid-line of each lane. Since the algorithm is
edge-based, it is sensitive to the edge information, as in the example of
Fig. 12(d). To improve the robustness of the algorithm, we can consider other
methods in the future, such as deformable templates, the multi-resolution Hough
Transform, B-snakes, and multi-sensor fusion.
Reference
1. Karl Kluge, Sridhar Lakshmanan, "A deformable-template approach to lane detection",
Proceedings of the Intelligent Vehicles '95 Symposium, 25-26 Sept. 1995, pp. 54-59.
2. Kreucher, C.; Lakshmanan, S.; "A frequency domain approach to lane detection in
roadway images", Proceedings of the 1999 International Conference on Image Processing
(ICIP 99), 24-28 Oct. 1999, vol. 2, pp. 31-35.
3. Gonzalez, J.P.; Ozguner, U.; "Lane detection using histogram-based segmentation and
decision trees", Proceedings of the 2000 IEEE Intelligent Transportation Systems
Conference, 1-3 Oct. 2000, pp. 346-351.
4. Bertozzi, M.; Broggi, A.; "Real-time lane and obstacle detection on the GOLD
system", Proceedings of the 1996 IEEE Intelligent Vehicles Symposium, 19-20 Sept.
1996, pp. 213-218.
5. Bertozzi, M.; Broggi, A.; "GOLD: a parallel real-time stereo vision system for
generic obstacle and lane detection", IEEE Transactions on Image Processing, vol. 7,
no. 1, Jan. 1998, pp. 62-81.
6. Yu, B.; Jain, A.K.; "Lane boundary detection using a multiresolution Hough
transform", Proceedings of the 1997 International Conference on Image Processing,
26-29 Oct. 1997, vol. 2, pp. 748-751.
7. Yue Wang; Eam Khwang Teoh; Dinggang Shen; "Lane detection using B-snake",
Proceedings of the 1999 International Conference on Information Intelligence and
Systems, 31 Oct.-3 Nov. 1999, pp. 438-443.
8. Ma, B.; Lakshmanan, S.; Hero, A.; "Road and lane edge detection with multisensor
fusion methods", Proceedings of the 1999 International Conference on Image Processing
(ICIP 99), 24-28 Oct. 1999, vol. 2, pp. 686-690.
9. Ma, B.; Lakshmanan, S.; Hero, A.O., III; "Simultaneous detection of lane and
pavement boundaries using model-based multisensor fusion", IEEE Transactions on
Intelligent Transportation Systems, vol. 1, no. 3, Sept. 2000, pp. 135-147.
10. Takahashi, A.; Ninomiya, Y.; Ohta, M.; Tange, K.; "A robust lane detection using
real-time voting processor", Proceedings of the 1999 IEEE/IEEJ/JSAI International
Conference on Intelligent Transportation Systems, 5-8 Oct. 1999, pp. 577-580.
Appendix
One output sample: