
sensors
Article
Road Lane Detection by Discriminating Dashed and
Solid Road Lanes Using a Visible Light
Camera Sensor
Toan Minh Hoang, Hyung Gil Hong, Husan Vokhidov and Kang Ryoung Park *
Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu,
Seoul 100-715, Korea; hoangminhtoan@dongguk.edu (T.M.H.); hell@dongguk.edu (H.G.H.);
vokhidovhusan@dongguk.edu (H.V.)
* Correspondence: parkgr@dongguk.edu; Tel.: +82-10-3111-7022; Fax: +82-2-2277-8735
Academic Editor: Felipe Jimenez
Received: 26 April 2016; Accepted: 15 August 2016; Published: 18 August 2016
Abstract: With the increasing need for road lane detection used in lane departure warning systems
and autonomous vehicles, many studies have been conducted to turn road lane detection into a
virtual assistant to improve driving safety and reduce car accidents. Most of the previous research
approaches detect the central line of a road lane and not the accurate left and right boundaries of
the lane. In addition, they do not discriminate between dashed and solid lanes when detecting the
road lanes. However, this discrimination is necessary for the safety of autonomous vehicles and the
safety of vehicles driven by human drivers. To overcome these problems, we propose a method for
road lane detection that distinguishes between dashed and solid lanes. Experimental results with the
Caltech open database showed that our method outperforms conventional methods.
Keywords: road lane detection; left and right boundaries of road lane; dashed and solid road lanes;
autonomous vehicles
1. Introduction
Accurate detection of road lanes is an important issue in lane departure warning systems and
driver assistance systems. Detecting lane boundaries enables vehicles to avoid collisions and issue a
warning if a vehicle passes a lane boundary. However, lane boundaries are not always clearly visible.
This can be caused, for instance, by poor road conditions, insufficient quantity of paint used for marking
the lane boundary, environmental effects (e.g., shadows from objects like trees or other vehicles),
or illumination conditions (street lights, daytime and nighttime conditions, or fog). These factors make
it difficult to discriminate a road lane from the background in a captured image. To deal with these
problems, current research applies various methods ranging from low-level morphological operations
to probabilistic grouping and B-snakes [1–3]. Detailed explanations of previous works are given in
Section 2.
2. Related Works
The methods for lane departure warning can be classified into two categories: sensor-based
methods and vision-based methods. Sensor-based methods use devices such as radar, laser sensors,
and even global positioning systems (GPS) to detect whether a vehicle departed a lane based on
the information of the vehicle ahead or the position calculated by GPS. These devices can also be
used for obstacle detection. Their main advantage is their scanning distance (up to 100 m) and their
high reliability in dust, snow, and other poor weather conditions. However, these methods cannot
accurately detect the lane positions, and the information they provide is unreliable inside a tunnel or if
no other vehicle is ahead. Therefore, most of the recent research approaches have been focusing on
developing vision-based solutions and using additional sensors to enhance the results.
The vision-based methods detect road lanes based on the features of a camera image such as color
gradient, histogram, or edge. We can divide vision-based solutions into two main classes. One is the
model-based methods [1–17], which create a mathematical model of the road structure. They use the
geometric coordinates of the camera and the road as input parameters and depend on their accuracy.
To determine the parameters, the initial configuration information is merged with feature points of
the lane markings taken from an image of the road. For example, Xu et al. used a B-spline based road
model to fit the lane markings and a maximum deviation of position shift method for identifying the
road model’s control points [1]. Li et al. used an extended Kalman filter in addition to a B-spline curves
model to guarantee a continuous lane detection [2]. Tan et al. focused on detecting a curve lane using
improved river flow and random sample consensus (RANSAC) under challenging conditions based on
a hyperbola-pair lane model [5]. Zhou et al. presented a lane detection method based on a geometrical
model of the lane and Gabor filter [6]. In earlier work [13], they had proposed a lane detection method
that used gradient-enhancing conversion to guarantee illumination-robust performance. In addition,
they used an adaptive canny edge detector, a Hough transformation (HT), and a quadratic curve
model. Li et al. employed an inverse perspective mapping (IPM) model [14] to detect a straight line
in an image. Chiu et al. introduced a lane detection method using color segmentation, thresholding,
and fitting with a quadratic function model [15]. Mu et al. determined candidate regions of lane
markings by object segmentation, applied a Sobel operator to extract redundant edges, and used
piecewise fitting with a linear or parabolic model to detect lane markers [17]. With model-based
methods, lane detection becomes a problem of solving mathematical models. The accuracy of the
detection depends not only on the initial input parameters of the camera or the shape of the road but
also on the feature points extracted from a captured image of the road.
The other class of vision-based methods are the feature-based methods [18–23], which can
discriminate feature points of lane markings from the non-lane areas by characteristic features of
the road, such as color, gradient, or edge. Chang et al. applied a canny edge detector to investigate
boundaries and proposed an edge-pair scanning method and HT to verify that the edges belonged
to lane markings [18]. In previous research [19], they had introduced a method for determining the
adaptive road region-of-interest (ROI) and locating the road lane. Chen et al. proposed a method to
detect a lane with a downward looking color camera and a binormalized adjustable template correlation
method [20]. Benligiray et al. suggested detecting lanes by detecting line segments based on the EDLines
algorithm [21]. In previous research [22], they had detected lanes using canny edge detector and HT
based on vanishing points. Sohn et al. proposed an illumination invariant lane detection algorithm
using ROI generation based on vanishing point detection and lane clustering [23]. The feature-based
methods are simple, but they require a clear and strong color contrast of the lane and good road
conditions with little changes in the surrounding environment. Most of the previous (model-based
and feature-based) methods detect the central line of the road lane and do not locate the accurate
left and right boundaries of the road lane. In particular, they do not discriminate the dashed and
solid lanes when detecting the road lanes. In previous research [24], lanes were classified based on
a linear–parabolic lane model, automatic on-the-fly camera calibration, an adaptive smoothing
scheme, pairs of local maxima–minima of the gradient, and a Bayesian classifier using mixtures of
Gaussians. Although this method can classify the kinds of lane, such as dashed, solid, dashed-solid,
solid-dashed, and double solid ones, it does not detect the starting and ending positions of the lane.
That is, given the lane region within the ROI of an image, the method classifies only the kind of lane
without detecting its exact starting and ending points. This is because correct classification by the
Bayesian classifier is possible even with a small detection error in the starting and ending points,
and this small error can be compensated by the classifier. Therefore, the study in [24] did not report
the accuracy of detecting the starting and ending points, but only the classification error for the five
kinds of road lane.
Different from their method, we correctly detect the starting and ending positions of a lane with the
discrimination of dashed and solid lanes. By studying the pros and cons of the existing research approaches,
we decided to use a feature-based lane detection method and to detect a lane’s accurate left and right
boundaries by discriminating the dashed and solid lanes. Our research is novel in the following four
aspects compared to other work.
- In most previous studies, only the centerline of the left and right boundaries of a road lane is detected. Different from them, our method can detect the accurate left and right boundaries of a road lane.
- In most previous studies, the starting and ending positions of a road lane were detected without discriminating the dashed and solid ones. In some research, only the kinds of road lane, such as dashed, solid, dashed-solid, solid-dashed, and double solid ones, were classified without detecting the starting and ending positions of a road lane. Different from them, our method correctly detects the starting and ending positions of a lane with the discrimination of dashed and solid lanes.
- We can remove incorrect line segments using the line segments' angles and by merging the line segments according to their inter-distance. In order to detect a curve lane, the angular condition is adaptively changed within the upper area of the ROI based on tracing information of the angular changes of line segments.
- Using a perspective camera model, an adaptive threshold is determined to measure the distance and used to detect the final line segments of the road lane's left and right boundaries.
Table 1 presents a summary of our comparison of existing research on lane detection and our
proposed method.
The remainder of this paper is organized as follows. We provide an overview of the proposed
method and an algorithm for road lane detection in Section 3. Section 4 discusses the experimental
results, and Section 5 presents the conclusions.
Table 1. Comparison of previous and the proposed methods.

Not Discriminating Dashed and Solid Lanes: Model-Based Method
- Methods: B-spline model [1–3,16]; hyperbola-pair lane model [5]; lane geometrical model [6]; vehicle directional control model (DIRCON) [9]; quadratic function model [13,15]; IPM model [10–12,14,25]; linear or parabolic model [17].
- Advantages: More accurate results of lane detection can be guaranteed using mathematical models. Its performance is less affected by noises caused by shadows, water areas, and daylight.
- Disadvantages: The accuracy of the detection depends not only on the initial input parameters of the camera or the shape of the road, but also on the feature points extracted from a captured road image.

Not Discriminating Dashed and Solid Lanes: Feature-Based Method
- Methods: Using edge features [18,22], template correlation [20], the EDLines method [21], and illumination invariant lane features [23].
- Advantages: Performance is not affected by the initial input parameters of the camera or the model parameters. Simple and fast processing speed.
- Disadvantages: The methods require a clear and strong color contrast of a lane and good road conditions with little effect from changes in the surrounding environment.

Common disadvantages of both categories: They detect the central line of a road lane rather than locating the accurate left and right boundaries of the road lane, and they do not discriminate the dashed and solid lanes when detecting the road lanes.

Discriminating Dashed and Solid Lanes (Proposed Method)
- Methods: Detecting the lanes based on line segments. Incorrect line segments are removed based on the line segments' angles and by merging the line segments according to their inter-distance. Using an adaptive threshold for detecting the correct lane boundaries.
- Advantages: Detecting the accurate left and right boundaries of a road lane. Discriminating the dashed and solid lanes when detecting the road lanes.
- Disadvantages: More processing power is necessary for detecting the left and right boundaries of a road lane by discriminating the dashed and solid lanes compared to the methods that only detect the central line of the road lane.
3. Proposed Method

3.1. Proposed Method

An overview of the proposed method is presented in Figure 1.

Figure 1. Overall procedure of the proposed method.
Figure 1 depicts the whole procedure of our proposed method. In our lane detection system, the algorithm has two main stages, main processing (Step 1 and Step 2) and post-processing (Step 3 to Step 5). In the first step, we define the ROI of the captured image and locate the line segments in the ROI. Then, we remove incorrect line segments based on the angles of the line segments and by merging them according to their inter-distance. Finally, using the perspective camera model, we determine the adaptive threshold to measure the distance and use it to detect the final line segments of the left and right boundaries of the road lane.
3.2. Determination of ROI
Defining the ROI in the captured image gives us two advantages. First, lanes always appear within the predetermined region of the image when the position and direction of the camera are fixed. This is shown in Figures 2–4. Therefore, we do not need to perform lane detection in the whole image, but only in the restricted area. If the lane detection is done only in the ROI and not in the whole image, the effect of environmental noises such as rain, fog, or poor weather conditions can be lessened. In addition, the complexity and computational time of the lane detection can be significantly reduced.

Previous research defined the ROI based on vanishing points [13]. However, this takes a lot of processing time and might determine an incorrect ROI if the vanishing points were incorrect. Our research is mainly focused on detecting the starting and ending positions of straight and curve lanes with the discrimination of dashed and solid lanes within the ROI. Therefore, in our research, we do not automatically estimate the ROI, but use the predetermined ROI shown in the image (the red box in Figure 4b) for lane detection. In our experiments, we used two kinds of open databases, the Caltech and SLD datasets (see details in Section 4). All images have the size of 640 × 480 pixels. Based on these dimensions, with the Caltech dataset, the left-upper position (x and y coordinates) of the ROI is determined as (100, 245), and the width and height of the ROI are empirically determined as 440 and 100 pixels. With the SLD dataset, the left-upper position (x and y coordinates) of the ROI is determined as (120, 320), and the width and height of the ROI are also empirically determined as 440 and 100 pixels. In the case of using images of different size, the left-upper position, width, and height of the ROI are proportionally changed based on the width and height of the original image.
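As a minimal illustration of this step (not code from the paper), the following C++/OpenCV sketch crops the predetermined Caltech ROI and scales its position and size proportionally when the input resolution differs from 640 × 480; the function name and file name are placeholders.

#include <opencv2/opencv.hpp>

// Predetermined ROI for the Caltech dataset at 640 x 480 (Section 3.2):
// left-upper corner (100, 245), width 440, height 100. For other image sizes
// the ROI is scaled proportionally to the original width and height.
cv::Rect laneRoi(const cv::Size& imageSize)
{
    const double sx = imageSize.width / 640.0;
    const double sy = imageSize.height / 480.0;
    return cv::Rect(cvRound(100 * sx), cvRound(245 * sy),
                    cvRound(440 * sx), cvRound(100 * sy));
}

int main()
{
    cv::Mat frame = cv::imread("frame.png");        // placeholder input image
    if (frame.empty()) return -1;
    cv::Mat roi = frame(laneRoi(frame.size()));     // lane detection runs only inside this region
    cv::imwrite("roi.png", roi);
    return 0;
}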
Figure 2. Examples of input images: (a) input image without road markings, crosswalk and shadow; (b) input image including road markings; (c) input image including crosswalk; and (d–f) input images including shadow.
Figure 3. Region without or with road.
Figure 4. Defining the region-of-interest (ROI) in input image: (a) input image; (b) ROI; and (c) ROI image.
3.3. Lane Detection by Locating Line Segments

Some of the existing research approaches convert an input image into an IPM or bird's eye view image [10–12,14,25] to represent lanes as vertical and parallel in the image. However, in some cases, the lanes cannot be vertical and parallel because we would need to adjust the set of parameters for obtaining the IPM or bird's eye view image according to the relative position of the lane to the camera. The position of a vehicle and its camera can slightly change between two road lanes. Moreover, the camera parameters need to be known in advance to obtain an accurate IPM or bird's eye view image. In other research approaches, lanes were detected using HT [2,8,13–15,18,19,22,23]. However, it takes a long processing time and detects too many incorrect line segments, as shown in Figure 5a. In addition, it cannot detect a lane by discriminating between the dashed and solid lanes.

To solve these problems, we use a line segment detector (LSD) [26,27] for locating the line segments in an image. The LSD method is supposed to work on any digital image without depending on parameter tuning. The LSD algorithm controls the number of false detections based on previous research [28], and uses an a contrario validation method based on Desolneux and coworkers' research [29,30]. Let S = {s1, s2, . . . , sk} be the set of line segments extracted from an ROI image using the LSD algorithm. Each line segment si (i = 1, 2, . . . , k) is defined as:

si = {x1i, y1i, x2i, y2i, θi}, (i = 1, 2, . . . , k)   (1)

where (x1i, y1i) and (x2i, y2i) are the coordinates of the starting point and the ending point of line segment si, respectively. θi is the angle of line segment si and is calculated by Equation (2):

θi = (180/π) arctan((y2i − y1i)/(x2i − x1i)), (i = 1, 2, . . . , k)   (2)

As shown in Figure 5, the LSD method can detect more correct line segments than the HT method.
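As a hedged sketch of this step, the code below extracts line segments with OpenCV's built-in LSD wrapper (cv::createLineSegmentDetector, available in OpenCV 3.x, the version used in Section 4) and computes the angle of Equation (2) for each segment; the Segment structure and file name are illustrative choices of this sketch, not definitions from the paper.

#include <opencv2/opencv.hpp>
#include <cmath>
#include <iostream>
#include <vector>

// A line segment s_i = {x1i, y1i, x2i, y2i, theta_i} as in Equation (1).
struct Segment { cv::Point2f p1, p2; double theta; };

int main()
{
    cv::Mat gray = cv::imread("roi.png", cv::IMREAD_GRAYSCALE);   // ROI image from Section 3.2
    if (gray.empty()) return -1;

    // LSD works without parameter tuning; each detection is (x1, y1, x2, y2).
    cv::Ptr<cv::LineSegmentDetector> lsd = cv::createLineSegmentDetector();
    std::vector<cv::Vec4f> lines;
    lsd->detect(gray, lines);

    std::vector<Segment> S;
    for (const cv::Vec4f& l : lines) {
        Segment s{ cv::Point2f(l[0], l[1]), cv::Point2f(l[2], l[3]), 0.0 };
        // Equation (2) uses (180/pi)*arctan((y2 - y1)/(x2 - x1)); atan2 avoids a
        // division by zero for vertical segments, and the result is normalized
        // to [0, 180) so that it matches the angular ranges of Section 3.4.1.
        s.theta = std::atan2(s.p2.y - s.p1.y, s.p2.x - s.p1.x) * 180.0 / CV_PI;
        if (s.theta < 0.0) s.theta += 180.0;
        S.push_back(s);
    }
    std::cout << S.size() << " line segments extracted" << std::endl;
    return 0;
}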
Figure 5. Comparisons of line detection: (a) line detection using conventional edge detector and Hough transformation (HT); and (b) line extraction using line segment detector (LSD).
We did not quantitatively compare the performance of line segment detection by LSD and HT. Instead, we checked the performance with some images of the Caltech and SLD databases (used in our experiments of Section 4). We used the already published LSD method. However, the main contributions of our research are not the LSD method but the post-processing used to detect the accurate starting and ending points of straight and curve lanes with the discrimination of the broken (dashed) and unbroken (solid) lanes, as shown in Section 3.4.
3.4. Correct Lane Detection Based on Post-Processing

We remove the line segments that were detected incorrectly and locate the accurate line segments based on the features of the road lane, such as angle and inter-distance between the left and right boundaries of the lane.
3.4.1. Eliminating Incorrect Line Segment Based on Angle
As Figure 5b illustrates, the LSD algorithm also detects many incorrect line segments. We need to eliminate the incorrect line segments to reduce the computational time and complexity. In our research, we first use the angle of a road lane to eliminate the incorrect line segments. In Figure 6, the left side (the rectangular region of acgh) and the right side (the rectangular region of cefg) are divided, and two angular ranges (θleft (25°~75°) and θright (105°~155°)) for the correct line segments are determined on each side, as shown in Equations (3) and (4). The same parameters of angular ranges of Equations (3) and (4) are used in the two open databases of our experiments in Section 4. Because the vehicle carrying the camera usually moves between the left and right road lanes, the left and right road lanes can be estimated to be included in the areas of triangles (bdh) and (bdf) of Figure 6, respectively. Here, θleft defines the angular area between the two lines bh and dh of Figure 6. θright defines the angular area between the two lines bf and df of Figure 6.

Sleft = {siL | x1i ≤ WROI/2 − 1, θiL ∈ [25°, 75°]}, (i = 1, 2, . . . , p) (if "HROI/3 ≤ y1i ≤ HROI − 1")
Sleft = {siL | x1i ≤ WROI/2 − 1, θiL ∈ [θiL* − 10°, θiL* + 10°]}, (i = 1, 2, . . . , p) (else if "0 ≤ y1i < HROI/3")   (3)

Sright = {siR | x1i > WROI/2, θiR ∈ [105°, 155°]}, (i = 1, 2, . . . , q) (if "HROI/3 ≤ y1i ≤ HROI − 1")
Sright = {siR | x1i > WROI/2, θiR ∈ [θiR* − 10°, θiR* + 10°]}, (i = 1, 2, . . . , q) (else if "0 ≤ y1i < HROI/3")   (4)

where x1i is the x coordinate of the starting point of line segment si in Equation (1), and θi is the angle of line segment si in Equation (2). In addition, WROI is the width of the ROI region (the distance between a and e (or h and f) of Figure 6). As explained in Section 3.2, WROI is 440 pixels in our research. HROI is the height of the ROI region (the distance between a and h (or e and f) of Figure 6). As explained in Section 3.2, HROI is 100 pixels in our research.

Each line segment has two positions, namely its starting and ending positions. In our research, we consider the higher position (i.e., the position with a lower y coordinate) to be the starting point in the image, with the origin (0, 0) defined as the left-uppermost corner. All line segments whose starting point has an x-coordinate between 0 and WROI/2 − 1 are considered to belong to the left side (the rectangular region of acgh of Figure 6), as shown in Equation (3). All other line segments belong to the right side (the rectangular region of cefg of Figure 6), as shown in Equation (4). That is, the sets of line segments (siL and siR) satisfying the conditions of Equations (3) and (4) are obtained as Sleft and Sright, respectively, and are considered correct line segments. Figure 7 shows the example where the incorrect line segments are removed based on angle features.

As explained at the end of this Section, the conditions of Equations (3) and (4) are divided into two parts (if "HROI/3 ≤ y1i ≤ HROI − 1" and else if "0 ≤ y1i < HROI/3") according to y1i (the y coordinate of the starting point of line segment si in Equation (1)) in order to solve the detection problem of a curve lane. Detailed explanations are given at the end of this Section.

Figure 6 denotes the angular range of the correct line segments from a camera view. All valid line segments should lie within these two areas, which are defined by the angular ranges of θleft (the angular area between the two lines bh and dh) and θright (the angular area between the two lines bf and df). We consider the line segments outside these areas as incorrect and eliminate them.

Figure 6. Angle feature used for eliminating incorrect line segments.

Figure 7. Removing the incorrect line segments based on angle features: (a) image including the detected line segments; and (b) image with the incorrect line segments removed.

As shown in [31,32], a curve lane having severe curvature is usually observed only in the upper area of the ROI. Therefore, our method uses the 2nd ones of Equations (3) and (4) (condition of else if "0 ≤ y1i < HROI/3") with Figure 6 within only the upper area of the ROI (where the curve lane having severe curvature can be observed). That is, because the width and height of the ROI are, respectively, 440 and 100 pixels in our research (as explained in Section 3.2), the upper-left and lower-right positions of the upper area in the ROI are (0, 0) and (439, 33 (≈100/3)).
In this upper area, the angular ranges of θright and θleft are adaptively changed, as shown in the 2nd ones of Equations (3) and (4) (condition of else if "0 ≤ y1i < HROI/3"). That is, based on the tracing information (θiL* and θiR* of the 2nd ones of Equations (3) and (4)) of the angle of the previous line segment (whose position is immediately below, but closest to, the current line segment to be traced), our method adaptively changes the angular range (θiL and θiR) in this upper area, as shown in "θiL ∈ [θiL* − 10°, θiL* + 10°]" and "θiR ∈ [θiR* − 10°, θiR* + 10°]" of the 2nd ones of Equations (3) and (4).

Based on this, our method can correctly detect a curve lane. In addition, through the line combination algorithm of Section 3.4.2, the pieces of line segments from a curve line can be correctly combined as a curve line.
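To make the two-part condition of Equations (3) and (4) concrete, here is a simplified, illustrative C++ check for a single segment (reusing the segment fields from the earlier sketch); the reference angles thetaRefLeft and thetaRefRight stand for θiL* and θiR* and are assumed to be supplied by the caller rather than traced here.

// Illustrative filter for one segment inside the ROI (WROI x HROI), following
// Equations (3) and (4). In the lower two thirds of the ROI the angular range
// is fixed (25-75 deg on the left side, 105-155 deg on the right side); in the
// upper third it becomes thetaRef +/- 10 deg, where thetaRef is the angle of
// the traced segment immediately below the current one.
bool keepSegment(float x1, float y1, double theta,
                 int wRoi, int hRoi, double thetaRefLeft, double thetaRefRight)
{
    const bool leftSide  = x1 <= wRoi / 2.0 - 1.0;
    const bool lowerArea = y1 >= hRoi / 3.0;      // "if" branch of Equations (3) and (4)
    double lo, hi;
    if (lowerArea) {
        lo = leftSide ? 25.0 : 105.0;
        hi = leftSide ? 75.0 : 155.0;
    } else {                                      // "else if" branch: upper (curve) area
        const double thetaRef = leftSide ? thetaRefLeft : thetaRefRight;
        lo = thetaRef - 10.0;
        hi = thetaRef + 10.0;
    }
    return theta >= lo && theta <= hi;
}

Segments for which keepSegment() returns false are discarded, which corresponds to the result shown in Figure 7b.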
3.4.2. Combining the Pieces of Line Segments

Due to the impact of shadows, illumination changes, or incorrect detection of line segments, we can detect multiple line segments from one boundary of a road lane. Our system checks the conditions to determine whether two or more line segments should be combined into one longer line segment.

Figure 8 shows three cases where two or more line segments are combined into one longer one. The first two cases make one straight line, whereas the last one makes a curve line. As mentioned in the previous section, we assume that the starting point of a line segment has a lower y-coordinate than the ending point. The dashed lines in Figure 8 denote line segments, whereas the solid lines represent the merged line segment. Blue and green circles represent the starting point and ending point, respectively.

Figure 8. Angle feature used for eliminating incorrect line segments.

To evaluate the connectivity of two line segments, the following two conditions should be satisfied based on the distance threshold (thrdst) and the angle threshold (thrangle). The thresholds were obtained empirically. In our research, thrdst and thrangle are 3 pixels and 2 degrees, respectively, on both the Caltech and SLD databases which were used in our experiments (see details in Section 4). However, thrdst is proportionally changed based on the width and height of the original image.

We assume that the Euclidean distance between the starting point of the ith line and the ending point of the jth line, or between the ending point of the ith line and the starting point of the jth line, is diffdst. If diffdst is less than thrdst (the first condition), our system checks the second condition based on the angle in order to decide whether the two line segments are part of a straight line or a curve line. As explained in the previous section, our system can obtain the angle of each line segment and compare the angular difference of the two line segments (called diffangle) to a predefined threshold (thrangle). Line H is the new line that is obtained by combining the ith line with the jth line.

Line H = straight line, if diffangle ≤ thrangle (cases 1 and 2 of Figure 8); curve line, otherwise (case 3 of Figure 8)   (5)

The below Algorithm 1 provides more details. Rough explanations of this algorithm are as follows. If the Y starting position of line I is lower than that of line J, the distance between the ending position of line I and the starting one of line J is measured. If this distance is less than the threshold (thrdst) and the angle between these two lines is less than the threshold (thrangle), these two lines are combined as a new straight line (Case 1 of Figure 8). If the distance condition is satisfied, but the angle condition is not, these two lines are combined as a new curve line (Case 3 of Figure 8).

If the Y starting position of line I is higher than that of line J, the distance between the starting position of line I and the ending position of line J is measured. If this distance is less than the threshold (thrdst) and the angle between these two lines is less than the threshold (thrangle), these two lines are combined as a new straight line (Case 2 of Figure 8). If the distance condition is satisfied, but the angle condition is not, these two lines are combined as a new curve line (Case 3 of Figure 8).

Algorithm 1. Line Combination Algorithm.
Input: Set of line segments S
Output: Set of combined lines
While (line I ∈ S)
{
    Get starting point Is, ending point Ie, and angle Ia
    While ((line J ≠ line I) and (J ∈ S))
    {
        Get starting point Js, ending point Je, and angle Ja
        If (Is.y < Js.y)
        {
            diffdst = d(Ie, Js)
            If (diffdst < thrdst)
            {
                diffangle = |Ia − Ja|
                If (diffangle <= thrangle)
                {
                    Define a new straight line K having Is and Je  //Case 1 of Figure 8
                    Remove lines I and J
                }
                Else if (diffangle > thrangle)
                {
                    Ie = Js
                    Define a new curve line having Is, Js, and Je  //Case 3 of Figure 8
                }
            }
        }
        Else if (Is.y >= Js.y)
        {
            diffdst = d(Is, Je)
            If (diffdst < thrdst)
            {
                diffangle = |Ia − Ja|
                If (diffangle <= thrangle)
                {
                    Define a new straight line K having Js and Ie  //Case 2 of Figure 8
                    Remove lines I and J
                }
                Else if (diffangle > thrangle)
                {
                    Is = Je
                    Define a new curve line having Js, Is, and Ie  //Case 3 of Figure 8
                }
            }
        }
    }
}
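The core decision of Algorithm 1 and Equation (5) can be condensed into the following illustrative C++ function for one pair of lines; thrdst = 3 pixels and thrangle = 2 degrees as stated above, and the Pt/Line structures are assumptions of this sketch rather than types defined in the paper.

#include <cmath>

struct Pt   { float x, y; };
struct Line { Pt s, e; double angle; };   // s: starting (upper) point, e: ending point

static double pointDistance(const Pt& a, const Pt& b)
{
    return std::hypot(a.x - b.x, a.y - b.y);
}

// Returns 0 when lines I and J are left separate, 1 when they are combined into
// a straight line (Cases 1 and 2 of Figure 8), and 2 when they are combined into
// a curve line (Case 3 of Figure 8), following Equation (5).
int combineDecision(const Line& I, const Line& J,
                    double thrDst = 3.0, double thrAngle = 2.0)
{
    // Distance between the facing endpoints, depending on which line is higher.
    const double diffDst = (I.s.y < J.s.y) ? pointDistance(I.e, J.s)
                                           : pointDistance(I.s, J.e);
    if (diffDst >= thrDst) return 0;                    // first condition not met
    const double diffAngle = std::fabs(I.angle - J.angle);
    return (diffAngle <= thrAngle) ? 1 : 2;             // Equation (5)
}

In the full algorithm, this test is applied to every pair in S and the merged lines replace the original pieces; thrDst would also be scaled with the image size, as noted above.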
3.4.3. Detecting the Left and Right Boundaries Based on Adaptive Threshold
Because of various factors such as varying illumination, shadows, and the abrasion of paint on the
road lane, we can divide the detected road line into several discrete parts. “Line Combination
Algorithm” (explained in Section 3.4.2) combines these parts into a line boundary, but further
processing is necessary to detect more accurate lane boundaries.
A road lane always includes a left and a right edge boundary. If a detected line is on the left
boundary, it has almost certainly a symmetrical counterpart on the right boundary and vice versa
(Figure 9). From a camera view, the road lane appears as a trapezoid in the image as shown in Figure 9.
This is because in a perspective camera model the further two points are away from the camera, the
smaller their distance appears. Therefore, the adaptive threshold is determined by measuring the
inter-distance between two starting points or ending points, as shown in Figure 9. Based on this
threshold, we combine the two lines together. If the distance between the two starting positions
and the distance between the two ending positions are less than the adaptive threshold (thradaptive of
Figure 9), the two lines are regarded as the correct left and right boundaries of the lane. In our research,
thradaptive has the range from 6 to 14 pixels. In the case of small thradaptive in Figure 9, 6 pixels are used,
whereas 14 pixels are used in the case of large thradaptive in Figure 9. For an intermediate position between the upper and lower boundaries of Figure 9, thradaptive is obtained by linear interpolation between 6 and 14 pixels according to the Y position of the line. The same parameter of thradaptive is used in the two open databases of our experiments in Section 4. However, thradaptive is proportionally changed based on the width and height of the original image.
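The linear interpolation of thradaptive described above can be written, under our assumptions about the interpolation endpoints, as the following small C++ helper; yTop and yBottom denote the upper and lower boundaries of Figure 9 inside the ROI, and scale is the proportional correction for image sizes other than 640 × 480.

// thr_adaptive grows linearly from 6 pixels at the upper boundary of Figure 9
// to 14 pixels at the lower boundary, following the perspective camera model.
double adaptiveThreshold(double y, double yTop, double yBottom, double scale = 1.0)
{
    double t = (y - yTop) / (yBottom - yTop);   // 0 at the top, 1 at the bottom
    if (t < 0.0) t = 0.0;
    if (t > 1.0) t = 1.0;
    return (6.0 + t * (14.0 - 6.0)) * scale;
}

Two lines are then accepted as the left and right boundaries of one lane only when both the distance between their starting points and the distance between their ending points fall below this threshold.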
However, in the case of a curve lane, we detect several line segments from the left and right
boundaries as illustrated in Figure 10b. Consequently, we obtain several starting and ending positions
for each line segment of the left and right boundaries. We select the two starting positions that have
a y coordinate smaller than the other starting positions and the two ending positions that have a y
coordinate larger than the other ending positions. We then calculate the distance between the two
selected starting positions and the distance between the two selected ending positions. If the two
distances are less than the adaptive threshold from the perspective camera model, we assume that we
identified the correct left and right boundaries of the lane.
In our research, the curve line boundaries are also represented as the linear segments of small
size. Because the length of line segment on curve line boundaries is small, the representation by
linear segments as shown in Figure 10b produces small approximation error. The advantages of this
representation are that we can reduce the processing time by representing the curve lane with the
limited numbers of linear segments (not complicated polynomial curves).
Figure 9. Distance between left and right boundaries of the road lane in the image.

Figure 10. Distance between neighboring lines: (a) straight line; and (b) curve line.

4. Experimental Results

We test our proposed method with the Caltech open database from [12], which consists of 866 total frames with an image size of 640 × 480 pixels. We implemented the proposed algorithm in Microsoft Visual Studio 2013 and OpenCV 3.0. Experiments were performed on a desktop computer with Intel Core™ i7 3.47 GHz (Intel Corporation, Santa Clara, CA, USA) and 12 GB memory (Samsung Electronics Co., Ltd., Suwon, Korea). Figure 11 shows the sample datasets used for the experiments.
To measure the accuracy of lane detection, the ground-truth (starting and ending) positions of
lane were manually marked in the images. Because our method can detect the left and right boundary
positions of a road lane by discriminating the dashed and solid lanes, all the ground-truth positions
were manually marked to measure the accuracy. Based on the inter-distance between two starting
positions (of ground-truth point and that detected by our method) and that between two ending
positions (of ground-truth point and that detected by our method) of the lane, we determined whether
the detection was successful or failed. If the two inter-distances are less than the threshold, the line
detected by our method is determined as correct one. If not, it is determined as false one.
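A minimal sketch of this evaluation rule is given below; the Pt structure and the threshold value are assumptions of the sketch (the exact pixel threshold is not restated here in the text).

#include <cmath>

struct Pt { float x, y; };

// A detected boundary is counted as correct (true positive) only when both its
// starting and ending points are within thr pixels of the manually marked
// ground-truth starting and ending points; otherwise it is counted as false.
bool isCorrectDetection(const Pt& gtStart, const Pt& gtEnd,
                        const Pt& detStart, const Pt& detEnd, double thr)
{
    const double dStart = std::hypot(gtStart.x - detStart.x, gtStart.y - detStart.y);
    const double dEnd   = std::hypot(gtEnd.x - detEnd.x, gtEnd.y - detEnd.y);
    return dStart < thr && dEnd < thr;
}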
Figure 11. Examples of experimental datasets: (a) Cordova 1; (b) Cordova 2; (c) Washington 1; and (d) Washington 2.
We define the correct lane point as positive data and the non-lane point as negative data. From that,
we can define two kinds of errors, false positive (FP) errors and false negative (FN) errors. In addition,
true positive (TP) is defined. Because we measure the accuracy only with the positive data and do
not have any negative data (i.e., ground-truth non-lane point), true negative (TN) errors are 0% in our
experiment. From that, we can obtain precision, recall, and F-measure [33,34]. The range of precision,
recall, and F-measure is 0 to 1, with 0 being the lowest accuracy and 1 being the highest accuracy.
In our evaluations, the numbers for TP, FP, and FN cases are represented as #TP, #FP, and #FN.
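For reference, the three measures can be computed from #TP, #FP, and #FN as below, assuming the usual definitions of precision, recall, and the balanced F-measure from [33,34]; the inputs are the Total row of Table 2, and the printed values may differ from the table in the last digit depending on rounding and on how the per-dataset scores were aggregated.

#include <cstdio>

int main()
{
    // Total row of Table 2: #TP = 4041, #FP = 455, #FN = 250.
    const double tp = 4041.0, fp = 455.0, fn = 250.0;
    const double precision = tp / (tp + fp);                         // about 0.90
    const double recall    = tp / (tp + fn);                         // about 0.94
    const double fMeasure  = 2.0 * precision * recall / (precision + recall);
    std::printf("precision %.2f, recall %.2f, F-measure %.2f\n",
                precision, recall, fMeasure);
    return 0;
}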
Table 2 shows the accuracies of our method. As indicated in Table 2, the precision, recall,
and F-measure are about 0.90, 0.94, and 0.90, respectively. The road lines in Cordova 2 dataset are less
distinctive compared to those in other datasets, which increases the FP detection by our method and
decreases the consequent precision. The Washington 2 dataset includes more symbol markings (such as
indicating words, crosswalk as shown in Figure 11d) on the road than other datasets. These symbol
markings cause FP detection by our method, which decreases the consequent precision. Based on the
results from Table 2, we can conclude that the proposed method works well with the images captured
in various environments.
Table 2. Experimental results with Caltech datasets.
Database        #Images   #TP    #FP   #FN   Precision   Recall   F-Measure
Cordova 1       233       1252   53    92    0.96        0.93     0.94
Cordova 2       253       734    112   15    0.87        0.98     0.92
Washington 1    175       875    94    64    0.9         0.93     0.91
Washington 2    205       1180   196   79    0.86        0.94     0.90
Total           866       4041   455   250   0.90        0.94     0.90
In Figure 12, we show examples of correct detection by our method. Figure 13 shows examples of
false detection. As shown in Figure 13a, our method falsely detected the part of a crosswalk on the
road as a road lane. In Figure 13b, the part of a text symbol marking is falsely detected as a road lane.
In Figure 13c, the boundary of a non-road object is falsely detected as a road lane.
Next, we compare the performance of our method with that of Aly’s method [12]. Aly used
the IPM method to represent a road lane as a straight line in an image, a Gaussian filter and HT
to detect straight lines, and RANSAC to fit lane markers. Figure 14 shows the comparisons of
lane detection by [12] and our method. The lanes detected by Aly’s method are shown as thick
green lines, while those detected by our method are represented as blue and red lines. In our comparison,
we empirically found the optimal thresholds for both our method and Aly’s method [12], respectively.
As shown in Figure 14, Aly’s method cannot accurately discriminate between the dashed and solid
lanes. Moreover, his method does not detect the left and right boundaries of each lane.
The reason why our method tries to detect all the boundaries of a road lane is that the type of
road lane can be discriminated among two lanes (left blue lanes of Figure 14b) and one lane (right red
lane of Figure 14b) by detecting all boundaries. In addition, the reason why our method tries to detect
a road lane by discriminating between the dashed and solid lanes is that this discrimination is also
important for the driving of an autonomous vehicle.
Figure 12. Examples of correct lane detection: (a) Cordova 1; (b) Cordova 2; (c) Washington 1; and (d) Washington 2.

Figure 13. Examples of false detection: (a) crosswalk; (b) symbol markings of indicating word; and (c) non-road objects.
Figure 14. Examples of comparative results: (a) Aly’s method [12]; and (b) our method.
In Table 3, we compare the lane detection accuracies of Aly’s method [12] and ours, and confirm that our method outperforms Aly’s method [12]. The reason why Aly’s method is less accurate is that it cannot accurately discriminate between the dashed and solid lanes, as shown in Figure 14. In addition, his method does not detect the left and right boundaries (the red and blue lines of Figure 14b) of each lane. As explained earlier, in our experiment, all the ground-truth positions are marked at the starting and ending points of the left and right boundaries of dashed and solid lanes. Based on the inter-distance between the two starting positions (the ground-truth position and the position detected by the algorithm) and that between the two ending positions (the ground-truth position and the position detected by the algorithm), whether a detection is successful or failed is determined. Therefore, Aly’s method gave much higher error rates than our method.

In addition, we included additional comparative experiments with Truong and coworker’s method [35] on the Caltech datasets, as shown in Table 3. We also empirically found the optimal thresholds for their method. As shown in Table 3, our method outperforms Truong and coworker’s method [35] as well, because their method does not detect the left and right boundaries of each lane and does not correctly discriminate between dashed and solid lanes.
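The evaluation rule described above can be summarized as the following minimal sketch, in which max_dist is a hypothetical pixel threshold standing in for the empirically chosen value, and the function and variable names are ours.

```python
# Minimal sketch of the evaluation rule described above: a detected boundary counts as a
# true positive only when both its starting and ending points lie close enough to the
# ground-truth starting and ending points. max_dist is a hypothetical pixel threshold.
import math

def endpoint_distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def is_correct_detection(det_start, det_end, gt_start, gt_end, max_dist=20.0):
    return (endpoint_distance(det_start, gt_start) <= max_dist and
            endpoint_distance(det_end, gt_end) <= max_dist)

# Example: a detection whose endpoints deviate by only a few pixels from the ground truth.
print(is_correct_detection((102, 398), (215, 240), (100, 400), (212, 238)))  # -> True
```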
Table 3. Comparative accuracies of lane detection on Caltech datasets.

Accuracies   Method   Cordova 1   Cordova 2   Washington 1   Washington 2
Precision    Ours     0.96        0.87        0.90           0.86
             [12]     0.012       0.112       0.037          0.028
             [35]     0.553       0.389       0.423          0.440
Recall       Ours     0.93        0.98        0.93           0.94
             [12]     0.006       0.143       0.037          0.026
             [35]     0.512       0.402       0.407          0.430
F-measure    Ours     0.94        0.92        0.91           0.90
             [12]     0.008       0.126       0.037          0.027
             [35]     0.532       0.395       0.415          0.435
As the next experiment, we measured the processing time per frame by our method, as shown in Table 4. The reason why the processing time with the Washington 1 dataset is larger than those with the others is that some images of the Washington 1 dataset include many shadows, which produce many incorrect line segments, and this increases the processing time. As shown in Table 4, we can confirm that our method can be operated at a fast speed (about 30.2 frames/sec (1000/33.09)). With all the databases, the processing time for the three steps of “Removing the incorrect line segments based on angle”, “Combining two line segments based on inter-distance”, and “Detecting correct lane by the adaptive threshold” of Figure 1 is 0 ms each. Therefore, the total processing time is the same as that of detecting line segments by the LSD algorithm. In future work, we will research more sophisticated computational techniques to reduce the processing time of the line segment detection step by the LSD algorithm.
Table 4. Processing time per each frame (unit: ms).

Module            Cordova 1   Cordova 2   Washington 1   Washington 2   Average
Processing Time   26.56       32.89       37.80          35.12          33.09
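As a rough illustration of how such per-frame timing can be collected, the sketch below times OpenCV’s LSD detector on a single grayscale frame; cv2.createLineSegmentDetector is not available in every OpenCV build (it was absent from some releases for licensing reasons), and "frame.png" is only a placeholder path, so this is not the authors’ measurement code.

```python
# Rough timing sketch (not the authors' code): measure the per-frame cost of LSD line
# segment detection, which dominates the total processing time according to Table 4.
import time
import cv2

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
assert frame is not None, "replace frame.png with a real road image"

lsd = cv2.createLineSegmentDetector()                    # may be missing in some builds

start = time.perf_counter()
lines = lsd.detect(frame)[0]                             # (x1, y1, x2, y2) segments
elapsed_ms = (time.perf_counter() - start) * 1000.0

num_segments = 0 if lines is None else len(lines)
print(f"{num_segments} segments in {elapsed_ms:.2f} ms "
      f"({1000.0 / elapsed_ms:.1f} frames/sec)")
```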
In addition, we included comparative experiments of our method with Aly’s method [12] and Truong and coworker’s method [35] on an additional dataset, the Santiago Lanes Dataset (SLD) [36], as shown in Table 5. Examples of the SLD dataset are shown in Figure 15.

Figure 15. Examples of images on Santiago Lanes Dataset (SLD) datasets.
As shown in Table 5, our method also outperforms Aly’s method [12] and Truong and coworker’s method [35] on the SLD dataset, because their methods do not detect the left and right boundaries of each lane with the correct discrimination of dashed and solid lanes, as shown in Figure 16. The examples of detection results on the SLD dataset in Figure 16 show that our method can detect the starting and ending positions of a lane, with the discrimination of dashed and solid lanes, more accurately than Aly’s method [12] and Truong and coworker’s method [35].
Table 5. Comparative accuracies of lane detection on SLD datasets.

Accuracies   Method   SLD Dataset
Precision    Ours     0.905
             [12]     0.01
             [35]     0.662
Recall       Ours     0.929
             [12]     0.002
             [35]     0.465
F-measure    Ours     0.917
             [12]     0.003
             [35]     0.546
Figure 16. Examples of comparative results of lane detection on SLD datasets: (a) our method; (b) Aly’s method [12]; and (c) Truong and coworker’s method [35].
Our method can also correctly detect curved lanes. Because the Caltech datasets do not include curved lanes, we include the detection results on the SLD dataset by our method. As shown in Figure 17, our method correctly detects curved lanes as well, and the errors of detection on curved lanes are already included in the results of Table 5. The methods in [31,32] can not only detect lanes on curves as well as on straight roads, but also predict the direction of upcoming curves. However, their methods do not detect the starting and ending positions of a lane and do not discriminate between dashed (broken) and solid (unbroken) road lanes. Different from their methods, our method can correctly detect the starting and ending positions of a lane with the discrimination of dashed and solid lanes.

Figure 17. Examples of detection results on curve lanes of SLD datasets by our method (Detected dashed and solid lanes are shown in blue and red colors, respectively).

The reason why the accurate detection of starting and ending points is important in our research is as follows. In the case that the lane changes from a dashed lane to a solid one, the car should not change its current driving lane. If there is an error in detecting the accurate starting and ending points of a dashed lane by an autonomous car (self-driving car) moving at high speed, it can cause a traffic accident. For example, assume that the dashed lane has actually ended, but the autonomous car misrecognizes the situation as the dashed lane still continuing because of the error in detecting the accurate starting and ending points of the dashed lane. In this case, the car may change its driving path by crossing the dashed lane (which is actually a solid lane). If another vehicle approaching at very high speed is behind the autonomous car, and the driver of this vehicle assumes that there will be no lane crossing by the front car (the autonomous car) because the dashed lane has ended, a dangerous rear-end collision can happen. These are the reasons why we are interested in detecting the accurate starting and ending points of a road lane (as shown in Figure 17) in our research.

5. Conclusions

In this paper, we presented our research on lane detection, which focuses on how to discriminate between dashed and solid lanes under various environmental conditions. Although it has some limitations in difficult scenarios such as blurred lane markings or shadows, the proposed method shows stable performance in detecting lanes in images captured in various environments. All the parameters of our algorithm were determined empirically with some images of the datasets, without any optimization. We experimentally compared our approach with existing methods and demonstrated the superiority of our method. Because we do not use any tracking information from successive image frames, the detection of lanes by our method is not dependent on the car’s speed. Although our method can detect the correct road lane even with a small amount of shadow, as shown in Figures 12, 14b and 16a, road lanes with a severe amount of shadow cannot be detected due to the limitation of the LSD-based detection of line segments.
To overcome the above limitations, we plan to conduct further research on how to reduce the impact of unexpected noise to enhance the detection accuracy of the LSD method and to make the method robust to occlusion. We can also overcome these limitations by using additional classifiers for road symbol markings or indicating text written on roads. In addition, ROI calculation using camera calibration information and the curvature of road markings will be investigated in our future work. Furthermore, we will study the determination of the adaptive threshold based on the calibration parameters of the camera.
Acknowledgments: This research was supported by Basic Science Research Program through the National
Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2015R1D1A1A01056761), and in
part by the Bio & Medical Technology Development Program of the NRF funded by the Korean government,
MSIP (NRF-2016M3A9E1915855).
Author Contributions: Toan Minh Hoang and Kang Ryoung Park implemented the overall system and wrote
this paper. Hyung Gil Hong and Husan Vokhidov helped the experiments.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Xu, H.; Wang, X.; Huang, H.; Wu, K.; Fang, Q. A Fast and Stable Lane Detection Method Based on B-Spline Curve. In Proceedings of the IEEE 10th International Conference on Computer-Aided Industrial Design & Conceptual Design, Wenzhou, China, 26–29 November 2009; pp. 1036–1040.
2. Li, W.; Gong, X.; Wang, Y.; Liu, P. A Lane Marking Detection and Tracking Algorithm Based on Sub-Regions. In Proceedings of the International Conference on Informative and Cybernetics for Computational Social Systems, Qingdao, China, 9–10 October 2014; pp. 68–73.
3. Wang, Y.; Teoh, E.K.; Shen, D. Lane detection and tracking using B-Snake. Image Vis. Comput. 2004, 22, 269–280. [CrossRef]
4. Özcan, B.; Boyraz, P.; Yiğit, C.B. A MonoSLAM Approach to Lane Departure Warning System. In Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Besançon, France, 8–11 July 2014; pp. 640–645.
5. Tan, H.; Zhou, Y.; Zhu, Y.; Yao, D.; Li, K. A Novel Curve Lane Detection Based on Improved River Flow and RANSA. In Proceedings of the International Conference on Intelligent Transportation Systems, Qingdao, China, 8–11 October 2014; pp. 133–138.
6. Zhou, S.; Jiang, Y.; Xi, J.; Gong, J.; Xiong, G.; Chen, H. A Novel Lane Detection Based on Geometrical Model and Gabor Filter. In Proceedings of the IEEE Intelligent Vehicles Symposium, San Diego, CA, USA, 21–24 June 2010; pp. 59–64.
7. Chen, Q.; Wang, H. A Real-Time Lane Detection Algorithm Based on a Hyperbola-Pair Model. In Proceedings of the Intelligent Vehicles Symposium, Tokyo, Japan, 13–15 June 2006; pp. 510–515.
8. Jung, C.R.; Kelber, C.R. A Robust Linear-Parabolic Model for Lane Following. In Proceedings of the 17th Brazilian Symposium on Computer Graphics and Image Processing, Curitiba, Brazil, 17–20 October 2004; pp. 72–79.
9. Litkouhi, B.B.; Lee, A.Y.; Craig, D.B. Estimator and Controller Design for Lanetrak, a Vision-Based Automatic Vehicle Steering System. In Proceedings of the 32nd Conference on Decision and Control, San Antonio, TX, USA, 15–17 December 1993; pp. 1868–1873.
10. Shin, J.; Lee, E.; Kwon, K.; Lee, S. Lane Detection Algorithm Based on Top-View Image Using Random Sample Consensus Algorithm and Curve Road Model. In Proceedings of the 6th International Conference on Ubiquitous and Future Networks, Shanghai, China, 8–11 July 2014; pp. 1–2.
11. Lu, W.; Rodriguez, F.S.A.; Seignez, E.; Reynaud, R. Monocular Multi-Kernel Based Lane Marking Detection. In Proceedings of the 4th Annual International Conference on Cyber Technology in Automation, Control, and Intelligent Systems, Hong Kong, China, 4–7 June 2014; pp. 123–128.
12. Aly, M. Real Time Detection of Lane Markers in Urban Streets. In Proceedings of the IEEE Intelligent Vehicles Symposium, Eindhoven, The Netherlands, 4–6 June 2008; pp. 7–12.
13. Yoo, H.; Yang, U.; Sohn, K. Gradient-enhancing conversion for illumination-robust lane detection. IEEE Trans. Intell. Transp. Syst. 2013, 14, 1083–1094. [CrossRef]
14. Li, H.; Feng, M.; Wang, X. Inverse Perspective Mapping Based Urban Road Markings Detection. In Proceedings of the International Conference on Cloud Computing and Intelligent Systems, Hangzhou, China, 30 October–1 November 2013; pp. 1178–1182.
15. Chiu, K.-Y.; Lin, S.-F. Lane Detection Using Color-Based Segmentation. In Proceedings of the Intelligent Vehicles Symposium, Las Vegas, NV, USA, 6–8 June 2005; pp. 706–711.
16. Deng, J.; Kim, J.; Sin, H.; Han, Y. Fast lane detection based on the B-spline fitting. Int. J. Res. Eng. Tech. 2013, 2, 134–137.
17. Mu, C.; Ma, X. Lane detection based on object segmentation and piecewise fitting. TELKOMNIKA Indones. J. Electr. Eng. 2014, 12, 3491–3500. [CrossRef]
18. Chang, C.-Y.; Lin, C.-H. An Efficient Method for Lane-Mark Extraction in Complex Conditions. In Proceedings of the International Conference on Ubiquitous Intelligence and Computing and International Conference on Autonomic and Trusted Computing, Fukuoka, Japan, 4–7 September 2012; pp. 330–336.
19. Ding, D.; Lee, C.; Lee, K.-Y. An Adaptive Road ROI Determination Algorithm for Lane Detection. In Proceedings of the TENCON 2013–2013 IEEE Region 10 Conference, Xi’an, China, 22–25 October 2013; pp. 1–4.
20. Chen, M.; Jochem, T.; Pomerleau, D. AURORA: A Vision-Based Roadway Departure Warning System. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems 95 Human Robot Interaction and Cooperative Robots, Pittsburgh, PA, USA, 5–9 August 1995; pp. 243–248.
21. Benligiray, B.; Topal, C.; Akinlar, C. Video-Based Lane Detection Using a Fast Vanishing Point Estimation Method. In Proceedings of the IEEE International Symposium on Multimedia, Irvine, CA, USA, 10–12 December 2012; pp. 348–351.
22. Jingyu, W.; Jianmin, D. Lane Detection Algorithm Using Vanishing Point. In Proceedings of the International Conference on Machine Learning and Cybernetics, Tianjin, China, 14–17 July 2013; pp. 735–740.
23. Son, J.; Yoo, H.; Kim, S.; Sohn, K. Real-time illumination invariant lane detection for lane departure warning system. Expert Syst. Appl. 2015, 42, 1816–1824. [CrossRef]
24. Paula, M.B.D.; Jung, C.R. Automatic detection and classification of road lane markings using onboard vehicular cameras. IEEE Trans. Intell. Transp. Syst. 2015, 16, 3160–3169. [CrossRef]
25. Li, Z.; Cai, Z.-X.; Xie, J.; Ren, X.-P. Road Markings Extraction Based on Threshold Segmentation. In Proceedings of the International Conference on Fuzzy Systems and Knowledge Discovery, Chongqing, China, 29–31 May 2012; pp. 1924–1928.
26. Von Gioi, R.G.; Jakubowicz, J.; Morel, J.-M.; Randall, G. LSD: A line segment detector. Image Process. Line 2012, 2, 35–55. [CrossRef]
27. Von Gioi, R.G.; Jakubowicz, J.; Morel, J.-M.; Randall, G. LSD: A fast line segment detector with a false detection control. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 722–732. [CrossRef] [PubMed]
28. Burns, J.B.; Hanson, A.R.; Riseman, E.M. Extracting straight lines. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 8, 425–455. [CrossRef]
29. Desolneux, A.; Moisan, L.; Morel, J.-M. Meaningful alignments. Int. J. Comput. Vis. 2000, 40, 7–23. [CrossRef]
30. Desolneux, A.; Moisan, L.; Morel, J.-M. From Gestalt Theory to Image Analysis—A Probabilistic Approach; Springer: New York, NY, USA, 2007.
31. Curved Lane Detection. Available online: https://www.youtube.com/watch?v=VlH3OEhZnow (accessed on 22 June 2016).
32. Real-Time Lane Detection and Tracking System. Available online: https://www.youtube.com/watch?v=0v8sdPViB1c (accessed on 22 June 2016).
33. Sensitivity and Specificity. Available online: http://en.wikipedia.org/wiki/Sensitivity_and_specificity (accessed on 18 March 2016).
34. F1 Score. Available online: https://en.wikipedia.org/wiki/F1_score (accessed on 20 June 2016).
35. Truong, Q.-B.; Lee, B.-R. New Lane Detection Algorithm for Autonomous Vehicles Using Computer Vision. In Proceedings of the International Conference on Control, Automation and Systems, Seoul, Korea, 14–17 October 2008; pp. 1208–1213.
36. Santiago Lanes Dataset. Available online: http://ral.ing.puc.cl/datasets.htm (accessed on 10 June 2016).
© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC-BY) license (http://creativecommons.org/licenses/by/4.0/).