International Journal of Engineering Trends and Technology (IJETT) - Volume4Issue5- May 2013
Abstract: This research develops a speed-violation vehicle detection system using image processing techniques. The work consists of software that processes a video scene containing a moving vehicle and two reference points (a start point and an end point). Computationally economical image processing techniques, suited to dedicated digital signal processing chips, are applied to a video sequence captured by a fixed-position camera to estimate the speed of moving vehicles. Moving vehicles are detected by analyzing sequences of binary images constructed from the captured frames using either an interframe difference or a background subtraction algorithm. We propose a new adaptive thresholding method for binarizing the outputs of the interframe difference and background subtraction techniques. The system detects the position of the moving vehicle in the scene relative to the reference points, calculates the speed from the positions detected in each static frame, and reports information on speed-violating vehicles to an authorized remote station.
Keywords: background subtraction, interframe difference, image processing, adaptive thresholding, color modeling, moving object detection, tracking, vehicle speed measurement
I. Introduction
Monitoring vehicle speed is important for enforcing speed limit laws; it also indicates the traffic conditions of the road section of interest. Radar-based tracking systems are widely used as a deterrent to prevent drivers from speeding. They are mainly composed of three major units: 1) vehicle detectors, 2) a video camera fixed in position, and 3) a processor controlling the overall system. Vehicle detectors are generally installed in pairs [1] with a known physical distance between them, and the signals that originate from the presence of a vehicle are used to estimate the time the vehicle takes to travel between the two detectors. Vehicle speed is then calculated from the known distance and the estimated travel time between the detectors.
Then, the control processor activates the camera to capture an image of the speeding vehicle according to the measured speed. Vehicle detectors can be categorized into two types: 1) hardware-based detectors and 2) software-based detectors.
The former is based on electromagnetic principles and requires dedicated hardware; the latter, which is adopted in this paper, performs vehicle detection with image processing techniques. Hardware-based vehicle detectors can be classified into four groups: 1) inductive loop detectors, 2) laser detectors, 3) optical detectors, and 4) weight detectors. An inductive loop detector is a wire loop embedded in the road surface. The loop emits a magnetic field that is disturbed by the metal content of a vehicle, so it detects vehicles traveling directly above it. Loop detectors are a well-studied and well-known technology that has been widely used in many monitoring projects. Installation is relatively cheap, and power requirements are low. However, their major disadvantages are that they must be located within the road surface (and are therefore subject to pavement damage) and that installation and maintenance require traffic disruption. For a laser detector [2], a laser mounted above the roadway emits a beam directed at a photodiode array arranged on the pavement. Vehicle detection is achieved when a vehicle breaks the laser beam and the photodiode array registers its absence. Laser detectors work during the day and at night under different climatic conditions, but extreme temperatures can degrade their performance.
An optical detector achieves vehicle detection using two light-activated optical sensors placed inside road studs on the ground. Installation and maintenance of this technology are easy, but the technology is new, and conclusive results on its accuracy are not yet available. A weight detector employs flexural plates, piezoelectric sensors, load sensors, or optical fibers to detect the weight of a vehicle. Weight measurement is extremely useful for vehicle detection and classification and is rarely achievable with other detector types. However, weight detectors have substantial error in measuring vehicle weight, and because they sit within the pavement, their installation requires lane closures.
For software-based vehicle detectors, video cameras are placed at the side of a road [4], [7] to detect moving vehicles. The cameras continuously acquire images of the traffic flow, which can be analyzed to extract a variety of information.
A software-based vehicle detector is non-intrusive and does not require roadway interruption for installation or maintenance, but the performance of the image analysis algorithms can suffer from bad weather, darkness, glare, or shadows. In addition, high-powered processors are needed to perform computationally complex image processing algorithms in real time. In this work, image processing techniques with lower computational cost are adopted and developed for vehicle detection and tracking. Image processing is a software-based technology that does not require special hardware; an ordinary video recording device and a typical computer [7] can form a speed detection device. Using the basic rate equation, we can calculate the speed of a vehicle moving through the video scene from the known distance and the time the vehicle takes to travel it.
II. Binary Image Generation
The speed measurement is performed in the binary image domain, i.e., each pixel is transformed into either “1” or “0” according to its motion information. To binarize the incoming image and detect only the moving pixels, two different techniques are used: (1) interframe difference and (2) background subtraction.
ISSN: 2231-5381 http://www.ijettjournal.org
Page 1437
A. Interframe Difference Technique
Let us assume that the input RGB image from the camera is represented as I_t of size H × W at time t, where I_t refers to the pixel intensity values of the spatial region of interest. The difference image between consecutive frames at times t and t − 1 is computed by taking the absolute-valued difference of the grayscale representations of the frames. The RGB image is converted to a grayscale image by simple averaging of the color channels, i.e.,

I_t = (I_t^R + I_t^G + I_t^B) / 3,   (1)

where I_t^R, I_t^G, and I_t^B are the intensity values of the R, G, and B channels, respectively.
After the conversion to grayscale, the previous frame is subtracted from the current frame to create an absolute-valued difference image DI_{t,t−1}, i.e.,

DI_{t,t−1}(y, x) = | I_t(y, x) − I_{t−1}(y, x) |.   (2)
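The channel averaging of (1) and the differencing of (2) can be sketched in a few lines of NumPy (function names here are ours, chosen for illustration):

```python
import numpy as np

def to_gray(rgb):
    """Grayscale by simple averaging of the R, G, B channels, as in Eq. (1)."""
    return rgb.astype(np.float64).mean(axis=2)

def frame_difference(frame_t, frame_t1):
    """Absolute-valued difference image DI between consecutive frames, Eq. (2)."""
    return np.abs(to_gray(frame_t) - to_gray(frame_t1))
```

Identical frames produce an all-zero difference image, while moving pixels show up as large values of DI.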
Once the difference image is obtained, thresholding is applied to differentiate the moving pixels from the non-moving pixels.
This process generates a binary difference image using a threshold value τ(DI_{t,t−1}). The threshold value is statistically obtained from

μ(DI_{t,t−1}) = (1 / HW) Σ_{y=1}^{H} Σ_{x=1}^{W} DI_{t,t−1}(y, x),

σ(DI_{t,t−1}) = sqrt( (1 / (HW − 1)) Σ_{y=1}^{H} Σ_{x=1}^{W} ( DI_{t,t−1}(y, x) − μ(DI_{t,t−1}) )^2 ),   (3)

where μ(DI_{t,t−1}) and σ(DI_{t,t−1}) are the mean and the standard deviation of the difference pixels, respectively, and (y, x) is the spatial coordinate. The definition of the standard deviation given in (3) requires the mathematical operations of square root and square. These operations are computationally expensive and consume more energy. Thus, we modify the threshold as

τ(DI_{t,t−1}) = μ(DI_{t,t−1}) + (1 / (HW − 1)) Σ_{y=1}^{H} Σ_{x=1}^{W} | DI_{t,t−1}(y, x) − μ(DI_{t,t−1}) |.   (4)

Using the threshold τ(DI_{t,t−1}), the binary difference image is created as

B_{DI_{t,t−1}}(y, x) = 1, if DI_{t,t−1}(y, x) ≥ τ(DI_{t,t−1}); 0, otherwise.   (5)
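A sketch of the thresholding of (3)-(5) in NumPy follows. The exact low-cost form of (4) is partly garbled in the source, so this assumes the standard deviation is replaced by a mean absolute deviation with the same (HW − 1) divisor; function names are ours:

```python
import numpy as np

def adaptive_threshold(di):
    """Threshold tau = mean + mean absolute deviation of the difference image.

    The (HW - 1) divisor follows the sample-statistics form of Eq. (3); the
    absolute deviation in Eq. (4) replaces the standard deviation to avoid
    the square and square-root operations.
    """
    mu = di.mean()
    mad = np.abs(di - mu).sum() / (di.size - 1)
    return mu + mad

def binarize(di, tau):
    """Binary difference image, Eq. (5): 1 where DI >= tau, else 0."""
    return (di >= tau).astype(np.uint8)
```

Because the threshold adapts to the statistics of each difference image, frames with little motion yield few (or no) moving pixels, while large motions survive the binarization.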
Fig. 1 shows the binarization process of the frame difference using two consecutive RGB frames. Fig. 1(a) and (b) shows the current and previous frames, respectively. The binary difference image according to (2) is shown in Fig. 1(c). The thresholding process according to (5) generates a binary image, as shown in Fig. 1(d), for further processing. Generating a binary difference image with the interframe difference technique requires only a single frame memory to store the previous frame.
Figure 1: Illustration of generating difference and binary difference images using the interframe difference technique. (a) Current RGB image. (b) Previous RGB image. (c) Difference image using (2). (d) Binary difference image using (5).
B. Background Subtraction Technique
Background subtraction is an essential function for any computer-vision-based surveillance system, and accurate background subtraction is a key element of successful object tracking. It is mainly used to determine the foreground object region. The basic method is pixel-by-pixel absolute differencing of the current video frame with a background image. In practical use for a live traffic monitoring system, however, the background tends to be dynamic: it can be affected by variations in lighting conditions, camera vibration, and the surroundings. Thus, several background modeling techniques have been proposed.
The most popular background subtraction methods model the background statistically. One approach is a Gaussian mixture model of the probability distribution of each pixel. In this method, the mean and variance are updated with the pixel values from new frames of the video sequence. After a few frames, the model has acquired enough information to decide for each pixel whether it is background or foreground; the decision is based on a parametric equation. Interframe-difference-based binary image generation produces only an abstract representation of moving objects. A better approach is background subtraction, which identifies moving objects as the portion of a video frame that differs significantly from a background model. It compares an observed image with an estimate of the background image to decide whether the frame contains any moving objects. The pixels of the image plane where there
are significant differences between the observed and estimated images indicate the location of the objects of interest.
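The per-pixel statistical model described above can be sketched in NumPy. This is a simplified single-Gaussian version of the mixture-model idea (one Gaussian per pixel rather than a mixture); the learning rate `rho`, the initial variance, and the 2.5-sigma foreground test are common defaults assumed here, not values from the paper:

```python
import numpy as np

class RunningGaussianModel:
    """Per-pixel background model with a running mean and variance.

    A single-Gaussian simplification of the mixture model described in the
    text; parameter values are illustrative defaults.
    """
    def __init__(self, first_gray, rho=0.05):
        self.mean = first_gray.astype(np.float64)
        self.var = np.full_like(self.mean, 100.0)  # assumed initial variance
        self.rho = rho

    def apply(self, gray):
        """Return a foreground mask (1 = foreground) and adapt the model."""
        gray = gray.astype(np.float64)
        dist = np.abs(gray - self.mean)
        foreground = dist > 2.5 * np.sqrt(self.var)
        bg = ~foreground
        # Update mean and variance only where the pixel matches the background.
        self.mean[bg] += self.rho * (gray[bg] - self.mean[bg])
        self.var[bg] += self.rho * ((gray[bg] - self.mean[bg]) ** 2 - self.var[bg])
        return foreground.astype(np.uint8)
```

Pixels that stay close to their running mean keep refining the background estimate, while outliers are flagged as foreground without corrupting the model.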
Let us assume that the background image B_t of a specific scene has been constructed. The current grayscale image I_t at time t is subtracted from the background image B_t as

S_t(y, x) = | B_t(y, x) − I_t(y, x) |   (6)

to find the absolute-valued difference image between I_t and B_t.
The difference image is binarized using the same approach as in (5), i.e.,

B_{BS_t}(y, x) = 1, if S_t(y, x) ≥ τ(BS_t); 0, otherwise,   (7)

where τ(BS_t) is a statistical threshold calculated according to (3), and B_{BS_t} is the binary difference image.
Figure 2: Illustration of generating difference and binary difference images using the background subtraction technique. (a) Current RGB image. (b) Background RGB image. (c) Difference from the background image using (6). (d) Binary difference image using (7).
Fig. 2 shows the binarization process of the background subtraction technique. Fig. 2(a) and (b) shows the current and the background images, respectively. The difference image according to (6) is shown in Fig. 2(c). The thresholding process according to (7) generates a binary image, shown in Fig. 2(d), for further processing.
The background image is updated according to B_{BS_t}, using B_t and I_t, as

B_{t+1}(y, x) = α B_t(y, x) + (1 − α) I_t(y, x), if B_{BS_t}(y, x) = 0; B_t(y, x), otherwise,   (8)

where B_{t+1} is the updated background image and α ∈ [0, 1] is an update factor, set to α = 0.95 in this paper.
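The selective update of (8) can be sketched as follows (function and argument names are ours; α = 0.95 as in the paper):

```python
import numpy as np

def update_background(background, frame_gray, binary_mask, alpha=0.95):
    """Background update of Eq. (8): blend the frame into the background
    only where the binary difference image marks the pixel as non-moving
    (binary_mask == 0); moving pixels keep the old background value."""
    updated = background.astype(np.float64).copy()
    still = binary_mask == 0
    updated[still] = alpha * background[still] + (1.0 - alpha) * frame_gray[still]
    return updated
```

Masking the update with B_{BS_t} keeps moving vehicles from being absorbed into the background estimate.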
The mean filter method is used for background generation because the background in the original color is needed; it is also an effective way of generating a background, particularly under low light variation and a stationary camera posture, where it produces accurate results. The equation used in this study is

K_{xy}^m = (1 / j) Σ_{n=0}^{j−1} f_{xy}^{m−n},   (9)

where n is the frame number, f_{xy}^n is the pixel value of (x, y) in the n-th frame, K_{xy}^m is the mean value of pixel (x, y) averaged over the previous j frames, and j is the number of frames used to calculate the average of the pixel values.
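The mean-filter background of (9) amounts to a per-pixel average over a buffer of the j most recent frames; a minimal NumPy sketch (function name ours):

```python
import numpy as np

def mean_filter_background(frames):
    """Background estimate of Eq. (9): per-pixel mean over the previous
    j frames, where `frames` is a list of j grayscale (or color) frames."""
    stack = np.stack([f.astype(np.float64) for f in frames], axis=0)
    return stack.mean(axis=0)
```

Because each moving vehicle occupies any given pixel in only a few of the j frames, the average converges to the static background.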
III. Speed Detection
The speed of the vehicle is calculated from its position in each frame, so the next step is to find the bounding boxes and the centers of gravity of the moving blobs. The distance between blob centroids in consecutive frames is the key to following a moving vehicle; since the frame rate of the capture is known, the speed calculation then becomes possible. This information must be recorded in a cell array of the same size as the captured camera image, because the distance traveled by the centroid, a pixel with a specific coordinate on the image, is needed to determine the vehicle speed. To find the distance traveled, suppose the centroid locations of one vehicle in frames i and i − 1 are

C_i = (a, b),  C_{i−1} = (e, f).   (10)
The distance traveled by the vehicle between the two frames is

d = sqrt( (a − e)^2 + (b − f)^2 ),   (11)

and if the image sequence runs at 25 frames per second, the time between two consecutive frames is 0.04 s. Finally, the speed can be determined from
V = K · d / Δt,   (12)

where K is the calibration coefficient and Δt is the time between the two frames.
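Equations (10)-(12) combine into a few lines; in this sketch (names ours) the pixel-to-real-world conversion is folded entirely into the calibration coefficient K, as in (12):

```python
import math

def vehicle_speed(centroid_prev, centroid_curr, k, fps=25.0):
    """Speed from centroid displacement, Eqs. (10)-(12).

    d is the Euclidean pixel distance of Eq. (11); the frame interval is
    1/fps (0.04 s at 25 fps), and k is the calibration coefficient K that
    maps pixels per second to the desired speed unit.
    """
    a, b = centroid_prev
    e, f = centroid_curr
    d = math.hypot(a - e, b - f)  # Eq. (11)
    return k * d * fps            # Eq. (12): K * d / (1 / fps)
```

For example, a centroid displacement of 5 pixels between frames at 25 fps corresponds to 125 pixels per second before calibration (k = 1).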
Speed-Violating Vehicle Detection Using the Shrinking Algorithm
Algorithm:
The speed estimation process is based on tracking objects [5] in the binary difference image BI_t, where BI_t ∈ { B_{DI_{t,t−1}}, B_{BS_t} }. The tracking and speed estimation using BI_t consist of the following steps.
(1) Take the binary image BI_t and segment it into groups of moving objects using the aforementioned shrinking algorithm to create the FR_t's over the region R_0.
(2) Track each FR_t in consecutive frames and find the coordinates of its spatial bounding box, i.e., the upper-left coordinate (y_t, x_t) at time instant t.
(3) Trigger the timing t_i when the object passes the first imaginary line located at y_1, i.e., when y_{t_i} reaches y_1, and record the upper-left coordinate of its spatial bounding box, i.e., (y_{t_i}, x_{t_i}).
(4) Trigger the timing t_e when the object passes the second imaginary line located at y_2, i.e., when y_{t_e} reaches y_2, and record the upper-left coordinate of its spatial bounding box, i.e., (y_{t_e}, x_{t_e}).
(5) Estimate the speed of the moving vehicle as V = K · | y_{t_e} − y_{t_i} | / (t_e − t_i), where K is the calibration coefficient.
(6) If the speed V is lower than the speed limit, discard the object and go to step (1).
(7) Extract the license plate using color information.
(8) Transmit the extracted license plate image to the authorized remote station.
Figure 3: Configuration for the speed measurement of a moving vehicle.
(9) Go to step (1).

IV. RESULTS
Figure 4: RGB Image
Figure 5: Background frame
Figure 6: Grayscale image
Figure 7: Binary image
Table 1. Vehicle speed detection using the shrinking algorithm

Vehicle number | True speed (km/h) | Estimated speed (km/h) | Error (km/h)
1              | 60.60             | 60.72                  | 0.12
2              | 72.80             | 73.58                  | 0.78
3              | 64.60             | 65.76                  | 1.16
4              | 73.30             | 74.10                  | 0.80
5              | 63.20             | 63.64                  | 0.44

Average error: 0.66 km/h
V. CONCLUSION
In this paper, we have presented a system in which speeding vehicles are detected by applying image processing techniques to the sequence of input images captured by a fixed-position video camera. The image processing techniques are developed to be computationally economical and to reduce energy consumption. Vehicles traveling at high speed are detected and tracked over consecutive image sequences, and information on speeding vehicles is sent to an authorized remote station. The accuracy of the proposed system in speed measurement is comparable to the actual speed of the moving vehicles. The best results are obtained with the shrinking algorithm.
VI. References
[1] R. Cucchiara, C. Grana, M. Piccardi, and A. Prati, “Detecting moving objects, ghosts, and shadows in video streams,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 10, pp. 1337–1342, Oct. 2003.
[2] H. Cheng, B. Shaw, J. Palen, B. Lin, B. Chen, and Z. Wang, “Development and field test of a laser-based nonintrusive detection system for identification of vehicles on the highway,” IEEE Trans. Intell. Transp. Syst.
[3] R. Cucchiara, M. Piccardi, and P. Mello, “Image analysis and rule-based reasoning for a traffic monitoring system,” IEEE Trans. Intell. Transp. Syst., vol. 1, no. 2, pp. 119–130, Jun. 2000.
[4] S. Gupte, O. Masoud, R. Martin, and N. Papanikolopoulos, “Detection and classification of vehicles,” IEEE Trans. Intell. Transp. Syst., vol. 3, no. 1, pp. 37–47, Mar. 2002.
[5] R. Canals, A. Roussel, J.-L. Famechon, and S. Treuillet, “A biprocessor-oriented vision-based target tracking system,” IEEE Trans. Ind. Electron., vol. 49, no. 2, pp. 500–506, Apr. 2002.
[6] T. Schoepflin and D. Dailey, “Dynamic camera calibration of roadside traffic management cameras for vehicle speed estimation,” IEEE Trans. Intell. Transp. Syst., vol. 4, no. 2, pp. 90–98, Jun. 2003.