Discernment of Fire and its Growth in Various
Circumstances Based on Illumination Analysis
Tejaswi Uppalapati
Department of Electronics and
Communication Engineering,
Gokaraju Rangaraju Institute of
Engineering & Technology
Hyderabad, India
tejaswi.uppalapati01@gmail.com
Sai Charitha Veeramachaneni
Department of Electronics and
Communication Engineering,
Gokaraju Rangaraju Institute of
Engineering & Technology
Hyderabad, India
saicharithaveeramachaneni@gmail.com
Sathya Keerthi Sri Chalasani
Department of Electronics and
Communication Engineering,
Gokaraju Rangaraju Institute of
Engineering & Technology
Hyderabad, India
sathyakeerthi2000@gmail.com
Samanvitha Kosaraju
Department of Electronics and
Communication Engineering,
Gokaraju Rangaraju Institute of
Engineering & Technology
Hyderabad, India
kosarajusamanvitha@gmail.com
Shilpa Bagade
Department of Electronics and
Communication Engineering,
Gokaraju Rangaraju Institute of
Engineering & Technology
Hyderabad, India
Shilpa1097@grietcollege.com
Sreehari Veeramachaneni
Department of Electronics and
Communication Engineering,
Gokaraju Rangaraju Institute of
Engineering & Technology
Hyderabad, India
srihariy2k4@gmail.com
Abstract—Fire outbreaks and wildfires are among the most common disasters worldwide and are occurring at an alarming, increasing rate. An early warning system is essential for reducing fire-related loss of property and life. Previously existing methodologies use various detection schemes, including convolution-based methods, the RGB color model, the HSV color model, and edge detection, all of which rely entirely on color analysis. This paper presents a new fire discernment algorithm that is based entirely on light analysis. The luminance and brightness of the image are taken into consideration to identify fire, and the resulting output not only detects fire but also highlights its growing intensity and its spread in the surroundings of the outbreak.
Keywords: Luminous, Brightness, RGB (Red, Green, Blue),
YCbCr (Luminance, Chrominance blue, Chrominance red),
Edge Detection, Grayscale.
I. INTRODUCTION
In general, fire accidents frequently cause economic and ecological damage and are life-changing. Numerous techniques have been developed to avoid fire disasters, most of which rely on particle sampling, temperature sampling, relative humidity sampling, aeration tests, smoke analysis, and traditional ultraviolet sensing. In addition to infrared-based approaches, fire sensors are also used for detecting fire. However, these sensors need to be located near the fire; otherwise they cannot give accurate information about the combustion process, such as fire position, size, growth rate, etc. The vision-based approach is becoming more and more attractive for providing more dependable information about fires. With the rapid development of digital camera technology and of content-based video processing, more and more image-based fire detection systems are being introduced. Vision-based systems generally take advantage of three distinctive features of fire: its color, movement, and shape. Color information is used as an initial step to detect fire and smoke, and numerous fire alarm systems rely on it. With the quick development of digital camera and video fire detection technology, there is a strong trend toward replacing traditional fire detection methods with digital computer-vision-based systems. Commonly, these systems comprise three main phases: fire pixel classification, segmentation of objects in motion, and candidate area examination. The candidate-area examination is generally based on two figures of merit: the shape of the area and the change of the area over time. Fire detection performance depends heavily on the effectiveness of the fire pixel classifier, which produces the seed area for every other part of the system. Thus, the fire pixel classifier should have a very high detection rate and, preferably, a low false alarm rate. Several algorithms in the literature deal directly with the classification of fire pixels, which can be performed on both grayscale and color video sequences. Most of the work on classifying fire pixels in color video sequences is rule-based. Chen et al. [1] developed a set of rules for classifying fire pixels using raw R, G, and B color information, which forms the basis of the RGB color model. Another existing algorithm, by Kumarguru Poobalan and Siau-Chuin Liew [13], converts the RGB image into an HSV (Hue, Saturation, Value) image, which is then combined with the output of the Sobel edge operator to obtain the fire-detected output. Rather than using rule-based color models such as the fast and efficient image-processing-based fire detection method of Turgay Celik [12], Töreyin et al. use a mixture-of-Gaussians analysis in RGB space obtained from a training set of fire pixels. Many such methods have been developed based on the color analysis of the image. These are not always dependable, as they risk detecting other objects whose pixel values are similar to those of fire pixels. There is therefore a serious need for an algorithm that also detects the direction in which fire scatters as its intensity grows.
This paper presents a fire discernment algorithm that is based entirely on the analysis of light present in the image. The brightness of the image and its Y component in the YCbCr color model, i.e., the luminance, are used to detect the fire and the growth of its intensity in the surrounding environment. The proposed model not only detects the exact location of the fire but also gives information about the intensity, direction, and rapid growth of the fire in its surroundings.
The proposed method makes the following contributions:
• Existing methods can detect the region of interest of fire, but they also flag pixels whose intensities are equal or close to the intensity of fire pixels, so objects that are not fire are detected as fire.
• The existing methods use the RGB color model, in which the color values are based only on positive pixel intensity values.
• The proposed algorithm instead uses the YCbCr color model, which takes both positive and negative intensity values into consideration, unlike the RGB color model.
• In the RGB color model, the brightness varies as the pixel intensities change. In the YCbCr color model, the brightness does not change with the pixel intensities, because brightness is a separate layer, independent of the layers responsible for color in YCbCr.
• The proposed algorithm also highlights the growth in intensity and the direction of the fire spread, unlike the other existing fire detection models.
II. RELATED WORK
Narendra Ahuja, Che-Bin Liu, Glenn Healey, Ted Lin, Ben Drda, David Slater, and A. Donald Goedeke showed that fire detection based on the temporal, spectral, and spatial properties of fire events is more accurate, and they developed algorithms for automated fire detection using various videos as inputs. However, spatial quantization errors are likely to occur, which in turn cause noise [2], [3]. Liyang Yu, Xiaoqiao Meng, and Neng Wang developed a neural-network scheme applied to in-network processing, collecting and processing sensor data to detect and predict fires in their early stages. This method is mainly used for the detection of forest fires; its major drawback is that the forest fire itself may damage the sensor equipment [4]. Khan Muhammad, Jamil Ahmad, Paolo Bellavista, Po Yang, and Sung Wook Baik proposed an economical CNN (Convolutional Neural Network) architecture for fire detection in surveillance videos. Their model is inspired by the GoogLeNet and SqueezeNet architectures because of their modest computational complexity and suitability, and it can be fine-tuned on fire data and the target for better efficiency and accuracy. The main disadvantage of this model is that it requires more memory and more computational time [5], [16]. Thou-Ho Chen, Sju-Mo Chang, and Cheng-Liang Kao proposed a two-stage decision strategy for real-time fire detection. The first stage obtains fire pixels from the visually represented images, if any fire is present. If the obtained fire pixels keep growing over time, then in the second stage an alarm is sounded to avoid a fire accident. However, there may sometimes be a threat of false detection of fire, which in turn leads to false alarms [6]. Luis Merino, J. R. Martinez-de Dios, Fernando Caballero, and Anibal Ollero, together with Junguo Zhang, Zhongxing Yin, Xiaolin Guo, Wenbin Li, and Shengbo Liu, proposed an interesting model for fire detection using a group of fast-moving heterogeneous Unmanned Aerial Vehicles (UAVs). Fires are detected and localized in infrared and local images by various computer vision techniques. The UAVs carry on-board sensors and cameras whose data are also used for fire detection. However, data collection depends entirely on the capabilities of the UAVs, and high-end UAVs are not affordable in all scenarios [7], [9], [18]. Sen Li, Long Shi, Shuyan Wang, Chunyong Feng, and Dan Zhang considered two conditions for the detection of fire: the lighting in the area of the fire outbreak, and the observation that smoke particles are always black while the remaining elements in the air under other weather conditions are commonly in shades of white. They developed a method based on the MSR (multi-scale Retinex) model, called GL-MSR, a fast image restoration method used to increase fire detection accuracy even in complicated situations. This method was developed to remove the smoke that remains after a fire, but it does not detect fire itself [8]. Turgay Celik, Hasan Demirel, and Huseyin Ozkaramanli, together with Jareerat Seebamrungsat, Panomkhawn Riyamongkol, and Suphachai Praising, along with S. Noda and K. Ueda, used different color models such as gray models, RGB, YCbCr, HSV, and the CIE L*a*b* space for the statistical study of samples taken from various images. Their proposed models combine this sample color information with motion analysis. These methods detect only the region of interest of the fire and not its growing intensity, unlike the proposed method [1], [10], [12], [17], [14]. Kumarguru Poobalan and Siau-Chuin Liew proposed a model that uses the RGB color model to find the color of the fire, concentrating mainly on the intensity of the red component. Sobel edge detection is then used to detect the growth of the fire. Finally, a color-based segmentation technique is applied to the results of the two previous steps to recognize the region of interest (ROI) where the fire is located. The drawback of this method is similar to the previous one: it detects only the ROI of the fire and not its growing intensity [13]. Suchet Rinsurongkawong, Matthew N. Dailey, and Mongkol Ekpanyapong developed an optical flow algorithm that can identify fire from an image taken by a monocular camera. This method uses filters to find the consistency of colors, removes the unwanted background, and thus identifies moving pixels more efficiently. The method is accurate but does not include any training of values [20].
III. EXISTING METHODS
Fig. 1. Existing Methods
An image is the combination of three layers, namely red,
green and blue, which together form the RGB image. The
most commonly used methodology for fire detection is the
RGB color model. This RGB model can further be used in
three different methods for fire detection. The first method
is to combine it with the HSV color model followed by edge
detection to detect fire. The second method is to convert it
into a Gray image and plot its histogram at different time
intervals for comparison. The third method is to convert the RGB image to a gray image and then perform edge detection on it, which gives a final output image that highlights only the fire. In this paper, the output of the proposed algorithm is compared with the outputs of the first and third methods to show its accuracy and advantages.
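As a point of reference, the first and third baseline pipelines described above can be sketched in a few lines of Python. The following is only an illustration, assuming OpenCV, a hypothetical input file fire.jpg, and threshold values chosen for demonstration rather than taken from the cited papers.

```python
import cv2
import numpy as np

# Hypothetical input image; replace with any test frame.
bgr = cv2.imread("fire.jpg")

# Existing method 1 (sketch): RGB -> HSV thresholding followed by edge detection.
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
# Illustrative hue/saturation/value range for flame-like colors (assumption).
mask = cv2.inRange(hsv, np.array((0, 120, 200)), np.array((35, 255, 255)))
edges_hsv = cv2.Canny(mask, 100, 200)

# Existing method 3 (sketch): RGB -> grayscale followed by edge detection.
gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
edges_gray = cv2.Canny(gray, 100, 200)

cv2.imwrite("edges_hsv.png", edges_hsv)
cv2.imwrite("edges_gray.png", edges_gray)
```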
Various other existing methods, such as GL-MSR and a novel color model, also give efficient outputs, but these algorithms are completely based on color analysis of the image. The GL-MSR method is a fast image restoration method and detects fire even in complex situations; its drawback is that it detects only the position of the fire, not its spread and intensity. The novel color model for fire detection is based on the CIE L*a*b* space. Convolutional algorithms, developed on the basis of the GoogLeNet and SqueezeNet architectures, can also give fire-detected outputs. This approach localizes and understands the schematic of the fire scene and consists of fully interlinked layers with small, simple convolutional kernels. Some methods record the spatial and temporal changes in fire properties to compare and detect fire. Another widely used algorithm is the optical flow algorithm, which finds the consistency of colors in the image using filters and removes the background for fire detection.
IV. PROPOSED METHOD
The proposed algorithm is completely based on light analysis; it takes the illumination of the scene into consideration. The word illumination means lighting. It is derived from the Late Latin word illuminare, which means to light up. The purpose of illumination is to achieve aesthetic effects of light. Illumination is obtained from natural daylight as well as from artificial sources such as lamps and light fixtures.
There are three types of illumination: ambient, directional, and spotlight. Ambient illumination is intended to light up the entire surroundings; it gives a consistent level of light throughout the scene, independent of other light sources. Directional illumination represents light that travels in a specific direction, whereas spotlight illumination refers to the angle of incidence of the light, which can range from short to long depending on the available intensity and beam spread.
The main advantage of good illumination is that it saves energy, especially when natural daylight is compared with artificial lighting: by using natural daylight, the energy used by artificial lighting can be conserved. Proper lighting (or illumination) also improves task performance, has positive psychological effects on people, and enhances the appearance of a place.
The proposed algorithm is as follows:
Fig. 2. Proposed method flowchart
The algorithm for the proposed method is as follows (a code sketch of these steps is given after the list):
Input: Input is the data set X(k) of images of diverse types.
Output: The output is obtained by the following steps.
1. Convert the RGB image to the YCbCr color model.
2. Extract the Y component (luminance) from the YCbCr image:
   Y' = K_R * R' + K_G * G' + K_B * B'
3. Calculate Ymean:
   Ymean(x, y) = (1 / (a * b)) * Σ_{x=1}^{a} Σ_{y=1}^{b} Y(x, y)
4. Subtract the calculated Ymean from the original RGB image.
5. Increase the brightness of the output obtained from (4) to detect the intensity of fire.
6. Apply edge detection to the output of (5) to get the fire location accurately.
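The following is a minimal Python sketch of steps 1-6, assuming OpenCV and NumPy; the input file name, the brightness offset, and the Sobel kernel size are illustrative assumptions, not values prescribed by the paper.

```python
import cv2
import numpy as np

# Read the input image (hypothetical file name); OpenCV loads it as BGR.
bgr = cv2.imread("fire.jpg")

# Steps 1-2: convert to YCbCr (OpenCV orders the channels as Y, Cr, Cb) and take Y.
ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
Y = ycrcb[:, :, 0].astype(np.float32)

# Step 3: mean luminance over all a*b pixels.
y_mean = Y.mean()

# Step 4: subtract the mean luminance from the original image.
residual = np.clip(bgr.astype(np.float32) - y_mean, 0, 255)

# Step 5: raise the brightness of the residual (the +60 offset is an illustrative choice).
brightened = np.clip(residual + 60, 0, 255).astype(np.uint8)

# Step 6: Sobel edge detection on the brightened result.
gray = cv2.cvtColor(brightened, cv2.COLOR_BGR2GRAY)
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
edges = np.sqrt(gx ** 2 + gy ** 2)

cv2.imwrite("fire_intensity.png", brightened)
cv2.imwrite("fire_edges.png", np.clip(edges, 0, 255).astype(np.uint8))
```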
Fig. 2 above shows the flow of the proposed model. First, the RGB image is converted to YCbCr and the Y (luminance) component is extracted; the mean of this luminance is then subtracted from the original image. After that, the brightness of the luminance-subtracted image is increased, and from this the intensity of the fire is detected. Finally, edge detection is performed on the output. The whole process is explained in detail below.
A pixel in a grayscale image has only one property, i.e., brightness, which is usually represented as a number ranging from 0 to 255. Zero represents black and 255 represents white; all remaining values in the range represent shades of gray.
It is different in the case of color images, where each pixel incorporates another property: its color. The most common color model is the RGB color model, where RGB stands for red, green, and blue. Unlike a gray-image pixel, which has a single brightness value, an RGB pixel has three values: the brightness of its red, green, and blue components. The RGB image can therefore be sliced into three layers, one each for the red, green, and blue components; these layers are also called channels. The values of these three layers in a pixel determine the color of that particular pixel, and varying these values changes the color of the pixel.
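As a small illustration of this layered structure, the three channels of an RGB array can be sliced apart with NumPy (the pixel values below are arbitrary examples):

```python
import numpy as np

# A tiny 2x2 RGB image: each pixel holds three brightness values (R, G, B).
rgb = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [128, 128, 128]]], dtype=np.uint8)

red, green, blue = rgb[:, :, 0], rgb[:, :, 1], rgb[:, :, 2]
print(red)    # the red layer (channel) of every pixel
print(green)  # the green layer
print(blue)   # the blue layer
```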
But RGB is not always a useful format, because as the pixel values decrease the brightness of the image also decreases. To overcome this drawback, another color model, the YCbCr color model, was introduced.
The YCbCr color model also has three layers, similar to RGB: Y represents the luminance (overall brightness) of the pixel, Cb the blue-difference chrominance, and Cr the red-difference chrominance. Cb and Cr can be expressed mathematically as
Cb = B - Y
where B represents the blue layer of the RGB image, and
Cr = R - Y
where R stands for the red component of the RGB image.
In the digital domain, Cb and Cr are the chrominance values that carry the color information of the pixel. They are represented on the chrominance plane shown in fig. 3(a). Moving from the center toward the top the color becomes redder, moving to the right it becomes bluer, and green appears when the values are negative, i.e., when moving from the center toward the bottom of the chrominance plane. So, unlike RGB, YCbCr is determined by both positive and negative values.
Fig. 3. (a) Chrominance plane of Cb and Cr (b) Representation of the RGB color model
Fig. 3 shows the YCbCr chrominance plane on the left and the RGB color-model representation on the right. They illustrate how the values in YCbCr run from negative to positive, while RGB pixel values range from 0 to 255, as on the grayscale.
YCbCr signals are obtained from the gamma-adjusted RGB (red, green, and blue) source using pre-defined constants K_R, K_G, and K_B:
Y' = K_R * R' + K_G * G' + K_B * B'
where K_R, K_G, and K_B are ordinarily derived from the definition of the corresponding RGB space and are used as such in the proposed method.
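As one concrete choice of constants (an assumption, since the section does not fix K_R, K_G, and K_B), the ITU-R BT.601 luma weights give the following sketch of the conversion; note that standard YCbCr additionally scales the Cb and Cr differences, which the text above leaves unscaled.

```python
# Luma weights from ITU-R BT.601 (an assumed choice; other standards differ).
KR, KG, KB = 0.299, 0.587, 0.114

def rgb_to_ycbcr(r, g, b):
    """Y', Cb, Cr from gamma-corrected R', G', B' as defined in this section."""
    y = KR * r + KG * g + KB * b   # Y' = K_R*R' + K_G*G' + K_B*B'
    cb = b - y                     # Cb = B - Y (standard YCbCr also scales this term)
    cr = r - y                     # Cr = R - Y (likewise usually scaled)
    return y, cb, cr

print(rgb_to_ycbcr(255, 128, 0))   # a bright, flame-like orange pixel
```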
The above procedure is used to find the growing intensity of the fire and also to detect the fire itself, with the YCbCr color model as the main tool. Initially the original RGB image is converted into a YCbCr image, and then Ymean is found for the luminance (Y) component. This can be expressed mathematically as follows.
Consider a matrix M of size m x n containing the luminance values of the image, for instance:

M = [ Y_11 ... Y_1n
      ...       ...
      Y_m1 ... Y_mn ]

The first step in calculating Ymean is to add the matrix values column-wise to get a row matrix as shown below:

Z = [ Σ_{i=1}^{m} Y_i1   Σ_{i=1}^{m} Y_i2   ...   Σ_{i=1}^{m} Y_in ]

The next step is to find the total of the row matrix:

total = Σ_{i=1}^{m} Y_i1 + Σ_{i=1}^{m} Y_i2 + ... + Σ_{i=1}^{m} Y_in

The size of the matrix is defined as

[ a b ] = size of M

The final step in calculating Ymean is to divide the total by the size of the matrix M:

Ymean = total / (a * b)

Ymean can also be written as a general formula:

Ymean(x, y) = (1 / (a * b)) * Σ_{x=1}^{a} Σ_{y=1}^{b} Y(x, y)
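The same derivation can be checked numerically. The short NumPy sketch below follows the column-sum, total, and division steps and confirms that they reduce to a plain mean of the luminance matrix; the sample values are arbitrary.

```python
import numpy as np

# Example luminance matrix M of size a x b (values are arbitrary).
M = np.array([[200.0,  40.0,  10.0],
              [180.0,  60.0,  20.0]])
a, b = M.shape                 # [a b] = size of M

Z = M.sum(axis=0)              # column-wise sums -> row matrix Z
total = Z.sum()                # total of the row matrix
y_mean = total / (a * b)       # Ymean = total / (a * b)

assert np.isclose(y_mean, M.mean())   # identical to the direct mean
print(y_mean)
```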
The Ymean thus obtained is subtracted from the pixels of the original image. The resulting output is then brightened in the local areas where fire is present; the output of this step gives an idea of the intensity and also of the direction of growth of the fire. Finally, edge detection is applied to this output. The edge detection uses a Sobel filter, which generates an image that emphasizes edges. The Sobel operator makes use of two 3x3 matrices that are convolved with the original image to obtain derivative estimates: the first matrix captures the horizontal changes and the other the vertical changes. If A is the original image, Gx and Gy are respectively the horizontal and vertical masking results used as derivative approximations. The masking is done as follows:
Gx = [ -1 -2 -1
        0  0  0
        1  2  1 ] * A

Gy = [ -1  0  1
       -2  0  2
       -1  0  1 ] * A
The resulting edge-detected image is obtained by combining the two gradient approximations using the Pythagorean relation:

G = sqrt(Gx^2 + Gy^2)

This output highlights the detected fire and shows the region of interest of the fire with greater accuracy.
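A compact sketch of this Sobel step, assuming SciPy's ndimage convolution and the two masks given above, might look as follows; the function name is only illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

def sobel_magnitude(A):
    """Edge magnitude of a 2-D grayscale array A using the two 3x3 Sobel masks."""
    gx_mask = np.array([[-1, -2, -1],
                        [ 0,  0,  0],
                        [ 1,  2,  1]], dtype=float)   # horizontal changes
    gy_mask = np.array([[-1,  0,  1],
                        [-2,  0,  2],
                        [-1,  0,  1]], dtype=float)   # vertical changes
    Gx = convolve(A.astype(float), gx_mask)
    Gy = convolve(A.astype(float), gy_mask)
    return np.sqrt(Gx ** 2 + Gy ** 2)                 # G = sqrt(Gx^2 + Gy^2)
```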
V. SIMULATIONS
The performance of the proposed algorithm is tested on various sequences of images and compared with the outputs of the existing methods for the same images. All simulations are performed in MATLAB R2015b. As mentioned above, the proposed method shows both the growing intensity of the fire and its location.
In fig. 4 and 5, column(a) represents the original images
considered for the simulations. Histograms for these
original images are plotted and shown in fig. 4 and 5
column(b). The color intensities of the original image
pixels can be observed with the help of these histograms.
Edge detection is performed on fig. 4 column(a) images,
and its output is shown in fig. 4 column(c). However, it is
observed that in fig. 4 column(c) fire is partially detected.
When the original images are converted to HSV images followed by edge detection, the accuracy improves somewhat; the outputs of this method are shown in fig. 4 column (d). However, fire is still not detected accurately by either of these two existing methods.
The proposed algorithm overcomes this drawback. It
detects the fire more accurately. Initially, the luminance component is extracted from the YCbCr image, and the mean of the luminance in the image is subtracted from the original image. Later on, the brightness of the image is
increased. This gives the output as shown in fig. 5
column(c). The growing intensity of fire can be seen in
these outputs. The direction and the area onto which the fire
can spread can also be observed in fig.5 column(c).
Applying Edge detection to these obtained outputs gives
the edge detected output images as shown in fig.5
column(d). It is observed that these outputs are more
accurate than the existing method outputs. On comparing
the simulations from fig. 4 column(c), fig. 4 column(d), and
fig.5 column(d), it can be observed that the proposed
algorithm is more accurate in terms of detecting fire.
A. Existing Method Simulations:
By observing fig. 5(I(a)) and fig. 5(I(c)), the intensity and direction of the fire can be observed. Similarly, in all the simulations of the proposed algorithm, the intensity and direction of the fire can be observed, as shown in fig. 5 column (c). Using the proposed algorithm, fire under smoke can also be detected, as shown in fig. 5(VIII(c)), fig. 5(IX(c)), and fig. 5(X(c)).
The intensity of the pixels of the original image and of the edge-detected image can be observed by plotting their histograms. The histograms plotted for the original images are shown in fig. 4 column (b) and fig. 5 column (b). The histograms of the edge-detected images of fig. 4 columns (c) and (d) are shown in fig. 4 column (e), and the histograms for fig. 5 column (c) are shown in fig. 5 column (e). It can be seen in fig. 5 column (e) that the intensity of the red color has increased when compared with the intensity of red in fig. 5 column (b). This is because the intensity of the fire increased due to the increase in brightness of the original image. The histogram values change according to the intensity of the fire in the original image, and the bar graph reaches its maximum where the intensity of the fire pixels is maximum.
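The per-channel histograms referred to here can be reproduced with a short Matplotlib sketch, assuming an 8-bit image read with OpenCV and a hypothetical file name.

```python
import cv2
from matplotlib import pyplot as plt

bgr = cv2.imread("fire.jpg")                 # hypothetical test image
for i, color in enumerate(("b", "g", "r")):  # OpenCV stores channels as B, G, R
    hist = cv2.calcHist([bgr], [i], None, [256], [0, 256])
    plt.plot(hist, color=color)
plt.xlabel("Pixel intensity (0-255)")
plt.ylabel("Pixel count")
plt.show()
```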
Fig. 4. (a) Original image (b) Histogram of the original image (c) Fire detected by existing method [12] (d) Fire detected in the original image by existing method [13] (e) Histogram of the fire-detected image
B. Proposed Method Simulations:
Fig. 5. (a) Original image (b) Histogram of the original image (c) Proposed method result (d) Fire detected using the proposed algorithm (e) Histogram of the fire-detected image
TABLE I. COMPARING ENTROPY, PSNR, AND MSE VALUES OF THE EXISTING METHOD AND THE PROPOSED METHOD

Image | Entropy (Existing) | Entropy (Proposed) | PSNR (Existing) | PSNR (Proposed) | MSE (Existing) | MSE (Proposed) | Δ = Entropy*PSNR (Existing) | Δ = Entropy*PSNR (Proposed)
I     | 1.88 | 4.96 | 77.09 | 26.2  | 0.00126 | 155.98 | 144.92 | 129.95
II    | 1.12 | 4.8  | 73.44 | 28.15 | 0.00294 |  99.45 |  82.25 | 135.12
III   | 0.18 | 4.06 | 77.19 | 27.99 | 0.00124 | 103.15 |  13.89 | 113.63
IV    | 0.23 | 4.39 | 87.96 | 26.43 | 0.0001  | 147.86 |  20.23 | 116.02
V     | 2.07 | 5.32 | 86.74 | 26.1  | 0.00013 | 159.38 | 179.55 | 138.85
VI    | 1.29 | 3.9  | 75.9  | 28.38 | 0.00166 |  94.39 |  97.91 | 110.68
VII   | 2    | 4.2  | 75.96 | 26.72 | 0.00764 | 138.17 | 151.92 | 112.22
VIII  | 4.88 | 5.19 | 84.09 | 26.93 | 0.00025 | 131.83 | 410.35 | 139.76
IX    | 1.14 | 4.37 | 82.98 | 26.13 | 0.00032 | 151.18 |  94.59 | 114.18
X     | 0.33 | 4.88 | 79.91 | 26.43 | 0.00066 | 147.67 |  26.37 | 130.85
In an image, entropy is a measure of the number of bits required to encode the image information: the higher the entropy, the more detailed the image. PSNR and MSE are two parameters used to measure the quality of an image. The peak signal-to-noise ratio (PSNR) and the mean squared error (MSE) are inversely related: high PSNR and low MSE correspond to high-definition images with clear backgrounds. In the existing method, the PSNR is high and the MSE is low for all images, as shown in Table 1. Because of this, the surroundings of the fire are also detected, since the background is clear and has intensities close to those of the fire. In the proposed method, by contrast, the PSNR values are low and the MSE values are high; this allows the background to be suppressed and only the regions containing fire to be highlighted. These highlighted fire regions are then enhanced to show the exact location of the fire and its growing intensity. This enhancement is reflected in the entropy values in Table 1: the entropy values of the proposed method are higher than those of the existing method, which gives the proposed method an advantage in detecting fire accurately. As the table shows, the Δ (Entropy*PSNR) values of the existing method differ greatly between images, whereas in the proposed method the Δ values remain nearly constant. Because of these differences in Δ for the existing method, its detection of fire varies from image to image. For example, for image I in Table 1 the Δ value is high, and because of this the surroundings of the fire are also detected, as they have intensities close to those of the fire; this is shown in fig. 4(I(d)). For image X the Δ value of the existing method is very low, so the fire is not fully detected and instead the background of the fire is detected, as shown in fig. 4(X(d)). In the proposed method the fire is accurately detected, as can be seen in fig. 5(X(d)). Owing to the large differences in the Δ values across images, the existing methods either detect the fire together with surroundings of similar intensity or fail to detect the fire in some places. In the proposed algorithm, since the Δ values do not differ much between images, the fire is detected at the same rate for all images: in the existing methods the fire is not fully detected in some environments, whereas in the proposed method the fire is completely detected across different environments.
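For completeness, a sketch of how the three table metrics could be computed for an original/processed image pair is given below, assuming 8-bit inputs; the exact MATLAB routines used to produce Table I are not specified here, so these NumPy definitions are only a stand-in.

```python
import numpy as np

def image_metrics(original, processed):
    """Entropy of the processed image plus MSE/PSNR against the original (8-bit arrays)."""
    # Shannon entropy: average number of bits needed to encode one pixel.
    hist, _ = np.histogram(processed, bins=256, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))

    # Mean squared error and peak signal-to-noise ratio (inversely related).
    mse = np.mean((original.astype(float) - processed.astype(float)) ** 2)
    psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
    return entropy, psnr, mse
```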
VI. CONCLUSION
Recent advances in surveillance systems and fire detection algorithms have made it much easier to detect unusual events such as smoke and fire. Fire, being one of the most hazardous disasters, should be handled and controlled at an early stage. Not only detecting fire but also determining its intensity and the direction in which it is growing is an important task. All the existing methods are based on color analysis and detect only the areas affected by the fire: they highlight only the area where there is fire and do not consider the surroundings being affected by the outbreak, which is their main disadvantage. The proposed algorithm addresses these issues. It is based entirely on light analysis, and its main characteristic is that it detects the direction and growing intensity of fire simply by taking the luminance and brightness of the image into consideration. The method works very efficiently and gives accurate output.
REFERENCES
[1] Thou-Ho Chen, Ping-Hsueh Wu and Yung-Chuen Chiou, "An early fire-detection method based on image processing," 2004 International Conference on Image Processing, 2004.
[2] Che-Bin Liu and N. Ahuja, "Vision based fire detection," Proceedings of the 17th International Conference on Pattern Recognition, 2004.
[3] G. Healey, D. Slater, T. Lin, B. Drda and A. D. Goedeke, "A system for real-time fire detection," Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 1993.
[4] Liyang Yu, Neng Wang and Xiaoqiao Meng, "Real-time forest fire detection with wireless sensor networks," Proceedings 2005 International Conference on Wireless Communications, Networking and Mobile Computing, 2005.
[5] K. Muhammad, J. Ahmad, I. Mehmood, S. Rho and S. W. Baik, "Convolutional Neural Networks Based Fire Detection in Surveillance Videos," IEEE Access, vol. 6, pp. 18174-18183, 2018.
[6] Thou-Ho Chen, Cheng-Liang Kao and Sju-Mo Chang, "An intelligent real-time fire-detection method based on video processing," IEEE 37th Annual 2003 International Carnahan Conference on Security Technology, 2003.
[7] L. Merino, F. Caballero, J. R. Martinez-de Dios and A. Ollero, "Cooperative Fire Detection using Unmanned Aerial Vehicles," Proceedings of the 2005 IEEE International Conference on Robotics and Automation, 2005.
[8] S. Li, S. Wang, D. Zhang, C. Feng and L. Shi, "Real-time smoke removal for the surveillance images under fire scenario," Signal, Image and Video Processing, 2019.
[9] J. Zhang, W. Li, Z. Yin, S. Liu and X. Guo, "Forest fire detection system based on wireless sensor network," 2009 4th IEEE Conference on Industrial Electronics and Applications, 2009.
[10] T. Çelik, H. Özkaramanlı and H. Demirel, "Fire and smoke detection without sensors: Image processing based approach," 2007 15th European Signal Processing Conference, 2007.
[11] P. V. K. Borges and E. Izquierdo, "A Probabilistic Approach for Vision-Based Fire Detection in Videos," IEEE Transactions on Circuits and Systems for Video Technology, vol. 20, no. 5, pp. 721-731, May 2010.
[12] T. Celik, "Fast and efficient method for fire detection using image processing," ETRI Journal, vol. 32, no. 6, pp. 881-890, 2010.
[13] K. Poobalan and S. C. Liew, "Fire detection algorithm using image processing techniques," Proceedings of the 3rd International Conference on Artificial Intelligence and Computer Science (AICS2015), pp. 160-168, October 2015.
[14] S. Noda and K. Ueda, "Fire detection in tunnels using an image processing method," Proceedings of VNIS'94 - 1994 Vehicle Navigation and Information Systems Conference, 1994.
[15] C. Cheng, F. Sun and X. Zhou, "One fire detection method using neural networks," Tsinghua Science and Technology, vol. 16, no. 1, pp. 31-35, Feb. 2011.
[16] K. Muhammad, J. Ahmad, Z. Lv, P. Bellavista, P. Yang and S. W. Baik, "Efficient Deep CNN-Based Fire Detection and Localization in Video Surveillance Applications," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 49, no. 7, pp. 1419-1434, July 2019.
[17] J. Seebamrungsat, S. Praising and P. Riyamongkol, "Fire detection in the buildings using image processing," 2014 Third ICT International Student Project Conference (ICT-ISPC), 2014.
[18] C. Yuan, Z. Liu and Y. Zhang, "UAV-based forest fire detection and tracking using image processing techniques," 2015 International Conference on Unmanned Aircraft Systems (ICUAS), 2015.
[19] G. M. Abdulsahib and O. I. Khalaf, "An improved algorithm to fire detection in forest by using wireless sensor networks," International Journal of Civil Engineering & Technology (IJCIET), vol. 9, no. 11, pp. 369-377, 2018.
[20] S. Rinsurongkawong, M. Ekpanyapong and M. N. Dailey, "Fire detection for early fire alarm based on optical flow video processing," 2012 9th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology, 2012.