REMOVING RAIN STREAKS FROM MULTIPLE IMAGES
Yugashini. K1, Viswabharathy A. M2
1. PG Scholar, Department of Computer Science, KSR COLLEGE OF ENGINEERING, Tiruchengode, Tamil Nadu.
2. Assistant Professor, Department of Computer Science, KSR COLLEGE OF ENGINEERING, Tiruchengode.
E-mail: kyugashini@gmail.com
ABSTRACT
Removing rain streaks from multiple images is a demanding task. Using the morphological component analysis (MCA) algorithm, rain streaks can be removed easily from a single image, but a clear image is not attained. In this paper a total variation (TV) algorithm is used to remove the rain streaks from multiple images. It is based on K-means clustering and a chromatic constraint, and is associated with a TV model whose regularization parameter varies spatially. The automatic selection of the regularization parameter is based on local statistical characteristics of a random variable, and the corresponding sub-problem can be solved efficiently by the augmented Lagrangian method. The TV algorithm is able to preserve fine picture details while the rain streaks in homogeneous regions are removed sufficiently. As a consequence, the method yields better rain-streak removal results than current state-of-the-art methods with respect to SNR values.
1 INTRODUCTION
Adverse weather conditions such as rain and snow cause complex photometric effects in images captured by a camera. Dynamic weather is commonly sub-divided into rain and snow; a parametric model of a camera capturing falling rain, together with a physics-based motion-blur model, characterizes the photometry of rain. This paper is among the first to specifically address the problem of removing rain streaks from multiple images.

Rain streaks can be removed from a single image using the morphological component analysis (MCA) algorithm. With the help of a bilateral filter, the image is smoothed and decomposed into low-frequency (LF) and high-frequency (HF) parts; the HF part is then decomposed into a rain component and a non-rain component, and the rain streaks are removed from the single image, but the clarity of the result is not good. In this paper a total variation (TV) algorithm is used to remove rain streaks from multiple images; it operates on the basis of K-means clustering and a chromatic constraint. By this method the rain streaks are removed easily from multiple images and a clear image is attained.

This paper is organized as follows: Section 2 introduces the existing system, Section 3 presents the proposed system, and Section 4 concludes the paper.
2 EXISTING SYSTEM
The key idea of MCA is to utilize the morphological diversity of the different features contained in the data to be decomposed and to associate each morphological component with a dictionary of atoms. In the image decomposition step, a dictionary learned from training exemplars extracted from the HF part of the image itself is divided into two sub-dictionaries by performing HOG feature-based dictionary-atom clustering. Sparse coding based on the two sub-dictionaries is then performed to achieve MCA-based image decomposition, where the geometric component of the HF part is obtained and integrated with the LF part of the image to obtain the rain-removed version of the image.

Traditional MCA algorithms are all performed directly on an image in the pixel domain. However, it is typically not easy to decompose an image directly into its geometric and rain components in the pixel domain, because the geometric and rain components are usually largely mixed in a rain image. This makes it difficult for the dictionary learning process to clearly identify the "geometric (non-rain) atoms" and the "rain atoms" from pixel-domain training patches with mixed components, and may lead to removing too much image content that belongs to the geometric component but is erroneously classified as rain. Therefore, a rain image is first roughly decomposed into its LF and HF parts. The most basic information of the image is retained in the LF part, whereas the rain component and the other edge/texture information are mainly included in the HF part. The decomposition problem is thus converted into decomposing the HF part into the rain component and the other textural components. Such a decomposition aids the dictionary learning process, as it is easier in the HF part to classify "rain atoms" and "non-rain atoms" into two clusters based on specific characteristics of rain streaks.

Removing the rain streaks with the bilateral filter alone gives some accuracy, but the MCA algorithm removes rain streaks from a single image only.
2.1 IMAGE DECOMPOSITION USING MCA

MCA assumes that an image $G$ of $M$ pixels is a superposition of $S$ layers, denoted by $G=\sum_{s=1}^{S}G_s$, where $G_s$ denotes the $s$th morphological component and each component can be sparsely represented by its own dictionary, independently of the others. To decompose an image into geometric and textural components (Fig. 2.1), the wavelet and the curvelet have been used for representing the geometric component, whereas the global discrete cosine transformation (DCT) and the local DCT have been used for representing the textural component of an image. In the local case, the dictionary represents the sparse coefficients of patches extracted from the image; the dictionary selection and the related parameter settings lead to different kinds of image decomposition. To decompose the image $G$ into $\{G_s\}_{s=1}^{S}$, MCA algorithms iteratively minimize the energy function:
$$E\bigl(\{G_s\}_{s=1}^{S},\{\theta_s\}_{s=1}^{S}\bigr)=\frac{1}{2}\Bigl\|G-\sum_{s=1}^{S}G_s\Bigr\|_2^2+\tau\sum_{s=1}^{S}E_s(G_s,\theta_s)$$
where $\theta_s\in\mathbb{R}^{M_s}$ denotes the sparse coefficients corresponding to $G_s$ with respect to the dictionary $D_s$, $\tau$ is a regularization parameter, and $E_s$ is the energy term associated with the dictionary $D_s$.
Fig. 2.1 (i) Structure image, (ii) texture image.
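To make the decomposition concrete, the following is a minimal MCA-style sketch in Python, not the authors' implementation: it assumes a grayscale float image and stands in for the learned sub-dictionaries with two fixed transforms, a wavelet dictionary for the geometric layer and a global DCT dictionary for the textural layer, alternating soft-thresholding with a decreasing threshold to minimize an energy of the form above.

import numpy as np
import pywt
from scipy.fft import dctn, idctn

def soft(x, t):
    # soft-thresholding operator associated with the sparsity penalty
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def mca_decompose(G, n_iter=50, wavelet="db4", level=3):
    # G: grayscale image as a float array; returns (geometric, textural) layers
    geom = np.zeros_like(G, dtype=float)
    tex = np.zeros_like(G, dtype=float)
    lam_max = np.abs(dctn(G, norm="ortho")).max()   # starting threshold (illustrative)
    lam_min = 0.01 * lam_max
    for k in range(n_iter):
        t = lam_max + (lam_min - lam_max) * k / max(n_iter - 1, 1)
        # update the textural layer: threshold the residual in the DCT domain
        resid = G - geom
        tex = idctn(soft(dctn(resid, norm="ortho"), t), norm="ortho")
        # update the geometric layer: threshold the residual's wavelet detail bands
        resid = G - tex
        coeffs = pywt.wavedec2(resid, wavelet, level=level)
        coeffs = [coeffs[0]] + [tuple(soft(d, t) for d in band) for band in coeffs[1:]]
        geom = pywt.waverec2(coeffs, wavelet)[:G.shape[0], :G.shape[1]]
    return geom, tex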
2.2 SPARSE CODING AND
DICTIONARY LEARNING
Sparse coding is based on a linear generative model, in which the input is approximated by a linear combination of a small number of dictionary atoms. To sparsely represent each patch extracted from the textural component of the image, a dictionary $D$ containing the local structures of textures is constructed. Given a set of training exemplars $X_i$, $i=1,2,\dots,n$, extracted from that component, the dictionary is learned by solving the following optimization problem:

$$\min_{D\in\mathcal{D},\;\alpha\in\mathbb{R}^{K\times n}}\;\frac{1}{n}\sum_{i=1}^{n}\Bigl(\frac{1}{2}\bigl\|X_i-D\alpha_i\bigr\|_2^2+\lambda\|\alpha_i\|_1\Bigr)$$
where $\alpha_i$ denotes the sparse coefficients of $X_i$ with respect to $D$ and $\lambda$ is a regularization parameter. In the online dictionary learning algorithm, the sparse coding step is usually achieved via orthogonal matching pursuit (OMP). Finally, the image decomposition is obtained by the MCA algorithm.
The sparse coding technique identifies a small number of non-zero, or significant, coefficients, each corresponding to an atom of the dictionary. The MCA-based framework removes rain streaks using two local dictionaries learned from training patches extracted from the rain image, rather than a single global dictionary, because: i) the proportions of the rain component and the geometric component in a rain image cannot be assumed in advance in a global dictionary; ii) the geometric component is mixed with the rain streaks, so the image is segmented into local patches from which the rain patches are extracted and the rain atoms are self-learned; and iii) different local regions of the image exhibit different characteristics, so local-patch-based dictionary learning of rain atoms compares favourably with a global dictionary. Fig. 2.2(a) illustrates the sparse coding and Fig. 2.2(b) the dictionary learning.
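As an illustration, the following is a hedged scikit-learn sketch of this dictionary-learning and OMP sparse-coding step; the HF image here is a random placeholder, and the patch size, dictionary size and sparsity level are illustrative assumptions rather than the paper's settings.

import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import MiniBatchDictionaryLearning

hf_image = np.random.rand(128, 128)                  # placeholder for the HF part
patches = extract_patches_2d(hf_image, (8, 8), max_patches=2000)
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=1, keepdims=True)                   # remove each patch's DC component

dico = MiniBatchDictionaryLearning(
    n_components=128,                 # number of dictionary atoms K
    alpha=1.0,                        # lambda, the l1 regularization weight
    transform_algorithm="omp",        # sparse coding via orthogonal matching pursuit
    transform_n_nonzero_coefs=5,
)
dico.fit(X)
D = dico.components_                  # learned dictionary, one atom per row
alpha = dico.transform(X)             # OMP sparse codes alpha_i for every patch
reconstruction = alpha @ D            # D * alpha_i approximates each patch X_i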
Fig. 2.2 (a) Sparse coding, (b) dictionary learning.

2.3 RAIN STREAKS REMOVAL FRAMEWORK

The rain streak removal framework is formulated to remove the rain streaks from the decomposed single image. A bilateral filter is used to decompose the input image into LF and HF parts: the basic image information is obtained in the LF part, whereas the edge/texture information of the image may be included in the HF part. Dictionary-training exemplar patches are then extracted from the HF part of the image in order to perform the HOG feature-based dictionary-atom clustering.

2.4 BILATERAL FILTER

A bilateral filter is a non-linear, edge-preserving and noise-reducing smoothing filter; the smoothed image is shown in Fig. 2.4(ii)(b). The intensity value of every pixel in the image is replaced by a weighted average of the values of neighbouring pixels. The weights are based on a Gaussian distribution and depend not only on the Euclidean distance between pixels but also on the radiometric differences, so that sharp edges are preserved while weights are assigned to the neighbouring pixels. The bilateral filter is defined as:

$$I^{\mathrm{filtered}}(x)=\sum_{x_i\in\Omega}I(x_i)\,f_r\bigl(\|I(x_i)-I(x)\|\bigr)\,g_s\bigl(\|x_i-x\|\bigr)$$

where $I^{\mathrm{filtered}}$ is the filtered image, $I$ is the original input image to be filtered, $x$ are the coordinates of the current pixel to be filtered, $\Omega$ is the window centred at $x$, and both kernels are Gaussian functions: $f_r$ is the range kernel for smoothing differences in intensities and $g_s$ is the spatial kernel for smoothing differences in coordinates.

Fig. 2.4 (i) Working of the bilateral filter; (ii)(a) input image; (ii)(b) smoothed image.
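For illustration, here is a direct, unoptimized NumPy rendering of the bilateral filter defined above, including the usual normalization by the sum of the weights; the window radius and the sigma values are illustrative assumptions, and a fast equivalent is available as cv2.bilateralFilter in OpenCV.

import numpy as np

def bilateral_filter(I, radius=3, sigma_s=3.0, sigma_r=0.1):
    # I: 2-D grayscale image as a float array
    H, W = I.shape
    padded = np.pad(I, radius, mode="reflect")
    out = np.zeros_like(I)
    # spatial kernel g_s: fixed Gaussian over pixel offsets
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    g_s = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    for y in range(H):
        for x in range(W):
            window = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # range kernel f_r: Gaussian over intensity differences
            f_r = np.exp(-((window - I[y, x]) ** 2) / (2 * sigma_r**2))
            w = f_r * g_s
            out[y, x] = np.sum(w * window) / np.sum(w)  # normalized weighted average
    return out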
2.5 HISTOGRAM OF ORIENTED GRADIENT

The distribution of intensity gradients, or edge directions, is used to describe local object appearance and shape within the image. This is achieved by dividing the image into small connected regions, called cells, and collecting for each cell a histogram of the gradient directions of the pixels within the cell. To improve accuracy, the local histograms are contrast-normalized using a measure of the intensity over a larger region of the image, called a block, and this value is used to normalize all cells within the block; the normalization makes the result invariant to changes in illumination or shadowing. Because the HOG descriptor operates on local cells, the method is invariant to geometric and photometric transformations, except for object orientation; such changes appear only in larger spatial regions (Fig. 2.5).

Fig. 2.5 Histogram of Oriented Gradients.
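A minimal scikit-image sketch of computing such a HOG descriptor is given below; the file name and the cell/block sizes are illustrative assumptions, not values taken from the paper.

from skimage import color, io
from skimage.feature import hog

image = color.rgb2gray(io.imread("rain_frame.png"))   # hypothetical input frame
features, hog_image = hog(
    image,
    orientations=9,               # number of gradient-direction bins per cell
    pixels_per_cell=(8, 8),       # the small connected regions ("cells")
    cells_per_block=(2, 2),       # cells grouped into a block for contrast normalization
    block_norm="L2-Hys",
    visualize=True,               # also return an image of the oriented histograms
)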
3 PROPOSED SYSTEM

Recently, total variation (TV) models and filtering methods for removing rain streaks have been proposed, offering a significant improvement over earlier multiplicative models. In this work the TV algorithm is used to remove rain streaks from multiple images: rain streaks are removed easily by K-means clustering and a chromatic constraint along with the TV algorithm. Clustering is a classification technique used in image segmentation, in which similar data points are grouped together into clusters. The intensity histogram of a pixel in a video taken by a stationary camera exhibits two peaks, and the K-means clustering algorithm can be used to identify these two peaks. For each pixel in the image, its intensity over the entire video is collected to compute its intensity histogram, and the two initial cluster centres, one for the background and one for rain, are initialized to the smallest and the largest intensities of the histogram.

With the chromatic constraint, groups of pixels are separated. In this method the chromatic constraint applies not only to rain in focus but also to rain that is out of focus, so the chromatic constraint cannot distinguish between rain over gray regions and slight motion of gray regions. The colors of the rain pixels are replaced with the corresponding background colors found by K-means clustering and the total variation algorithm. Using Gaussian and dilation techniques, rain pixels are detected and removed easily; by this method rain streaks are removed from color images as well as grey-scale images. The total variation approach to rain streak removal was presented as a model that used a constrained optimization approach with two Lagrange multipliers. However, its fitting term is not convex, which leads to difficulties in using iterative regularization or the inverse scale space method. A logarithmic transformation on both sides converts the multiplicative problem into an additive one, and the relaxed inverse scale space (RISS) flows are then extended to the transformed additive problem. Numerical experiments have shown good rain streak removal.

3.1 K-MEAN CLUSTERING

Vector quantization is one of the processes underlying K-means clustering. Clustering is a classification technique used in image segmentation: similar data points are grouped together into clusters. It is a non-hierarchical method that begins with a number of partitions of the population equal to the number of clusters. The k-means function partitions the data into k mutually exclusive clusters and returns the index of the cluster to which it has assigned each observation. Unlike hierarchical clustering, k-means clustering operates on the actual observations (rather than the larger set of dissimilarity measures) and creates a single level of clusters. K-means uses an iterative algorithm that minimizes the sum of distances from each object to its cluster centroid, summed over all clusters; the clustering relies on a distance metric between data points. With $m_k$ the mean vector of the $k$th cluster and $N_k$ the number of observations in the $k$th cluster, the within-cluster scatter is:
$$W(C)=\frac{1}{2}\sum_{k=1}^{K}\sum_{C(i)=k}\sum_{C(j)=k}\bigl\|x_i-x_j\bigr\|^2=\sum_{k=1}^{K}N_k\sum_{C(i)=k}\bigl\|x_i-m_k\bigr\|^2$$
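The sketch below is one hedged Python reading of this proposed pipeline for a video of a static scene, not the authors' implementation: for each pixel, K-means with k = 2 on its intensity history separates the background peak from the brighter rain peak, rain-labelled samples are replaced by the background cluster mean, and TV regularization then smooths each frame. The frame shape, TV weight and per-pixel loop are illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans
from skimage.restoration import denoise_tv_chambolle

def remove_rain(frames):
    # frames: float array of shape (T, H, W) with values in [0, 1]
    T, H, W = frames.shape
    cleaned = frames.copy()
    for y in range(H):
        for x in range(W):
            hist = frames[:, y, x].reshape(-1, 1)
            # initial centres: smallest (background) and largest (rain) intensities
            init = np.array([[hist.min()], [hist.max()]])
            km = KMeans(n_clusters=2, init=init, n_init=1).fit(hist)
            background_mean = km.cluster_centers_.min()
            rain_label = int(np.argmax(km.cluster_centers_.ravel()))
            # replace rain-labelled samples of this pixel by the background mean
            cleaned[km.labels_ == rain_label, y, x] = background_mean
    # TV regularization removes residual streaks while preserving fine details
    return np.stack([denoise_tv_chambolle(f, weight=0.1) for f in cleaned])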
4 CONCLUSION
As a result, rain streaks are removed from multiple images, and clear color as well as grey-scale images are obtained. Instead of the MCA algorithm, the TV algorithm is used to remove rain streaks from multiple images, and it works with the help of K-means clustering and the chromatic constraint; the chromatic constraint changes the R, G, B values of rain-damaged pixels. The proposed algorithm is adopted for both light-rain and heavy-rain conditions to remove rain streaks. Hence the expected output, i.e. rain-streak-removed images, is obtained.
ACKNOWLEDGMENT
We wish to thank our department HOD, Mr. Rajivkannan.