www.ijecs.in International Journal Of Engineering And Computer Science ISSN:2319-7242

Volume 3 Issue 8 August, 2014 Page No. 7890-7894
Pyramid Based Image Fusion
Chhamman Sahu 1, Raj Kumar Sahur 2

1 ME Scholar, Dept. of ECE, Chhatrapati Shivaji Institute of Technology, Durg, Chhattisgarh, India
chhammansahu007@gmail.com

2 Associate Professor, Dept. of ECE, Chhatrapati Shivaji Institute of Technology, Durg, Chhattisgarh, India
rajkumarsao@gmail.com
Abstract: Many images are degraded by bad weather conditions such as smoke, fog, dust, and ash, which reduce image clarity. Image processing techniques improve the quality of an image and extract the maximum information from the degraded image. Image fusion is the process of combining two or more images into a single image; the method studied here is a fusion-based strategy that derives two inputs from a single hazy image by applying white-balance and contrast-enhancing procedures. The method operates in a pyramidal fashion, which is straightforward to implement, and the resulting image is clearer and more enhanced than the original. This paper reports a detailed study of a set of image fusion algorithms and their implementation, and demonstrates the utility and effectiveness of a fusion-based technique for dehazing a single degraded image.
Keywords: Single image dehazing, Weight maps, Fusion Method, Pyramid Technique.
1. Introduction
Images of outdoor scenes often contain haze, fog, or other
types of atmospheric degradation caused by particles in the
atmospheric medium absorbing and scattering light as it
travels from the source to the observer. Image fusion is a
mechanism to improve the quality of information from a set
of images: the good information from each of the given
images is fused together to form a resultant image whose
quality is superior to that of any of the input images. This is
achieved by applying a sequence of operators to the images
that makes the good information in each image prominent;
the resultant image is formed by combining this magnified
information from the input images into a single image [1].
The term fusion in general means integrating multi-sensor,
multi-view, or multi-focal information into a new image that
contains better-quality features and is more observable.
When used to combine the geometric detail of a
high-resolution image with the colour of a low-resolution or
multispectral (MS) image, image fusion is called
pan-sharpening. Image fusion finds application in various
fields such as satellite imaging, medical sciences, military
applications, digital cameras, and multi-focus imaging.
Image fusion methods can be broadly classified into
spatial-domain and transform-domain techniques. The
Brovey method, principal component analysis (PCA), IHS
(intensity-hue-saturation), and high-pass filtering methods
fall in the spatial domain. Spatial-domain fusion works by
combining the pixel values of two or more images; the
simplest approach is averaging the pixel values of the input
images. The wavelet transform [2] and the Laplacian
transform [1] belong to the transform domain. In
transform-domain methods, a multi-scale decomposition of
the images is computed and the composite image is
constructed using a fusion rule.
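The simplest spatial-domain scheme mentioned above, per-pixel averaging, can be sketched in a few lines of Python. This is a minimal illustration, assuming registered grayscale images stored as lists of lists of floats; the names and sizes are illustrative:

```python
def average_fusion(img_a, img_b):
    """Per-pixel average of two registered grayscale images."""
    return [[(a + b) / 2.0 for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

# Two registered 1x2 "images": the fused result keeps information
# from both by averaging their pixel values.
fused = average_fusion([[0.0, 2.0]], [[2.0, 0.0]])  # [[1.0, 1.0]]
```

Averaging is simple, but it tends to reduce contrast wherever the inputs disagree, which is one reason transform-domain methods are often preferred.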
Image fusion finds application in a vast range of areas. It is
used for medical diagnostics and treatment, where a
patient's images in different modalities can be fused. These
modalities include magnetic resonance imaging (MRI) and
computed tomography (CT): CT images are used more often
to ascertain differences in tissue density, while MRI images
are typically used to diagnose brain tumours.
2. Review of Literature
In [12], Fattal proposed an algorithm based on independent
component analysis. This algorithm estimates the surface
shading and the medium transmission under the assumption
that they are locally uncorrelated, recovers the local albedo,
and restores image contrast. However, the lack of sufficient
colour information in heavily hazy images means such
images cannot be handled well. In [14], Tarel proposed an
improved image defogging algorithm based on bilateral
filtering. Tan [11] developed a system for estimating depth
from a single weather-degraded input image. Motivated by
the fact that contrast is reduced in a foggy image, Tan
divided the image into a series of small patches and
postulated that the corresponding haze-free patch should
have a higher contrast (where contrast was quantified as the
sum of local image gradients). In [13], He proposed a simple
but effective image prior, the dark channel, to remove haze
from a single input image. However, as the haze imaging
model assumes a common transmission for all colour
channels, this method may fail to recover some hazy
images. Fattal [12] assumed every patch has uniform
reflectance, and that the appearance of the pixels within the
patch can be expressed in terms of shading and
transmission. He considered the shading and transmission
signals to be unrelated and used independent component
analysis to estimate the appearance of each patch.
Significant progress in single-image haze removal has been
made in recent years. Tan [11] made the observation that a
haze-free image has higher contrast than a hazy image, and
was able to obtain good results by maximizing contrast in
local regions of the input image.
However, the final results obtained by this method are not
based on a physical model and often look unnatural due to
over-saturation. Fattal [12] was able to obtain good results
by assuming that transmission and surface shading are
locally uncorrelated; with this assumption, he obtains the
transmission map through independent component analysis.
This is a physically reasonable approach, but it has trouble
with very hazy regions, where the different components are
difficult to resolve. Lastly, a simple but powerful approach
proposed by He [13] uses dark pixels in local windows to
obtain a coarse estimate of the transmission map, followed
by a refinement step using an image matting technique.
Their method obtains results on par with or exceeding other
state-of-the-art algorithms, and is even successful with very
hazy scenes.
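The coarse dark-channel estimate at the heart of He's prior [13] can be illustrated with a short sketch. This is a hedged, minimal version in plain Python: pixels are (R, G, B) tuples in [0, 1], the window is a clamped square neighbourhood, and the patch size is illustrative:

```python
def dark_channel(img, patch=3):
    """img: H x W list of (r, g, b) tuples with values in [0, 1].
    Returns the per-pixel minimum over colour channels within a
    local window -- low values suggest haze-free regions, while
    large values indicate haze."""
    h, w = len(img), len(img[0])
    half = patch // 2
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            window_min = 1.0
            # clamped square window around (r, c)
            for rr in range(max(0, r - half), min(h, r + half + 1)):
                for cc in range(max(0, c - half), min(w, c + half + 1)):
                    window_min = min(window_min, min(img[rr][cc]))
            out[r][c] = window_min
    return out
```

In He et al. [13], this estimate then feeds the atmospheric-light and transmission computation, followed by matting-based refinement; those steps are omitted here.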
Image fusion techniques, with an image fusion engine to
organize, join, and combine multi-source and multi-temporal
data, provide a powerful tool for these data-processing
problems [9]. A wavelet transform is first performed on the
source images, and a fusion decision map is generated based
on a set of fusion rules. A fused wavelet coefficient map is
then constructed from the wavelet coefficients of the source
images according to the decision map, and finally the fused
image is obtained by performing the inverse wavelet
transform.

3. Theoretical Level

3.1 Discrete Wavelet Transform

The discrete wavelet transform (DWT) of an image signal
produces representations that provide better spatial and
spectral localization of image information than other
multi-scale representations such as Gaussian and Laplacian
pyramids [4]. Recently, the DWT has attracted more and
more interest in image fusion; an image can be decomposed
into a sequence of different spatial-resolution images using
the DWT [5].

3.2 Pixel Level Fusion

This section focuses on so-called pixel-level fusion, where a
composite image has to be built from several input images
[1]. In pixel-level image fusion, some generic requirements
can be imposed on the fusion result:

a) The fusion process should preserve all relevant
information of the input imagery in the composite image
(pattern conservation).

b) The fusion scheme should not introduce any artifacts or
inconsistencies which would distract the human observer or
the following processing stages [1].

Image fusion takes place at three different levels: pixel,
feature, and decision. Fusion methods can be broadly
classified into spatial-domain fusion and transform-domain
fusion. Averaging, the Brovey method, and principal
component analysis (PCA) based methods are
spatial-domain methods, but these can produce spatial
distortion in the fused image; this problem can be solved by
the transform-domain approach. Multi-resolution analysis
has become a very useful tool for analysing images, and the
discrete wavelet transform has become a very useful tool for
fusion. The images used in image fusion should already be
registered. Pixel-level fusion is used to increase the spatial
resolution of a multi-spectral image. Applications of image
fusion include improving geometric correction, enhancing
features not visible in either of the single data sets alone,
change detection using temporal data sets, and providing
complete information for diagnosis [2].

Figure 1: Pixel Level Fusion

Figure 2: Laplacian Method
Image fusion [2] is a powerful tool for extracting
high-quality information from a large number of remotely
sensed images while limiting redundancy among these
images.
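The wavelet fusion pipeline outlined in this section (forward transform, coefficient fusion by a rule, inverse transform) can be sketched with a single-level 2-D Haar transform. This is a minimal stand-in for the DWT used by the cited methods, with an illustrative fusion rule: average the approximation band, keep the larger-magnitude detail coefficients:

```python
def haar2d(img):
    """One-level 2-D Haar DWT. Returns (LL, LH, HL, HH) sub-bands."""
    h, w = len(img) // 2, len(img[0]) // 2
    LL = [[0.0] * w for _ in range(h)]; LH = [[0.0] * w for _ in range(h)]
    HL = [[0.0] * w for _ in range(h)]; HH = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            a, b = img[2*r][2*c], img[2*r][2*c+1]
            d, e = img[2*r+1][2*c], img[2*r+1][2*c+1]
            LL[r][c] = (a + b + d + e) / 4.0  # approximation
            LH[r][c] = (a - b + d - e) / 4.0  # horizontal detail
            HL[r][c] = (a + b - d - e) / 4.0  # vertical detail
            HH[r][c] = (a - b - d + e) / 4.0  # diagonal detail
    return LL, LH, HL, HH

def ihaar2d(bands):
    """Inverse of haar2d: rebuild the image from its four sub-bands."""
    LL, LH, HL, HH = bands
    h, w = len(LL), len(LL[0])
    img = [[0.0] * (2 * w) for _ in range(2 * h)]
    for r in range(h):
        for c in range(w):
            ll, lh, hl, hh = LL[r][c], LH[r][c], HL[r][c], HH[r][c]
            img[2*r][2*c]     = ll + lh + hl + hh
            img[2*r][2*c+1]   = ll - lh + hl - hh
            img[2*r+1][2*c]   = ll + lh - hl - hh
            img[2*r+1][2*c+1] = ll - lh - hl + hh
    return img

def wavelet_fuse(img_a, img_b):
    """Fuse two registered grayscale images in the Haar domain."""
    ba, bb = haar2d(img_a), haar2d(img_b)
    fused = []
    for i, (sa, sb) in enumerate(zip(ba, bb)):
        if i == 0:  # approximation band: average
            fused.append([[(x + y) / 2.0 for x, y in zip(ra, rb)]
                          for ra, rb in zip(sa, sb)])
        else:       # detail bands: keep larger-magnitude coefficient
            fused.append([[x if abs(x) >= abs(y) else y
                           for x, y in zip(ra, rb)]
                          for ra, rb in zip(sa, sb)])
    return ihaar2d(fused)
```

The max-magnitude rule preserves the strongest edges from either input, which is the usual motivation for fusing in a transform domain rather than averaging pixels directly.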
4. Fusion Process

The images are first decomposed using a Laplacian pyramid:
a decomposition of the original image into a hierarchy of
images such that each level corresponds to a different band
of image frequencies [1]. The Laplacian pyramid is a
suitable multi-resolution decomposition for the present task,
as it is simple, efficient, and mirrors the multiple scales of
processing in the human visual system (HVS). The next step
is to compute the Gaussian pyramid of each weight map.
Blending is then carried out for each level separately [1], [2].
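The decompose-blend-collapse procedure just described can be sketched in plain Python. This is a minimal grayscale version under simplifying assumptions: images are lists of lists with dimensions divisible by 2**levels, downsampling is a 2x2 mean and upsampling nearest-neighbour (stand-ins for the Gaussian filtering in the paper), and the weight maps are supplied by the caller:

```python
def downsample(img):
    """Halve resolution with a 2x2 mean (stand-in for blur + decimate)."""
    return [[(img[2*r][2*c] + img[2*r][2*c+1] +
              img[2*r+1][2*c] + img[2*r+1][2*c+1]) / 4.0
             for c in range(len(img[0]) // 2)]
            for r in range(len(img) // 2)]

def upsample(img):
    """Double resolution by nearest-neighbour replication."""
    return [[img[r // 2][c // 2] for c in range(2 * len(img[0]))]
            for r in range(2 * len(img))]

def sub(a, b):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def add(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        img = downsample(img)
        pyr.append(img)
    return pyr

def laplacian_pyramid(img, levels):
    gp = gaussian_pyramid(img, levels)
    # band-pass levels plus the coarsest approximation
    return [sub(gp[i], upsample(gp[i + 1])) for i in range(levels - 1)] + [gp[-1]]

def pyramid_fuse(inputs, weights, levels):
    """Blend the Laplacian pyramids of the inputs, weighted per pixel
    by the Gaussian pyramids of the weight maps, then collapse."""
    lps = [laplacian_pyramid(im, levels) for im in inputs]
    gps = [gaussian_pyramid(w, levels) for w in weights]
    fused = []
    for lvl in range(levels):
        h, w = len(lps[0][lvl]), len(lps[0][lvl][0])
        level = [[0.0] * w for _ in range(h)]
        for r in range(h):
            for c in range(w):
                # normalize the weights so they sum to one per pixel
                total = sum(gp[lvl][r][c] for gp in gps) or 1.0
                level[r][c] = sum(lp[lvl][r][c] * gp[lvl][r][c] / total
                                  for lp, gp in zip(lps, gps))
        fused.append(level)
    # collapse the fused Laplacian pyramid back into an image
    out = fused[-1]
    for lvl in range(levels - 2, -1, -1):
        out = add(upsample(out), fused[lvl])
    return out
```

Blending each frequency band separately, rather than the pixels directly, is what lets the method combine inputs without introducing visible seams.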
Figure 3: Fusion Process [block diagram: the derived inputs
are decomposed into Laplacian pyramids and the weight
maps into Gaussian pyramids; the pyramids are multiplied
level by level and summed, and the result is collapsed into
the dehazed image]

A. Inputs

As mentioned previously, the input generation process seeks
to recover optimal region visibility in at least one of the
images. In practice, no enhancing approach is able to
remove the haze effects of such degraded inputs entirely.
Therefore, considering the constraints stated before, since
we process only one captured image of the scene, the
algorithm generates from the original image only two inputs
that recover the colour and visibility of the entire image.
The first better depicts the haze-free regions, while the
second derived input increases the visible details of the hazy
regions. Inspired by previous dehazing approaches such as
Tan [11], Tarel and Hautiere [14], and He et al. [13], we
searched for a robust technique that properly white-balances
the input.

B. Weight Maps

As can be seen in the figures, by applying only these
enhancing operations the derived inputs still suffer from low
visibility, mainly in regions of dense haze and low-light
conditions. The idea that global contrast enhancement
techniques are limited in dealing with hazy scenes has been
remarked previously by Fattal [12]. This is due to the fact
that the optical density of haze varies across the image and
affects the values differently at each pixel. Practically, the
limitation of general contrast enhancement operators (e.g.
gamma correction, histogram equalization, white balance) is
due to the fact that these techniques perform the same
operation uniformly across the entire image. In order to
overcome this limitation, we introduce three measures
(weight maps). These maps are designed in a per-pixel
fashion to better define the spatial relations of degraded
regions. Our weight maps balance the contribution of each
input and ensure that regions with high contrast or more
saliency in a derived input receive higher values [1].

The luminance weight map measures the visibility of each
pixel and assigns high values to regions with good visibility
and small values to the rest. Since hazy images present low
saturation, an effective way to measure this property is to
evaluate the loss of colourfulness. This weight is processed
based on the RGB colour channel information: we make use
of the well-known property that more saturated colours
yield higher values in one or two of the colour channels [1].

5. Results and discussion

Image fusion aims to enhance the information apparent in
the images as well as to increase the reliability of the
interpretation by integrating disparate images. This leads to
clearer data and increased visibility in application fields like
medical imaging, foggy images, etc. This paper discusses
different techniques for pixel-level image fusion and their
performance evaluation parameters. Depending upon the
given application, some users may wish for a fusion
outcome that shows more colour detail, some may desire
more analysis or mapping capability, some may want
improved accuracy, and others may wish for a visually
appealing fused colour image solely for visualization
purposes. Thus, we conclude that a pixel-level fusion
algorithm using a combination of weight maps (luminance,
chromatic, and saliency maps)
Figure 4: Output results on several hazy images (Haze 1-6);
each pair shows (a) the hazy input and (b) our result.
increases the clarity of foggy images. This approach may be
the correct way to determine which fusion algorithm is most
appropriate for a given application.
References

[1] Codruta Orniana Ancuti and Cosmin Ancuti, "Single image dehazing by multi-scale fusion," IEEE Transactions on Image Processing, vol. 22, no. 8, August 2013.
[2] Paresh Rawat, Sapna Gangrade, and Pankaj Vyas, "Implementation of Hybrid Image Fusion Technique Using Wavelet Based Fusion Rules," IJCTEE, vol. 1, issue 1, July 2011.
[3] Jin-Hwan Kim, Jae-Young Sim, and Chang-Su Kim, "Single image dehazing based on contrast enhancement," IEEE, 2011.
[4] Yoav Y. Schechner, Srinivasa G. Narasimhan, and Shree K. Nayar, "Instant Dehazing of Images Using Polarization," Proc. Computer Vision & Pattern Recognition, vol. 1, pp. 325-332, 2001.
[5] Pengli Lu and Qiang Zhang, "Single Image Dehazing Method," Journal of Computational Information Systems, 10:4 (2014), pp. 1581-1588.
[6] Peter Carr and Richard Hartley, "Improved Single Image Dehazing using Geometry," Australian National University and NICTA, Canberra, Australia.
[7] A. Ben Hamza, Yun He, Hamid Krim, and Alan Willsky, "A multiscale approach to pixel-level image fusion," Integrated Computer-Aided Engineering, 12 (2005), pp. 135-146.
[8] Yufeng Zheng, "Multi-scale Fusion Algorithm Comparisons: Pyramid, DWT and Iterative DWT," 12th International Conference on Information Fusion, Seattle, WA, USA, July 6-9, 2009.
[9] N. Indhumadhi and G. Padmavathi, "Enhanced Image Fusion Algorithm Using Laplacian Pyramid and Spatial Frequency Based Wavelet Algorithm," International Journal of Soft Computing and Engineering (IJSCE), ISSN: 2231-2307, vol. 1, issue 5, November 2011.
[10] Yufeng Zheng, Edward A. Essock, and Bruce C. Hansen, "An Advanced Image Fusion Algorithm Based on Wavelet Transform: Incorporation with PCA and Morphological Processing."
[11] R. T. Tan, "Visibility in bad weather from a single image," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2008, pp. 1-8.
[12] R. Fattal, "Single image dehazing," ACM Trans. Graph. (SIGGRAPH), vol. 27, no. 3, p. 72, 2008.
[13] K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2009, pp. 1956-1963.
[14] J. Tarel and N. Hautiere, "Fast visibility restoration from a single color or gray level image," in Proc. IEEE Int. Conf. on Computer Vision (ICCV), Kyoto, Japan, 2009, pp. 2201-2208.
[15] Amina Saleem, Azeddine Beghdadi, and Boualem Boashash, "Image fusion-based contrast enhancement," EURASIP Journal on Image and Video Processing, 2012.
[16] M. A. Mohamed and B. M. El-Den, "Implementation of Image Fusion Techniques Using FPGA," IJCSNS International Journal of Computer Science and Network Security, vol. 10, no. 5, May 2010.
[17] H. B. Kekre et al., "Review on image fusion techniques and performance evaluation parameters," International Journal of Engineering Science and Technology (IJEST), vol. 5, no. 4, April 2013.
[18] Zheng Liu, "Objective Assessment of Multiresolution Image Fusion Algorithms for Context Enhancement in Night Vision: A Comparative Study," IEEE Transactions, vol. 34, no. 1, January 2012.
[19] Shutao Li and Bin Yang, "Multifocus image fusion using region segmentation and spatial frequency," Image and Vision Computing, 26 (2008), pp. 971-979.
[20] S. M. Mukane, Y. S. Ghodake, and P. S. Khandagle, "Image enhancement using fusion by wavelet transform and Laplacian pyramid," IJCSI International Journal of Computer Science Issues, vol. 10, issue 4, no. 2, July 2013.
[21] E. H. Adelson, C. H. Anderson, and J. R. Bergen, "Pyramid methods in image processing," RCA Engineer, 29-6, Nov/Dec 1984.