Different Focus Points Images Fusion Based on
Wavelet Decomposition
Xuan Yang, Wanhai Yang, Jihong Pei
School of Electronics and Engineering, Xidian University, Xi'an, Shaanxi, China
xyang@mail.xidian.edu.cn, whyang@xidian.edu.cn, pjhong@pub.xaonline.com
Abstract - A new technique is developed for the fusion of two images. Two spatially registered images with different focus points are fused by deciding which objects are clear. First, an impulse function is defined to describe the image quality of an object. Then the clear region is decided by analyzing the wavelet decomposition components of the two primary images and of two blurred images. The results of the comparison show that this method preserves edge information in the test images better than other image fusion methods.
Keywords: Image Fusion, Wavelet Decomposition
Topic Number: B.5 Image Fusion
1 Introduction
Image fusion is the combination of two or more different images to form a new image by using a certain algorithm [1]. Fused images provide more robust operational performance, i.e., increased confidence, reduced ambiguity, improved reliability and improved classification. Image fusion is applied to digital imagery in order to sharpen images, improve geometric corrections, provide stereo-viewing capabilities for stereophotogrammetry, enhance certain features not visible in either of the single images alone, and complement data sets for improved classification. Image fusion plays an important role in image sharpening, such as the fusion of two images with different focus points. For example, suppose there are two objects in a scene. If the front object is in focus, then the back object is out of focus, and vice versa. An image with both objects in focus can be obtained by fusing these two images with different focus points. In this paper, a new technique is developed for the fusion of two images with different focus points.
A number of methods have been proposed for image fusion [2-9,11-17]. The most common procedures are methods based on the intensity-hue-saturation transform (IHS and LHS mergers) [3,4], the Laplacian pyramid method [9], and the wavelet transform method [5-8]. IHS transform methods are not suitable for fusing images with different focus points, whereas the Laplacian pyramid method and the wavelet transform method are. The wavelet transform is an intermediate representation between the Fourier and spatial representations, and it can provide good localization in both the frequency and space domains. Wavelet decomposition is being used increasingly for the processing of images. The method is based on the decomposition of an image into multiple channels on the basis of their local frequency content. The wavelet transform method preserves the spectral characteristics of a multispectral image better than the standard IHS or LHS methods. The wavelet transform method can be performed by replacing some wavelet coefficients of one primary image with the corresponding coefficients of the other primary image; the fused image is then obtained by reconstruction. Although the wavelet transform method has some advantages over the standard IHS or LHS methods, the disadvantage of both the Laplacian pyramid method and the wavelet transform method is that image edge information can be lost to some extent.
In order to preserve the edge information of the original images to the greatest extent, a new image fusion method is proposed in this paper. The high frequencies of the images with different focus points are analyzed to decide which objects in the original images are blurred and which are clear. A fused image is obtained by combining the clear objects of the two primary images. In this method, no wavelet reconstruction is performed, and the edge information of objects is preserved much better than with the wavelet transform method or the Laplacian pyramid method.
This paper is organized as follows. In Section 2, a brief review of the wavelet transform is given. In Section 3, the new fusion method for images with different focus points is introduced. In Section 4, experiments using the method of this paper, the wavelet transform method and the Laplacian pyramid method to merge two images with different focus points are presented, and the fused images produced by the three methods are compared.
2 Multiresolution wavelet decomposition
The multiresolution wavelet transform decomposes a signal into a coarser resolution representation, which consists of low-frequency approximation information and high-frequency detail information. Wavelet decomposition provides a framework for decomposing an image into a number of new images, each with a different degree of resolution.
Let the convolution of two finite-energy functions $f(x, y) \in L^2(\mathbb{R}^2)$ and $g(x, y) \in L^2(\mathbb{R}^2)$ be

$$(f * g)(x, y) = \iint_{\mathbb{R}^2} f(u, v)\, g(x - u, y - v)\, du\, dv$$

We consider the original discrete image as $A_1 f$, measured at resolution 1. The original image $A_1 f$ can be completely represented by the approximation component $A_{2^{-J}} f$ at resolution $2^{-J}$ and $3J$ detail components:

$$\left\{ A_{2^{-J}} f,\ \left( D^1_{2^j} f \right)_{-J \le j \le -1},\ \left( D^2_{2^j} f \right)_{-J \le j \le -1},\ \left( D^3_{2^j} f \right)_{-J \le j \le -1} \right\}$$

No extra data are produced in the decomposition procedure because of the orthogonality of the wavelet representation. The wavelet decomposition can be interpreted as a signal decomposition into a set of independent, spatially oriented frequency channels. The component $A_{2^j} f$ corresponds to the lowest frequencies, $D^1_{2^j} f$ gives the high frequencies in the vertical direction, $D^2_{2^j} f$ gives the high frequencies in the horizontal direction, and $D^3_{2^j} f$ gives the high frequencies in the diagonal directions.

The approximation of a two-dimensional finite-energy function $f(x, y)$ at resolution $2^j$, where $j$ is the decomposition level, is characterized by $A_{2^j} f$. The difference between the approximation information at two consecutive resolutions $2^j$ and $2^{j-1}$, characterized by $A_{2^j} f$ and $A_{2^{j-1}} f$ respectively, is captured by the detail coefficients $D^1_{2^{j-1}} f$, $D^2_{2^{j-1}} f$ and $D^3_{2^{j-1}} f$:

$$A_{2^j} f = \left( f(x, y) * \phi_{2^j}(-x)\, \phi_{2^j}(-y) \right) \left( 2^{-j} m, 2^{-j} n \right)$$

$$D^1_{2^{j-1}} f = \left( f(x, y) * \phi_{2^{j-1}}(-x)\, \psi_{2^{j-1}}(-y) \right) \left( 2^{-(j-1)} m, 2^{-(j-1)} n \right)$$

$$D^2_{2^{j-1}} f = \left( f(x, y) * \psi_{2^{j-1}}(-x)\, \phi_{2^{j-1}}(-y) \right) \left( 2^{-(j-1)} m, 2^{-(j-1)} n \right)$$

$$D^3_{2^{j-1}} f = \left( f(x, y) * \psi_{2^{j-1}}(-x)\, \psi_{2^{j-1}}(-y) \right) \left( 2^{-(j-1)} m, 2^{-(j-1)} n \right)$$

with $(m, n) \in \mathbb{Z}^2$, where $\phi(x)$ is a one-dimensional scaling function whose Fourier transform is concentrated at low frequencies, with $\phi_{2^j}(x) = 2^j \phi(2^j x)$, and $\psi(x)$ is a one-dimensional wavelet function, which is a band-pass filter, with $\psi_{2^j}(x) = 2^j \psi(2^j x)$. $A_{2^j} f$ can be perfectly reconstructed from $A_{2^{j-1}} f$, $D^1_{2^{j-1}} f$, $D^2_{2^{j-1}} f$ and $D^3_{2^{j-1}} f$, and these components can be calculated with the pyramid algorithm proposed by Mallat [10].
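As a concrete illustration of one level of this decomposition, the following is a minimal sketch using the PyWavelets package, a third-party library chosen for illustration only; the paper itself relies on Mallat's pyramid algorithm, and the Haar wavelet and random test image here are assumptions.

```python
import numpy as np
import pywt

f = np.random.rand(256, 256)            # stand-in for a discrete image A_1 f

# One decomposition level (J = 1): approximation plus three spatially
# oriented detail components (horizontal, vertical, diagonal in the
# PyWavelets convention).
A, (Dh, Dv, Dd) = pywt.dwt2(f, 'haar')

# Orthogonality: the image is perfectly reconstructed from the four
# components, so no extra data are produced by the decomposition.
f_rec = pywt.idwt2((A, (Dh, Dv, Dd)), 'haar')
assert np.allclose(f, f_rec)
```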
3 Fusion techniques for different focus points images

3.1 Image quality of an object

Before fusing images with different focus points, the image quality of an object must be analyzed in order to distinguish in-focus objects from out-of-focus objects. Suppose the original object is $f(x, y)$ and its image in an optical system, which can be assumed shift-invariant and linear, is $g(x, y)$. With response function $h(x, y)$,

$$g(x, y) = f(x, y) * h(x, y)$$

so $h(x, y)$ determines the image quality of an object. For a given object, $h(x, y)$ can be approximated by a Gaussian function

$$h(x, y) = G(x, y, \sigma) = e^{-\frac{x^2 + y^2}{2\sigma^2}}$$

where $\sigma$ decides the quality of the image of an object. If $\sigma$ is small, the image of the object is clear and the response function can be seen as an impulse function $\delta(x, y)$; if $\sigma$ is large, the image of the object is blurred and the response function acts as a blurring function. If there are several objects with different image qualities in a scene, the image quality of each object can be represented by a different Gaussian function with its own variance $\sigma$. That is, an in-focus object can be expressed as the original object convolved with a Gaussian function with a small variance $\sigma$, and an out-of-focus object can be expressed as the original object convolved with a Gaussian function with a large variance $\sigma$.
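To make the model concrete, the following sketch simulates in-focus and out-of-focus imaging with SciPy's Gaussian filter standing in for the response function $h(x, y)$; the toy object and the $\sigma$ values are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# A toy "object": a bright square on a dark background.
f = np.zeros((64, 64))
f[28:36, 28:36] = 1.0

# Convolving with G(x, y, sigma): a small sigma leaves the object sharp
# (h is close to an impulse delta(x, y)); a large sigma defocuses it.
g_in_focus  = gaussian_filter(f, sigma=0.5)
g_out_focus = gaussian_filter(f, sigma=4.0)
```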
3.2 Decision of in-focus and out-of-focus objects

Based on the relationship between the image quality of an object and the impulse function, it is known that the difference between in-focus objects and out-of-focus objects is represented by the Gaussian variance $\sigma$ of the response function. Next, we discuss the rules for deciding in-focus and out-of-focus objects. Suppose $f_1$ and $f_2$ are two original images with different focus points, and $f_1'$ and $f_2'$ are blurred versions of $f_1$ and $f_2$, respectively, obtained with a Gaussian function of variance $\sigma_0$. We analyze the high frequencies of a neighborhood around each pixel to decide whether the pixel belongs to an in-focus object or an out-of-focus object. There are three cases:

(1) The pixel belongs to an in-focus object in $f_1$ and to an out-of-focus object in $f_2$. Then the object, which is clear in $f_1$ and blurred in $f_2$, is more blurred in $f_2'$ than in $f_2$. The difference between the high frequencies in the neighborhood in $f_1$ and in $f_2'$ is larger than that between $f_1$ and $f_2$, while the difference between the high frequencies in $f_1'$ and in $f_2$ is smaller than that between $f_1$ and $f_2$.

(2) The pixel belongs to an out-of-focus object in $f_1$ and to an in-focus object in $f_2$. Then the object, which is blurred in $f_1$ and clear in $f_2$, is more blurred in $f_1'$ than in $f_1$. The difference between the high frequencies in the neighborhood in $f_1'$ and in $f_2$ is larger than that between $f_1$ and $f_2$, while the difference between the high frequencies in $f_1$ and in $f_2'$ is smaller than that between $f_1$ and $f_2$.

(3) The pixel belongs to an in-focus object, or to an out-of-focus object, in both $f_1$ and $f_2$. Then the difference between the high frequencies in the neighborhood in $f_1$ and in $f_2'$ is larger than that between $f_1$ and $f_2$, and the difference between the high frequencies in $f_1'$ and in $f_2$ is also larger than that between $f_1$ and $f_2$.

Based on the above analysis, the rules for deciding in-focus and out-of-focus objects can be expressed as follows. Let the high frequencies in the neighborhood in $f_1$, $f_2$, $f_1'$ and $f_2'$ be $Df_1$, $Df_2$, $Df_1'$ and $Df_2'$, respectively.

(1) If $|Df_1 - Df_2'| - |Df_1 - Df_2| \ge T$ and $|Df_1 - Df_2| - |Df_1' - Df_2| \ge T$, then the in-focus object is in $f_1$;

(2) If $|Df_1' - Df_2| - |Df_1 - Df_2| \ge T$ and $|Df_1 - Df_2| - |Df_1 - Df_2'| \ge T$, then the in-focus object is in $f_2$;

(3) Otherwise, if $|Df_1 - Df_2'| - |Df_1 - Df_2| < T$ and $|Df_1' - Df_2| - |Df_1 - Df_2| < T$, then the in-focus object is in $f_1$ or $f_2$, and the pixel may be taken from either image.

A new fused image, which contains all the in-focus objects of the two original images, can be obtained by combining the pixels of the two original images according to these decision rules.

3.3 Expression of the high frequencies in the neighborhood

The expression for the high frequencies in the neighborhood, and the threshold $T$, are discussed as follows. The high frequencies in the neighborhood of a pixel can be determined from the wavelet decomposition coefficients. Suppose the original image $f$ is decomposed at resolution $2^{-J}$ with $J = 1$; the image $f$ is then decomposed into $A_{2^{-1}} f$, $D^1_{2^{-1}} f$, $D^2_{2^{-1}} f$ and $D^3_{2^{-1}} f$, which correspond to the lowest frequencies, the vertical high frequencies, the horizontal high frequencies and the high frequencies in the diagonal directions, respectively. In order to keep the same size as the original image, no downsampling is performed. The high frequencies in the neighborhood of a pixel are defined as

$$Df = \sum_{(m,n) \in A} \left| \left( D^1_{2^{-1}} f \right)(m, n) \right| + \sum_{(m,n) \in A} \left| \left( D^2_{2^{-1}} f \right)(m, n) \right| + \sum_{(m,n) \in A} \left| \left( D^3_{2^{-1}} f \right)(m, n) \right|$$

where $A$ is the neighborhood of the current pixel.
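The decision procedure of Sections 3.2 and 3.3 can be sketched as follows. This is a hedged illustration, not the authors' code: pywt.swt2 (stationary wavelet transform) is used because the paper keeps the decomposition at the original image size, while the Haar wavelet, the 5x5 neighborhood, the value of sigma_0 and the handling of case (3) by averaging are all assumptions.

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter, uniform_filter

def df(img, win=5):
    # Sum of absolute high-frequency coefficients over a win x win
    # neighborhood A (Section 3.3). swt2 is undecimated, so the detail
    # maps keep the original image size; image sides must be even.
    (_, (dh, dv, dd)), = pywt.swt2(img.astype(float), 'haar', level=1)
    detail = np.abs(dh) + np.abs(dv) + np.abs(dd)
    return uniform_filter(detail, size=win) * (win * win)

def fuse(f1, f2, sigma0=2.0, T=1.0):
    # Decision rules of Section 3.2, applied per pixel to two registered
    # float images f1, f2 with different focus points.
    d   = np.abs(df(f1) - df(f2))
    d1b = np.abs(df(gaussian_filter(f1, sigma0)) - df(f2))  # f1 further blurred
    d2b = np.abs(df(f1) - df(gaussian_filter(f2, sigma0)))  # f2 further blurred
    take1 = (d2b - d >= T) & (d - d1b >= T)   # rule (1): in-focus object in f1
    take2 = (d1b - d >= T) & (d - d2b >= T)   # rule (2): in-focus object in f2
    # rule (3): the pixel may come from either image; we average here.
    return np.where(take1, f1, np.where(take2, f2, (f1 + f2) / 2.0))
```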
3.4 Threshold T

The threshold $T$ can be derived from a one-dimensional edge model. Suppose the ideal one-dimensional edge model is a step function $u(x)$. If the image of the edge is in focus, the Gaussian variance is $\sigma_1$; if the image of the edge is out of focus, the Gaussian variance is $\sigma_2$. That is, the in-focus edge image $e_1(x)$ and the out-of-focus edge image $e_2(x)$ can be expressed as

$$e_1(x) = u(x) * G(x, \sigma_1) = \int_{-\infty}^{x} \exp\left( -\frac{t^2}{2\sigma_1^2} \right) dt$$

$$e_2(x) = u(x) * G(x, \sigma_2) = \int_{-\infty}^{x} \exp\left( -\frac{t^2}{2\sigma_2^2} \right) dt$$

respectively. The Haar wavelet is used to decompose $e_1(x)$ and $e_2(x)$. The high frequencies of $e_1(x)$ over the region $[-1, 1]$ are

$$De_1 = 2 \int_0^1 \exp\left( -\frac{t^2}{2\sigma_1^2} \right) dt$$

The integral can be approximated by the Cotes formula:

$$De_1 \approx \frac{2}{8} + \frac{6}{8} \exp\left( -\frac{(1/3)^2}{2\sigma_1^2} \right) + \frac{6}{8} \exp\left( -\frac{(2/3)^2}{2\sigma_1^2} \right) + \frac{2}{8} \exp\left( -\frac{1}{2\sigma_1^2} \right)$$

When $x \ll 1$, $\exp(-x) \approx 1 - x$, so the formula can be further approximated as

$$De_1 \approx \frac{2}{8} + \frac{6}{8} \left( 1 - \frac{(1/3)^2}{2\sigma_1^2} \right) + \frac{6}{8} \left( 1 - \frac{(2/3)^2}{2\sigma_1^2} \right) + \frac{2}{8} \left( 1 - \frac{1}{2\sigma_1^2} \right) = 2 - \frac{1}{3\sigma_1^2}$$

Similarly, the high frequencies of $e_2(x)$ over the region $[-1, 1]$ can be expressed as $De_2 \approx 2 - \frac{1}{3\sigma_2^2}$.
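As a quick numerical sanity check on this approximation (the $\sigma$ values below are arbitrary assumptions), the exact integral can be compared with $2 - 1/(3\sigma^2)$, which holds when $t^2 / (2\sigma^2) \ll 1$ on $[0, 1]$:

```python
import numpy as np
from scipy.integrate import quad

# Compare De = 2 * integral_0^1 exp(-t^2 / (2 sigma^2)) dt with the
# approximation 2 - 1/(3 sigma^2); agreement improves as sigma grows.
for sigma in (2.0, 3.0, 5.0):
    exact, _ = quad(lambda t: np.exp(-t**2 / (2.0 * sigma**2)), 0.0, 1.0)
    print(f"sigma={sigma}: exact={2 * exact:.4f}, approx={2 - 1 / (3 * sigma**2):.4f}")
```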
$e_1'(x)$ and $e_2'(x)$ are the edges $e_1(x)$ and $e_2(x)$ further blurred by a Gaussian function with variance $\sigma_0$. Using the method above, the high frequencies of $e_1'(x)$ over the region $[-1, 1]$ can be expressed as $De_1' \approx 2 - \frac{1}{3\sigma_1'^2}$, and the high frequencies of $e_2'(x)$ over the region $[-1, 1]$ as $De_2' \approx 2 - \frac{1}{3\sigma_2'^2}$, where $\sigma_1'^2 = \sigma_0^2 + \sigma_1^2$ and $\sigma_2'^2 = \sigma_0^2 + \sigma_2^2$. The threshold $T$ can then be determined as

$$T = k \max\left( \frac{1}{6} \left( \frac{1}{\sigma_1^2} - \frac{1}{\sigma_1^2 + \sigma_0^2} \right),\ \frac{1}{6} \left( \frac{1}{\sigma_2^2} - \frac{1}{\sigma_2^2 + \sigma_0^2} \right) \right)$$

where $k$ is a modification coefficient.
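For completeness, a direct transcription of this threshold formula; all parameter values in the example call are illustrative assumptions, and $k$ must be tuned in practice:

```python
def threshold(sigma0, sigma1, sigma2, k=1.0):
    # T of Section 3.4; k is the paper's modification coefficient.
    t1 = (1.0 / sigma1**2 - 1.0 / (sigma1**2 + sigma0**2)) / 6.0
    t2 = (1.0 / sigma2**2 - 1.0 / (sigma2**2 + sigma0**2)) / 6.0
    return k * max(t1, t2)

print(threshold(sigma0=2.0, sigma1=0.5, sigma2=4.0))
```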
We suppose the original images are registered before image fusion. The original images have the same array size, and the objects are of almost the same size in each image. If the sizes of the same object differ between the images to a large extent, there will be a false contour around the object in the fused image; this also occurs with the wavelet transform method and the Laplacian pyramid method. The image fusion method proposed in this paper is suitable for images obtained under the same imaging conditions, that is, when the brightness and contrast of the two images are similar. If one image is bright and the other is dark, the two images must first be adjusted to the same brightness and contrast; the image fusion method can then be applied.
4 Experiments and comparison

We applied the above methodology to merge two sets of test images, called the Clock images and the Face images. There are two objects in the Clock test images. In the Clock1 image, the front object is clear and the back object is blurred; in the Clock2 image, the front object is blurred and the back object is clear. There are likewise two objects in the Face test images. In Face1, the front object is blurred and the back object is clear; in Face2, the front object is clear and the back object is blurred. A perfect fusion image for each test pair can be obtained by manual cut and paste. We can then quantify the behavior of the Laplacian pyramid method, the wavelet transform method and our method by comparison with the perfect fusion images. To compute the difference $M_F$ we use the expression

$$M_F = \frac{1}{n} \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} \left( g_{ij}' - g_{ij} \right)^2$$

where $n = M \times N$ is the size of the image, $g_{ij}$ is the pixel gray level of the fused image at position $(i, j)$, and $g_{ij}'$ is the pixel gray level of the perfect fusion image at position $(i, j)$. The better the fused image, the smaller the difference $M_F$. Table 1 shows the differences between the perfect fusion images and the images fused with the Laplacian pyramid method, the wavelet transform method and our method. It can be seen from Table 1 that the fusion images of our method are better than those of the other methods.
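A direct transcription of the quality measure, assuming the two inputs are equal-sized gray-level arrays:

```python
import numpy as np

def m_f(g_fused, g_perfect):
    # M_F = (1/n) * sum of squared gray-level differences, n = M x N.
    g_fused = np.asarray(g_fused, dtype=float)
    g_perfect = np.asarray(g_perfect, dtype=float)
    return np.mean((g_perfect - g_fused) ** 2)
```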
Figure 1: Fusion of Clock images. (Panels: clock1; clock2; perfect fused image obtained by manual cut and paste; fused images obtained by the Laplacian pyramid method, the wavelet transform method, and our method; difference from the perfect image for each of the three methods.)
Figure 2: Fusion of Face images. (Panels: face1; face2; perfect fused image obtained by manual cut and paste; fused images obtained by the Laplacian pyramid method, the wavelet transform method, and our method; difference from the perfect image for each of the three methods.)
Table 1: The difference M_F from the perfect image for the three methods

Images   Laplacian pyramid method   Wavelet transform method   Our method
Clock    17.32                      11.42                      7.15
Face     5.20                       4.42                       1.95
5 Conclusions
A new image fusion method is proposed in this paper that combines two images with different focus points by deciding which objects are clear and which are blurred. With this method, the detail information from both images is preserved. The method is capable of enhancing the image quality while preserving its edge information to the greatest extent. Experiments and comparison show that the proposed method performs better than the Laplacian pyramid method and the wavelet transform method.
References
[1] J. L. van Genderen and C. Pohl, "Image fusion: issues, techniques and applications," in Intelligent Image Fusion, Proceedings EARSeL Workshop, Strasbourg, France, 1994, edited by J. L. van Genderen and V. Cappellini, pp. 18-26.
[2] D. L. Hall, Mathematical Techniques in Multisensor Data Fusion, Artech House, Boston, 1992.
[3] W. J. Carper, T. M. Lillesand, and R. W. Kiefer, "The use of intensity-hue-saturation transformations for merging SPOT panchromatic and multispectral image data," Photogram. Eng. Remote Sens., vol. 56, pp. 459-467, 1990.
[4] E. M. Schetselaar, "Fusion by the IHS transform: should we use cylindrical or spherical coordinates?," Int. J. Remote Sensing, vol. 19, no. 4, pp. 759-765, 1998.
[5] J. Núñez, X. Otazu, and O. Fors, "Image fusion with additive multiresolution wavelet decomposition: applications to SPOT+Landsat images," J. Opt. Soc. Am. A, vol. 16, no. 3, pp. 467-474, 1999.
[6] D. A. Yocky, "Image merging and data fusion by means of the discrete two-dimensional wavelet transform," J. Opt. Soc. Am. A, vol. 12, no. 9, pp. 1834-1841, 1995.
[7] J. Zhou, D. L. Civco, and J. A. Silander, "A wavelet transform method to merge Landsat TM and SPOT panchromatic data," Int. J. Remote Sensing, vol. 19, no. 4, pp. 743-757, 1998.
[8] F. Sunar and N. Musaoglu, "Merging multiresolution SPOT P and Landsat TM data: the effects and advantages," Int. J. Remote Sensing, vol. 19, no. 2, pp. 219-224, 1998.
[9] P. J. Burt and E. H. Adelson, "The Laplacian pyramid as a compact image code," IEEE Trans. Commun., vol. COM-31, pp. 532-540, 1983.
[10] S. G. Mallat, "Multifrequency channel decompositions of images and wavelet models," IEEE Trans. Acoustics, Speech, and Signal Processing, vol. 37, pp. 2091-2110, 1989.
[11] C. E. Warrender and M. F. Augusteijn, "Fusion of image classifications using Bayesian techniques with Markov random fields," Int. J. Remote Sensing, vol. 20, no. 10, pp. 1987-2002, 1999.
[12] A. H. Schistad Solberg, A. K. Jain, and T. Taxt, "Multisource classification of remotely sensed data: fusion of Landsat TM and SAR images," IEEE Trans. Geosci. Remote Sens., vol. 32, no. 4, pp. 768-777, 1994.
[13] N. D. A. Mascarenhas, G. J. F. Banon, and A. L. B. Candeias, "Multispectral image data fusion under a Bayesian approach," Int. J. Remote Sensing, vol. 17, no. 8, pp. 1457-1471, 1996.
[14] C. Zhu and X. Yang, "Study of remote sensing image texture analysis and classification using wavelet," Int. J. Remote Sensing, vol. 19, no. 16, pp. 3197-3202, 1998.
[15] B. Solaiman, L. E. Pierce, and F. T. Ulaby, "Multisensor data fusion using fuzzy concepts: application to land-cover classification using ERS-1/JERS-1 SAR composites," IEEE Trans. Geosci. Remote Sens., vol. 37, no. 3, pp. 1316-1325, 1999.
[16] W. Wan and D. Fraser, "Multisource data fusion with multiple self-organizing maps," IEEE Trans. Geosci. Remote Sens., vol. 37, no. 3, pp. 1344-1349, 1999.
[17] L.-K. Soh and C. Tsatsoulis, "Segmentation of satellite imagery of natural scenes using data mining," IEEE Trans. Geosci. Remote Sens., vol. 37, no. 2, pp. 1086-1099, 1999.