Target Detection on Hyperspectral Images using
Covariance Tracking
Cemalettin Koç #1, Abdullah Bal #2
Department of Computer Engineering
Gebze Institute of Technology
Cayirova, Gebze, Kocaeli 41400, Turkey
1 ckoc@bilmuh.gyte.edu.tr
2 bal@yildiz.edu.tr
Abstract— In this study we present a robust target detection algorithm for hyperspectral images. The proposed approach does not rely on Euclidean space and takes advantage of limited subclass information about the target, such as shape, rotation and size. We describe a fast method for computing a covariance descriptor on hyperspectral images. The experimental results show a promising performance improvement, in terms of a lower false alarm rate and higher prediction accuracy, compared with conventional algorithms that operate in Euclidean space.
Keywords— Hyperspectral, Mutual Information, mRMR, Dimension Reduction, Feature Selection, Covariance, Statistics
I. INTRODUCTION
As technology advances, hyperspectral imaging has developed rapidly, and applications of hyperspectral image data have extended to agriculture, environmental monitoring, mining, and other fields. Because it provides much higher spectral resolution, hyperspectral image data can be used in place of RGB image data, and it is superior to RGB data for discriminating ground objects and their characteristics.
Hyperspectral sensors are among the best passive sensors: they can simultaneously record hundreds of narrow bands of the electromagnetic spectrum, which in turn form an image cube called a hyperspectral data cube. Hyperspectral information has been used for object detection in military applications, such as detecting military vehicles [1, 2, 3] and mines [3, 4], for land use applications [5], and for many USDA product inspection applications [6-9].
Some of the main limitations of these techniques are the computational complexity of sub-pixel detection and the trade-off between a low false alarm rate and a high detection rate. Because of the high dimensionality of hyperspectral image data, the spectral bands that have the greatest influence on discriminating embedded targets must be selected. A low false positive rate coupled with a high detection rate is necessary, especially in military settings where wide area surveillance is needed. In these tasks, a very low false positive rate is compulsory to increase the confidence that the identified targets are real.
Fig. 1 Sample Spectra Data
Fig. 2 Overview of Algorithm
The application of hyperspectral sensor imagery (HSI) to automatic target detection and recognition (ATD/R) can be considered a quite new and exciting area of research.
Goals of the Work: Our main goal is to design a target detection algorithm with promising performance in terms of computation and false alarm rate. The technique can be divided into three sections. Section one contains two sub-stages, covariance tracking and mRMR feature selection; at this stage, unnecessary spectra data and band information are eliminated. Section two explains how spectra data are grouped for the target detection phase. In the last section, another variant of the covariance tracking method is used to detect targets. Figure 2 provides an overview of our approach for detecting a known set of targets in a hyperspectral image.
II. DIMENSION REDUCTION
In this paper we have chosen a covariance descriptor both for target detection and for eliminating unnecessary spectra data from the image data. A brief description of the dimension reduction algorithm is as follows. For each spectra datum we construct a feature matrix, and for a given spectrum we compute the covariance matrix of its features. This descriptor has low dimensionality and is invariant to in-plane rotation and size; the covariance matrix of the region of interest does not vary across the spectra of the image data. For each spectra datum, the covariance distance to the target spectrum's covariance is calculated. As a result of this spectra covariance tracking, the spectra data can be divided into two classes, which become the input for the mRMR (minimum Redundancy Maximum Relevance) feature selection algorithm.
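To make the descriptor construction concrete, a minimal numpy sketch is given below. This is not the authors' implementation; the function names are illustrative, and the distance follows the generalized-eigenvalue metric of [10]:

```python
import numpy as np

def covariance_descriptor(features):
    """Covariance descriptor of an (n_samples x d) feature matrix."""
    return np.cov(features, rowvar=False)

def covariance_distance(Ci, Cj):
    """Dissimilarity of two SPD covariance matrices: square root of the
    sum of squared logarithms of their generalized eigenvalues."""
    # Generalized eigenvalues of the pencil (Ci, Cj) are the eigenvalues
    # of inv(Cj) @ Ci; they are real and positive for SPD inputs, and the
    # squared logarithm makes the metric independent of pencil orientation.
    lam = np.real(np.linalg.eigvals(np.linalg.solve(Cj, Ci)))
    return float(np.sqrt(np.sum(np.log(lam) ** 2)))
```

For example, covariance_distance(2 * np.eye(4), np.eye(4)) equals 2·ln 2, and identical matrices give distance 0.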
A. Spectra Covariance Tracking

The initial stage of our approach starts here. Each spectra datum is converted to a (number of bands) × 4 feature matrix:

F(x, y) = [ I(x, y) − M(x, y),  I(x, y),  ∂I(x, y)/∂y,  ∂²I(x, y)/∂y² ]^T    (1)

where M(x, y) is the mean of the spectra values, I(x, y) is the intensity value of a spectrum, and ∂I(x, y)/∂y and ∂²I(x, y)/∂y² are the first- and second-order derivatives, respectively. The covariance of each spectra datum generates a 4×4 matrix to compare with the target spectra matrix.

1) Distance Metric: Covariance matrices do not lie in a Euclidean space, so they are not suitable for arithmetic operations such as multiplication or addition. We need to compute distances between the covariance matrices corresponding to the target spectrum and the candidate spectra. We use the distance measure proposed in [10] to measure the dissimilarity of two covariance matrices; in short, the sum of the squared logarithms of the generalized eigenvalues is used:

ρ(C_i, C_j) = sqrt( Σ_{k=1}^{d} ln² λ_k(C_i, C_j) )    (2)

where { λ_k(C_i, C_j) } are the generalized eigenvalues of C_i and C_j, computed from

λ_k C_i x_k − C_j x_k = 0,  k = 1, …, d    (3)

At each spectra pixel, our algorithm searches the whole image to find the spectrum with the smallest distance from the target spectrum. Figure 3 compares the performance of covariance tracking with the Vector Angle, Derivative Difference and Euclidean Distance algorithms. The ROC analyses correspond to Gaussian noise at 0%, 20%, and 40% SNR, added and then filtered with a mean filter, respectively.

B. mRMR band selection

Mutual information measures how much one random variable tells us about another. High mutual information indicates a large reduction in uncertainty; low mutual information indicates a small reduction; and zero mutual information between two random variables means the variables are independent [11]. In this paper we introduce a mutual-information-based dimension reduction. The goal is to select a feature subset that best characterizes the statistical property of a target classification variable, subject to the constraint that these features are mutually as dissimilar to each other as possible, but marginally as similar to the classification variable as possible. The fundamental idea of our algorithm is that it selects the first 10 bands that maximize mutual information, using the entire data set. If a band is expressed randomly or uniformly across different spectra, its mutual information with these spectra data is zero. If a band is strongly differentially expressed for different spectra data, it should have large mutual information. As a result, we use mutual information as a measure of the relevance of bands.

Formally, the mutual information of two discrete random variables X and Y is based on their joint probability distribution and can be expressed as:

I(X, Y) = Σ_{i,j} p(x_i, y_j) log [ p(x_i, y_j) / ( p(x_i) p(y_j) ) ]    (4)
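As an illustration of the mutual information formula, the quantity can be computed directly from a joint probability table. The sketch below is illustrative (base-2 logarithms, so the result is in bits) and returns 0 for independent variables:

```python
import numpy as np

def mutual_information(joint):
    """Mutual information (in bits) from a 2-D joint probability
    table p(x_i, y_j) whose entries sum to 1."""
    px = joint.sum(axis=1, keepdims=True)   # marginal p(x_i)
    py = joint.sum(axis=0, keepdims=True)   # marginal p(y_j)
    nz = joint > 0                          # skip zero-probability cells
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))
```

An independent joint table such as np.outer([0.5, 0.5], [0.5, 0.5]) gives 0, while the perfectly correlated table [[0.5, 0], [0, 0.5]] gives 1 bit.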
For more than two random variables, several generalizations of mutual information have been proposed, such as total correlation and interaction information. Our approach, however, is based on the "Minimum Redundancy Maximum Relevance Feature Selection" algorithm. The idea of minimum redundancy is to select spectra that are mutually maximally dissimilar. The minimum redundancy condition is

min(W_I),  W_I = (1/|S|²) Σ_{i,j∈S} I(i, j)    (5)

where I(i, j) is the mutual information of two bands and |S| is the number of bands in S. To measure the discriminant power of bands as they vary from one class to another, a mutual-information-based solution again works well. In our approach, the classes are provided by stage one of the algorithm: spectra covariance tracking yields two classes, one of positive spectra identified as similar to the target spectrum and one of negative samples. Thus, with h denoting the classification variable, the maximum relevance condition can be expressed as:

max(V_I),  V_I = (1/|S|) Σ_{i∈S} I(h, i)    (6)

We need to optimize these two conditions simultaneously, which is done by combining them into a single criterion:

max(V_I − W_I)    (7)

For further details, we refer the reader to the detailed discussion in [12]. As a result of this stage, the spectra and band information are reduced dramatically: in our case, irrelevant bands were eliminated so that 10 of the 210 bands remain, and instead of 310×310 spectra data we selected only 100 candidate spectra.
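The combined criterion max(V_I − W_I) is typically optimized greedily, adding one band at a time. The following small sketch is not the authors' code; it assumes precomputed mutual-information tables as inputs, with illustrative names:

```python
import numpy as np

def mrmr_select(mi_target, mi_pairwise, k=10):
    """Greedy mRMR: mi_target[i] is I(h, i) between the class label h and
    band i; mi_pairwise[i, j] is I(i, j) between bands i and j."""
    n = len(mi_target)
    selected = [int(np.argmax(mi_target))]  # most relevant band first
    while len(selected) < k:
        best, best_score = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            # redundancy: mean mutual information with already-chosen bands
            redundancy = np.mean([mi_pairwise[i, j] for j in selected])
            score = mi_target[i] - redundancy  # greedy max(V_I - W_I)
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected
```

A band that is highly relevant but redundant with an already-selected band is penalized: with relevances [0.9, 0.8, 0.1, 0.7] and bands 0 and 1 fully redundant, selecting two bands yields bands 0 and 3 rather than 0 and 1.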
Fig. 3 Rows show the covariance tracking, Derivative Difference, Euclidean Distance, and Vector Angle algorithms at SNR values 0, 20, and 40, respectively.
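The clustering stage described in the next section relies on the mean shift procedure, which iterates x ← x + M_h(x) until convergence. A minimal illustrative sketch (a flat kernel of radius h; not the authors' implementation) is:

```python
import numpy as np

def mean_shift(points, h, iters=50, tol=1e-6):
    """Shift each point to a density mode; points: (n, d), h: radius."""
    modes = points.astype(float)
    for _ in range(iters):
        shifted = np.empty_like(modes)
        for k, x in enumerate(modes):
            # flat kernel: mean of all data points within radius h of x
            mask = np.linalg.norm(points - x, axis=1) <= h
            shifted[k] = points[mask].mean(axis=0)
        converged = np.max(np.linalg.norm(shifted - modes, axis=1)) < tol
        modes = shifted
        if converged:
            break
    return modes  # points converging to the same mode form one cluster
```

Points whose modes coincide are assigned to the same cluster; two well-separated 1-D blobs, for example, collapse onto two distinct modes.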
III. MEAN SHIFT CLUSTERING

The mean shift is a nonparametric estimator of the density gradient, applied here to pattern recognition problems. Its first use was introduced in [13]. Fukunaga [14] proposes an algorithm using the mean shift procedure: the procedure is applied to each point, and when two data points converge to the same final position they are considered to belong to the same cluster. In our approach, because of the dramatic reduction of spectra, the O(n²) complexity of the mean shift algorithm is not a significant problem; spectral covariance tracking reduces an otherwise huge computational time to a negligible one.

Comaniciu and Meer prove [15] that the mean shift vector computed with kernel K and kernel bandwidth h is given by

M_h(x) = ( Σ_{i=1}^{n} x_i K((x − x_i)/h) ) / ( Σ_{i=1}^{n} K((x − x_i)/h) ) − x    (8)

where K(x) is the kernel function and h is the radius, the only parameter that mean shift clustering needs. This parameter can, however, be estimated when the target size is known. Because of the nature of hyperspectral images, the camera distance to the target or the resolution is known, and since enough information about the targets is provided in our case, one can simply calculate this parameter as:

h = real size of target / resolution

Because it assumes no prior cluster shape and can handle arbitrary feature spaces, mean shift clustering is very powerful for our approach. Its only disadvantage, an unknown window size h, is not a problem in our case.

IV. TARGET DETECTION

The last stage of our approach ends here. The clusters obtained by mean shift clustering provide the candidate targets. We again take advantage of covariance tracking, but with a slightly different approach: instead of using just one spectra datum and its features, we use the entire set of spectra in a cluster to calculate a covariance.

Let { x_i }, i = 1, …, n, be an arbitrary set of n points in a cluster S:

F(x) = [ S(x_1), S(x_2), …, S(x_n) ]

Each cluster's covariance distance to the target covariance is then calculated, using the metric of (2) again. Among these distances, the best n clusters are the result of our algorithm.

V. CONCLUSION

We have introduced a computationally efficient method that makes target detection possible in high dimensional spaces. By employing a dimension reduction algorithm based on covariance tracking and mRMR, a significant decrease in the running time and an improvement in the hit ratio are obtained while maintaining the quality of the results. This approach opens the door to the development of feature space analysis that exploits high dimensional data.

REFERENCES
[1] B. Thai and G. Healey, "Invariant subpixel target identification in hyperspectral imagery," Proc. SPIE, vol. 3717, pp. 14-24, 1999.
[2] T. Nichols, J. Thomas, W. Kober, and V. Velten, "Interference-invariant target detection in hyperspectral images," Proc. SPIE, vol. 3372, pp. 176-187, 1998.
[3] D. Casasent and X.-W. Chen, "Feature reduction and morphological processing for hyperspectral image data," Applied Optics, vol. 43, no. 2, pp. 227-236, Jan. 2004.
[4] J. Goutsias and A. Banerji, "A morphological approach to automatic mine detection problems," IEEE Trans. Aerospace and Electronic Systems, vol. 34, no. 4, pp. 1085-1096, 1998.
[5] M. J. Muasher and D. A. Landgrebe, "The K-L expansion as an effective feature ordering technique for limited training sample size," IEEE Trans. Geosci. Remote Sensing, vol. GE-21, pp. 438-441, Oct. 1983.
[6] D. Casasent and X.-W. Chen, "Waveband selection for hyperspectral data: optimal feature selection," Proc. SPIE, vol. 5106, April 2003.
[7] T. C. Pearson, D. T. Wicklow, E. B. Maghirang, F. Xie, and F. E. Dowell, "Detecting aflatoxin in single corn kernels by transmittance and reflectance spectroscopy," Trans. of the ASAE, vol. 44, no. 5, pp. 1247-1254, 2001.
[8] J. Heitschmidt, M. Lanoue, C. Mao, and G. May, "Hyperspectral analysis of fecal contamination: a case study of poultry," Proc. SPIE, vol. 3544, pp. 134-137, 1998.
[9] W. R. Windham, B. Park, K. C. Lawrence, and D. P. Smith, "Analysis of reflectance spectra from hyperspectral images of poultry carcasses for fecal and ingesta detection," Proc. SPIE, vol. 4816, pp. 317-324, 2002.
[10] W. Förstner and B. Moonen, "A metric for covariance matrices," Technical report, Dept. of Geodesy and Geoinformatics, Stuttgart University, 1999.
[11] http://en.wikipedia.org/wiki/Mutual_information
[12] H. Peng, F. Long, and C. Ding, "Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 8, pp. 1226-1238, 2005.
[13] Y. Cheng, "Mean shift, mode seeking, and clustering," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, no. 8, 1995.
[14] K. Fukunaga and L. D. Hostetler, "The estimation of the gradient of a density function," IEEE Transactions on Information Theory, vol. 21, pp. 32-40, 1975.
[15] D. Comaniciu and P. Meer, "Mean shift: a robust approach toward feature space analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 5, 2002.