Image Texture Classification using Gray Level Co-occurrence Matrix
and Neural Network
Muhammad Suzuri Hitam, Mohd Yahyal Haq Muslan,
Mustafa Mat Deris and Md. Yazid Mohd Saman
Department of Computer Science
University College of Science and Technology Malaysia
21030 Mengabang Telipot, Kuala Terengganu
MALAYSIA
http://www.kustem.edu.my/suzuri
Abstract: - This paper presents the use of the Grey Level Co-occurrence Matrix (GLCM) method together with a multilayer fully-connected feed-forward perceptron (MLP) for image texture classification. The GLCM method was employed as the feature extraction technique and the MLP was used as the image texture classifier. A range of Brodatz textures was employed to evaluate the proposed system. Results from various experimental investigations, including general image textures, noise-corrupted image textures and rotated image textures, showed that the MLP with GLCM works well as a texture classifier.
Key-Words: - Grey Level Co-occurrence Matrix (GLCM), Neural Network, Texture and Classification
1 Introduction
Texture recognition and classification is an important area of study in computer image analysis. It has a wide range of applications in many fields, from remote sensing to industrial products. In the literature, various methods for texture classification have been proposed [1] – [5].
Some of the popular statistical image properties that can be used for texture classification are: first order statistics of local property values such as mean and variance [6], second order statistics such as co-occurrence matrices [7]-[9], and higher order statistics such as Statistical Geometric Features (SGF) [10]. These properties are used as elements of feature vectors in performing texture classification.
In this paper, one of the most powerful methods for general texture classification, known as the Grey Level Co-occurrence Matrix (GLCM) [7]-[9], is employed as the texture feature extraction technique. A multi-layered fully connected perceptron (MLP) is used as the texture classifier. The objective of this paper is to study the texture classification capability of the neural network as a classifier with GLCM as a feature extractor.
The paper is organized as follows. Section 2 describes the GLCM method in detail. Section 3 explains the classifier used in the classification process. Section 4 presents experimental results obtained in classification experiments involving up to 8 classes of textures from the Brodatz album [11]. Finally, conclusions and discussion are presented in Section 5.
2 Grey Level Co-Occurrence Matrix (GLCM)
The GLCM was introduced by Haralick et al. [12]. It is a second order statistical method which is reported to be able to characterize textures as an overall or average spatial relationship between grey tones in an image [13]. Its development was inspired by the conjecture of Julesz [14] that second order probabilities are sufficient for human discrimination of texture.
In general, the GLCM is computed as follows. First, an original texture image D is re-quantized into an image G with a reduced number of grey levels, Ng. A typical value of Ng is 16 or 32. Then, the GLCM is computed from G by scanning the intensity of each pixel and its neighbour, defined by a displacement d and an angle ø. The displacement d can take a value of 1, 2, 3, ..., n, whereas the angle ø is limited to 0°, 45°, 90° and 135°.
The GLCM P(i,j|d,ø) is a second order joint probability density function of grey level pairs in the image, obtained by dividing each element of the co-occurrence matrix by Ng². Finally, scalar secondary features are extracted from this co-occurrence matrix. In this paper, the 8 most commonly used GLCM secondary features, as defined in [12] and [7], were employed; they are given in Eq. 1 to Eq. 8. All these features were used as inputs to the neural network classifier.

Energy: $\sum_{i,j} P(i,j)^2$  (1)

Entropy: $-\sum_{i,j} P(i,j)\,\log P(i,j)$  (2)

Homogeneity: $\sum_{i,j} \frac{1}{1+(i-j)^2}\,P(i,j)$  (3)

Inertia: $\sum_{i,j} (i-j)^2\,P(i,j)$  (4)

Correlation: $\sum_{i,j} \frac{(i-\mu_x)(j-\mu_y)}{\sigma_x \sigma_y}\,P(i,j)$  (5)

Shade: $\sum_{i,j} (i+j-\mu_x-\mu_y)^3\,P(i,j)$  (6)

Prominence: $\sum_{i,j} (i+j-\mu_x-\mu_y)^4\,P(i,j)$  (7)

Variance: $\sum_{i,j} (i-\mu_x)^2\,P(i,j)$  (8)

where $\mu_x = \sum_i i \sum_j P(i,j)$, $\mu_y = \sum_j j \sum_i P(i,j)$, $\sigma_x^2 = \sum_i (i-\mu_x)^2 \sum_j P(i,j)$ and $\sigma_y^2 = \sum_j (j-\mu_y)^2 \sum_i P(i,j)$.
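As a concrete rendering of Eqs. 1-8, the following sketch computes the eight secondary features from a co-occurrence matrix P with NumPy (our own code; the function name and the 0-based indexing are assumptions, not from the paper):

```python
import numpy as np

def glcm_features(P):
    # P: Ng x Ng normalized co-occurrence matrix; indices are 0-based here.
    Ng = P.shape[0]
    i, j = np.indices((Ng, Ng))                 # grey-level index grids

    mu_x = np.sum(i * P)                        # mean over rows
    mu_y = np.sum(j * P)                        # mean over columns
    sd_x = np.sqrt(np.sum((i - mu_x) ** 2 * P))
    sd_y = np.sqrt(np.sum((j - mu_y) ** 2 * P))
    eps = np.finfo(float).eps                   # guard against log(0)

    return {
        "energy":      np.sum(P ** 2),                                       # Eq. 1
        "entropy":     -np.sum(P * np.log(P + eps)),                         # Eq. 2
        "homogeneity": np.sum(P / (1.0 + (i - j) ** 2)),                     # Eq. 3
        "inertia":     np.sum((i - j) ** 2 * P),                             # Eq. 4
        "correlation": np.sum((i - mu_x) * (j - mu_y) * P) / (sd_x * sd_y),  # Eq. 5
        "shade":       np.sum((i + j - mu_x - mu_y) ** 3 * P),               # Eq. 6
        "prominence":  np.sum((i + j - mu_x - mu_y) ** 4 * P),               # Eq. 7
        "variance":    np.sum((i - mu_x) ** 2 * P),                          # Eq. 8
    }
```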
To visualize the mechanism of the GLCM, it is best described by example. To keep the example simple, consider a re-quantized image of four intensities as illustrated in Fig. 1(a). Let us assume that the displacement d and the angle ø are 1 and 0°, respectively. The co-occurrence matrix element P(1,2) is computed by counting all pixels in the image whose intensity value is 1 and whose next neighbouring pixel in the same row (d = 1 and ø = 0°) has intensity 2. In this example, there are 2 such cases, thus P(1,2) = 2, as shown in Fig. 1(b). Fig. 1(c) shows the GLCM in the form of probability estimates.
Fig. 1(a). Image matrix:

1 1 2 3 4
1 2 3 4 1
2 2 3 3 3
4 3 3 2 1
3 3 3 4 4

Fig. 1(b). Co-occurrence matrix:

    1  2  3  4
1   1  2  0  0
2   1  1  3  0
3   0  1  5  3
4   1  0  1  1

Fig. 1(c). Actual GLCM values:

    1       2       3       4
1   0.0625  0.125   0       0
2   0.0625  0.0625  0.1875  0
3   0       0.0625  0.3125  0.1875
4   0.0625  0       0.0625  0.0625
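The worked example can be reproduced in a few lines. This sketch (ours, not from the paper) counts horizontal pixel pairs for d = 1, ø = 0° and then normalizes by Ng² = 16, which reproduces Fig. 1(b) and Fig. 1(c):

```python
import numpy as np

# The 5x5 re-quantized image of Fig. 1(a), intensities 1..4.
img = np.array([[1, 1, 2, 3, 4],
                [1, 2, 3, 4, 1],
                [2, 2, 3, 3, 3],
                [4, 3, 3, 2, 1],
                [3, 3, 3, 4, 4]])

Ng = 4
C = np.zeros((Ng, Ng), dtype=int)

# d = 1, angle = 0 degrees: pair each pixel with its right-hand neighbour.
for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
    C[a - 1, b - 1] += 1          # intensities are 1-based in the example

print(C)              # counts of Fig. 1(b); C[0, 1] == 2, i.e. P(1,2) = 2
print(C / Ng ** 2)    # normalized values of Fig. 1(c)
```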
3 Neural Network Classifier
Artificial neural networks (ANN), or simply neural networks, are biologically inspired networks which have proven useful in application areas such as pattern recognition, classification and prediction [16]. Neural networks derive their power from their massively parallel structure and their ability to learn from experience. When fully trained, neural networks can classify input data into categories with fairly high accuracy. A fully trained network is said to have converged when some predetermined specification, typically a sum-squared-error goal, has been achieved. The knowledge gained by the network during the training phase is stored in the form of connection weights. With these connection weight values, a neural network is capable of making decisions on fresh input.
In this paper, a fully connected feed-forward multi-layer perceptron (MLP) with various learning algorithms was employed for texture classification. Fig. 2 shows a sketch of the three-layer MLP employed in this work. The 8 network inputs are energy, entropy, homogeneity, inertia, correlation, shade, prominence and variance, and the number of output neurons varies with the number of classes. For example, for a 2-class problem only 1 output neuron is used, and for an 8-class problem 3 output neurons were employed.
In all of the experiments, except when stated otherwise, a hyperbolic tangent activation function is employed in the hidden layer of the network and a linear activation function in the output layer. The number of hidden neurons is varied in the experiments to find the optimum network size. Each training process, except where mentioned, consists of 10 runs of cross validation with the sum-squared-error performance goal set to 0.0001 and the maximum number of epochs set to 300. After completing the 10 training runs, the average mean square error (MSE) was taken, and the best network produced in the training phase was later used in the testing phase.

Fig. 2. Neural network classifier structure (8 input nodes, one hidden layer of n neurons, and an output layer).
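For readers wanting to reproduce the classifier, a comparable network can be sketched with scikit-learn as below. Two substitutions are worth flagging: Levenberg-Marquardt training (used later in the experiments) is not available in scikit-learn, so L-BFGS stands in for it here, and MLPClassifier chooses its own output activation rather than exposing the paper's linear output layer. The rest (8 GLCM inputs, tanh hidden layer, 10 hidden neurons, 0.0001 goal, 300 epochs) follows the text.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# X: one row of 8 GLCM features per sample (energy, entropy, homogeneity,
# inertia, correlation, shade, prominence, variance); y: texture class labels.
X = np.random.rand(400, 8)               # placeholder feature matrix
y = np.random.randint(0, 4, size=400)    # placeholder labels for a 4-class problem

X = StandardScaler().fit_transform(X)    # the 8 features differ widely in scale

clf = MLPClassifier(hidden_layer_sizes=(10,),  # one hidden layer of 10 neurons
                    activation="tanh",         # hyperbolic tangent hidden units
                    solver="lbfgs",            # stand-in for Levenberg-Marquardt
                    tol=1e-4,                  # cf. the 0.0001 performance goal
                    max_iter=300,              # cf. the 300-epoch limit
                    random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```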
4 Texture Classification Experiments
Experimental investigations include finding the optimum settings for the MLP classifier and evaluating the network's classification performance when the image texture data are changed, for example when texture images are corrupted with different degrees of Gaussian white noise or when the image textures are rotated.
4.1 Image Data
In this paper, the texture images are all taken from Brodatz's album [11]. Fig. 3 shows the original 8 Brodatz image textures employed in this study. Each image has a resolution of 640 x 640 pixels with 256 grey levels. Each image was divided into 64 x 64 pixel samples, which produced 100 samples per image, resulting in 800 samples in total. Fig. 4 shows samples of each image after division.
From every image class, 50% randomly chosen image samples were used as the training set and the rest were used as the testing set. The classification experiments were conducted on 2, 4 and 8-class problems, in a way similar to the experiments conducted in [13]. It should be noted that for the 2-class classification problem, only 200 samples from the 2 selected image classes were used in the experiments; for the 4-class classification problem, 400 image samples from the 4 selected image classes were employed, and so forth.
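As a sketch of this data preparation (our code, assuming each texture is available as a 640 x 640 NumPy array), the tiling into 100 non-overlapping 64 x 64 samples and the 50/50 random split could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def tile_image(img, size=64):
    # Split a 640x640 texture into 100 non-overlapping 64x64 samples.
    h, w = img.shape
    return [img[r:r + size, c:c + size]
            for r in range(0, h, size)
            for c in range(0, w, size)]

image = rng.integers(0, 256, size=(640, 640))  # placeholder for a Brodatz texture
samples = tile_image(image)                    # 100 samples of 64x64 pixels

idx = rng.permutation(len(samples))
train = [samples[k] for k in idx[:50]]         # 50% randomly chosen for training
test  = [samples[k] for k in idx[50:]]         # remaining 50% for testing
```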
Fig. 3. The Brodatz textures used in this study: (a) D1, (b) D10, (c) D101, (d) D102, (e) D105, (f) D106, (g) D16, (h) D17.

Fig. 4. Image samples after division: (a) D'1, (b) D'10, (c) D'101, (d) D'102, (e) D'105, (f) D'106, (g) D'16, (h) D'17.
4.2 Classification Results
In the first experiment, the effect of using different learning algorithms on classification accuracy is studied. This experiment involves training the neural network classifier to classify a range of image textures, from 2 classes to 8 classes, using 4 different learning algorithms: Levenberg-Marquardt, adaptive gradient descent, gradient descent with adaptive momentum and scaled conjugate gradient. For this experiment, the number of hidden layer neurons is arbitrarily set to 10. Fig. 5 shows the classification performance. It can be clearly seen that the Levenberg-Marquardt learning algorithm gives the best overall classification accuracy; this learning algorithm is therefore employed throughout this study. From the figure, it can also be observed that, in general, classification performance degrades as the number of textures to be classified increases.

Fig. 5. Texture classification under different learning algorithms: accuracy against number of classes (2 to 8) for Levenberg-Marquardt, adaptive gradient descent, gradient descent with adaptive momentum and scaled conjugate gradient.
In the second experiment, the number of hidden neurons is varied with the Levenberg-Marquardt learning algorithm. Fig. 6 shows the classification performance of the network for this experiment. From this experiment, it can be concluded that the number of hidden neurons needed varies with the number of classes. However, the general trend in Fig. 6 shows that 8 to 12 hidden neurons provide good classification performance.
The third experiment compares classification performance when the network is subjected to a two-class classification problem. This experiment is divided into two categories: similar-looking textures (Fig. 3(e) D105 and Fig. 3(f) D106) and very different-looking textures (Fig. 3(b) D10 and Fig. 3(h) D17). Fig. 7 shows that the network performs very well (100% accuracy) when subjected to the two different-looking textures. However, when the network is presented with the similar-looking textures, its classification performance degrades to 87% accuracy.
Fig. 6. Classification performance with different numbers of hidden neurons (2 to 16) against number of classes (2 to 8).
Fig. 7. Classification performance when subjected to similar-looking textures (D105-D106) and different-looking textures (D10-D17).
In the fourth experiment, the ability of the proposed classifier system is evaluated when the original image is corrupted with different degrees of Gaussian white noise. In this experiment, the (mean, variance) of the Gaussian white noise is set to (0, 0.05), (0, 0.10), (0.05, 0) and (0.10, 0), respectively. Fig. 8 shows samples of the textures after noise is added, and Fig. 9 shows the classification results of the network. It is found that the neural classifier system performs very well even after different degrees of Gaussian white noise are added. It is expected that classification accuracy degrades as the amount of white noise increases. This can be observed in Fig. 9 when the mean value of the Gaussian white noise is 0.1 and the variance is 0: the image becomes whiter and some of the data may be lost. Surprisingly, in this case the classification accuracy increases as the number of texture classes increases, in contrast to the earlier findings. These results demonstrate the robustness of the proposed method in classifying texture images.
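One way to generate such noisy variants (our sketch; scikit-image's random_noise works on images scaled to [0, 1] and clips the result) is:

```python
import numpy as np
from skimage.util import random_noise

img = np.random.rand(64, 64)        # placeholder texture sample scaled to [0, 1]

# The four (mean, variance) settings used in this experiment.
settings = [(0.0, 0.05), (0.0, 0.10), (0.05, 0.0), (0.10, 0.0)]
noisy = [random_noise(img, mode="gaussian", mean=m, var=v)
         for m, v in settings]
```

With var = 0 the call simply adds a constant offset equal to the mean, which (after clipping) brightens the image, consistent with the "whiter" images described above.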
Fig. 8. Samples of image D1 after Gaussian white noise is added, with (mean, variance) of (a) (0, 0.05), (b) (0, 0.10), (c) (0.05, 0) and (d) (0.10, 0).

Fig. 9. Classification performance under different degrees of Gaussian white noise: the original image and noise settings m=0, v=0.05; m=0, v=0.10; m=0.05, v=0; m=0.10, v=0.
In the final experiment, the neural network classifier system is evaluated on classifying rotated similar-looking textures. In this experiment, images D105 and D106 were again selected and rotated by 45°, 90°, 135° and 180°, respectively. Fig. 10 shows examples of the rotated images employed in this study.
To further evaluate the robustness of the proposed system, two different experiments were conducted. In the first, rotated images were used during the testing phase only and no rotated images were employed during network training. This ensures that the network has never seen such rotated images before. In the second, the rotated images were also employed during the training stage. Fig. 11 and Fig. 12 show the results of these experiments. From these figures, it can be observed that the proposed system performs equally well even when it has never been trained to classify a rotated image. Thus, it can be concluded that the use of the GLCM method with the neural network classifier provides a very robust and efficient method for image texture classification.
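The rotated test samples can be produced with a standard rotation routine; a minimal sketch (ours) using SciPy:

```python
import numpy as np
from scipy.ndimage import rotate

img = np.random.rand(64, 64)        # placeholder D105/D106 texture sample

# The four rotation angles used in this experiment.
rotated = {angle: rotate(img, angle, reshape=False, mode="reflect")
           for angle in (45, 90, 135, 180)}
```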
Fig. 10. Rotated images of D105 and D106 at (a)/(e) 45°, (b)/(f) 90°, (c)/(g) 135° and (d)/(h) 180°.
Fig. 11. Classification performance for D105-D106 at rotations of 0°, 45°, 90°, 135° and 180° when the neural network never saw the rotated images during the training phase.
Fig. 12. Classification performance for D105-D106 at rotations of 0°, 45°, 90°, 135° and 180° when the neural network was trained with rotated images.
5 Conclusions
In this paper, it has been shown that a multi-layered fully connected perceptron can be employed successfully as an image texture classifier with the GLCM technique as a feature extractor. It has also been shown that the proposed system is robust to image rotation and to noise. It is also found that for different-looking image textures, 100% classification accuracy can be achieved. However, for very similar-looking image textures, the classification accuracy degrades to 87%. Therefore, further research should be carried out to enhance the classification of similar-looking image textures. It is suggested that using different values of the angle ø in the GLCM technique could perhaps improve classification accuracy for similar-looking textures and rotated image textures.
References:
[1] R.M. Haralick, Statistical and Structural Approaches to Texture, Proc. IEEE, 67, 1979, pp. 786-804.
[2] R.M. Haralick and L.G. Shapiro, Computer and Robot Vision, Vol. 1, Addison-Wesley, Reading, MA, 1992.
[3] T.R. Reed and J.M.H. Du Buf, A Review of Recent Texture Segmentation and Feature Extraction Techniques, CVGIP: Image Understanding, 57, 1993, pp. 359-372.
[4] C.H. Chen, L.F. Pau, and P.S.P. Wang, Handbook of Pattern Recognition and Computer Vision, World Scientific, Singapore, 1993.
[5] L. Van Gool, P. Dewaele and A. Oosterlinck, Texture Analysis Anno 1983, Comput. Vision, Graphics, Image Process., 29, 1985, pp. 336-357.
[6] T. Ojala, K. Valkealahti, E. Oja, and M. Pietikainen, Texture Discrimination with Multidimensional Distributions of Signed Gray-level Differences, Pattern Recognition, 34, 2001, pp. 727-739.
[7] R.W. Conners, M. Trivedi, and C.A. Harlow, Segmentation of a High Resolution Urban Scene using Texture Operators, Computer Vision, Graphics, and Image Processing, 25, 1984, pp. 273-310.
[8] P.P. Ohanian and R.C. Dubes, Performance Evaluation for Four Classes of Textural Features, Pattern Recognition, 25(8), 1992, pp. 819-833.
[9] C.C. Gotlieb and H.E. Kreyszig, Texture Descriptors based on Co-occurrence Matrices, Computer Vision, Graphics, and Image Processing, 51, 1990, pp. 70-86.
[10] Y.Q. Chen, M.S. Nixon, and D.W. Thomas, Statistical Geometric Features for Texture Classification, Pattern Recognition, 28(4), 1995, pp. 537-552.
[11] P. Brodatz, Textures: A Photographic Album for Artists and Designers, Dover, New York, 1966.
[12] R.M. Haralick, K. Shanmugam and I. Dinstein, Textural Features for Image Classification, IEEE Trans. on Systems, Man, and Cybernetics, SMC-3, 1973, pp. 610-621.
[13] R.F. Walker, Adaptive Multi-scale Texture Analysis with Application to Automated Cytology, Ph.D. Thesis, The University of Queensland, 1997.
[14] B. Julesz, Visual Pattern Discrimination, IRE Transactions on Information Theory, IT-8(2), 1962, pp. 84-92.
[15] M. Sonka, V. Hlavac and R. Boyle, Image Processing, Analysis and Machine Vision, PWS Publishing, 1999.
[16] R.P. Lippman, Pattern Classification using Neural Networks, IEEE Communications Magazine, 1989, pp. 47-64.