ISPRS Archives, Vol. XXXIV, Part 3/W8, Munich, 17.-19. Sept. 2003

COLOR IMAGE SEGMENTATION USING THE DEMPSTER-SHAFER THEORY OF
EVIDENCE FOR THE FUSION OF TEXTURE
J. B. Mena, J.A. Malpica
Alcalá University, Escuela Politécnica, 28871, Alcalá de Henares, Madrid, Spain – juan.mena@uah.es
KEY WORDS: Color, Texture, Segmentation, Theory of evidence, Automatic extraction.
ABSTRACT:
We present a new method for the segmentation of color images, aimed at extracting information from terrestrial, aerial or satellite images. It is a supervised method for solving part of the automatic extraction problem. The basic technique consists of fusing information coming from three different sources for the same image. The first source uses the information stored in each pixel, by means of the Mahalanobis distance. The second uses the multidimensional distribution of the three bands in a window centred on each pixel, using the Bhattacharyya distance. The last source also uses the Bhattacharyya distance; in this case co-occurrence matrices are compared over the texture cube built around each pixel. Each source represents a different order of statistic. The Dempster-Shafer theory of evidence is applied in order to fuse the information from these three sources. The method shows the importance of applying context and textural properties in the extraction process. The results prove the potential of the method for real images, starting from the three RGB bands only. Finally, some examples of the extraction of linear cartographic features, especially roads, are shown.
1. INTRODUCTION
Segmentation of images represents a first step in many of the
tasks that pattern recognition or computer vision has to deal
with. There are many papers dealing with segmentation of
images using color, see (Skarbek, 1994) for an early survey and
(Cheng, 2001) for a more recent one. Several authors apply different techniques to color in order to improve the final segmentation result: (Park, 1998) presents a new algorithm based on mathematical morphology that performs clustering in 3D color space; fuzzy techniques are applied by Yang et al. (Yang, 2002); Markov Random Fields are applied for clustering in (Jayanta, 2002).
In this paper a new technique, called Texture Progressive Analysis (TPA), is presented. In this method color texture means using the interweaving of color information in the three bands by different orders of statistics. In order to obtain the greatest amount of information from the different order statistics, the method presented uses the theory of evidence (Shafer, 1976) as a fusion technique.

In section 2 the different order statistics are presented. In section 3 the information coming from the three sources for the same image is fused by means of the theory of evidence, and the relationship between the RGB and HSI systems and the order statistics is discussed. Finally, some results with real images are given in section 4, along with an evaluation and some conclusions.
2. THE SOURCES
Only color images have been considered in this work. Each image can be thought of as a set of points in a three-dimensional Euclidean space. Each pixel x is represented as a point in this space, where the three coordinates can be RGB values or HSI values.

The training set is represented by several pixels in the area of interest. Let us call (m_t, Σ_t) the mean and covariance matrix of the training set. For example, the training pixels used to segment the path in figure 2a) are shown in figure 2b).
Of all the possibilities for studying the segmentation of color images, in this paper we focus on color texture. Texture is appealing for dealing with context information, which is so important in the psychological process of recognising objects in images. Color textures have also been studied by several authors: L. Song et al. (Song, 1996) obtained the fractal dimension of color texture, using the correlation among the color bands for feature extraction. Others that have used color and fractal dimension for color texture segmentation are A. Conci and C. B. Proenca (Conci, 1997). R. Krishnamoorthi and P. Bhattacharyya (Krishnamoorthi, 1997) have put forward an orthogonal polynomial based color texture model for the unsupervised segmentation of images, with interesting results.
The algorithms for the segmentation of images using color texture have a very broad field of application, such as clothing (Chang, 1996), automated surveillance (Paschos, 1999), and retrieval of images from large databases (Zhong, 2000).
In order to segment the image, three orders of statistics have been used. In the first order statistic the Mahalanobis distance between the pixels to be detected and the training set is used. In the second order a co-occurrence matrix that interweaves the color bands is studied. There is also an intermediate statistic, which uses the Bhattacharyya distance between the distributions of pixels for the training area and the area to be detected.
2.1 Source 1: First order statistic
Since pixels are also points in a three-dimensional space, the simplest way of classifying the pixels in the image, as belonging or not to the area of interest, is to calculate the distance between the pixels to be classified and the training pixels. To see how far a new pixel x is from the area of interest, we measure its distance from the mean m_t of the training set. The Mahalanobis distance d (Fukunaga, 1990) is used instead of the Euclidean distance, since the former gives a better approximation thanks to the introduction of the covariance matrix. Its equation is:
d = √[ (m_t − x)^T Σ_t^(−1) (m_t − x) ]     (1)

where x represents the pixel under study and (m_t, Σ_t) are the mean and covariance matrix of the training set.
The result of applying the Mahalanobis distance to the detection
of the path in the image in figure 2a) is presented in figure 2c).
This last image represents the values obtained from the
Mahalanobis distance normalized on a scale from 0 to 255. The
brighter the pixel the closer it is to the training texture. In this
case several pixels from the path area were taken as training
pixels (figure 2b).
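The first order layer just described can be sketched as follows. This is a minimal illustrative implementation of equation (1) and the 0-255 normalisation, not the authors' original code; it assumes `numpy` and an image given as an (H, W, 3) array:

```python
import numpy as np

def mahalanobis_layer(image, train_pixels):
    """Per-pixel Mahalanobis distance to the training set, eq. (1).

    image: (H, W, 3) float array (RGB or HSI values).
    train_pixels: (N, 3) array of training samples.
    Returns a (H, W) uint8 layer scaled to 0..255, brighter = closer
    to the training texture, mirroring figure 2c.
    """
    m_t = train_pixels.mean(axis=0)               # mean vector m_t
    cov_t = np.cov(train_pixels, rowvar=False)    # covariance Sigma_t
    cov_inv = np.linalg.inv(cov_t)
    diff = image.reshape(-1, 3) - m_t             # (H*W, 3)
    # quadratic form (m_t - x)^T Sigma_t^-1 (m_t - x) for every pixel
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    d = np.sqrt(d2).reshape(image.shape[:2])
    # invert and normalise so that small distances appear bright
    d_norm = (d - d.min()) / (d.max() - d.min() + 1e-12)
    return ((1.0 - d_norm) * 255).astype(np.uint8)
```

A pixel identical to the training mean gets the maximum brightness, and the layer can be thresholded or fused with the other sources as described below.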
2.2 Source 2: One and a half order statistic

Instead of comparing (m_t, Σ_t) with isolated pixels, as was just done for the first order above, in this order statistic the group of pixels in the training set is compared with the group of pixels around the pixel x being analysed. This is established by comparing the distribution of pixels in the training area with the distribution of the pixels in an area around the pixel x. To do so, a 5×5 window is taken around the pixel x, which allows us to calculate (m_x, Σ_x), the mean and covariance matrix of the distribution of the pixels in the window. We have called this the one and a half order statistic, since it represents an intermediate state between the image obtained with the Mahalanobis distance and the one obtained later with the texture cube. If the distribution of pixels in the window is close enough to the distribution of pixels in the training set, then the central pixel of the window can be considered as belonging to the same group as the pixels in the training set. These distributions are compared using the Bhattacharyya distance b (Fukunaga, 1990):

b = (1/8) (m_t − m_x)^T [ (Σ_t + Σ_x) / 2 ]^(−1) (m_t − m_x) + (1/2) log [ |(Σ_t + Σ_x)/2| / √(|Σ_t| |Σ_x|) ]     (2)

where Σ_x is the covariance matrix of the group of pixels in the window around the pixel under study x. The result of applying the distance between distributions, through the function exp(−b), to the real image of figure 2a) is presented in figure 2d).

In order to obtain a robust method, only the pixels that are less than three standard deviations from the mean are considered for the calculation of the mean and covariance matrix of the training set. Some pixels of the training set could be noise, or they might not properly represent the texture for some reason; such pixels are treated as outliers of the training set.

2.3 Source 3: Second order statistic. The texture cube

Pixels that are close together tend to be more related than pixels that are far away from each other. Statistics for pairs of pixels can be studied through many different techniques. We have looked for one that keeps the greatest amount of information with the least complexity in terms of computing time.

The technique of co-occurrence matrices has shown its possibilities in many practical applications of textures. Long ago, J. Weszka et al. (Weszka, 1976) showed that co-occurrence matrices gave better results than spatial frequency methods for terrain classification. Gagalowicz (Gagalowicz, 1987) has shown the power of co-occurrence matrices for synthetic textures. He has also studied color and third order statistics, synthesising very complicated textures that require a great amount of computing time.

A model of color texture has been created, similar to the one in Mao and Jain (Mao, 1992), in order to study the second order. We have tried to keep the greatest amount of information throughout the whole process, using as few parameters as possible. Actually there are only two: the size of the window and the threshold for the plausibility that we will see in the following section. These two parameters are fixed for all the images, so there is no need to tune them for each picture; in that sense the process runs without user intervention and is therefore automatic.

Figure 1. The texture cube

For each pixel x a cube of edge three pixels is considered, like the one represented in figure 1. This cube is called the texture cube. The reason for fixing this size is to direct the method toward road extraction for a wide range of images. In fact, given the linear characteristics of roads, if a big window were used we would run the risk of taking pixels outside the road.

First, the three sections formed by the bands are considered, that is, each of the three sections S_k, k = 1, 2, 3. They correspond to each of the bands in the HSI decomposition of the image, with the pixel x in the centre position of the cube. The texture cube is formed by 27 cells, and since 256 digital levels have been considered, each cell holds a number between 0 and 255. It has been done this way, without using a threshold, in order to retain the maximum amount of information possible.

As a second step, three new sections of the texture cube are considered, this time corresponding to the columns, as shown in figure 1. Therefore there are six sections altogether, each written as:

      | t_11  t_12  t_13 |
S_k = | t_21  t_22  t_23 | ;   k = 1, 2, …, 6;   t_ij ∈ [0, 255],  i, j = 1, 2, 3     (3)
      | t_31  t_32  t_33 |

For each section S_k, four co-occurrence matrices are built, one for each direction 0°, 45°, 90° and 135°; therefore there are 24 matrices altogether. In fact, these matrices are not built directly, for computing time reasons, since each matrix would have 65536 elements (256×256). The section S_k is only of order three, so the result would be a sparse matrix with a high number of zeros. To avoid this, the co-occurrence distributions have undergone a minor modification, shown as follows.

Let us be given the section S_k of pixel x of equation (3). The co-occurrence distribution of the grey levels t_ij for the direction 0° is considered in the following manner:

DC_k[0°] = { (t_11, t_12), (t_12, t_13), (t_13, t_21), (t_21, t_22), (t_22, t_23), (t_23, t_31), (t_31, t_32), (t_32, t_33) }

and similarly for the directions 45°, 90° and 135°.
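This construction can be sketched as follows; the code is our own illustration (not the authors' implementation), taking a 3×3 section in row order and producing the eight 0° pairs together with the pair pseudoprobabilities of equation (4):

```python
import numpy as np

def dc_0deg(S):
    """Cyclic 0-degree co-occurrence pairs of a 3x3 section S, as in
    DC_k[0deg]: row-wise neighbours, wrapping from the end of one row
    to the start of the next, giving exactly eight (grey, grey) pairs
    instead of a sparse 256x256 co-occurrence matrix."""
    flat = S.reshape(-1)  # t11, t12, t13, t21, ..., t33 in row order
    return [(int(flat[i]), int(flat[i + 1])) for i in range(8)]

def pseudoprobabilities(pairs):
    """Pseudoprobability of each pair (x_i, y_i), eq. (4):
    ps_i = x_i * y_i / sum_j x_j * y_j."""
    prods = np.array([x * y for x, y in pairs], dtype=float)
    return prods / prods.sum()
```

The distributions for 45°, 90° and 135° would be obtained the same way after reading the cells of S_k in the corresponding order.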
Note that we enforce a sort of cyclic distribution: all four sets are built from eight pairs, each with frequency equal to one, so the four directions shown above are symbolic. In this way the 65536 values of the co-occurrence matrix have been substituted by the pairs of co-occurrence that actually occur among the grey levels.

The features given in R. M. Haralick (Haralick, 1979) have been calculated from the 24 distributions constructed above. The features are: correlation, energy, entropy, maximum probability, contrast, and inverse difference moment. However, to avoid the problem of sparsity in the matrices, as said before, the concept of pairwise probability has been modified in an ad hoc manner. Given the bidimensional distribution

DC_k[α°] = { (x_1, y_1), (x_2, y_2), …, (x_8, y_8) }

we define the pseudoprobability of element (x_i, y_i) as:

ps_i = x_i y_i / Σ_{j=1..8} x_j y_j     (4)

Then the values for the Haralick features are the following:

* Correlation: (1/8) Σ_i (x_i − x̄)(y_i − ȳ) / (σ_x σ_y)
* Pseudoenergy: Σ_i ps_i^2
* Pseudoentropy: −Σ_i ps_i log ps_i
* Maximum pseudoprobability: max_i ps_i
* Pseudocontrast: Σ_i (x_i − y_i)^2 ps_i
* Inverse difference pseudomoment: Σ_{i: x_i ≠ y_i} ps_i / |x_i − y_i|     (5)

For each pixel x a distribution of 24 vectors of dimension six (one component per Haralick feature) is obtained. This distribution (m_x, Σ_x) is compared with the corresponding one of the training set (m_t, Σ_t) through the Bhattacharyya distance (2); then the respective value exp(−b) is calculated. Figure 2e) shows the layer obtained after applying the texture cube technique to the image in figure 2a).

It appears that, as the order of the statistics increases, the closer we get to the psychological concept of color detection. Some authors are using psychological concepts in automatic segmentation in order to improve existing methods; such is the work done by Mirmehdi and Petrou (Mirmehdi, 2000).

3. FUSION OF THE INFORMATION LAYERS

The results obtained from the three techniques above give values that range between zero and one. This means that these values can be considered pieces of evidence for the recognition of the texture in study (Shafer, 1976).

We are going to consider only two classes in the Dempster-Shafer theory of evidence: either a pixel belongs to the texture in study (to be detected), Z, or it belongs to the background, Y. There is also an uncertainty T inherent in the theory of evidence. All this constitutes the frame of discernment Θ in our case:

Θ = {Z, Y, T}

For each pixel, three values of evidence will be obtained for each order statistic P_i (i = 1, 2, 3):

P_i(Z), P_i(Y), P_i(T)     (6)

with the condition P_i(Z) + P_i(Y) + P_i(T) = 1, i = 1, 2, 3. We will see that, using the theory of evidence, the three groups of three values are fused to obtain only one group for each pixel.

Regarding the color systems, after many trials with several images and under a visual evaluation we have chosen to apply:

- The RGB system in the first order statistic layer. It appears that results using HSI are not as good as with RGB.
- The HSI system in the intermediate layer, though some results are similar regardless of which system is applied, RGB or HSI.
- The HSI system in the last layer, the texture layer. Here the results using HSI are significantly better than those obtained with RGB, therefore the former system has been applied.

3.1 The first order statistic

The Mahalanobis distances d (1) between each pixel x and the set of training pixels are calculated. Then the maximum d_max and minimum d_min values are obtained for normalising, and the complement to one is computed:

d' = 1 − (d − d_min) / (d_max − d_min)     (7)

The standard deviation σ of the values obtained in (7) for all the pixels is taken as the uncertainty, P_1(T) = σ. In order to verify the condition of summing to one in equation (6), the values P_1(Z) are obtained as follows:

P_1(Z) = d' (1 − σ)     (8)

and by using these results the new evidence masses are obtained. These are the definitive values of evidence of each pixel being close to the feature represented by the training set. Therefore the values for not belonging to the training set are given by:

P_1(Y) = 1 − P_1(Z) − P_1(T) = (1 − d')(1 − σ)     (9)

3.2 The first and a half and second order statistics

The Bhattacharyya distances b (2) between the training set (m_t, Σ_t) and the window of each pixel (m_x, Σ_x) are calculated. Then the corresponding values d' are obtained by the following equation:

d' = 1 / e^b = exp(−b)     (10)

where b is the Bhattacharyya distance. With the values d' the evidence masses are calculated with (8) and (9).
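The derivation of the masses from the distances can be sketched as follows. This is a hedged illustration under our reading of equations (2) and (7)-(10), not the original code; function names are ours:

```python
import numpy as np

def bhattacharyya(m_t, cov_t, m_x, cov_x):
    """Bhattacharyya distance b between two normal distributions, eq. (2)."""
    cov_m = (cov_t + cov_x) / 2.0
    diff = (m_t - m_x).reshape(-1, 1)
    term1 = (diff.T @ np.linalg.inv(cov_m) @ diff).item() / 8.0
    term2 = 0.5 * np.log(np.linalg.det(cov_m) /
                         np.sqrt(np.linalg.det(cov_t) * np.linalg.det(cov_x)))
    return term1 + term2

def masses_from_dprime(d_prime, sigma):
    """Evidence masses of eqs. (8) and (9) from a normalised similarity
    d' in [0, 1] and the uncertainty sigma (std of the distance layer).
    For sources 2 and 3, d' = exp(-b) as in eq. (10)."""
    m_Z = d_prime * (1.0 - sigma)   # belongs to the texture, eq. (8)
    m_T = sigma                     # uncertainty
    m_Y = 1.0 - m_Z - m_T           # background, eq. (9)
    return m_Z, m_Y, m_T
```

Note that eq. (9) indeed reduces to (1 − d')(1 − σ), so the three masses always sum to one as required by eq. (6).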
The standard deviation of the distance values is again assigned to the uncertainty, P_2(T) = σ.

The second order statistic is treated very similarly to the one and a half order: again the Bhattacharyya distance is used, and the evidence mass for Z is obtained through equation (10).
3.3 Orthogonal product and plausibility layer

With the values P_i(Z), P_i(Y) and P_i(T) for the three orders, i = 1, 2, 3, Dempster's rule, or orthogonal product, of the theory of evidence is applied (Shafer, 1976):

(P_i ⊕ P_j)(A) = [ Σ_{B∩C=A} P_i(B) P_j(C) ] / [ Σ_{B∩C≠∅} P_i(B) P_j(C) ]     (11)

for A, B, C ⊆ Θ, A ≠ ∅.

In our case the frame of discernment is quite simple, Θ = {Z, Y, T}. It is useful to represent the orthogonal product with a square of unit side: one source P_i is represented along one side, with segments proportional to its masses on Z, Y and T, and the other source P_j along the other side. The orthogonal product (P_i ⊕ P_j) for one element of the frame of discernment is given by the sum of the rectangles inside the square labelled with that element:

          P_j:  Z    Y    T
  P_i:  Z      Z    ∅    Z
        Y      ∅    Y    Y
        T      Z    Y    T

Figure 2f) shows the layer obtained after applying the orthogonal product of the theory of evidence to the images in figures 2c), 2d) and 2e). Since the orthogonal product is associative, a new set of masses is obtained by fusing the information coming from the three orders of statistics: P_f(Z), P_f(Y), P_f(T), where P_f = P_1 ⊕ P_2 ⊕ P_3.

In order to give the final result of the segmentation, a binary image is obtained showing which pixels belong to the texture and which do not. The concept of plausibility from the theory of evidence is applied: in our case the plausibility Pl for the pixel x is the sum of the masses of the evidences for labelling the pixel as belonging to the training set, plus the uncertainty, that is:

Pl(x) = P_f(Z) + P_f(T) = 1 − P_f(Y)     (12)
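Over this three-element frame, where T plays the role of the full frame Θ, the orthogonal product reduces to a few products. A minimal sketch (our illustration, not the authors' code) follows:

```python
from functools import reduce

def dempster(p, q):
    """Dempster's orthogonal product, eq. (11), for the frame {Z, Y, T}
    used here, with T acting as the full frame (uncertainty).
    p and q are (m_Z, m_Y, m_T) triples."""
    pZ, pY, pT = p
    qZ, qY, qT = q
    # conflict: mass on Z meeting mass on Y (empty intersection)
    conflict = pZ * qY + pY * qZ
    norm = 1.0 - conflict
    mZ = (pZ * qZ + pZ * qT + pT * qZ) / norm
    mY = (pY * qY + pY * qT + pT * qY) / norm
    mT = (pT * qT) / norm
    return mZ, mY, mT

def plausibility(masses):
    """Plausibility of Z, eq. (12): Pl = P_f(Z) + P_f(T) = 1 - P_f(Y)."""
    mZ, mY, mT = masses
    return mZ + mT
```

Because the product is associative, the three sources are fused as `P_f = reduce(dempster, [p1, p2, p3])`, and `plausibility(P_f)` is then thresholded to produce the binary layer of figure 2g).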
Figure 2. (Top to bottom and left to right): a) Original image.
b) Training set. c) First order layer. d) One and a half order
layer. e) Second order layer. f) Evidence masses. g) Plausibility
layer. h) Clean plausibility layer.
Here it is necessary to define a threshold in order to proceed with the segmentation. As said above, this is one of the few parameters of the algorithm; it is the most relevant one, since it cannot be guaranteed that an optimal segmentation will be obtained in all cases. Figure 2g) shows the binary image obtained, and figure 2h) presents the resulting image after small noise areas have been cleaned, with the only objective of better visualisation.
4. RESULTS, EVALUATION AND CONCLUSIONS
4.1 Other results obtained
The process explained with the image of figure 2 has been followed with many images; in all cases the results obtained with the fusion of information from the different orders of statistics have been better than those obtained with information from the first order statistic (Mahalanobis distance) alone. Some of these images are shown in the following figures: figure 3 shows the binary segmentation of a woman's dress; figure 4 presents the extraction of a lake in a terrestrial image; and figures 5, 6, 7 and 8 present the application of our method to automatic road extraction in aerial and satellite images.
Figure 6. (Left to right): Original aerial image. Training set.
Automatic result.
Figure 7. (Left to right): Original aerial image. Training set.
Automatic result.
Figure 3. (Left to right): Original terrestrial image. Training set.
Automatic result.
Figure 8. (Left to right): Original aerial image. Training set.
Automatic result.
Figure 4. (Left to right): Original terrestrial image. Training set.
Automatic result.
4.2 Evaluation
The first source has the disadvantage, with respect to the second and third sources (higher statistical orders), that its results depend on the training set chosen. However, when the information from the first source is mixed with the information from the other layers using the theory of evidence, this problem is partially solved. This is the principal advantage of our system. When the second order statistic classifier fails to recognise a pixel as belonging to the training area, the mixing of information from the layers with lower order statistics compensates for the miscalculation. To summarise, the mixing of information through the orthogonal product produces better results than those obtained using only one order of statistics, regardless of whether it is the first, the second or the third source.

In order to check the efficiency of our method, a comparison of the TPA technique with the well-known Mahalanobis classifier on diverse types of imagery is presented in table 1. For each of the images in figures 2 to 5, the first row shows the percentage of correctly detected pixels for the first source only and the corresponding percentage for the TPA method.
Figure 5. (Left to right): Original IKONOS satellite image.
Training set. Automatic result.
The percentages of type 1 errors (pixels erroneously detected) and type 2 errors (pixels not detected) are presented in the other rows for both methods.
                        MAHALANOBIS    TEXTURE PROGRESSIVE
                        CLASSIFIER     ANALYSIS (TPA)
Fig 2   Correctness        90.68           95.15
        Errors type 1       8.69            1.47
        Errors type 2       0.63            3.38
Fig 3   Correctness        83.82           92.54
        Errors type 1      15.95            0.16
        Errors type 2       0.23            7.31
Fig 4   Correctness        63.27           99.63
        Errors type 1      36.64            0.09
        Errors type 2       0.09            0.28
Fig 5   Correctness        71.33           93.88
        Errors type 1      28.28            4.02
        Errors type 2       0.39            2.09
Table 1. Comparison between TPA technique and
Mahalanobis classifier.
Regarding automatic road extraction, the evaluation of our system follows the approach of the Wiedemann method, presented in (Wiedemann, 1998). The central axes of the results are compared with the manually extracted ones, obtaining the values presented in table 2 for the images of figures 5, 6, 7 and 8.
Measures              Fig 5    Fig 6    Fig 7    Fig 8    Mean
Resolution (m)         1        2        1        2        1.5
Completeness (%)      79       66       86       91       81
Correctness (%)       82       65       93       94       84
Quality (%)           67       48       81       87       71
RMS difference (m)     0.6      1.2      0.6      1.2      0.9
Redundancy             0.004    0.01     0.02     0.01     0.01
Gaps per km            0        0        0        0        0.0
Gap length (m)         0        0        0        0        0.0
Table 2. Evaluation on road extraction.
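For reference, the three main measures of table 2 can be computed as sketched below. This is our hedged reading of the Wiedemann-style evaluation; the buffer matching of extracted against reference axes is assumed to have been done separately, and the function name is ours:

```python
def road_metrics(matched_ref, missed_ref, matched_ext, false_ext):
    """Completeness, correctness and quality in the sense of
    (Wiedemann, 1998), from axis lengths:

    matched_ref: length of reference axes matched by the extraction
    missed_ref:  length of reference axes missed (false negatives)
    matched_ext: length of extracted axes matched by the reference
    false_ext:   length of extracted axes with no reference counterpart
    """
    completeness = matched_ref / (matched_ref + missed_ref)
    correctness = matched_ext / (matched_ext + false_ext)
    # when the matched lengths coincide (TP), this is TP / (TP + FP + FN)
    quality = (completeness * correctness /
               (completeness + correctness - completeness * correctness))
    return completeness, correctness, quality
```

With the values of image 7 (completeness 86%, correctness 93%) this formula reproduces the quality of about 81% reported in table 2.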
4.3 Conclusions
Herein we have proposed a new method to automatically extract information from terrestrial, aerial and satellite images. The segmentation process is especially important for solving the problem of thematic mapping with remote sensing data, since a binary image of high quality is essential for the later raster-vector conversion.

One of the major advantages of the technique described in this paper is that it needs only a few parameters, and most of the information is retained until the very end. These few parameters are fixed throughout the whole algorithm and are the same for all the pictures, so the process can be considered almost automatic; in all our cases the introduction of information of higher orders of statistics improved the results.

In the end, the principal advantage of our approach is the reduction of the manual work involved in updating databases, geographical or of other types, starting from a minimum of input data.
5. REFERENCES

Chang, C. C., L. L. Wang, 1996. Color texture segmentation for
clothing in a computer-aided fashion design system. Image and
Vision Computing, 14(9), pp. 685-702.

Cheng, H. D., X. H. Jiang, Y. Sun, Jingli Wang, 2001. Color
image segmentation: advances and prospects. Pattern
Recognition, 34, pp. 2259-2281.
Conci, A., C.B. Proenca, 1997. A box-counting approach to
color segmentation. Proc. Int Conf on Image Processing, IEEE
Comput. Soc, Los Alamitos, California, pp. 228-230.
Fukunaga, K., 1990. Statistical Pattern Recognition. Academic
Press, Second Ed.
Gagalowicz, A., 1987. Texture modelling applications. The
Visual Computer, Springer Verlag, 3, pp. 186-200.
Haralick, R. M., 1979. Statistical and Structural Approaches to
Texture. Proc. IEEE, 67(5), pp. 786-804.
Jayanta, M., 2002. MRF clustering for segmentation of color
images. Pattern Recognition Letters, 23(8), pp. 917-929.
Krishnamoorthi, R., P. Bhattacharyya, 1997. On unsupervised
segmentation of color texture images. Proc. Fourth Int Conf on
High-Performance Computing, IEEE Computer. Soc Press, Los
Alamitos, California, pp. 500-504.
Mao, J., A. K. Jain, 1992. Texture Classification and
Segmentation Using Multiresolution Simultaneous Autoregressive Models. Pattern Recognition, 25(2), pp. 173-188.
Mirmehdi, M., M. Petrou, 2000. Segmentation of color texture.
IEEE Trans. Pattern Analysis and Machine Intelligence, 22(2),
pp. 142-158.
Park, S., Il Dong Yun, Sang Uk Lee, 1998. Color image
segmentation based on 3-D Clustering: Morphological
approach. Pattern Recognition, 31(8), pp. 1061-1076.
Paschos, G., F.P. Valavanis, 1999. A color texture based visual
monitoring system for automated surveillance. IEEE Transactions on Systems, Man and Cybernetics, 29(2), pp. 298-307.
Shafer, G., 1976. A Mathematical Theory of Evidence. Princeton
University Press, Princeton, NJ.
Skarbek, W., A. Koschan, 1994. Color Image Segmentation.
Report, Technical University, Berlin.
Song, L., Y. Ruikang, I. Saarinen, M. Gabbouj, 1996. Use of
fractals and median type filters in color texture segmentation.
Proc. IEEE International Symposium on Circuits and Systems.
Circuits and Systems Connecting the World, New York, USA,
pp. 108-111.
Weszka, J., C. Dyer, A. Rosenfeld, 1976. A comparative study
of texture measures for terrain classification. IEEE Trans. Syst.,
Man, and Cybern, 6(4), pp. 269-285.
Wiedemann, C., C. Heipke, H. Mayer, O. Jamet, 1998.
Empirical evaluation of automatically extracted road axes. In:
Empirical Evaluation Methods in Computer Vision, Kevin J.
Bowyer, P. Jonathon Phillips (Editors), IEEE Computer Society
Press, pp. 172-187. Also in: 9th Australasian Remote Sensing
and Photogrammetry Conference, The University of New South
Wales, Sydney, Paper No. 239 (CD).
Yang, J. F., Shu-Sheng Hao, Pau-Choo Chung, 2002. Color
image segmentation using fuzzy C-means and eigenspace
projections. Signal Processing, 82(3), pp. 461-472.
Zhong, Y., A. K. Jain, 2000. Object location using color texture
and shape. Pattern Recognition, 33(4), pp. 671-684.
Zugaj, D., V. Lattuati, 1998. A new approach of color images
segmentation based on fusing region and edge segmentations
outputs. Pattern Recognition, 31(2), pp. 105-113.