International Journal of Computer Science and Applications
© Technomathematics Research Foundation
Vol. 8, No. 1, pp. 36-53, 2011

Quantitative Evaluation of Color Image Segmentation Algorithms

ANDREEA IANCU
Software Engineering Department, University of Craiova, Bd. Decebal 107, Craiova, Romania
andreea.iancu@itsix.com

BOGDAN POPESCU
Software Engineering Department, University of Craiova, Bd. Decebal 107, Craiova, Romania
bogdan.popescu@itsix.com

MARIUS BREZOVAN
Software Engineering Department, University of Craiova, Bd. Decebal 107, Craiova, Romania
brezovan_marius@software.ucv.ro

EUGEN GANEA
Software Engineering Department, University of Craiova, Bd. Decebal 107, Craiova, Romania
ganea_eugen@software.ucv.ro

The present paper addresses the field of image segmentation by evaluating several existing approaches and comparing their experimental results from the point of view of error measurement. We introduce a new method of salient object detection that takes both color and geometric features into consideration in order to produce a conclusive result. The segmentation method introduced in this paper has shown very good results when compared with well-known object detection methods. We use a set of error measures to analyze the consistency of the segmentations produced by several well-known algorithms. The experimental results offer a complete basis for a parallel analysis focused on the precision of our algorithm rather than on individual efficiency.

Keywords: color segmentation; graph-based segmentation; contour-based evaluation methods.

1. Introduction

Segmentation of color images is one of the most important subjects in image processing. Several studies have so far achieved relevant results, but segmentation is still an open topic in the computer vision field.
The evaluation of image segmentation [Martin (2001)] focuses on two main properties: objectivity and generality. By objectivity we understand that all the test images in the benchmark should have an unambiguous ground-truth segmentation, so that the evaluation can be conducted objectively. Generality, on the other hand, refers to the fact that the test images in the benchmark should have a large variety, so that the evaluation results can be extended to other images and applications. Quantifying the performance of a segmentation algorithm remains a challenging task. This is largely because image segmentation is not a strictly defined problem: there is no single ground-truth segmentation against which the output of an algorithm may be compared. Rather, the comparison must be made with the set of all possible perceptually consistent interpretations of the image, of which only a small fraction is usually available. Segmentation is an important research field, and many segmentation methods have been proposed in the literature ([Fu (1981)], [Felzenszwalb (2004)], [Comaniciu (2002)], [Shi (2000)]). The target of image segmentation is the domain-independent partition of the image into a set of regions that are visually distinct and uniform with respect to some property, such as grey level, texture or color. One of the main characteristics of the studied approaches is the moment of fusion of edge and region information, which can be either embedded in the detection of regions or placed at the end of both processes [Falah (1994)]. Embedded integration introduces the edge information through the definition of new parameters or a new decision criterion for the segmentation operation. Most commonly, the edge information is first extracted and then used within a region-based segmentation algorithm; as an example, the edge information can be used to define the seed points from which regions are grown.
The purpose of this integration strategy is to use boundary information to avoid some of the issues of region-based techniques. Post-processing integration is performed after the image has been processed using the two different methods (boundary-based and region-based techniques). Contour and region information are extracted in a first step. A fusion process then tries to exploit the dual information in order to modify, or refine, the initial segmentation obtained by a single technique. The goal is to improve the initial results and obtain a more accurate segmentation [Freixenet (1997)]. The main objective of this paper is to highlight the very good segmentation results obtained by our technique, Graph-Based Salient Object Detection, and to compare them with existing methods. The algorithms we use for comparison are Normalized Cuts, Efficient Graph-Based Image Segmentation (Local Variation) and Mean Shift. All of them are complex and well-known algorithms with very good results in this area, so building the evaluation on their results provides a solid reference. The experiments were completed using the images and ground-truth segmentations of the Berkeley segmentation dataset [Berkeley (2003)]. The dataset has proved quite useful for our work in evaluating the effectiveness of different edge filters as indicators of boundaries. Since a ground-truth segmentation may not be well and uniquely defined, each test image in the Berkeley benchmark was manually segmented by a group of people. Because the dataset is varied and extensive, we expect that our results will find further use, based on the mechanism for computing the consistency of different segmentations. The segmentation accuracy is measured using the global consistency error and the local consistency error. We provide comparative results that reflect the well-balanced behavior of the algorithm we propose. The paper is organized as follows.
In Section 2 we briefly present previous studies in the domain of image segmentation. Section 3 presents the segmentation method we propose, together with the methods used for comparison. The performance evaluation methodology is presented in Section 4, and the experiments and their results in Section 5. Section 6 concludes the paper and outlines the main directions of future work.

2. Related Work

The evaluation of image segmentation is an open subject in today's image processing field. The goal of existing studies is to establish the accuracy of each individual approach and to find new methods of improvement. Some previous works in this domain do not require a ground-truth segmentation as reference. In these methods, segmentation performance is usually measured by contextual and perceptual properties, such as the homogeneity within the resulting segments and the inhomogeneity across neighboring segments. Most evaluation methods, however, require ground-truth segmentations as reference. Since the construction of the ground truth for many real images is labor-intensive, and sometimes not well or uniquely defined, most prior image segmentation methods are only tested on special classes of images used in applications where the ground-truth segmentations are uniquely defined, on synthetic images where the ground truth is also well defined, and/or on a small set of real images. The main drawback of providing such a reference is the resources it requires. However, by analyzing the differences between the image under study and the ground-truth segmentation, a performance proof is obtained. Region-based segmentation methods can be broadly classified as either model-based [Carson (2002)] or visual feature-based [Fauqueur (2004)] approaches. A distinct category of region-based segmentation methods that is relevant to our approach is represented by graph-based segmentation methods.
Most graph-based segmentation methods attempt to find certain structures in the edge-weighted graph constructed on the image pixels, such as a minimum spanning tree [Felzenszwalb (2004)] or a minimum cut [Shi (2000)]. Related work on quantitative segmentation evaluation includes both standalone evaluation methods, which do not make use of a reference segmentation, and relative evaluation methods employing ground truth. For standalone evaluation of image segmentations, metrics for intra-object homogeneity and inter-object disparity have been proposed in [Levine (1985)], and a combined segmentation and evaluation scheme is developed in [Zhang (2002)]. Although standalone evaluation methods can be very useful in such applications, their results do not necessarily coincide with the human perception of the goodness of segmentation. When a reference mask is available or can be generated, relative evaluation methods are therefore preferred, so that the evaluation results coincide with human perception. The Berkeley image segmentation benchmark is the reference we use for our study. Using the provided image dataset as input, we develop a customized methodology to evaluate our algorithm efficiently. Martin et al. [Martin (2001)] propose two metrics that can be used to evaluate the consistency of a pair of segmentations based on human segmentations; the measures are designed to be tolerant to refinement. A methodology for evaluating the performance of boundary detection techniques against a database containing ground-truth data was developed in [Martin (2004)]. It is based on the comparison of the detected boundaries with human-marked boundaries using the Precision-Recall framework. Other evaluation methods are based on pixel discrepancy measures. In [Huang (1995)], the use of binary edge masks and scalable discrepancy measures is proposed. Odet et al.
[Odet (2002)] also propose an evaluation method based on edge pixel discrepancy, but one that establishes a correspondence of regions between the reference mask and the examined one. The closest work to ours is [Felzenszwalb (2004)], in which an image segmentation is produced by creating a forest of minimum spanning trees of the connected components of the associated weighted graph of the image. The novelty of our contribution concerns two main aspects: (a) in order to minimize the running time we construct a hexagonal structure based on the image pixels, which is used in both the color-based and the syntactic-based segmentation algorithms, and (b) we propose an efficient method for segmentation of color images based on spanning trees and on both color and syntactic features of regions.

3. Segmentation Methods

We compare four different segmentation techniques: the Mean Shift-Based segmentation algorithm [Comaniciu (2002)], the Efficient Graph-Based segmentation algorithm [Felzenszwalb (2004)], the Normalized Cuts segmentation algorithm [Shi (2000)] and our own region-based segmentation method. We have chosen Mean Shift-Based segmentation because it is generally effective and has become widely used in the vision community. The Efficient Graph-Based segmentation algorithm was chosen as an interesting comparison to Mean Shift: its general approach is similar, but it excludes the mean shift filtering step itself, thus partially addressing the question of whether the filtering step is useful. Due to its computational efficiency, Normalized Cuts represents a solid reference in our study. All these algorithms serve as terms of comparison for the evaluation we performed.

3.1. Graph-Based Salient Object Detection

We present an efficient segmentation method that uses color and some geometric features of an image in order to produce a reliable result [Burdescu (2009)].
The color space used is RGB, because of its color consistency and computational efficiency.

Fig. 1. The grid-graph constructed on the hexagonal structure of an image

What is particular to this approach is the use of a hexagonal structure instead of individual color pixels. In this way we can represent the structure as a grid-graph G = (V, E), where each hexagon h in the structure has a corresponding vertex v ∈ V, as presented in Figure 1. Each hexagon has six neighbors, and each neighborhood connection is represented by an edge in the set E of the graph. Two important attributes are associated with each hexagon: the dominant color and the coordinates of the gravity center. Each hexagonal cell contains eight pixels: six on the frontier and two in the middle. Image segmentation is realized in two distinct steps. The first is a pre-segmentation step in which only color information is used to determine an initial segmentation. The second is a syntactic-based segmentation step in which both color and geometric properties of regions are used. The first step of the segmentation algorithm uses a color-based region model and produces a forest of maximum spanning trees based on a modified form of Kruskal's algorithm. In this case the evidence for a boundary between two adjacent regions is based on the difference between the internal contrast and the external contrast between the regions. The color-based segmentation algorithm builds a maximal spanning tree for each salient region of the input image. The second step of the segmentation algorithm uses a new graph, which has a vertex for each connected component determined by the color-based segmentation algorithm. In this case the region model additionally contains some geometric properties of regions, such as the area of the region and the region boundary.
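To make the grid-graph G = (V, E) over the hexagonal structure concrete, the sketch below enumerates the six neighbors of a hexagonal cell and builds the edge set. The "odd-row offset" indexing and the boundary handling are our own illustrative assumptions; the paper's exact pixel-to-hexagon mapping is not reproduced here.

```python
# Sketch of the grid-graph over a hexagonal lattice (illustrative
# indexing scheme, not the paper's exact construction).

def hex_neighbors(row, col, n_rows, n_cols):
    """Return the (up to six) neighboring cells of hexagon (row, col),
    using odd-row offset coordinates."""
    if row % 2 == 0:
        offsets = [(-1, -1), (-1, 0), (0, -1), (0, 1), (1, -1), (1, 0)]
    else:
        offsets = [(-1, 0), (-1, 1), (0, -1), (0, 1), (1, 0), (1, 1)]
    return [(row + dr, col + dc) for dr, dc in offsets
            if 0 <= row + dr < n_rows and 0 <= col + dc < n_cols]

def build_grid_graph(n_rows, n_cols):
    """Vertices are hexagon indices; each edge connects two neighboring
    hexagons (stored as a frozenset so the graph is undirected)."""
    vertices = [(r, c) for r in range(n_rows) for c in range(n_cols)]
    edges = {frozenset([v, u]) for v in vertices
             for u in hex_neighbors(v[0], v[1], n_rows, n_cols)}
    return vertices, edges
```

An interior cell has exactly six neighbors, while cells on the image border have fewer, which matches the grid-graph description above.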
The final segmentation step produces a forest of minimum spanning trees based on a modified form of Borůvka's algorithm. Each determined minimum spanning tree represents a final salient region determined by the segmentation algorithm.

3.2. Efficient Graph-Based Image Segmentation

Efficient Graph-Based image segmentation [Felzenszwalb (2004)] is an efficient method of performing image segmentation. The basic principle is to process the data points of the image directly, using a variation of single linkage clustering without any additional filtering. A minimum spanning tree of the data points is used to perform traditional single linkage clustering, from which any edges with length greater than a given threshold are removed [Unnikrishnan (2007)]. Let G = (V, E) be a fully connected graph, with m edges {e_i} and n vertices. Each vertex is a pixel, x, represented in the feature space. The final segmentation will be S = (C_1, ..., C_r), where C_i is a cluster of data points. The algorithm [Felzenszwalb (2004)] can be briefly presented as follows:

(1) Sort E = (e_1, ..., e_m) such that |e_t| ≤ |e_{t'}| for all t < t'.
(2) Let S^0 = ({x_1}, ..., {x_n}); in other words, each initial cluster contains exactly one vertex.
(3) For t = 1, ..., m:
  (a) Let x_i and x_j be the vertices connected by e_t.
  (b) Let C^{t-1}_{x_i} be the connected component containing point x_i at iteration t-1, and let l_i be the longest edge in the minimum spanning tree of C^{t-1}_{x_i}; likewise for l_j.
  (c) Merge C^{t-1}_{x_i} and C^{t-1}_{x_j} if

    |e_t| < min{ l_i + k/|C^{t-1}_{x_i}|, l_j + k/|C^{t-1}_{x_j}| }    (1)

(4) S = S^m.

3.3. Normalized Cuts

The Normalized Cuts method models an image using a graph G = (V, E), where V is a set of vertices corresponding to image pixels and E is a set of edges connecting neighboring pixels. The edge weight w(u, v) describes the affinity between two vertices u and v based on metrics such as proximity and intensity similarity.
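As a toy illustration of such an affinity, the sketch below builds a weight matrix for a 1-D brightness signal by combining intensity similarity with spatial proximity (a common choice in Normalized Cuts implementations) and evaluates the normalized cut cost defined below in Eq. (2) for a candidate partition. The Gaussian form, the σ values and the function names are illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np

def affinity_matrix(brightness, sigma_i=0.1, sigma_x=2.0):
    """Affinity w(u, v) for a 1-D brightness signal, combining
    intensity similarity and spatial proximity (illustrative sigmas)."""
    b = np.asarray(brightness, dtype=float)
    x = np.arange(len(b), dtype=float)
    di = (b[:, None] - b[None, :]) ** 2   # squared intensity differences
    dx = (x[:, None] - x[None, :]) ** 2   # squared spatial distances
    W = np.exp(-di / sigma_i**2) * np.exp(-dx / sigma_x**2)
    np.fill_diagonal(W, 0.0)              # no self-affinity
    return W

def ncut_cost(W, A):
    """Normalized cut cost of the partition (A, V \\ A), as in Eq. (2):
    cut(A,B)/assoc(A,V) + cut(A,B)/assoc(B,V)."""
    in_a = np.zeros(W.shape[0], dtype=bool)
    in_a[list(A)] = True
    cut = W[in_a][:, ~in_a].sum()
    assoc_a = W[in_a].sum()   # total connection from A to all vertices
    assoc_b = W[~in_a].sum()
    return cut / assoc_a + cut / assoc_b
```

For a signal with a sharp brightness step, cutting at the step yields a much lower normalized cut cost than an unbalanced cut through one of the flat regions, which is exactly the bias the normalization is designed to produce.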
This weight can be defined differently for each case, depending on the nature of the experiment and the area of interest. For example, we can take into consideration the Euclidean distance between two nodes, or we can combine the brightness value of a pixel with its spatial location and define a more complex weight formula. The algorithm segments an image into two segments that correspond to a graph cut (A, B), where A and B are the vertex sets of the two resulting subgraphs. The segmentation cost is defined by:

Ncut(A, B) = cut(A, B)/assoc(A, V) + cut(A, B)/assoc(B, V)    (2)

where cut(A, B) = Σ_{u∈A, v∈B} w(u, v) is the cut cost of (A, B) and assoc(A, V) = Σ_{u∈A, v∈V} w(u, v) is the association between A and V. The algorithm finds a graph cut (A, B) with minimum cost in Eq. (2). Since this is an NP-complete problem, a spectral graph algorithm was developed to find an approximate solution [Shi (2000)]. This algorithm can be applied recursively on the resulting subgraphs to obtain more segments. For this method, the most important parameter is the number of regions to be segmented. The normalized cut is an unbiased measure of dissociation between the subgraphs, and it has the property that minimizing it leads directly to maximizing the normalized association relative to the total association within the subgroups.

3.4. Mean Shift

The Mean Shift-Based segmentation technique [Comaniciu (2002)] is one of many techniques dealing with "feature space analysis". Advantages of feature-space methods are the global representation of the original data and the excellent tolerance to noise [Duda (2000)]. The algorithm has two important steps: a mean shift filtering of the image data in feature space and a clustering of the filtered data points. During the filtering step, segments are processed using the kernel density estimation of the gradient. Details can be found in [Comaniciu (2002)].
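A toy version of the filtering step can be sketched as follows, assuming a uniform (flat) kernel over the joint spatial-range domain: each pixel is iteratively moved toward the mean of the points within spatial radius hs and color radius hr. The function name and the O(n²)-per-iteration brute-force search are our own simplifications; the actual EDISON implementation is far more optimized.

```python
import numpy as np

def mean_shift_filter(image, hs=8.0, hr=10.0, n_iter=5):
    """Toy mean shift filtering in the joint spatial-range domain with a
    uniform kernel. `image` is an (H, W, 3) float array; hs and hr are
    the spatial and range (color) radii."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Each point is (y, x, r, g, b) in the joint domain.
    pts = np.concatenate([np.stack([ys, xs], axis=-1).reshape(-1, 2),
                          image.reshape(-1, 3)], axis=1).astype(float)
    modes = pts.copy()
    for _ in range(n_iter):
        for i in range(len(modes)):
            d_s = np.linalg.norm(pts[:, :2] - modes[i, :2], axis=1)
            d_r = np.linalg.norm(pts[:, 2:] - modes[i, 2:], axis=1)
            window = (d_s <= hs) & (d_r <= hr)
            modes[i] = pts[window].mean(axis=0)  # shift to window mean
    return modes[:, 2:].reshape(h, w, 3)
```

On an image made of two flat color regions whose colors differ by more than hr, the windows never mix the two regions, so each region keeps its color after filtering.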
A uniform kernel for gradient estimation with radius vector h = [h_s, h_s, h_r, h_r, h_r] is used, where h_s is the radius of the spatial dimensions and h_r the radius of the color dimensions. By combining these two parameters, complex analyses can be performed while training on different subjects. Mean shift filtering is only a preprocessing step; another step is required in the segmentation process: clustering of the filtered data points {x′}. During filtering, each data point in the feature space is replaced by its corresponding mode, which suggests a single linkage clustering that converts the filtered points into a segmentation. The clustering is also described in [Christoudias (2002)]: a region adjacency graph (RAG) is created to hierarchically cluster the modes, and edge information from an edge detector is combined with the color information to better guide the clustering. This is the method used in the publicly available EDISON system, also described in [Christoudias (2002)]. The EDISON system is the implementation we use in our evaluation.

4. Region-Based Performance Evaluation

We present comparative results of segmentation performance for our region-based segmentation method and the three alternative segmentation methods mentioned above. Our evaluation measure is mainly related to the consistency between segmentations. We use segmentation error measures that provide an objective analysis of the segmentation algorithms. The region-based scheme is intended to evaluate segmentation quality in terms of the precision of the extracted region boundaries. We selected a set of metrics in order to provide a relevant comparison between the segmentation methods and the human segmentation. The two error measures are GCE (Global Consistency Error) and LCE (Local Consistency Error). The Global Consistency Error assumes that one of the segmentations must be a refinement of the other, and forces all local refinements to respect the same criteria.
The Local Consistency Error allows refinements to occur in different directions at different locations in the segmentation (that is, from one segmentation to the other and vice versa). We applied the measures to the human segmentations from the Berkeley segmentation dataset and to the segmentation results obtained by applying the four selected algorithms. A potential problem for a measure of consistency between segmentations is that there is no unique human segmentation of an image, since each human perceives the scene differently. In this situation one could declare the segmentations inconsistent. However, if one segmentation is a refinement of the other, the error should be small; therefore, the measures are designed to be tolerant to refinement. Other aspects to be taken into account are that an error measure should not depend on the pixelation level and should be tolerant to noise along region boundaries [Unnikrishnan (2007)]. We evaluate the performance of our algorithm on the Berkeley Segmentation Database (BSD) [Berkeley (2003)]. We refer to the characteristics of the error metrics previously defined by Martin et al. [Martin (2001)] and explore potential problems with these metrics, in order to evaluate the quality of each segmentation and to characterize its performance over a range of parameter values. The current public version of the Berkeley Segmentation Database is composed of 300 color images. The images have a size of 481 × 321 pixels and are divided into two sets: a training set containing 200 images that can be used to tune the parameters of a segmentation algorithm, and a testing set containing the remaining 100 images, on which the final performance evaluations should be carried out. We built a custom benchmark framework that processes the Berkeley dataset, converts it to our proprietary format and performs parallel analysis. Additionally, we adapted the other mentioned algorithms to the same evaluation format, for uniformity.
The human-segmented images provide the ground-truth boundaries; therefore, any boundary marked by a human subject is considered to be valid. Since there are multiple segmentations of each image by different subjects, it is the collection of these human-marked boundaries that constitutes the ground truth. Based on the output of the previously presented algorithms for a set of images, we determine how well the ground-truth boundaries are approximated. In order to determine an algorithm's efficiency by comparing it to the ground-truth boundaries, a threshold of the boundary map is needed. We also provide an additional evaluation based on a histogram representation of the error density characteristic of each algorithm.

5. Experimental Results

Our study of segmentation quality is based on experimental results and uses the Berkeley segmentation dataset provided at [Berkeley (2003)].

5.1. GCE and LCE Metrics

We used two metrics in order to provide an objective comparison between the four segmentation methods and the human segmentation. The two error measures are described below. We applied the measures to the Berkeley segmentation database and to the segmentation results of the four algorithms. In order to describe the segmentation errors, we considered two different segmentations S1 and S2 and calculated a value in the range [0, 1], where 0 represents no error. For a given pixel p_i, we consider the segments in S1 and S2 that contain that pixel. If one segment is a proper subset of the other, then the pixel lies in an area of refinement and the local error should be zero. Otherwise, the two regions overlap in an inconsistent manner and we calculate the corresponding error.
We use \ to denote set difference and |x| for the cardinality of set x. If R(S, p_i) is the set of pixels corresponding to the region in segmentation S that contains pixel p_i, the local refinement error is defined as:

E(S1, S2, p_i) = |R(S1, p_i) \ R(S2, p_i)| / |R(S1, p_i)|    (3)

This local error measure is not symmetric; it encodes a measure of refinement in one direction only: E(S1, S2, p_i) is zero precisely when S1 is a refinement of S2 at pixel p_i, but not vice versa. Considering this local refinement error in each direction at each pixel, there are two ways to combine the values into an error measure for the entire image: the Global Consistency Error (GCE), which forces all local refinements to be in the same direction, and the Local Consistency Error (LCE), which allows refinement in different directions in different parts of the image [Martin (2001)]. For n the number of pixels, we have:

GCE(S1, S2) = (1/n) min{ Σ_i E(S1, S2, p_i), Σ_i E(S2, S1, p_i) }    (4)

LCE(S1, S2) = (1/n) Σ_i min{ E(S1, S2, p_i), E(S2, S1, p_i) }    (5)

In order to properly evaluate the segmentation method we propose, we first need to understand how the GCE and LCE error metrics behave. Consider two extreme cases: an under-segmented image, where every pixel has the same label (i.e., the segmentation contains only one region spanning the whole image), and a completely over-segmented image, in which every pixel has a different label. From the definitions of the GCE and LCE we can see that both measures evaluate to 0 in both of these extreme situations, regardless of what segmentation they are being compared to. The reason for this lies in the tolerance of these measures to refinement: any segmentation is a refinement of the completely under-segmented image, while the completely over-segmented image is a refinement of any other segmentation.
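The definitions above translate directly into code. The sketch below is a straightforward, unoptimized implementation (the label-map representation and function names are ours); it also lets one check the degenerate cases just discussed.

```python
import numpy as np

def refinement_error(s1, s2):
    """Local refinement error E(S1, S2, p_i) of Eq. (3) for every pixel,
    for two label maps s1, s2 of identical shape."""
    s1, s2 = s1.ravel(), s2.ravel()
    err = np.empty(s1.size)
    for i in range(s1.size):
        r1 = s1 == s1[i]          # region of S1 containing pixel i
        r2 = s2 == s2[i]          # region of S2 containing pixel i
        err[i] = np.count_nonzero(r1 & ~r2) / np.count_nonzero(r1)
    return err

def gce(s1, s2):
    """Global Consistency Error, Eq. (4)."""
    e12, e21 = refinement_error(s1, s2), refinement_error(s2, s1)
    return min(e12.sum(), e21.sum()) / e12.size

def lce(s1, s2):
    """Local Consistency Error, Eq. (5)."""
    e12, e21 = refinement_error(s1, s2), refinement_error(s2, s1)
    return float(np.minimum(e12, e21).mean())
```

Comparing any segmentation against the fully under-segmented map (one label everywhere) or the fully over-segmented map (a distinct label per pixel) yields 0 for both measures, while two genuinely inconsistent segmentations give positive errors with LCE ≤ GCE.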
In order to have a better analysis and a more complete description of the errors we considered, we performed 10 different tests per algorithm (Fig. 2).

Fig. 2. Average GCE vs. LCE for Berkeley test images

More precisely, by varying several key parameters, we obtained 10 distinct points that define the errors for each approach. For Normalized Cuts [Shi (2000)] we varied the number of segments over {5, 10, 12, 15, 20, 25, 30, 40, 50, 70}. The variable parameter for Efficient Graph-Based Image Segmentation [Felzenszwalb (2004)] was the scale of observation, k, in the range {100, 200, 300, 400, 500, 600, 700, 800, 900, 1000}. For Mean Shift [Comaniciu (2002)] we made 10 combinations of the spatial bandwidth {8, 16} and the range bandwidth {4, 7, 10, 13, 16}. We calculated the average GCE and LCE values over the 100 test images provided by Berkeley. Figure 2 illustrates the GCE vs. LCE representation. In the resulting diagram (Fig. 2) we can see that the GCE vs. LCE curve for our proposed method, denoted GBSOD (Graph-Based Salient Object Detection), is situated below the values for the other algorithms, indicating better performance, a smaller average error and a balanced algorithm. Analyzing the set of results for each parameter per algorithm, it is easy to distinguish which algorithm generates better results: the smaller the error, the better the accuracy of the respective algorithm.

5.2. Histogram-Based Evaluation

We elaborated a histogram-based evaluation mechanism aimed at comparing the segmentation results of the studied algorithms via the error metrics. In order to achieve this, we considered the human segmentation as the ground-truth segmentation and compared each algorithm with it, measuring the error metrics GCE and LCE. For each algorithm we analyzed the 100 test images from Berkeley and calculated the corresponding GCE and LCE.
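The per-algorithm error histograms discussed next amount to a simple binning of the 100 per-image scores. A minimal sketch (the bin count and the sample values below are illustrative, not the paper's data):

```python
import numpy as np

def error_histogram(errors, n_bins=10):
    """Bin per-image error values (GCE or LCE, each in [0, 1]) into
    n_bins equal-width bins over [0, 1]."""
    counts, edges = np.histogram(errors, bins=n_bins, range=(0.0, 1.0))
    return counts, edges
```

For example, a set of per-image errors concentrated around 0.3-0.4 would show up as a tall bar in the fourth bin, which is the kind of pattern described for GBSOD below.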
The histograms presented below illustrate this approach (Fig. 3 - Fig. 10). For a better description of the histogram-based analysis, Fig. 3 and Fig. 4 depict the distribution of the GCE and LCE values, respectively, for the 100 images processed using the GBSOD (Graph-Based Salient Object Detection) algorithm. It is very important that these values are concentrated on smaller error values, which gives us confidence that the presented method has good results. Fig. 3 and Fig. 4 show that both the GCE and the LCE values are concentrated on small values (between 0.3 and 0.4). Both errors take values between 0 and 1, where 0 is the no-error case and 1 is the worst case; thus, in our results the errors take small values, closer to 0. Fig. 5 and Fig. 6 show the same analysis for Normalized Cuts, Fig. 7 and Fig. 8 for Efficient Graph-Based Image Segmentation, and Fig. 9 and Fig. 10 for the Mean Shift algorithm. Taking into consideration the interval (0, 1) where GCE and LCE take values, we can make an easy comparison and notice that N-Cuts gives the worst results of all the algorithms we have selected.

Fig. 3. GCE for Graph Based Salient Object Detection
Fig. 4. LCE for Graph Based Salient Object Detection

The segmentation accuracy provides an upper bound of the segmentation performance by assuming an ideal post-processing of region merging for applications without an exactly known ground truth a priori. For the extreme case where each pixel is partitioned as a segment, the upper-bound performance obtained is a meaningless value of 100%. This is similar to the GCE and LCE measures developed in the Berkeley benchmark, with the difference that GCE and LCE also yield a meaninglessly high accuracy when too few segments are produced, such as the case where the whole image is partitioned as a single segment.
In this paper, we always set the segmentation parameters to produce a reasonably small number of segments when applying the strategy of merging the image regions.

Fig. 5. GCE for Normalized Cuts
Fig. 6. LCE for Normalized Cuts

Fig. 7. GCE for Efficient Graph-Based Image Segmentation
Fig. 8. LCE for Efficient Graph-Based Image Segmentation

Fig. 9. GCE for Mean-Shift
Fig. 10. LCE for Mean-Shift

Figures 11 and 12 illustrate a comparison between all four presented algorithms, giving a good perspective on the error values generated by each studied algorithm. Figures 13, 14 and 15 show several comparisons between the different segmentation methods we have studied and presented in this paper.

6. Conclusion

In this paper we described a novel graph-based algorithm for image segmentation and extraction of visual objects. We intended to perform a complex evaluation experiment starting from several strong segmentation strategies and a set of evaluation metrics. Our segmentation method and three other segmentation methodologies were chosen for the experiment, and the complementary nature of the methods was demonstrated in the results. The study results offer a clear view of the effectiveness of each segmentation algorithm, aiming in this way to provide a solid reference for future studies. The images presented above offer good evidence that our segmentation method performs very well on a variety of images from different domains. There has been recent progress in developing quantitative measures of segmentation quality that can be used to evaluate and compare different segmentation algorithms. Moreover, considerable progress has been made on studying and implementing segmentation methods. Future work will be carried out in the direction of extending the evaluation mechanism, improving the evaluation metrics and enlarging the set of reference segmentation methods.
We will also continue to work on the integration of syntactic visual information into the semantic level of a semantic image processing and indexing system.

Acknowledgment

This work has been supported by CNCSIS-UEFISCSU, project number PNII-IDEI code/2008.

Fig. 11. GCE overall comparison
Fig. 12. LCE overall comparison

Fig. 13. Comparative segmentation results A: Human Segmentation, Graph-based Salient Object Detection, Normalized Cuts, Efficient Graph-Based, Mean-Shift
Fig. 14. Comparative segmentation results B: Human Segmentation, Graph-based Salient Object Detection, Normalized Cuts, Efficient Graph-Based, Mean-Shift
Fig. 15. Comparative segmentation results C: Human Segmentation, Graph-based Salient Object Detection, Normalized Cuts, Efficient Graph-Based, Mean-Shift

References

Berkeley Segmentation and Boundary Detection Benchmark and Dataset, 2003, http://www.cs.berkeley.edu/projects/vision/grouping/segbench.
Burdescu, D., Brezovan, M., Ganea, E. and Stanescu, L. A New Method for Segmentation of Images Represented in a HSV Color Space, Lecture Notes in Computer Science, 5807, 606-617, 2009.
Carson, C., Belongie, S., Greenspan, H. and Malik, J. Blobworld: Image segmentation using expectation-maximization and its application to image querying and classification, IEEE Trans. on Pattern Analysis and Machine Intelligence, 24(8), 1026-1037, 2002.
Christoudias, C., Georgescu, B. and Meer, P. Synergism in Low Level Vision, Proc. Intl. Conf. Pattern Recognition, vol. 4, pp. 150-156, 2002.
Comaniciu, D. and Meer, P. Mean Shift: A Robust Approach toward Feature Space Analysis, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 24, pp. 603-619, 2002.
Duda, R.O., Hart, P.E. and Stork, D.G. Pattern Classification, John Wiley & Sons, New York, 2000.
Falah, R., Bolon, P. and Cocquerez, J. A region-region and region-edge cooperative approach of image segmentation, Proc. Int. Conf. on Image Processing, vol. 3, Austin, Texas, 470-474, 1994.
Fauqueur, J. and Boujemaa, N. Region-based image retrieval: Fast coarse segmentation and fine color description, Journal of Visual Languages and Computing, 15(1), 69-95, 2004.
Felzenszwalb, P. and Huttenlocher, D. Efficient Graph-Based Image Segmentation, Intl. J. Computer Vision, vol. 59, no. 2, 2004.
Freixenet, J., Munoz, X., Raba, D., Marti, J. and Cufi, X. Yet Another Survey on Image Segmentation: Region and Boundary Information Integration, University of Girona, Institute of Informatics and Applications, Girona, Spain.
Fu, K. and Mui, J. A survey on image segmentation, Pattern Recognition, 1981.
Haralick, R. and Shapiro, L. Image segmentation techniques, Computer Vision, Graphics and Image Processing, 1985.
Huang, Q. and Dom, B. Quantitative methods of evaluating image segmentation, Proc. of the Int. Conf. on Image Processing (ICIP 95), 3, 53-56, 1995.
Levine, M. and Nazif, A. Dynamic measurement of computer generated image segmentations, IEEE Trans. on Pattern Analysis and Machine Intelligence, 7, 155-164, 1985.
Martin, D., Fowlkes, C., Tal, D. and Malik, J. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics, Proc. Int. Conf. Comp. Vis., vol. 2, pp. 416-425, 2001.
Martin, D., Fowlkes, C. and Malik, J. Learning to detect natural image boundaries using local brightness, color and texture cues, IEEE Trans. on Pattern Analysis and Machine Intelligence, 26(5), 530-549, 2004.
Odet, C., Belaroussi, B. and Benoit-Cattin, H. Scalable discrepancy measures for segmentation evaluation, Proc. of the Int. Conf. on Image Processing (ICIP 02), 1, 785-788, 2002.
Shi, J. and Malik, J. Normalized Cuts and Image Segmentation, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, 2000.
Unnikrishnan, R., Pantofaru, C. and Hebert, M. Toward Objective Evaluation of Image Segmentation Algorithms, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, 2007.
Unnikrishnan, R., Pantofaru, C. and Hebert, M. A Measure for Objective Evaluation of Image Segmentation Algorithms.
Yang, C., Duraiswami, R. and DeMenthon, D. Mean-Shift Analysis using Quasi-Newton Methods.
Zhang, Y. and Wardi, Y. A recursive segmentation and classification scheme for improving segmentation accuracy and detection rate in real-time machine vision applications, Proc. of the Int. Conf. on Digital Signal Processing (DSP 02), vol. 2, July 2002.