A MULTI-LEVEL THRESHOLDING APPROACH USING A HYBRID OPTIMAL ESTIMATION ALGORITHM

Shu-Kai S. Fan§ and Yen Lin

Department of Industrial Engineering and Management, Yuan Ze University, Taoyuan County, Taiwan 320, Republic of China
simonfan@saturn.yzu.edu.tw

ABSTRACT

In this paper, a hybrid optimal estimation algorithm for parameter estimation and image segmentation in multi-thresholding is presented. The distribution of image intensity is modeled as a random variable, which is approximated by a mixture Gaussian model. The Gaussians' parameter estimates are iteratively computed by the proposed PSO+EM algorithm, which consists of two main steps: (1) near-optimum search for the Gaussians' estimates using the particle swarm optimization (PSO) algorithm; (2) mixture Gaussian curve fitting using the expectation maximization (EM) algorithm. In PSO+EM, the initial parameter estimates used in the EM procedure are obtained from the near-optimum search performed by PSO, which is expected to expedite the search of the EM method while fitting the mixture Gaussian model. The preliminary experimental results show that the hybrid PSO+EM method solves the multi-level thresholding problem quite swiftly and also exhibits competitive effectiveness with respect to image size and intensity contrast.

Key Words: Multi-level thresholding; Gaussian curve fitting; Expectation maximization (EM); Particle swarm optimization (PSO).

§ To whom correspondence should be addressed. E-mail: simonfan@saturn.yzu.edu.tw


1. Introduction

In recent years, image thresholding has proven very useful for separating objects from the background, or discriminating objects from objects that have distinct gray levels. Sahoo et al. [1] presented a thorough survey of a variety of thresholding techniques, among which global histogram-based algorithms (Glasbey [2]) are employed to determine the threshold. In parametric approaches (Weszka and Rosenfeld [3]; Snyder et al. [4]), the gray-level distribution of each class has a probability density function that is assumed to obey a (mixture) Gaussian distribution. An attempt to find an estimate of the parameters of the distribution that best fits the given histogram data is made by using the least-squares method. Typically, this leads to a nonlinear optimization problem whose solution is computationally expensive and time-consuming. Snyder et al. presented an alternative method for fitting curves based on a heuristic method called tree annealing. Yin [5] proposed a fast scheme for optimal thresholding using genetic algorithms, and Yen et al. [6] proposed a new criterion for multilevel thresholding, termed the Automatic Thresholding Criterion (ATC), to deal with automatic selection of a robust, optimum threshold for image segmentation. Recently, Zahara et al. [7] proposed a hybrid Nelder-Mead simplex search and particle swarm optimization (NM-PSO) method to solve the objective functions of Gaussian curve fitting for multi-level thresholding. For further details of the hybrid NM-PSO, see Fan and Zahara [8] and Zahara et al. [7].

In this paper, an improvement upon Gaussian curve fitting is reported, whereby the effectiveness and efficiency of image thresholding methods can be enhanced in the case of multilevel thresholding. We present a hybrid expectation maximization (EM) and particle swarm optimization (PSO+EM) algorithm to solve the objective function of Gaussian curve fitting. The PSO+EM algorithm is applied to image thresholding with multimodal histograms, and the performance of PSO+EM on Gaussian curve fitting is compared to that of the NM-PSO method. The remainder of this paper is outlined as follows. Section 2 introduces the parametric objective functions that need to be solved in image thresholding. In Section 3, the proposed hybrid PSO+EM algorithm is presented. Section 4 gives the experimental results and compares performances among the methods. Finally, Section 5 concludes this research.


2. Parametric Approaches by Gaussian Curve Fitting

A properly normalized multimodal histogram p(x) of an image, where x represents the gray levels, can be fitted with the sum of d probability density functions (pdf's) for finding the optimal thresholds for use in image segmentation (Snyder et al. [4]). With Gaussian pdf's, the model has the following form:

    p(x) = \sum_{i=1}^{d} \frac{P_i}{\sqrt{2\pi}\,\sigma_i} \exp\left[-\frac{(x-\mu_i)^2}{2\sigma_i^2}\right]    (1)

where P_i is the a priori probability with \sum_{i=1}^{d} P_i = 1, d is the number of thresholding levels, \mu_i is the mean, and \sigma_i^2 is the variance of mode i. A pdf model must be fitted to the histogram data, typically by using the maximum likelihood or mean-squared error approach, in order to locate the optimal thresholds. Given the histogram data, the observed probability P_j of gray level j is defined as

    P_j = \frac{p_j}{\sum_{i=0}^{L-1} p_i}    (2)

where p_j denotes the occurrence of gray level j over a given image range [0, L-1], and L is the total number of gray levels. We wish to find a set of parameters, denoted by \Theta, which minimizes the fitting error:

    Minimize  H(\Theta) = \sum_{j} \left[ P_j - \hat{p}(x_j; \Theta) \right]^2    (3)

where j ranges over the bins in the measured histogram. Here, H is the objective function to be minimized with respect to \Theta, the set of parameters defining the Gaussian pdf's and the probabilities, as given by

    \Theta = \{ P_i, \mu_i, \sigma_i ;\; i = 1, 2, \ldots, d \}
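As a concrete illustration, the histogram model of equations (1)-(3) can be sketched in Python as follows (a minimal sketch; the function and variable names are ours, not part of the paper):

```python
import numpy as np

def normalized_histogram(image, L=256):
    """Observed probabilities P_j of each gray level j, as in equation (2)."""
    counts = np.bincount(np.ravel(image), minlength=L).astype(float)
    return counts / counts.sum()

def mixture_pdf(x, P, mu, sigma):
    """Mixture-Gaussian model of the histogram, equation (1)."""
    x = np.asarray(x, dtype=float)[:, None]
    return np.sum(P / (np.sqrt(2 * np.pi) * sigma)
                  * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)), axis=1)

def fitting_error(theta, hist):
    """Objective H(Theta) of equation (3).

    theta packs (P_1..P_d, mu_1..mu_d, sigma_1..sigma_d)."""
    d = len(theta) // 3
    P, mu, sigma = theta[:d], theta[d:2 * d], theta[2 * d:]
    x = np.arange(len(hist))
    return float(np.sum((hist - mixture_pdf(x, P, mu, sigma)) ** 2))
```

Minimizing fitting_error over theta with any general-purpose optimizer then yields the parameter set \Theta used in the sequel.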

After fitting the multimodal histogram, the optimal thresholds can be determined by minimizing the overall probability of error for two adjacent Gaussian pdf's, given by

    E(T_i) = P_i \int_{T_i}^{\infty} p_i(x)\,dx + P_{i+1} \int_{-\infty}^{T_i} p_{i+1}(x)\,dx, \quad i = 1, 2, \ldots, d-1    (4)

with respect to the threshold T_i, where p_i(x) is the i-th pdf (Gonzalez and Woods [9]). Finding the threshold value for which this error is minimal requires differentiating E(T_i) with respect to T_i (using Leibniz's rule) and equating the result to zero. The result is

    P_i\, p_i(T_i) = P_{i+1}\, p_{i+1}(T_i)    (5)


This equation is solved for T_i to find the optimum threshold. Using equation (1) in the general solution of equation (5) results in the following quadratic for the threshold T_i:

    A T_i^2 + B T_i + C = 0    (6)

where

    A = \sigma_i^2 - \sigma_{i+1}^2
    B = 2\left( \mu_i \sigma_{i+1}^2 - \mu_{i+1} \sigma_i^2 \right)    (7)
    C = \sigma_i^2 \mu_{i+1}^2 - \sigma_{i+1}^2 \mu_i^2 + 2 \sigma_i^2 \sigma_{i+1}^2 \ln\left( \frac{\sigma_{i+1} P_i}{\sigma_i P_{i+1}} \right)

Since a quadratic equation has two possible roots, only one of them is a feasible solution (the one lying between the two adjacent means).
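A minimal sketch of solving equations (6)-(7) for the threshold between two adjacent modes might look as follows (assuming \mu_i < \mu_{i+1}; the names are ours, not the paper's):

```python
import numpy as np

def optimal_threshold(P1, mu1, s1, P2, mu2, s2):
    """Solve the quadratic (6) with coefficients (7) for the threshold
    between two adjacent Gaussian modes (mu1 < mu2)."""
    A = s1**2 - s2**2
    B = 2.0 * (mu1 * s2**2 - mu2 * s1**2)
    C = (s1 * mu2)**2 - (s2 * mu1)**2 \
        + 2.0 * (s1 * s2)**2 * np.log(s2 * P1 / (s1 * P2))
    if abs(A) < 1e-12:
        # equal variances: equation (6) degenerates to a linear equation
        return -C / B
    roots = np.roots([A, B, C])
    roots = roots[np.isreal(roots)].real
    # only the root lying between the two means is feasible
    feasible = [t for t in roots if mu1 < t < mu2]
    return feasible[0]
```

For equal variances and equal priors this reduces to the midpoint (mu1 + mu2)/2, as expected from equation (5).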

3. Hybrid PSO+EM Method

In this study, the development of the hybrid algorithm aims to improve the performance of the optimal thresholding techniques currently applied in practice. The goal of integrating the expectation maximization (EM) algorithm and particle swarm optimization (PSO) is to combine their advantages and compensate for each other's disadvantages. For instance, the EM algorithm is efficient but excessively sensitive to the starting point (initial estimates). A poor starting point might make the EM algorithm terminate prematurely or get stuck due to computational difficulties. PSO belongs to the class of global search procedures but requires much more computational effort than classical search procedures. Similar hybridization ideas have been discussed for hybrid methods using genetic algorithms and direct search techniques, emphasizing the trade-off between solution quality, reliability, and computation time in global optimization (Renders and Flasse [10] and Yen et al. [11]). This section starts by introducing the procedures of the EM algorithm and PSO, followed by a description of our hybrid method.

3.1 The procedure of EM

For the pdf of the mixture Gaussian function (see equation (1)), we define the likelihood function as follows:

    \mathcal{L}(X; \Theta) = \prod_{n=1}^{N} \sum_{i=1}^{d} P_i\, g(x_n; \mu_i, \sigma_i^2)    (8)

where X = \{x_1, \ldots, x_N\} denotes the data and g(\,\cdot\,; \mu_i, \sigma_i^2) is the i-th Gaussian pdf.


The logarithm of the likelihood function \mathcal{L}(X; \Theta) is given by

    \Lambda(X; \Theta) = \sum_{n=1}^{N} \log \sum_{i=1}^{d} P_i\, g(x_n; \mu_i, \sigma_i^2)    (9)

To find expressions that are valid at local maxima of \mathcal{L} (or, equivalently, of \Lambda), we compute the derivatives of \Lambda with respect to P_i, \mu_i, and \sigma_i. Setting the derivatives equal to zero, we obtain three groups of equations for the mixing probabilities, means, and standard deviations (Tomasi [12]):

    P_i = \frac{1}{N} \sum_{n=1}^{N} P(i \mid x_n; \Theta)

    \mu_i = \frac{ \sum_{n=1}^{N} P(i \mid x_n; \Theta)\, x_n }{ \sum_{n=1}^{N} P(i \mid x_n; \Theta) }    (10)

    \sigma_i^2 = \frac{ \sum_{n=1}^{N} P(i \mid x_n; \Theta)\, (x_n - \mu_i)^2 }{ \sum_{n=1}^{N} P(i \mid x_n; \Theta) }

where P(i \mid x_n; \Theta) are the membership probabilities defined in equation (11) below.

Now assume that approximate (initial) estimates P_i^{(k)}, \mu_i^{(k)}, and \sigma_i^{(k)} are available for the parameters of the likelihood function \mathcal{L}(X; \Theta) or its logarithm \Lambda(X; \Theta). Then, better estimates P_i^{(k+1)}, \mu_i^{(k+1)}, and \sigma_i^{(k+1)} can be computed by first using the old estimates to construct a lower bound b_k(\Theta) for the likelihood function, and then maximizing the bound with respect to P_i, \mu_i, and \sigma_i.

Expectation maximization (EM) starts with initial values P_i^{(0)}, \mu_i^{(0)}, and \sigma_i^{(0)} for the parameters, and iteratively performs these two steps until convergence. Construction of the bound b_k(\Theta) is called the "E step," since the bound is the expectation of a logarithm, derived from use of Jensen's inequality. The maximization of b_k(\Theta) that yields the new estimates P_i^{(k+1)}, \mu_i^{(k+1)}, and \sigma_i^{(k+1)} is called the "M step."

Given the old parameter estimates P_i^{(k)}, \mu_i^{(k)}, and \sigma_i^{(k)}, we can compute estimates of the membership probabilities:

    P(i \mid x_n; \Theta^{(k)}) = \frac{ P_i^{(k)}\, g(x_n; \mu_i^{(k)}, \sigma_i^{(k)}) }{ \sum_{m=1}^{d} P_m^{(k)}\, g(x_n; \mu_m^{(k)}, \sigma_m^{(k)}) }    (11)


This is the actual computation performed in the E step. The rest of the "construction" of the bound b_k(\Theta) is theoretical, and uses Jensen's inequality to bound the logarithm \Lambda(X; \Theta) of the likelihood function as follows. The membership probabilities P(i \mid x_n; \Theta) add up to one, and their estimates P(i \mid x_n; \Theta^{(k)}) do so as well, because they are computed from equation (11), which includes explicit normalization. Thus, we obtain (by Jensen's inequality)

    \Lambda(X; \Theta) = \sum_{n=1}^{N} \log \sum_{i=1}^{d} P(i \mid x_n; \Theta^{(k)}) \frac{ P_i\, g(x_n; \mu_i, \sigma_i^2) }{ P(i \mid x_n; \Theta^{(k)}) } \;\geq\; \sum_{n=1}^{N} \sum_{i=1}^{d} P(i \mid x_n; \Theta^{(k)}) \log \frac{ P_i\, g(x_n; \mu_i, \sigma_i^2) }{ P(i \mid x_n; \Theta^{(k)}) } = b_k(\Theta)    (12)

The bound b_k(\Theta) thus obtained can be rewritten as follows:

    b_k(\Theta) = \sum_{n=1}^{N} \sum_{i=1}^{d} P(i \mid x_n; \Theta^{(k)}) \log \left[ P_i\, g(x_n; \mu_i, \sigma_i^2) \right] - \sum_{n=1}^{N} \sum_{i=1}^{d} P(i \mid x_n; \Theta^{(k)}) \log P(i \mid x_n; \Theta^{(k)})    (13)

Since the old membership probabilities P(i \mid x_n; \Theta^{(k)}) are known, maximizing b_k(\Theta) is the same as maximizing the first of the two summations; that is, the function

    \varphi_k(\Theta) = \sum_{n=1}^{N} \sum_{i=1}^{d} P(i \mid x_n; \Theta^{(k)}) \log \left[ P_i\, g(x_n; \mu_i, \sigma_i^2) \right]    (14)

Prima facie, the expression for \varphi_k(\Theta) would seem to be even more complicated than that for \Lambda. This is not so, however: rather than the logarithm of a sum, \varphi_k(\Theta) contains a linear combination of d logarithms, and this breaks the coupling of the equations obtained by setting the derivatives of \varphi_k(\Theta), with respect to the parameters, equal to zero.

The derivative of \varphi_k(\Theta) with respect to \mu_i is easily found to be

    \frac{\partial \varphi_k}{\partial \mu_i} = \sum_{n=1}^{N} P(i \mid x_n; \Theta^{(k)})\, \frac{x_n - \mu_i}{\sigma_i^2}    (15)

Upon setting this expression to zero, the variance \sigma_i^2 can be cancelled, and the remaining equation contains only \mu_i as the unknown:

    \mu_i \sum_{n=1}^{N} P(i \mid x_n; \Theta^{(k)}) = \sum_{n=1}^{N} P(i \mid x_n; \Theta^{(k)})\, x_n    (16)

This equation can be solved immediately to yield the new estimate of the mean:

    \mu_i^{(k+1)} = \frac{ \sum_{n=1}^{N} P(i \mid x_n; \Theta^{(k)})\, x_n }{ \sum_{n=1}^{N} P(i \mid x_n; \Theta^{(k)}) }    (17)

which is only a function of old values (with superscript k).

The resulting value \mu_i^{(k+1)} is plugged into the expression for \varphi_k(\Theta), which can now be differentiated with respect to \sigma_i through a very similar manipulation to yield

    \left( \sigma_i^{(k+1)} \right)^2 = \frac{ \sum_{n=1}^{N} P(i \mid x_n; \Theta^{(k)}) \left( x_n - \mu_i^{(k+1)} \right)^2 }{ \sum_{n=1}^{N} P(i \mid x_n; \Theta^{(k)}) }    (18)

The derivative of \varphi_k(\Theta) with respect to P_i, subject to the constraint that the P_i's add up to one, can again be handled through the soft-max function just as before. This yields the new estimate for the mixing probabilities as a function of the old membership probabilities:

    P_i^{(k+1)} = \frac{1}{N} \sum_{n=1}^{N} P(i \mid x_n; \Theta^{(k)})    (19)

In summary, given an initial estimate P_i^{(0)}, \mu_i^{(0)}, and \sigma_i^{(0)}, EM iterates the following computations until convergence to a local maximum of the likelihood function:

E Step:

    P(i \mid x_n; \Theta^{(k)}) = \frac{ P_i^{(k)}\, g(x_n; \mu_i^{(k)}, \sigma_i^{(k)}) }{ \sum_{m=1}^{d} P_m^{(k)}\, g(x_n; \mu_m^{(k)}, \sigma_m^{(k)}) }    (20)

M Step:

    \mu_i^{(k+1)} = \frac{ \sum_{n=1}^{N} P(i \mid x_n; \Theta^{(k)})\, x_n }{ \sum_{n=1}^{N} P(i \mid x_n; \Theta^{(k)}) }

    \left( \sigma_i^{(k+1)} \right)^2 = \frac{ \sum_{n=1}^{N} P(i \mid x_n; \Theta^{(k)}) \left( x_n - \mu_i^{(k+1)} \right)^2 }{ \sum_{n=1}^{N} P(i \mid x_n; \Theta^{(k)}) }    (21)

    P_i^{(k+1)} = \frac{1}{N} \sum_{n=1}^{N} P(i \mid x_n; \Theta^{(k)})
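The E and M steps of equations (20)-(21) can be sketched for one-dimensional data as follows (a simplified illustration; the convergence test on the log-likelihood is our own choice, not the paper's):

```python
import numpy as np

def gauss(x, mu, sigma):
    """Gaussian pdf g(x; mu, sigma^2)."""
    return np.exp(-(x - mu)**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

def em_gmm_1d(x, P, mu, sigma, max_iter=200, tol=1e-8):
    """EM updates of equations (20)-(21) for a 1-D Gaussian mixture.

    x: data samples; P, mu, sigma: initial estimates (length d each)."""
    x = np.asarray(x, dtype=float)
    P, mu, sigma = (np.asarray(a, dtype=float).copy() for a in (P, mu, sigma))
    prev_ll = -np.inf
    for _ in range(max_iter):
        # E step (20): membership probabilities P(i | x_n)
        dens = P * gauss(x[:, None], mu, sigma)          # shape (N, d)
        total = dens.sum(axis=1, keepdims=True)
        w = dens / total
        # M step (21): re-estimate means, variances, mixing probabilities
        Nk = w.sum(axis=0)
        mu = (w * x[:, None]).sum(axis=0) / Nk
        sigma = np.sqrt((w * (x[:, None] - mu)**2).sum(axis=0) / Nk)
        P = Nk / len(x)
        ll = float(np.log(total).sum())                  # log-likelihood (9)
        if ll - prev_ll < tol:
            break
        prev_ll = ll
    return P, mu, sigma
```

As noted in the text, the quality of the result depends strongly on the initial estimates, which is precisely what the PSO stage of the hybrid method supplies.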

3.2 The procedure of PSO

Particle swarm optimization (PSO), developed by Kennedy and Eberhart [13], is one of the latest evolutionary optimization techniques. The PSO concept is based on a metaphor of social interaction such as bird flocking and fish schooling. Akin to genetic algorithms, PSO is population-based and evolutionary in nature, with one major difference: it does not implement filtering; that is, all members in the population survive through the entire search process. PSO simulates a commonly observed social behavior, where members of a group tend to follow the lead of the best of the group. The procedure of PSO is as follows.

i. Initialization. Randomly generate a population of potential solutions, called "particles," and assign each particle a randomized velocity.

ii. Velocity Update. The particles then "fly" through search hyperspace while updating their own velocity, which is accomplished by considering a particle's own past flight and those of its companions.

The particle's velocity and position are dynamically updated by the following equations:

    v_{id}^{NEW} = w_i\, v_{id}^{OLD} + C_1 r_1 \left( x_{pd} - x_{id}^{OLD} \right) + C_2 r_2 \left( x_{gd} - x_{id}^{OLD} \right)    (22)

    x_{id}^{NEW} = x_{id}^{OLD} + v_{id}^{NEW}    (23)

where the acceleration coefficients C_1 and C_2 are two positive constants, w_i is an inertia weight, and r_1, r_2 are uniformly generated random numbers from the range [0, 1], regenerated at every iteration. Eberhart and Shi [14] and Hu and Eberhart [15] suggested using C_1 = C_2 = 2 and w_i = 0.5 + rand/2.

Equation (22) shows that, when calculating the new velocity for a particle, the previous velocity of the particle (v_{id}), the particle's own best location discovered previously (x_{pd}), and the global best location (x_{gd}) all contribute some influence on the outcome of the velocity update. The global best location x_{gd} is identified, based on its fitness, as the best particle among the population. All particles are then accelerated towards the global best particle as well as in the directions of their own best solutions visited previously. While approaching the current best particle from different directions in the search space, particles may encounter by chance even better particles en route, and the global best solution will eventually emerge. Equation (23) shows how each particle's position (x_{id}) is updated in the search of solution space.
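One PSO iteration per equations (22)-(23), with the parameter settings suggested in [14] and [15], might be sketched as follows (the function and argument names are ours):

```python
import numpy as np

def pso_step(x, v, pbest, gbest, c1=2.0, c2=2.0, rng=None):
    """One velocity/position update per equations (22)-(23).

    x, v: particle positions and velocities, shape (num_particles, dim);
    pbest: each particle's best-known position; gbest: global best position."""
    rng = np.random.default_rng() if rng is None else rng
    w = 0.5 + rng.random() / 2.0          # inertia weight suggested in [14], [15]
    r1, r2 = rng.random(), rng.random()   # fresh uniform numbers each iteration
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x_new = x + v_new                     # equation (23)
    return x_new, v_new
```

Wrapping this step in a loop that tracks pbest and gbest by fitness yields the basic PSO search.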

3.3 Hybrid PSO+EM

The population size of this hybrid PSO+EM approach is set at 3N + 1 when solving an N-dimensional problem. Note that in Gaussian curve fitting we need to estimate N = 3d - 1 parameters. For example, in bi-level thresholding, 5 parameters, \Theta = \{P_1, \mu_1, \sigma_1, \mu_2, \sigma_2\} with P_2 = 1 - P_1, must be estimated for Gaussian curve fitting.

The initial population is created in two steps: using a predetermined starting point, N + 1 particles are spawned with a positive step size of 1.0 in each coordinate direction, and the other 2N particles are randomly generated. The total of 3N + 1 particles is sorted by fitness, and the top particle (the elite) is then fed into the EM algorithm to update its location (solution) in parameter space. The other 3N particles are adjusted by the PSO method by taking into account the position of this updated particle.

This procedure for adjusting the 3N particles involves selection of the global best particle followed by velocity updates. The global best particle of the population is determined according to the sorted fitness values. Note that if the global best particle cannot be updated through the EM algorithm, the PSO+EM system generates a new one to replace it and reselects the global best particle in the new PSO population. By equations (22) and (23), a velocity update for each of the 3N particles is then carried out. The 3N + 1 particles are sorted in preparation for repeating the entire run. The process terminates when a certain convergence criterion is satisfied. Figure 1 shows a flowchart of the PSO+EM algorithm and Figure 2 summarizes the procedure of PSO+EM.


[Figure 1 flowchart: Start → generate a PSO population of size 3N + 1 → evaluate the fitness of each particle against the image histogram → rank particles based on fitness → apply the EM approach to update the best particle; if the update fails, generate a new solution randomly → compute the likelihood of the new solution → apply the velocity update to the remaining 3N particles → repeat while the likelihood increases → obtain the optimum solution (parameters) → Finish.]

Figure 1. Flowchart of the PSO+EM algorithm.

1. Initialization. Generate a population of size 3N + 1.
Repeat
2. Evaluation & Ranking. Evaluate the fitness of each particle. Rank them based on the fitness.
3. EM Algorithm. Apply the EM approach to the top particle and replace the particle with the update. If the particle cannot be updated by the EM approach, generate a new one randomly and go to step 2.
4. PSO Method. Apply the PSO operator for updating the 3N particles with worst fitness.
   Velocity Update. Apply the velocity update to the 3N particles with worst fitness according to equations (22) and (23).
   Selection. From the population, select the global best particle.
Until a termination condition is met.

Figure 2. The hybrid PSO+EM algorithm.
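The loop summarized above can be sketched as follows. This is a simplified, self-contained illustration: the fitness function and EM-update routine are passed in as callables, and each particle's personal best in equation (22) is approximated by the elite particle for brevity; none of this is the authors' exact implementation.

```python
import numpy as np

def hybrid_pso_em(fitness, em_update, start, n_dim, iters=50, rng=None):
    """Skeleton of the PSO+EM loop: the elite particle is refined by EM,
    and the remaining 3N particles follow the velocity update (22)-(23)."""
    rng = np.random.default_rng() if rng is None else rng
    # initial population: N+1 particles stepped from the start point (step 1.0
    # per coordinate), plus 2N randomly generated particles
    pop = [start + 1.0 * e for e in np.eye(n_dim)] + [start] \
          + list(start + rng.uniform(-1, 1, (2 * n_dim, n_dim)))
    pop = np.array(pop, dtype=float)
    vel = np.zeros_like(pop)
    for _ in range(iters):
        order = np.argsort([fitness(p) for p in pop])   # rank by fitness
        pop, vel = pop[order], vel[order]
        elite = em_update(pop[0])                       # EM refinement of the elite
        if fitness(elite) < fitness(pop[0]):
            pop[0] = elite                              # successful update
        else:
            pop[0] = start + rng.uniform(-1, 1, n_dim)  # replace by a new solution
        gbest = pop[np.argmin([fitness(p) for p in pop])]
        w = 0.5 + rng.random() / 2.0
        r1, r2 = rng.random(), rng.random()
        # velocity update (22)-(23) for the remaining 3N particles
        vel[1:] = w * vel[1:] + 2 * r1 * (pop[0] - pop[1:]) + 2 * r2 * (gbest - pop[1:])
        pop[1:] = pop[1:] + vel[1:]
    return pop[np.argmin([fitness(p) for p in pop])]
```

In the actual method, fitness would be the curve-fitting objective H of equation (3) and em_update one pass of the E and M steps (20)-(21).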


4. Experimental Results

In this section, we evaluate and compare the performances of the following three methods: Otsu's method [16], the NM-PSO method, and the proposed PSO+EM algorithm, while implementing Gaussian curve fitting for multi-level thresholding. The test images are of size 256×256 pixels with 8-bit gray levels, taken under natural room lighting without the support of any special light source. The PSO+EM algorithm is implemented on an AMD Athlon XP 2800+ with 1 GB RAM using Matlab. The stopping criterion of NM-PSO, following Zahara et al. [7], is 10N iterations when solving an N-dimensional problem, whereas the PSO+EM algorithm is halted when the EM procedure converges. To fairly compare the three methods' efficiency and effectiveness, the initial parameter estimates for PSO+EM are set at P_i = 1/d, the \mu_i's are selected from the d possible peaks of the histogram, and the \sigma_i's are all set to 1.0 (see Zahara et al. [7]).
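The initialization described above (P_i = 1/d, \mu_i at d prominent histogram peaks, \sigma_i = 1.0) might be sketched as follows; the peak-picking heuristic is our own assumption, not the authors' procedure:

```python
import numpy as np

def initial_estimates(hist, d):
    """Initial Theta for PSO+EM: P_i = 1/d, mu_i at d prominent
    histogram peaks, sigma_i = 1.0 (simple peak-picking heuristic)."""
    h = np.asarray(hist, dtype=float)
    # local maxima of a lightly smoothed histogram
    s = np.convolve(h, np.ones(5) / 5, mode="same")
    peaks = [j for j in range(1, len(s) - 1)
             if s[j] >= s[j - 1] and s[j] > s[j + 1]]
    # keep the d tallest peaks as candidate means, in increasing gray level
    peaks = sorted(peaks, key=lambda j: s[j], reverse=True)[:d]
    mu = np.sort(np.array(peaks, dtype=float))
    P = np.full(d, 1.0 / d)
    sigma = np.ones(d)
    return P, mu, sigma
```

These estimates seed the initial PSO population, from which the elite particle is then handed to EM.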

The experiment starts with two standard images with rectangular objects of uniform gray values, which are bi-level and tri-level respectively as shown in Figures 3-4, and aims to verify that the PSO+EM algorithm can deliver satisfactory performance. As can be seen from these two figures, Otsu's method, NM-PSO, and PSO+EM perform equally well in terms of the quality of image segmentation (see also Tables 1-2). For a complete comparison between Otsu's method and the NM-PSO algorithm, interested readers can refer to Zahara et al. [7]. Comparison results for the other two more complicated images (Images 3-4) indicate that the NM-PSO and PSO+EM algorithms returned almost identical optimal threshold values, but the PSO+EM algorithm converges much faster than the NM-PSO method in multilevel thresholding (see Tables 3-4). Note that the segmentation results shown in Figures 3-6 are all obtained by using the proposed PSO+EM algorithm.


Figure 3. Experiment result of bi-level thresholding for Image 1: (a) original image; (b) bi-level thresholding; (c) original histogram of (a); (d) Gaussian distribution curve fitting.

Figure 4. Experiment result of tri-level thresholding for Image 2: (a) original image; (b) tri-level thresholding; (c) original histogram of (a); (d) Gaussian distribution curve fitting.


Figure 5. Experiment result of bi-level thresholding for Image 3 (screw image): (a) original image; (b) bi-level thresholding; (c) original histogram of (a); (d) Gaussian distribution curve fitting.

Figure 6. Experiment result of tri-level thresholding for Image 4 (PCB image): (a) original image; (b) tri-level thresholding; (c) original histogram of (a); (d) Gaussian distribution curve fitting.


For Images 3-4 (as discussed in detail below), the speed advantage of Otsu's method still prevails in the multi-thresholding cases, but the segmentation quality is no longer acceptable for industrial applications. Comparing execution times between the PSO+EM and NM-PSO algorithms, it is evident that the PSO+EM algorithm requires far less time to fit the curves (see Tables 3-4). Even so, the results of Figures 5-6 reveal that the PSO+EM algorithm is still able to fit the original histogram quite well. This affords a degree of accuracy in practical applications that is unattainable with Otsu's method, and it indicates that the PSO+EM algorithm may serve as a potential candidate for thresholding situations where Otsu's method fails to yield acceptable outcomes. In addition, the PSO+EM algorithm is more efficient than the NM-PSO algorithm for Gaussian curve fitting.

The following illustrates the difficulties encountered by Otsu's method. Table 3 shows the bi-level thresholding results of a screw image obtained by Otsu's method, the NM-PSO algorithm, and the PSO+EM algorithm. From a visualization perspective, the quality of the segmented images resulting from the PSO+EM and NM-PSO algorithms is superior to that returned by Otsu's method. In the original image in Figure 5(a), there is low intensity contrast between the screws and the background, which leaves Otsu's method unable to expose the screw on the right and the screw threads clearly. On the contrary, in Table 3(b)-(c), the contour of the screws is perfectly identified and recovered. The very different optimal threshold values of 210 (returned by Otsu's method) and 184 (returned by the PSO+EM and NM-PSO algorithms) dictate a considerable disparity in segmentation quality between these two types of thresholding techniques. Moreover, Table 3(e)-(f) shows that the PSO+EM algorithm requires much less CPU time than the NM-PSO algorithm to fit the original histogram. A similar difficulty arises when Otsu's method is applied to a tri-level thresholding application of a PCB image, as shown in Table 4. Likewise, the PSO+EM algorithm converges much faster than the NM-PSO algorithm with equal quality in image segmentation.


Table 1. Bi-level thresholding results via Otsu's method, NM-PSO, and PSO+EM for Image 1. Panels (a)-(c) show the thresholded images; panels (d)-(f) give the statistical results.

(d) Otsu's method: Threshold = 133; CPU time = 0.00 sec.
(e) NM-PSO: P = (0.21, 0.79); μ = (98.70, 165.43); σ = (2.38, 5.72); Threshold = 118; CPU time = 22.81 sec.
(f) PSO+EM: P = (0.21, 0.79); μ = (99.18, 165.72); σ = (3.25, 6.03); Threshold = 120; CPU time = 7.88 sec.

Table 2. Tri-level thresholding results via Otsu's method, NM-PSO, and PSO+EM for Image 2. Panels (a)-(c) show the thresholded images; panels (d)-(f) give the statistical results.

(d) Otsu's method: Thresholds = 111, 146; CPU time = 0.02 sec.
(e) NM-PSO: P = (0.14, 0.53, 0.33); μ = (98.91, 127.48, 170.13); σ = (3.87, 5.44, 10.02); Thresholds = 110, 143; CPU time = 79.51 sec.
(f) PSO+EM: P = (0.17, 0.52, 0.31); μ = (95.63, 124.75, 166.39); σ = (3.33, 6.89, 9.81); Thresholds = 106, 143; CPU time = 23.42 sec.


Table 3. Bi-level thresholding results via Otsu's method, NM-PSO, and PSO+EM for Image 3 (screw image). Panels (a)-(c) show the thresholded images; panels (d)-(f) give the statistical results.

(d) Otsu's method: Threshold = 210; CPU time = 0.00 sec.
(e) NM-PSO: P = (0.56, 0.44); μ = (180.79, 195.13); σ = (1.30, 13.88); Threshold = 184; CPU time = 22.78 sec.
(f) PSO+EM: P = (0.57, 0.43); μ = (180.87, 205.11); σ = (1.29, 18.79); Threshold = 184; CPU time = 8.43 sec.

Table 4. Tri-level thresholding results via Otsu's method, NM-PSO, and PSO+EM for Image 4 (PCB image). Panels (a)-(c) show the thresholded images; panels (d)-(f) give the statistical results.

(d) Otsu's method: Thresholds = 83, 175; CPU time = 0.02 sec.
(e) NM-PSO: P = (0.18, 0.75, 0.17); μ = (58.90, 101.87, 141.10); σ = (8.95, 8.66, 20.30); Thresholds = 78, 125; CPU time = 79.72 sec.
(f) PSO+EM: P = (0.15, 0.73, 0.12); μ = (60.14, 102.05, 229.20); σ = (7.54, 8.94, 40.46); Thresholds = 80, 129; CPU time = 21.13 sec.


5. Conclusions

The NM-PSO method is very effective in the bi-level thresholding case, but its computation time grows considerably in the case of multilevel thresholding. To make the evolutionary computation method more practical for on-line object segmentation, we have proposed a faster searching scheme called the PSO+EM algorithm, which solves the objective functions of Gaussian curve fitting by combining the EM algorithm and the PSO algorithm.

Experimental results show that the PSO+EM algorithm converges much faster than the NM-PSO algorithm in multilevel thresholding without degrading the quality of image segmentation, so it could be applied to real-time applications. In addition, comparison of the PSO+EM and NM-PSO algorithms with Otsu's method demonstrates that the two hybrid algorithms offer higher quality in terms of visualization, object size, and contrast of image segmentation, particularly when the image has a complex structure or low contrast between the object and background. Not surprisingly, the PSO+EM and NM-PSO algorithms require more computation time than Otsu's method, since curve fitting needs to search for the optimum values of more parameters. However, the PSO+EM algorithm improves efficiency by over 200% compared to the NM-PSO algorithm. To sum up, the PSO+EM algorithm is a promising and viable tool for on-line object segmentation in multi-thresholding due to its computational efficiency, and it also proves effective in terms of segmentation quality.

References

[1] Sahoo, P., S. Soltani and A. Wong. A survey of thresholding techniques. Computer Vision, Graphics, and Image Processing. 1988; 41: 233-260.
[2] Glasbey, C. A. An analysis of histogram-based thresholding algorithms. CVGIP: Graphical Models and Image Processing. 1993; 55: 532-537.
[3] Weszka, J. and A. Rosenfeld. Histogram modifications for threshold selection. IEEE Trans. Syst. Man Cybernet. 1979; 9: 38-52.
[4] Snyder, W., G. Bilbro, A. Logenthiran and S. Rajala. Optimal thresholding: a new approach. Pattern Recognition Letters. 1990; 11: 803-810.
[5] Yin, P. Y. A fast scheme for optimal thresholding using genetic algorithms. Signal Processing. 1999; 72: 85-95.
[6] Yen, J. C., F. J. Chang and S. Chang. A new criterion for automatic multilevel thresholding. IEEE Trans. Image Process. 1995; 4(3): 370-378.
[7] Zahara, E., S.-K. S. Fan and D.-M. Tsai. Optimal multi-thresholding using a hybrid optimization approach. Pattern Recognition Letters. 2005; 26: 1082-1095.
[8] Fan, S. K. and E. Zahara. A hybrid simplex search and particle swarm optimization for unconstrained optimization. Proceedings of the 32nd International Conference on Computers and Industrial Engineering. 2002; Limerick, Ireland.
[9] Gonzalez, R. C. and R. E. Woods. Digital Image Processing. 2002; Prentice Hall, Upper Saddle River, NJ.
[10] Renders, J. M. and S. P. Flasse. Hybrid methods using genetic algorithms for global optimization. IEEE Trans. Syst. Man Cybernet. Part B: Cybernetics. 1996; 26: 243-258.
[11] Yen, J., J. C. Liao, B. Lee and D. Randolph. A hybrid approach to modeling metabolic systems using a genetic algorithm and simplex method. IEEE Trans. Syst. Man Cybernet. Part B: Cybernetics. 1998; 28: 173-191.
[12] Tomasi, C. Estimating Gaussian Mixture Densities with EM: A Tutorial. 2005; http://www.cs.duke.edu/courses/spring04/cps196.1/handouts/EM/tomasiEM.pdf
[13] Kennedy, J. and R. C. Eberhart. Particle swarm optimization. Proceedings of the IEEE International Conference on Neural Networks. 1995; Piscataway, NJ, USA: 1942-1948.
[14] Eberhart, R. C. and Y. Shi. Tracking and optimizing dynamic systems with particle swarms. Proceedings of the Congress on Evolutionary Computation. 2001; Seoul, Korea: 94-97.
[15] Hu, X. and R. C. Eberhart. Tracking dynamic systems with PSO: where's the cheese? Proceedings of the Workshop on Particle Swarm Optimization. 2001; Indianapolis, IN, USA.
[16] Otsu, N. A threshold selection method for gray-level histograms. IEEE Trans. Syst. Man Cybernet. 1979; 9: 62-66.
