Generalized Particle Swarm Optimization Algorithm - Theoretical and Empirical Analysis with Application in Fault Detection

Željko Kanović, Milan R. Rapaić, Zoran D. Jeličić
Faculty of Technical Sciences, Trg Dositeja Obradovića 6, 21000 Novi Sad, Serbia
Keywords: Analysis of algorithms; Global optimization; Particle swarm optimization; Control theory; Fault detection
Abstract

A generalization of the particle swarm optimization (PSO) algorithm is presented in this paper. The novel optimizer, the Generalized PSO (GPSO), is inspired by linear control theory. It enables direct control over the key aspects of particle dynamics during the optimization process. A detailed theoretical and empirical analysis is presented, and parameter-tuning schemes are proposed. GPSO is compared to the classical PSO and genetic algorithm (GA) on a set of benchmark problems. The results clearly demonstrate the effectiveness of the proposed algorithm. Finally, an application of the GPSO algorithm to the fine-tuning of a support vector machines classifier for electrical machine fault detection is presented.

© 2011 Published by Elsevier Inc.
1. Introduction
Successful optimizers are often inspired by natural processes and phenomena. Indeed, the natural world is extraordinarily complex, but it provides us with remarkably elegant and robust solutions to even the toughest problems. The field of global optimization has benefited greatly from nature-inspired techniques such as genetic algorithms (GAs) [1], simulated annealing (SA) [2], ant colony optimization (ACO) [3] and others. Among these search strategies, the particle swarm optimization (PSO) algorithm is a relatively novel, yet well-studied and proven optimizer based on the social behavior of animals moving in large groups (particularly birds) [4]. Compared to other evolutionary techniques, PSO has only a few adjustable parameters, and it is computationally inexpensive and very easy to implement [5,6].
PSO uses a set of particles, called a swarm, to investigate the search space. Each particle is described by its position (x) and velocity (v). The position of each particle is a potential solution, and the best position that each particle has achieved during the entire optimization process is memorized (p). The swarm as a whole memorizes the best position ever achieved by any of its particles (g). The position and the velocity of each particle in the kth iteration are updated as
$$v[k+1] = w\,v[k] + c_p r_p[k]\left(p[k] - x[k]\right) + c_g r_g[k]\left(g[k] - x[k]\right), \qquad x[k+1] = x[k] + v[k+1]. \tag{1}$$
Acceleration factors c_p and c_g control the relative impact of the personal (local) and common (global) knowledge on the movement of each particle. The inertia factor w, first introduced in [7], keeps the swarm together and prevents it from diversifying excessively and thereby degrading PSO into a pure random search. The random numbers r_p and r_g are mutually independent and uniformly distributed over the interval [0, 1].
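As a concrete illustration, the following minimal Python sketch (our own illustration, not code from the paper) implements the update (1) for a single particle:

```python
import numpy as np

def pso_step(x, v, p, g, w=0.7, cp=2.0, cg=2.0, rng=np.random.default_rng()):
    """One classical PSO update, Eq. (1), for a single particle.

    x, v : current position and velocity (1-D arrays)
    p, g : personal-best and global-best positions
    """
    rp, rg = rng.random(), rng.random()   # r_p, r_g ~ U[0, 1]
    v_new = w * v + cp * rp * (p - x) + cg * rg * (g - x)
    x_new = x + v_new
    return x_new, v_new
```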
Numerous studies have been published addressing PSO both empirically and theoretically [7,8]. Over the years, the effectiveness of the algorithm has been proven for various engineering problems [9–12]. However, the theoretical justification of
the PSO procedure long remained an open question. The first formal theoretical analyses were undertaken by Ozcan and Mohan [13], who addressed the dynamics of a simplified, one-dimensional, deterministic PSO model. Clerc and Kennedy [14] also analyzed PSO, focusing on swarm stability and the so-called "explosion" phenomenon. Jiang et al. [15] were the first to analyze the stochastic nature of the algorithm. Rapaić and Kanović [16] explicitly addressed PSO with time-varying parameters, which is the most common case in practical implementations.
Most theoretical analyses reported in the literature are conducted on the basis of the second-order PSO model
$$x[k+1] - \left(1 + w - c_p r_p[k] - c_g r_g[k]\right)x[k] + w\,x[k-1] = c_p r_p[k]\,p[k] + c_g r_g[k]\,g[k], \tag{2}$$
which is equivalent to the model described by (1). The current paper addresses a generalization of PSO based on the general linear discrete second-order model well known from control theory [17]. The algorithm itself was proposed by the authors in [18], where it was named the Generalized PSO (GPSO). In the current paper, a detailed theoretical and empirical analysis of the algorithm is conducted, and new parameter-tuning procedures are proposed.

The idea is to address PSO in a new and conceptually different fashion, i.e., to consider each particle within the swarm as a second-order linear stochastic system with two inputs and one output. The output of such a system is the current position of the particle (x), while its inputs are the personal and global best positions (p and g, respectively). Such systems are extensively studied in the engineering literature [17,19]. The authors found that the stability and response properties of such a system can be directly related to its performance as an optimizer, i.e., to its explorative and exploitative properties. Thus, one can overcome an inherent flaw of the PSO algorithm: its inability to independently control various aspects of the search, such as stability, oscillation frequency and the impact of personal and global knowledge [16].

The practical implementation of GPSO for the parameter-tuning of a fault detection classifier in industrial plants is also presented in the paper. GPSO is used in a cross-validation procedure of a support vector machines classifier in order to provide a more reliable and accurate fault detection and classification process.

The outline of the paper is as follows. The idea of GPSO is presented in Section 2. A theoretical analysis of the algorithm, with an emphasis on convergence, is presented in Section 3. An empirical analysis of the algorithm, with an emphasis on parameter selection, as well as a comparative analysis with respect to the classical PSO and a genetic algorithm (GA), is presented in Section 4. In Section 5, an application to electric machine fault detection is described. Conclusions are presented in Section 6.
2. GPSO - the idea
80
Eq. (2) can be interpreted as a difference equation describing the motion of a stochastic, second-order, discrete-time, linear system with two external inputs. In general, a second-order discrete-time system can be modeled by the recurrent
relation
$$x[k+1] + a_1\,x[k] + a_0\,x[k-1] = b_p\,p[k] + b_g\,g[k], \tag{3}$$
where some or all of the parameters a_1, a_0, b_p and b_g are stochastic variables with appropriate, possibly time-varying probability distributions. Several restrictions should be imposed on these parameters in order to make (3) a successful optimizer. First, the system should be stable (in the sense of control theory [17]), and its stability margins should grow during the optimization process. The swarm should explore the search space first, and the particles should initially be allowed to move more freely. As the search approaches its final stages, the optimizer should exploit good solutions that were previously found, and the effort of the swarm should be concentrated in the vicinity of known solutions. Second, the response of the system to perturbed initial conditions should be oscillatory, in order for the particles to overshoot or fly over their respective attractor points. Further, if both external inputs approach the same limit value as k grows, the particle position x should also converge to this limit; convergence is understood in the mean square sense, as will be elaborated later. Finally, in the early stages of the search, the system should primarily be governed by the cognitive input p, allowing particles to behave more independently; in later stages, the social input g should be dominant, because the swarm should become more centralized, and the global knowledge of the swarm as a whole should dominate the local knowledge of each individual particle. All of these requirements are formulated in the sequel using notions from control theory.
The characteristic polynomial of (3) is f(z) = z² + a₁z + a₀. The dynamics of each particle is primarily defined by the roots of this polynomial, which are also known as the eigenvalues of the system. The system is stable if and only if the modulus ρ of the eigenvalues is less than 1. In order for the system to be able to oscillate, the roots of the characteristic polynomial must not be positive real numbers. The argument φ of the eigenvalues determines the discrete frequency of the characteristic oscillations of the system; argument values close to π result in more frequent oscillations. A typical pair of eigenvalues (p₁, p₂) is depicted in Fig. 1.
The requirement that particle positions tend to the global best position when personal best and global best are equal is the same as the requirement that system (3) has unit gain when both of its inputs are equal. This is equivalent to

$$1 + a_1 + a_0 = b_p + b_g. \tag{4}$$
Fig. 1. A typical pair of eigenvalues of a second-order discrete-time system.
It is also clear that an increase in b_p favors the cognitive component of the search, while an increase in b_g favors the social component. All of the requirements can easily be satisfied if system (3) is rewritten in the following canonical form, often used in control theory [17]:

$$x[k+1] - 2\zeta\rho\,x[k] + \rho^2\,x[k-1] = \left(1 - 2\zeta\rho + \rho^2\right)\left(c\,p[k] + (1-c)\,g[k]\right). \tag{5}$$
In (5), ρ is the modulus of the eigenvalues, and ζ is the cosine of their argument; see Fig. 1. Parameter c is introduced to replace both b_p and b_g. Clearly, requirement (4) is satisfied by (5). The primary idea of the GPSO algorithm is to use (5), instead of (1) or (2), in the optimizer implementation. The parameters in this equation allow more direct and independent control over the various aspects of the search procedure. Fig. 2 depicts particle trajectories governed by (5) with different parameter values in a two-dimensional search space. For simplicity, both attractor points (i.e., global and personal best) are assumed to be equal to zero.
Note that lower values of parameter ρ lead to faster convergence, while higher values result in less stable motion and slower convergence.
Fig. 2. GPSO particle trajectories with different sets of parameters (ρ = 0.9 and ρ = 0.5, each with ζ = 0.5 and ζ = -0.5). The search space is assumed to be two-dimensional, and both attractors (personal and global best) are assumed to be zero. The starting point is assumed to be (1, 1).
Thus, higher values of ρ enable the swarm to cover a wider portion of the search space, while lower values of ρ are beneficial in the later stages of the search, when faster convergence is preferable. Parameter ζ determines the way particles oscillate around attractor points. For ζ equal to 1, a particle approaches its attractor in a non-oscillatory fashion; for ζ equal to -1, a particle erratically oscillates from one side of the attractor to the other. By adjusting ζ to a value between -1 and 1, a desired behavior can be obtained.
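As an illustration of these dynamics, the following Python sketch (our own minimal illustration, not code from the paper) iterates the recurrence (5) for a single particle with both attractors fixed at the origin, which is the setting of Fig. 2:

```python
import numpy as np

def gpso_trajectory(rho, zeta, x0=np.array([1.0, 1.0]), steps=50):
    """Iterate Eq. (5) with p[k] = g[k] = 0, so the right-hand side
    vanishes and the particle relaxes toward the origin."""
    x_prev = x0.copy()   # x[k-1]
    x_curr = x0.copy()   # x[k]  (particle assumed to start at rest)
    traj = [x_curr.copy()]
    for _ in range(steps):
        x_next = 2 * zeta * rho * x_curr - rho**2 * x_prev
        x_prev, x_curr = x_curr, x_next
        traj.append(x_curr.copy())
    return np.array(traj)

# rho = 0.9 gives slow, wide oscillations; rho = 0.5 converges quickly.
# Negative zeta (argument near pi) makes the sign flip almost every step.
print(gpso_trajectory(0.9, -0.5)[:5])
```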
3. Theoretical analysis
There are several ways to define convergence for stochastic sequences. The notion of mean square convergence is adopted
in the current paper. A stochastic sequence x[k] is said to converge in mean square to the limit point a if and only if
$$\lim_{k\to\infty} E\left(x[k] - a\right)^2 = 0, \tag{6}$$
where E denotes the mathematical expectation operator [20]. The investigation of convergence of a stochastic sequence can
therefore be replaced by the investigation of two deterministic sequences, since (6) is equivalent to
$$\lim_{k\to\infty} E\,x[k] = a \tag{7}$$

and

$$\lim_{k\to\infty} E\,x^2[k] = a^2. \tag{8}$$
In the sequel, the overbar notation $\overline{(\cdot)}$ is used instead of E(·) to simplify the notation and make it more compact.
If we assume that p[k] and g[k] both converge to their respective limit values p and g, GPSO's convergence is equivalent to the stability of the dynamical system (5). It has already been mentioned that p and g can be considered as inputs, which do not affect stability, as is well known from control theory [17]. Let us introduce μ = c p + (1 - c) g and y[k] = x[k] - μ. The following equation is a direct consequence of (5):
$$y[k+1] - 2\zeta\rho\,y[k] + \rho^2\,y[k-1] = 0. \tag{9}$$
Because the inertia factor w of the classical PSO is closely related to the eigenvalue modulus, ρ is considered a deterministic parameter. The ζ factor, on the other hand, is assumed to be stochastic and independent of the particle position. Applying the expectation operator to (9), one obtains
$$\bar{y}[k+1] - 2\bar{\zeta}\rho\,\bar{y}[k] + \rho^2\,\bar{y}[k-1] = 0. \tag{10}$$

The eigenvalues of this system are

$$p_{1,2} = \rho\bar{\zeta} \pm j\rho\sqrt{1 - \bar{\zeta}^2}, \tag{11}$$
where j is the imaginary unit, and the stability conditions are given by
$$0 < \rho < 1, \tag{12}$$

$$|\bar{\zeta}| \leq 1. \tag{13}$$
If system (10) converges, its limit point is zero; therefore, the mathematical expectation of any particle's position tends to μ. If the personal best and global best positions are asymptotically equal, which is true for the particle achieving the global best position, then μ = p = g.
The variance of x[k] is addressed next. From (9), it is readily obtained that
$$y^2[k+1] = 4\zeta^2\rho^2\,y^2[k] + \rho^4\,y^2[k-1] - 4\zeta\rho^3\,y[k]\,y[k-1], \tag{14}$$

$$y[k+1]\,y[k] = 2\zeta\rho\,y^2[k] - \rho^2\,y[k]\,y[k-1]. \tag{15}$$
After applying the expectation operator, Eqs. (14) and (15) can be considered a model of a deterministic linear system. State variables can be designated as $\alpha_1[k] = \overline{y^2}[k]$, $\alpha_2[k] = \overline{y^2}[k-1]$ and $\alpha_3[k] = \overline{y[k]\,y[k-1]}$. The system can be rewritten in state-space form as

$$\alpha[k+1] = A\,\alpha[k], \tag{16}$$

with state vector $\alpha = [\alpha_1\ \alpha_2\ \alpha_3]^T$ and the matrix A defined as

$$A = \begin{bmatrix} 4\overline{\zeta^2}\rho^2 & \rho^4 & -4\bar{\zeta}\rho^3 \\ 1 & 0 & 0 \\ 2\bar{\zeta}\rho & 0 & -\rho^2 \end{bmatrix}. \tag{17}$$
The system (16) is stable if and only if all eigenvalues of A are less than 1 in modulus. The eigenvalues of a matrix are the roots of its characteristic polynomial. The characteristic polynomial of the matrix A is

$$f(z) = z^3 + z^2\rho^2\left(1 - 4\overline{\zeta^2}\right) - z\rho^4\left(4\overline{\zeta^2} - 8\bar{\zeta}^2 + 1\right) - \rho^6. \tag{18}$$

Stability can be investigated by various methods; Jury's criterion [17] is utilized in the sequel. For a general third-order polynomial f(z) = z³ + a₂z² + a₁z + a₀, the conditions under which all zeros are less than one in modulus are

$$f(1) > 0, \qquad f(-1) < 0, \qquad |a_0| < 1, \qquad \left|a_0^2 - 1\right| > \left|a_0 a_2 - a_1\right|. \tag{19}$$

Applying these conditions to (18) yields

$$\left(1 + \rho^2\right)\left(1 - \rho^4\right) > 4\rho^2\left(\overline{\zeta^2} + \rho^2\left(\overline{\zeta^2} - 2\bar{\zeta}^2\right)\right), \tag{20}$$

$$\left(1 - \rho^2\right)\left(1 - \rho^4\right) > -4\rho^2\left(\overline{\zeta^2} - \rho^2\left(\overline{\zeta^2} - 2\bar{\zeta}^2\right)\right), \tag{21}$$

$$\left|\rho^6\right| < 1, \tag{22}$$

$$1 - \rho^{12} > \rho^4\left|1 - \rho^4 + 4\overline{\zeta^2}\left(1 + \rho^4\right) - 8\bar{\zeta}^2\right|. \tag{23}$$

Due to (12), both (21) and (22) are identically satisfied; the right-hand side of (21) is always negative, whereas its left-hand side is always positive. The stability conditions are therefore (20) and (23). Unfortunately, these conditions are not nearly as compact as (12) and (13), and they provide no direct insight into the influence of the parameters ρ and ζ on stability. However, some clarification can be obtained by the introduction of the helper polynomial

$$f_1(q) = \frac{f(\rho^2 q)}{\rho^6} = q^3 + q^2\left(1 - 4\overline{\zeta^2}\right) - q\left(4\overline{\zeta^2} - 8\bar{\zeta}^2 + 1\right) - 1 \tag{24}$$

and the investigation of its roots by means of (19):

$$f_1(1) = 8\left(\bar{\zeta}^2 - \overline{\zeta^2}\right) > 0, \tag{25}$$

$$f_1(-1) = -8\bar{\zeta}^2 < 0, \tag{26}$$

$$|-1| < 1, \tag{27}$$

$$0 > 8\left(\overline{\zeta^2} - \bar{\zeta}^2\right). \tag{28}$$
Condition (25) is not satisfied, because the variance of ζ ($\operatorname{var}\zeta = \overline{\zeta^2} - \bar{\zeta}^2$) is non-negative. Conditions (27) and (28) are also violated. It is therefore clear that f₁ has at least one root outside the unit circle, or on its boundary at best. Since the roots of f are ρ² times the roots of f₁, it can be concluded that at least one root of f has modulus greater than or equal to ρ². Thus, an increase in ρ has a negative influence on the convergence of both the mean value and the variance of the particle positions. It is also clear that by increasing the variance of ζ, f₁(1) decreases and the right-hand side of (28) increases; therefore, an increase in the variance of ζ impedes the convergence of the algorithm.
The limit point of α is $[0\ 0\ 0]^T$; therefore, $\overline{y^2}[k]$ tends to zero. Because $\overline{y^2}[k]$ is asymptotically equal to the variance of x, it follows that the variance of the position of any particle is asymptotically equal to zero. This concludes the proof that, under conditions (12), (13), (20) and (23), the GPSO system (5) exhibits mean square convergence.
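These conditions are easy to experiment with numerically. The following Python sketch (our own illustration, not part of the paper; the uniform distribution for ζ is just an example) evaluates conditions (20) and (23) and cross-checks them against the spectral radius of the matrix A from (17):

```python
import numpy as np

def gpso_variance_stable(rho, zeta_mean, zeta_sq_mean):
    """Check conditions (20) and (23) and the spectral radius of A in (17)."""
    m, s = zeta_mean, zeta_sq_mean            # E[zeta], E[zeta^2]
    c20 = (1 + rho**2) * (1 - rho**4) > 4 * rho**2 * (s + rho**2 * (s - 2 * m**2))
    c23 = 1 - rho**12 > rho**4 * abs(1 - rho**4 + 4 * s * (1 + rho**4) - 8 * m**2)
    A = np.array([[4 * s * rho**2, rho**4, -4 * m * rho**3],
                  [1.0,            0.0,     0.0],
                  [2 * m * rho,    0.0,    -rho**2]])
    spectral_radius = np.abs(np.linalg.eigvals(A)).max()
    return c20 and c23, spectral_radius

# Example: zeta ~ U(-0.9, 0.6), so E[zeta] = -0.15 and
# E[zeta^2] = var + mean^2 = (b - a)^2 / 12 + mean^2.
a, b = -0.9, 0.6
m = (a + b) / 2
s = (b - a)**2 / 12 + m**2
print(gpso_variance_stable(0.8, m, s))   # stable for moderate rho
```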
4. Empirical analysis
It is known that proper parameter selection is crucial for the performance of PSO. The relationships between adjustable
factors within classical PSO and GPSO are straightforward, with
$$\rho = \sqrt{w}, \tag{29}$$

$$\zeta = \frac{1 + w - c_p r_p - c_g r_g}{2\sqrt{w}}, \tag{30}$$

$$c = \frac{c_p r_p}{c_p r_p + c_g r_g}. \tag{31}$$
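To make the correspondence concrete, the following small Python function (our own sketch, not from the paper) evaluates the mapping (29)-(31) for one draw of the random numbers:

```python
import math

def pso_to_gpso(w, cp_rp, cg_rg):
    """Map classical PSO parameters to the GPSO triple via Eqs. (29)-(31).

    cp_rp and cg_rg are the products c_p*r_p and c_g*r_g for one update.
    """
    rho = math.sqrt(w)                                    # Eq. (29)
    zeta = (1 + w - cp_rp - cg_rg) / (2 * math.sqrt(w))   # Eq. (30)
    c = cp_rp / (cp_rp + cg_rg)                           # Eq. (31)
    return rho, zeta, c

# Example: w = 0.729, cp = cg = 2 with r_p = r_g = 0.5:
print(pso_to_gpso(0.729, 1.0, 1.0))
```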
The inertia factor w is the "glue" of the swarm; its responsibility is to keep the particles together and to prevent them from wandering too far away. Since its introduction by Shi and Eberhart [7], it has been noted that it is beneficial for the performance of the search to gradually decrease its value. A common choice is to select a value close to 0.9 at the beginning of the search and a value close to 0.4 at the end. Based on (29), it would be reasonable to let ρ decrease from about 0.95 to about 0.6. Regarding the acceleration factors, c_p = c_g = 2 is the choice used in the original variant of the PSO algorithm [4]. Other commonly used schemes include c_p = c_g = 0.5 as well as c_p = 2.8 and c_g = 1.3. It is generally noted that the choice of acceleration factors is not critical, but they should be chosen so that c_p + c_g < 4 [6]. In fact, if the last condition is violated, PSO does not converge, regardless of the inertia value [15,16]. Ratnaweera et al. [5] introduced the time-varying acceleration coefficients PSO (TVAC-PSO) and demonstrated that it outperforms other commonly used PSO schemes. They suggested that
c_p should linearly decrease from 2.5 to 0.5, while c_g should simultaneously increase linearly from 0.5 to 2.5. This would correspond to changing c from about 0.8 to approximately 0.2.
It is important to note that the novelty of the GPSO algorithm does not reduce to a mere change of coefficient expressions. Indeed, if the coefficients of GPSO are fixed, then according to formulas (29)-(31) one can find an equivalent set of classical PSO parameters. However, if the parameters vary in time, which is the case in most practical applications, linear variations of the GPSO parameters are equivalent to nonlinear variations of the inertia w and the acceleration coefficients c_p and c_g. It is this nonlinearity that accounts for the performance of GPSO, as the effects of the new parameterization cannot be achieved by linear alteration of the classical PSO parameters. One could utilize the classical PSO with a highly nonlinear parameter adaptation law to achieve this, but this would be impractical. In GPSO, the nonlinearity is hidden within the algorithm definition, thus allowing the same effect to be achieved by linear alteration of the newly proposed parameters. These parameters have a well-defined interpretation in terms of particle dynamics, and they allow independent control of the particles' dynamic properties, such as stability, characteristic frequency and the relative impact of local and global knowledge.
Based on the recommendations stated earlier, the authors propose two parameter adjustment schemes, with ρ linearly decreasing from 0.95 to 0.6 and c linearly decreasing from 0.8 to 0.2. The proper selection of ζ proved to be the most difficult task, as this factor has no direct analogy to any of the parameters of the classical PSO. The ζ parameter equals the cosine of the argument of the particle eigenvalues in the dynamic model. Because this is a discrete system, the eigenvalue argument equals the discrete circular characteristic frequency of particle motion. Thus, ζ is directly related to the ability of particles to oscillate around attractor points (i.e., global and personal best) and therefore has a crucial impact on the exploitative abilities of the algorithm. If ζ equals 1, particles are prevented from flying over the attractor points; if ζ is close to -1, the movement of the particles becomes desultory. In both cases, particles cannot explore the vicinity of the attractor points. Thorough empirical analysis and numerous numerical experiments conducted by the authors showed that the most appropriate and robust choice is to select ζ uniformly distributed across a wider range of values. The two most successful schemes are presented further (a code sketch follows below). In the first GPSO scheme (GPSO1), ζ was adopted as a stochastic parameter with uniform distribution ranging from -0.9 to -0.2, while in the second scheme (GPSO2), ζ was uniformly distributed in the range [-0.9, 0.6].
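For concreteness, a minimal Python sketch of the optimizer that results from these choices (our own illustration; the schedules follow the GPSO1-style settings described above, and the bounds and objective are placeholders):

```python
import numpy as np

def gpso1(objective, dim, n_particles=30, n_iters=100, lo=-10.0, hi=10.0,
          rng=np.random.default_rng(0)):
    """GPSO with GPSO1-style schedules: rho and c decrease linearly,
    zeta is drawn uniformly at random for every particle update."""
    x = rng.uniform(lo, hi, (n_particles, dim))   # x[k]
    x_prev = x.copy()                             # x[k-1]
    p = x.copy()                                  # personal bests
    p_val = np.array([objective(xi) for xi in x])
    g = p[p_val.argmin()].copy()                  # global best

    for k in range(n_iters):
        t = k / (n_iters - 1)
        rho = 0.95 + (0.6 - 0.95) * t             # 0.95 -> 0.6
        c = 0.8 + (0.2 - 0.8) * t                 # 0.8 -> 0.2
        zeta = rng.uniform(-0.9, -0.2, (n_particles, 1))  # GPSO1 range
        attractor = c * p + (1 - c) * g           # c p[k] + (1 - c) g[k]
        x_next = (2 * zeta * rho * x - rho**2 * x_prev
                  + (1 - 2 * zeta * rho + rho**2) * attractor)   # Eq. (5)
        x_prev, x = x, x_next
        vals = np.array([objective(xi) for xi in x])
        better = vals < p_val
        p[better], p_val[better] = x[better], vals[better]
        g = p[p_val.argmin()].copy()
    return g, p_val.min()

# Example: minimize the spherical function from Table 1 in 5 dimensions.
best_x, best_val = gpso1(lambda x: float(np.sum(x**2)), dim=5)
```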
The empirical analysis of the various GPSO parameter adjustment strategies was performed using several well-known benchmark functions, presented in Table 1, all of which have a minimum value of zero.
The proposed GPSO schemes are compared to the TVAC-PSO developed by Ratnaweera et al. [5]. A comparison is also made with respect to a GA with linear ranking, stochastic universal sampling, uniform mutation and uniform crossover [1]. In the recent literature, several other modifications of the original PSO have emerged; these incorporate mutation-like operators [21,22], modify the topology of the swarm [8,23-25] or utilize hybridization with other techniques [26]. However, because the goal of this study is to investigate the possible improvements of the optimizer resulting from better control over particle dynamics, the analysis of such PSO modifications is beyond the scope of the present paper.
Two experiments were performed. In both of them, the search was conducted within a 5-dimensional search space using 100 iterations with 30 particles in the swarm. However, in the first experiment, the population was initialized within a hypercube of edge 10 centered around the global optimum of the particular benchmark, while in the second experiment, the initial hypercube was shifted away, with its center displaced by 100 in each direction. The results of the experiments are presented in Tables 2 and 3; values are shown for the mean, median and standard deviation of the obtained minimum over 100 consecutive runs.
It is clear that both newly proposed schemes perform better than either TVAC-PSO or GA in the majority of cases; the exception is the Michalewitz benchmark, where they are consistently outperformed by GA and slightly outperformed by TVAC-PSO. Note also that in many of the considered cases, both GPSO variants show results that are several orders of magnitude better than the results obtained by the other two optimizers.
Figs. 3 and 4 depict the changes in the objective value of the best, mean and worst particles within the TVAC-PSO and GPSO1 swarms, respectively.
Table 1
Benchmark functions used for comparison.

Dixon-Price: $f(x) = (x_1 - 1)^2 + \sum_{i=2}^{n} i\left(2x_i^2 - x_{i-1}\right)^2$

Rosenbrock: $f(x) = \sum_{i=1}^{n-1}\left[100\left(x_i^2 - x_{i+1}\right)^2 + (x_i - 1)^2\right]$

Zakharov: $f(x) = \sum_{i=1}^{n} x_i^2 + \left(\sum_{i=1}^{n} 0.5\,i\,x_i\right)^2 + \left(\sum_{i=1}^{n} 0.5\,i\,x_i\right)^4$

Griewank: $f(x) = \sum_{i=1}^{n} \frac{x_i^2}{4000} - \prod_{i=1}^{n}\cos\left(x_i/\sqrt{i}\right) + 1$

Rastrigin: $f(x) = 10n + \sum_{i=1}^{n}\left(x_i^2 - 10\cos(2\pi x_i)\right)$

Ackley: $f(x) = 20 + e - 20\exp\left(-\frac{1}{5}\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right)$

Michalewitz: $f(x) = 5.2778 - \sum_{i=1}^{n}\sin(x_i)\left(\sin\left(i\,x_i^2/\pi\right)\right)^{2m}, \quad m = 10$

Perm: $f(x) = \sum_{k=1}^{n}\left[\sum_{i=1}^{n}\left(i^k + 0.5\right)\left(\left(x_i/i\right)^k - 1\right)\right]^2$

Spherical: $f(x) = \sum_{i=1}^{n} x_i^2$
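For reference, two of the benchmarks from Table 1 transcribed to Python (our own transcription of the formulas above):

```python
import numpy as np

def rastrigin(x):
    """Rastrigin function from Table 1; global minimum 0 at x = 0."""
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def rosenbrock(x):
    """Rosenbrock function from Table 1; global minimum 0 at x = (1, ..., 1)."""
    return np.sum(100 * (x[:-1]**2 - x[1:])**2 + (x[:-1] - 1)**2)
```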
Table 2
Results of the first experiment (initial population centered on the global optimum, search conducted for 100 iterations). Each cell gives the mean / median / standard deviation of the minimal obtained values.

Function    | TVAC-PSO                     | GPSO1                          | GPSO2                          | GA
Dixon-Price | 1.77e-1 / 6.36e-6 / 2.94e-1  | 2.17e-1 / 6.74e-16 / 3.09e-1   | 2.43e-1 / 1.03e-14 / 3.20e-1   | 2.61 / 1.09 / 4.05
Rosenbrock  | 1.61e1 / 1.19 / 7.85e1       | 3.62 / 5.38e-1 / 1.55e1        | 1.05e1 / 1.47 / 3.44e1         | 3.96e1 / 1.99e1 / 5.50e1
Zakharov    | 2.62e-6 / 4.15e-7 / 6.48e-6  | 2.03e-15 / 2.04e-20 / 1.99e-14 | 1.53e-14 / 2.00e-17 / 1.47e-13 | 1.77 / 1.07 / 2.11
Griewank    | 1.02e-1 / 9.40e-2 / 4.90e-2  | 6.58e-2 / 5.78e-2 / 4.26e-2    | 6.91e-2 / 6.77e-2 / 4.07e-2    | 4.59e-2 / 4.02e-2 / 2.35e-2
Rastrigin   | 4.21 / 3.02 / 2.96           | 2.49 / 1.98 / 1.55             | 3.07 / 2.98 / 2.06             | 5.94 / 5.61 / 2.59
Ackley      | 4.95e-2 / 1.11e-4 / 2.82e-1  | 1.95e-11 / 4.93e-12 / 9.37e-11 | 4.81e-10 / 1.54e-10 / 2.16e-9  | 1.13 / 1.06 / 5.74e-1
Michalewitz | 1.42 / 1.42 / 5.97e-1        | 1.88 / 1.95 / 6.26e-1          | 2.11 / 2.19 / 5.20e-1          | 9.70e-1 / 9.48e-1 / 2.52e-1
Perm        | 2.70e2 / 5.92e1 / 4.61e2     | 1.20e1 / 4.10 / 2.66e1         | 3.90e1 / 6.96 / 1.39e2         | 7.61e3 / 2.85e3 / 1.62e4
Spherical   | 7.72e-9 / 2.88e-9 / 1.51e-8  | 9.20e-21 / 5.03e-24 / 6.47e-20 | 1.51e-19 / 6.57e-21 / 7.93e-19 | 8.20e-2 / 6.12e-2 / 7.04e-2
Table 3
Results of the second experiment (initial population shifted away from the global optimum, search conducted for 100 iterations). Each cell gives the mean / median / standard deviation of the minimal obtained values.

Function    | TVAC-PSO                     | GPSO1                          | GPSO2                          | GA
Dixon-Price | 9.29e1 / 2.50e-2 / 1.82e1    | 2.60e-1 / 8.25e-15 / 3.22e-1   | 2.34e-1 / 1.47e-14 / 3.17e-1   | 1.06e3 / 2.94e2 / 1.710e3
Rosenbrock  | 3.167e2 / 9.17e1 / 6.127e2   | 1.29e2 / 2.59e1 / 2.64e2       | 1.94e2 / 7.62e1 / 3.19e2       | 1.81e3 / 2.29e2 / 3.77e3
Zakharov    | 7.60e2 / 2.19e1 / 2.11e3     | 2.21e-11 / 6.88e-17 / 3.62e-10 | 4.85e-11 / 1.38e-14 / 3.15e-10 | 1.76e1 / 2.86 / 4.46e1
Griewank    | 8.06e-1 / 3.32e-1 / 9.25e-1  | 7.50e-2 / 6.64e-2 / 4.68e-2    | 8.39e-2 / 7.02e-2 / 5.25e-2    | 8.94e-2 / 7.23e-2 / 6.46e-2
Rastrigin   | 4.12 / 3.08 / 3.25           | 2.87 / 1.98 / 1.88             | 3.50 / 2.98 / 2.30             | 7.39 / 7.27 / 2.82
Ackley      | 2.00e1 / 2.00e1 / 2.00e-2    | 2.00e1 / 2.00e1 / 2.99e-2      | 2.00e1 / 2.00e1 / 4.10e-2      | 2.00e1 / 2.00e1 / 5.40e-3
Michalewitz | 2.05 / 2.06 / 4.49e-1        | 1.99 / 2.07 / 5.32e-1          | 2.17 / 2.17 / 4.35e-1          | 9.94e-1 / 9.65e-1 / 2.38e-1
Perm        | 5.50e14 / 1.43e3 / 2.85e15   | 2.08e1 / 4.79 / 9.85e1         | 2.72e1 / 8.49 / 6.33e1         | 2.46e12 / 2.78e3 / 1.96e13
Spherical   | 2.45e-6 / 4.43e-7 / 1.50e-5  | 4.48e-19 / 2.29e-23 / 1.39e-17 | 3.44e-20 / 6.30e-21 / 8.65e-20 | 1.22e-1 / 9.85e-2 / 1.18e-1
Fig. 3. Objective values of the best, the mean and the worst particle within a TVAC-PSO swarm during 100 iterations.
It is clear that the best particle converges very quickly in both settings. However, in the GPSO swarm, the other particles do not follow so rapidly, effectively keeping the diversity of the population sufficiently large. To illustrate further, Fig. 5 shows the maximum distance between two particles in the swarm during the optimization process. It is clear that the GPSO swarm becomes extremely diverse at certain points, spreading across a vast area of the search space, which is the main and most important effect of the newly proposed parameterization. This phenomenon is the likely explanation of the superior performance that GPSO exhibits in most of the analyzed cases.
A third experiment was also conducted with the same settings as the second one, though with iteration number set to
1000. The particles were initially shifted away from the global optimum, but they were allowed to search for a longer period
Fig. 4. Objective values of the best, the mean and the worst particle within a GPSO1 swarm during 100 iterations.
Fig. 5. Maximum distance between two particles during the optimization process for GPSO1 and PSO.
of time. The results are presented in Table 4. In most of the cases, all of the optimizers performed significantly better, the exception being the Ackley function, for which all of them fail. GA is again superior to the other techniques when optimizing the Michalewitz function, as well as when the Griewank and Rastrigin functions are considered. TVAC-PSO optimizes the Rosenbrock function slightly better than the other considered algorithms. On the other benchmarks, both GPSO variants exhibit better or even significantly better performance.
Table 4
Results of the third experiment (initial population shifted away from the global optimum, search conducted for 1000 iterations). Each cell gives the mean / median / standard deviation of the minimal obtained values.

Function    | TVAC-PSO                      | GPSO1                          | GPSO2                         | GA
Dixon-Price | 1.26e-1 / 9.86e-32 / 2.62e-1  | 1.64e-1 / 9.86e-32 / 2.82e-1   | 2.03e-1 / 9.86e-32 / 3.05e-1  | 1.81e-1 / 6.60e-2 / 2.38e-1
Rosenbrock  | 4.02e-1 / 2.15e-1 / 9.49e-1   | 3.72e1 / 3.66 / 7.38e1         | 5.35e1 / 3.34 / 1.70e2        | 7.26e2 / 7.89e1 / 3.37e3
Zakharov    | 5.22e-67 / 1.84e-78 / 5.22e-66 | 4.53e-67 / 3.53e-91 / 4.17e-66 | 2.44e-70 / 3.48e-94 / 1.94e-69 | 7.43e-3 / 5.16e-3 / 6.74e-3
Griewank    | 2.77e-1 / 1.84e-1 / 3.02e-1   | 3.53e-2 / 2.95e-2 / 2.06e-2    | 4.57e-2 / 3.81e-2 / 2.62e-2   | 1.54e-2 / 1.47e-2 / 8.66e-3
Rastrigin   | 6.21 / 5.96 / 4.54            | 6.76e-1 / 4.97e-1 / 7.85e-1    | 9.45e-1 / 9.94e-1 / 1.06      | 1.51e-1 / 8.41e-2 / 2.30e-1
Ackley      | 2.00e1 / 2.00e1 / 4.12e-7     | 1.98e1 / 2.00e1 / 2.00         | 2.00e1 / 2.00e1 / 2.92e-2     | 2.00e1 / 2.00e1 / 6.92e-5
Michalewitz | 1.11 / 1.05 / 5.26e-1         | 1.70 / 1.77 / 5.00e-1          | 2.01 / 2.07 / 3.83e-1         | 3.73e-1 / 3.70e-1 / 4.07e-2
Perm        | 2.48 / 4.17e-1 / 4.61         | 9.93e-1 / 1.89e-1 / 2.71       | 2.47 / 3.93e-1 / 4.28         | 3.19e2 / 6.34e1 / 9.03e2
Spherical   | 1.50e-90 / 1.00e-92 / 5.91e-90 | 3.67e-91 / 6.2e-120 / 3.67e-90 | 5.98e-90 / 1.8e-109 / 5.98e-89 | 5.59e-4 / 3.33e-4 / 6.77e-4
5. Application example: GPSO in fault detection
This section describes an application of GPSO to induction motor fault detection. The classification algorithm, which combines support vector machines with GPSO parameter optimization (i.e., GPSO-based cross-validation), represents a new procedure proposed in this paper for the first time.
The experiment was conducted in a sunflower oil processing and production plant during a maintenance period. Vibration analysis was used, a popular technique due to its easy measurability, high accuracy and reliability [27]. Electrical current signature analysis, also a commonly used technique, could not be applied because all of the motors are driven by frequency converters. Two faults were considered: static eccentricity and bearing wear. For each of them, a classifier was constructed, based on the acquired vibration signals, that detects whether the fault is present on the observed motor. The classifier parameters were optimized using GPSO, resulting in better performance, efficiency and accuracy.
The fault detection procedure is briefly described next. Vibration signals from horizontal and vertical vibration sensors mounted on ten induction motors were acquired. High-sensitivity (100 mV/g) ceramic shear accelerometers with magnetic mounting were used. Two different types of motors were considered, five of each type. The first type is a 5.5 kW motor with one pair of poles, a nominal speed of 2925 rpm and a nominal current of 10.4 A. These motors drive the screw conveyors in the process plant. Two motors of this type were healthy; one motor had wear on both the inner and outer race of the bearing, one motor had a static eccentricity level of 30%, and one motor had both faults present at the same time (i.e., inner and outer race wear and 50% static eccentricity). The second type is a 15 kW motor, driving the crushed oilseed conditioners, with one pair of poles, a nominal speed of 2940 rpm and a nominal current of 26.5 A. Two of these motors were healthy, two had static eccentricity levels of 30% and 50%, and one had defects on the bearing ball and the inner and outer race. Multiple vibration measurements were conducted on each motor. Each signal was collected for 2 s with a sampling frequency of 25.6 kHz. A total of 200 signals were collected: 78 healthy signals, 58 signals representing only static eccentricity, 44 signals representing only bearing wear and 20 signals with both faults present at the same time.
These signals were analyzed in both the time and the frequency domain, and characteristic features were calculated. Nine characteristic statistical features were used in the time domain: arithmetic mean value, root mean square value, square mean root value, skewness index, kurtosis index, C factor, L factor, S factor and I factor [28,29]. In the frequency domain, eight characteristic features were used, each representing the sum of the amplitudes of the power spectrum in the region around a characteristic frequency. The summation is performed over the band [f_c - 3 Hz, f_c + 3 Hz], where f_c is the characteristic frequency. The first three features represent the sums around twice the supply frequency (2f_s, with f_s = 50 Hz) and its sidebands (2f_s ± f_r, where f_r is the rotor frequency), which are indicators of static eccentricity in an induction motor [30]. The next five features are related to the bearing condition. They represent the sums of the amplitudes of the power spectrum around the bearing characteristic frequencies, which are:
- outer race fault frequency:

$$f_{rpfo} = f_r \frac{N}{2}\left(1 - \frac{d}{D}\cos\varphi\right), \tag{32}$$

- inner race fault frequency:

$$f_{bpfi} = f_r \frac{N}{2}\left(1 + \frac{d}{D}\cos\varphi\right), \tag{33}$$

- rotation frequency of the rolling element:

$$f_{bsf} = f_r \frac{D}{2d}\left(1 - \left(\frac{d}{D}\right)^2\cos^2\varphi\right), \tag{34}$$

- rolling element fault frequency:

$$f_{bff} = 2 f_{bsf}, \tag{35}$$

- cage fault frequency:

$$f_{ftf} = f_r \frac{1}{2}\left(1 - \frac{d}{D}\cos\varphi\right). \tag{36}$$
Note that N is the number of rolling elements in the bearing, φ is the contact angle of the rolling element, d is the rolling element diameter, and D is the diameter of the bearing shell [28]. Data on the bearing geometry and the expected characteristic frequencies are listed in Tables 5 and 6, respectively.
The feature set of seventeen features in total was then dimensionally reduced using principal component analysis [27,31]. Only the first six principal components were retained, resulting in a more comprehensive and less redundant feature set that contains sufficient information for successful classification and fault detection.
Table 5
Bearing geometry data.

Bearing type | Shell diameter D [mm] | Ball diameter d [mm] | Number of balls N | Contact angle φ [°]
6208 2ZC3    | 60                    | 11.48                | 9                 | 0
6209 2ZC3    | 65                    | 12.7                 | 9                 | 0
Table 6
Expected characteristic bearing frequencies.

Motor power [kW] | rpm [min⁻¹] / f_r [Hz] | f_rpfo [Hz] | f_bpfi [Hz] | f_bsf [Hz] | f_bff [Hz] | f_ftf [Hz]
5.5              | 2925 / 48.75           | 177.4       | 261.35      | 122.73     | 245.46     | 19.71
15               | 2940 / 49              | 178.31      | 262.69      | 123.36     | 246.72     | 19.81
This modified feature set was then divided into two subsets, the training set and the test set, containing the features of 146 and 54 signals, respectively. These sets were formed using signals from different motors, to avoid overfitting at the classifier training stage.
Classification was performed using support vector machines (SVM), a kind of learning machine based on statistical learning theory. A variant of SVM with a soft margin (penalty parameter C) and a Gaussian RBF kernel function (parameter σ) was used [32,33]. For both the static eccentricity and the bearing wear fault, a classifier was applied, and the parameters C and σ were optimized using the GPSO algorithm. The classifiers were trained using the training feature set, and the classification error on this data set, i.e., the number of false classifications, was used as the optimality criterion for the SVM parameter optimization. A similar procedure, with a different modification of PSO for the parameter-tuning of a support vector regression model, was proposed
Fig. 6. The diagram of the SVM parameter optimization process.
Table 7
Fault classification results.

Classifier (optimized SVM parameters)    | Test data set (faulty / fault-free) | False fault classifications | Classification error [%]
Static eccentricity (C = 0.87, σ = 0.83) | 22 / 32                             | 1                           | 1.85
Bearing wear (C = 2.79, σ = 0.36)        | 18 / 36                             | 1                           | 1.85
in [34,35]. The GPSO parameters were set according to the GPSO1 settings, with 100 iterations and 30 particles. The diagram of the SVM parameter optimization process is shown in Fig. 6.
The trained classifiers were tested using the test set. The classification results are listed in Table 7.
These results demonstrate the efficiency of the classifiers and the effect of the SVM parameter optimization. Both classifiers produced only one false classification. These experimental results prove that GPSO can be successfully applied in fault detection, providing accurate, efficient and reliable fault classifiers applicable in real industrial systems.
6. Conclusions
378
Finding the global minimum of a function is generally an ill-posed problem [6]. However, swarm-based methods in general and PSO in particular provide us with powerful and robust tools for tackling the global optimization problems encountered in science and engineering. By incorporating requirements concerning exploration and exploitation properties in a
formalism usually connected to the linear control theory, GPSO was recently proposed [18]. A theoretical analysis of this novel optimization algorithm is presented in this paper. Convergence conditions have been derived, and the influence of
parameters on particle dynamics and optimizer performance has been investigated. A broad empirical study based on various benchmark problems was also conducted. Based on an analysis of the obtained results, two sets of parameters are recommended. In most of the examples, the proposed GPSO schemes perform better in comparison to TVAC-PSO and GA (see
Tables 2–4). Finally, an application of GPSO to the parameter optimization of classifiers used in induction motor fault detection is presented, demonstrating the potential of this algorithm for practical engineering applications.
There are several possibilities for further development. In particular, more complex models than (5), whether linear or
even non-linear, can be derived. Further research should also address the topology of the swarm and exploit various patterns
of communication among the particles. Different swarm topologies have already been proven to be beneficial for the performance of classical PSO [23]. Hybridization with evolutionary algorithms is also known to improve particle swarm optimizers
[21]. The incorporation of evolutionary operators, such are crossover and mutation, should also improve the performance of
GPSO.
Acknowledgement
The authors gratefully acknowledge the financial support of the FP7 European Research Project "PRODI - Power plants RObustification by fault Diagnosis and Isolation techniques", Grant No. 224233.
References
356
357
358
359
360
364
365
366
367
368
369
370
371
372
373
374
375
376
377
383
384
385
386
387
388
389
390
391
392
393
394
395
396
397
398
399
400
401
402
403
404
405
406
407
408
409
410
411
412
413
414
415
416
417
[1] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, 3rd ed., Springer, 1999.
[2] S. Kirkpatrick, C.D. Gelatt, M.P. Vecchi, Optimization by simulated annealing, Science 220 (4598) (1983) 671-680.
[3] M. Dorigo, C. Blum, Ant colony optimization theory: a survey, Theoretical Computer Science 344 (2005) 243-278.
[4] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 1995, pp. 1942-1948.
[5] A. Ratnaweera, S.K. Halgamuge, H.C. Watson, Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients, IEEE Transactions on Evolutionary Computation 8 (3) (2004) 240-255.
[6] J.F. Schutte, A.A. Groenwold, A study of global optimization using particle swarms, Journal of Global Optimization 31 (2005) 93-108.
[7] Y. Shi, R.C. Eberhart, Empirical study of particle swarm optimization, in: Proceedings of the IEEE Congress on Evolutionary Computation, vol. 3, 1999, pp. 101-106.
[8] F. van den Bergh, An Analysis of Particle Swarm Optimizers, Ph.D. thesis, University of Pretoria, Pretoria, 2001.
[9] Dong et al., An application of swarm optimization to nonlinear programming, Computers and Mathematics with Applications 49 (2005) 1655-1668.
[10] G.G. Dimopoulos, Mixed-variable engineering optimization based on evolutionary and social metaphors, Computer Methods in Applied Mechanics and Engineering 196 (2007) 803-817.
[11] Q. He, L. Wang, An effective co-evolutionary particle swarm optimization for constrained engineering design problems, Engineering Applications of Artificial Intelligence 20 (2007) 89-99.
[12] Y. Olamaei, T. Niknam, G. Gharehpetian, Application of particle swarm optimization for distribution feeder reconfiguration considering distributed generators, Applied Mathematics and Computation 201 (2008) 575-586.
[13] E. Ozcan, C.K. Mohan, Analysis of a simple particle swarm optimization system, Intelligent Engineering Systems Through Artificial Neural Networks 8 (1998) 253-258.
[14] M. Clerc, J. Kennedy, The particle swarm - explosion, stability, and convergence in a multidimensional complex space, IEEE Transactions on Evolutionary Computation 6 (1) (2002) 58-73.
[15] M. Jiang, Y.P. Luo, S.Y. Yang, Stochastic convergence analysis and parameter selection of the standard particle swarm optimization algorithm, Information Processing Letters 102 (2007) 8-16.
[16] M.R. Rapaić, Ž. Kanović, Time-varying PSO - convergence analysis, convergence-related parameterization and new parameter adjustment schemes, Information Processing Letters 109 (2009) 548-552.
[17] K.J. Åström, B. Wittenmark, Computer-Controlled Systems - Theory and Design, 3rd ed., Prentice Hall, 1997.
[18] M.R. Rapaić, Ž. Kanović, Z.D. Jeličić, D. Petrovački, Generalized PSO algorithm - an application to Lorenz system identification by means of neural networks, in: Proceedings of NEUREL-2008, 9th Symposium on Neural Network Applications in Electrical Engineering, Belgrade, Serbia, 2008, pp. 31-35.
[19] S. Elaydi, An Introduction to Difference Equations, 3rd ed., Springer, 2005.
[20] A. Papoulis, Probability, Random Variables and Stochastic Processes, McGraw-Hill, 1965.
[21] M. Senthil Arumugam, M.V.C. Rao, On the improved performances of the particle swarm optimization algorithms with adaptive parameters, cross-over operators and root mean square (RMS) variants for computing optimal control of a class of hybrid systems, Applied Soft Computing 8 (2007) 324-336.
[22] Z. Cui, X. Cai, J. Zeng, G. Sun, Particle swarm optimization with FUSS and RWS for high dimensional functions, Applied Mathematics and Computation 205 (2008) 98-108.
[23] B. Niu, Y. Zhu, X. He, H. Wu, MCPSO: A multi-swarm cooperative particle swarm optimizer, Applied Mathematics and Computation 185 (2006) 1050-1062.
[24] C. DeBao, Z. ChunXia, Particle swarm optimization with adaptive population size and its application, Applied Soft Computing 9 (2009) 39-48.
[25] Y. Jiang, C. Liu, C. Huang, X. Wu, Improved particle swarm algorithm for hydrological parameter optimization, Applied Mathematics and Computation 217 (2010) 3207-3215.
[26] M.R. Chen, X. Li, X. Zhang, Y.Z. Lu, A novel particle swarm optimizer hybridized with extremal optimization, Applied Soft Computing 10 (2010) 367-373.
[27] A. Widodo, B. Yang, T. Han, Combination of independent component analysis and support vector machines for intelligent faults diagnosis of induction motors, Expert Systems with Applications 32 (2007) 299-312.
[28] P. Stepanić, I. Latinović, Ž. Đurović, A new approach to detection of defects in rolling element bearings based on statistical pattern recognition, The International Journal of Advanced Manufacturing Technology 45 (2009) 91-100.
[29] B. Samanta, C. Nataraj, Use of particle swarm optimization for machinery fault detection, Engineering Applications of Artificial Intelligence 22 (2009) 308-316.
[30] D.G. Dorrell, W.T. Thomson, S. Roach, Analysis of air gap flux, current, and vibration signals as a function of the combination of static and dynamic air gap eccentricity in 3-phase induction motors, IEEE Transactions on Industry Applications 33 (1) (1997) 24-34.
[31] Q. He, R. Yan, F. Kong, R. Du, Machine condition monitoring using principal component representations, Mechanical Systems and Signal Processing 23 (2009) 446-466.
[32] L. Wang (Ed.), Support Vector Machines: Theory and Applications, Springer, 2005.
[33] P.K. Kankar, S.C. Sharma, S.P. Harsha, Fault diagnosis of ball bearings using machine learning methods, Expert Systems with Applications 38 (2011) 1876-1886.
[34] W.C. Hong, Rainfall forecasting by technological machine learning models, Applied Mathematics and Computation 200 (2008) 41-57.
[35] Q. Wu, Car assembly line fault diagnosis based on modified support vector classifier machine, Expert Systems with Applications 37 (2010) 6352-6358.