Uneven allocation of elements in linear multi-state
sliding window system
Gregory Levitin
The Israel Electric Corporation Ltd., Reliability Department,
P. O. Box 10, Haifa 31000, Israel
Tel. +972-48183726, Fax. +972-48183790, E-mail: levitin@iec.co.il
Abstract
The linear multi-state sliding window system consists of n linearly ordered positions and m
statistically independent elements with different characteristics that are to be allocated at these
positions. Each element can have different states: from complete failure up to perfect
functioning. A performance rate is associated with each state. The system fails if the sum of the
performance rates of elements located at any r consecutive positions is lower than a demand w.
An algorithm based on the universal generating function method is suggested for
determination of the linear multi-state sliding window system reliability. This algorithm can
handle cases in which any number of multistate elements are allocated in the same position while
some positions remain empty. It is shown that such uneven allocation provides greater system
reliability than the even one. A genetic algorithm is used as an optimization tool in order to solve
the optimal element allocation problem.
Keywords: sliding window system, consecutive k-out-of-r-from-n:F system, multi-state element,
universal moment generating function, genetic algorithm.
Nomenclature
Acronyms
SWS
linear multi-state sliding window system
ME
multi-state element
u-function
universal moment generating function
Nomenclature
n
number of positions in SWS
m
number of MEs in SWS
r
number of consecutive positions in SWS sliding window
w
minimal allowable cumulative performance of a set of MEs allocated at r consecutive
positions
ej
j-th ME of SWS
Ci
i-th position of SWS
E
set of all available MEs: E={e1,…,em}
Ei
set of MEs located at position i
h(j)
number of the position the ME ej is allocated at: ej ∈ Eh(j)
H
vector representing allocation of MEs in SWS: H={h(j), 1≤j≤m}
F
SWS failure probability
R
SWS reliability
Kj
number of different states of ME j
Vj
random performance rate of ME j
vj,k
performance rate of ME j in state k
pj,k
probability of state k of ME j
Si
number of different states of group i
Gi
random performance rate of group i (sum of performance rates of MEs allocated at
position Ci)
gi,k
performance rate of group i in state k
πi,k
probability of state k of group i
Ni
number of different states of i-th set of r groups
Gi
random vector representing performance rates of i-th set of r consecutive groups
gi,k
vector representing performance rates of i-th set of r consecutive groups in state k
Qi,k
probability of state k of i-th set of r consecutive groups
φj(z)
u-function representing distribution of random value Vj
ui(z)
u-function representing distribution of random value Gi
Ui(z) vector-u-function representing distribution of random vector Gi
,, composition operators over u-functions
1(x)
indicator function: 1(x)=1 if x is True, 1(x)=0 if x is False
Pr(x) probability of event x
sum(y) sum of elements of vector y
1. Introduction
This paper considers a linear multi-state sliding window system (SWS) consisting of n
linearly ordered positions at which m multi-state elements (MEs) are allocated. Each ME j can
have Kj different states: from complete failure up to perfect functioning. A performance rate is
associated with each state. The SWS fails if the sum of the performance rates of MEs allocated at
any r consecutive positions is lower than the demand w.
The SWS model was suggested in [1] as a generalization of the consecutive k-out-of-r-from-n:F system to the multi-state case. The linear consecutive k-out-of-r-from-n:F system has n
ordered binary-state elements and fails if at least k out of r consecutive elements fail. This
system was formally introduced by Griffith [2], but had been mentioned previously by Tong [3],
Saperstein [4,5], Naus [6] and Nelson [7] in connection with tests for non-random clustering,
quality control and inspection procedures, service systems, and radar problems. The introduction
of the SWS model was motivated by examples from manufacturing, quality control and service
systems [1].
Consider for example an industrial heating system that should provide a certain temperature
along a line with moving parts (Fig. 1). The temperature at each point of the line is determined
by a cumulative effect of r closest heaters. Each heater consists of several electrical heating
elements. The heating effect of each heater depends on the availability of its heating elements
and therefore can vary discretely (if the heaters are different, the number of different levels of
heat radiation and the intensity of the radiation at each level are specific to each heater). In order
to provide a temperature that is not less than some specified value at each point of the line,
any r adjacent heaters should be in states where the sum of their radiation intensities is not lower than
the allowed minimum w.
A variety of other systems also fit the model: radar systems that should provide a certain
radiation intensity at different distances, combat systems that should provide a certain fire density
along a defense line, etc.
It was shown in [8] that when the MEs are not identical, their allocation strongly affects the
entire SWS reliability. The optimal ME arrangement problem was formulated and an algorithm
for its solution was suggested for the case when m=n and only one ME is located in each
position (actually the problem formulated in [8] is not allocation but sequencing of the MEs). In
many cases, even for m=n, greater reliability can be achieved if some of the MEs are gathered in the
same position than if all the MEs are evenly distributed among the positions.
Consider, for example, a simple case in which four MEs should be allocated within a SWS
with four positions (m=n=4). Each ME j has two states: a failure state with performance vj,0=0 and
a normal state with performance vj,1=1. The probability of the normal state is pj, the probability of
failure is qj=1-pj. For r=3 and w=2 the system succeeds if every three consecutive positions
contain at least two elements in the normal state. Consider two possible allocations of the MEs
within the SWS (Fig. 2):
A. MEs are evenly distributed among the positions.
B. Two MEs are allocated at second position and two MEs are allocated at third position.
In case A, the SWS succeeds either if no more than one ME fails, or if the MEs in the first and fourth
positions fail and the MEs in the second and third positions are in the normal state. Therefore, the system
reliability is
RA=p1p2p3p4+q1p2p3p4+p1q2p3p4+p1p2q3p4+p1p2p3q4+q1p2p3q4.
For identical MEs with pj=p
RA=p^4+4qp^3+q^2p^2.
In case B, the SWS succeeds if at least two MEs are in normal state. The system reliability in this
case is
RB=p1p2p3p4+q1p2p3p4+p1q2p3p4+p1p2q3p4+p1p2p3q4+
q1q2p3p4+q1p2q3p4+q1p2p3q4+p1q2q3p4+p1q2p3q4+p1p2q3q4.
For identical MEs
RB=p^4+4qp^3+6q^2p^2.
One can see that the uneven allocation B is more reliable:
RB-RA=5q^2p^2=5(1-p)^2p^2.
Consider now the same system when w=3. In case A the system succeeds only if it does not
contain any failed ME:
RA=p1p2p3p4.
In case B it succeeds if it contains no more than one failed element:
RB=p1p2p3p4+q1p2p3p4+p1q2p3p4+p1p2q3p4+p1p2p3q4.
For identical MEs:
RA=p^4, RB=p^4+4qp^3 and RB-RA=4qp^3=4(1-p)p^3.
Observe that even for w=4, when in case A the system is unable to meet the demand (RA=0)
because w>r, in case B it still succeeds with probability RB=p1p2p3p4.
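The comparison above can be checked by direct enumeration. The following sketch (an illustrative Python check, not part of the original paper; the function name and list-based allocation encoding are my own) enumerates all 2^m combinations of element states and sums the probabilities of those that satisfy the sliding-window condition:

```python
from itertools import product

def sws_reliability_bruteforce(alloc, n, r, w, p):
    """Brute-force SWS reliability for two-state MEs.
    alloc[j] is the position (1..n) at which ME j is allocated."""
    R = 0.0
    for states in product([0, 1], repeat=len(alloc)):  # 0 = failed, 1 = normal
        G = [0] * n                                    # group performances
        for j, s in enumerate(states):
            G[alloc[j] - 1] += s                       # v_{j,1}=1, v_{j,0}=0
        # the SWS works iff every r consecutive positions sum to at least w
        if all(sum(G[x:x + r]) >= w for x in range(n - r + 1)):
            prob = 1.0
            for j, s in enumerate(states):
                prob *= p[j] if s else 1 - p[j]
            R += prob
    return R

p = [0.9] * 4
RA = sws_reliability_bruteforce([1, 2, 3, 4], n=4, r=3, w=2, p=p)  # allocation A
RB = sws_reliability_bruteforce([2, 2, 3, 3], n=4, r=3, w=2, p=p)  # allocation B
```

For p=0.9 this reproduces RB-RA = 5q^2p^2 = 0.0405.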
This paper considers a general optimal allocation problem in which the number of MEs is
not necessarily equal to the number of positions (m≠n) and an arbitrary number of elements can be
allocated at each position (some positions may be empty). In order to evaluate the reliability of
a SWS with any given allocation of MEs, a procedure based on the universal generating function
technique is developed. A genetic algorithm based on principles of evolution is used as an
optimization engine. The integer string solution encoding technique is adopted to represent
element allocation in the GA.
Section 2 of the paper presents a formulation of the optimal allocation problem. Section 3
describes the technique used for evaluating the SWS reliability for the given allocation of
different MEs with the specified performance distribution. Section 4 describes the optimization
approach used and its adaptation to the problem formulated. In the fifth section, illustrative
examples are presented in which the best-obtained allocation solutions are presented for two
different SWSs.
2. Problem formulation
The SWS consists of n consecutively ordered positions Ci (1≤i≤n). At each position
C1,…,Cn, elements from a set E={e1,…,em} can be allocated.
The functioning of each element ej is characterised by its performance Vj. In each state k
(1≤k≤Kj) the performance rate of the element ej is vj,k. Thus the performance Vj is a discrete
random value which has the following distribution:
\Pr\{V_j = v_{j,k}\} = p_{j,k}, \quad \sum_{k=1}^{K_j} p_{j,k} = 1.   (1)
The MEs allocation problem can be considered as a problem of partitioning a set E of m
elements into a collection of n mutually disjoint subsets Ei (1≤i≤n), i.e. such that

\bigcup_{i=1}^{n} E_i = E,   (2)

E_i \cap E_j = \emptyset, \quad i \ne j.   (3)
Each subset Ei, corresponding to SWS position Ci, can contain from 0 to m elements.
Further we refer to subset Ei as the i-th group. The partition of the set E can be represented by the
vector H={h(j), 1≤j≤m}, where h(j) is the number of the group to which element ej belongs. One
can easily obtain the sum of performance rates of the MEs belonging to the i-th group (allocated at
Ci) as

G_i = \sum_{j=1}^{m} V_j \, 1(h(j) = i).   (4)
The linear SWS with n positions contains n-r+1 sets of r consecutive positions. The sum of
performance rates of elements located at positions belonging to each one of such sets should not
be less than w. Therefore, the entire SWS reliability can be defined as follows:
R = \Pr\left( \sum_{i=1}^{r} G_i \ge w \;\wedge\; \sum_{i=2}^{r+1} G_i \ge w \;\wedge\; \dots \;\wedge\; \sum_{i=n-r+1}^{n} G_i \ge w \right) = \Pr\left( \bigwedge_{x=1}^{n-r+1} \sum_{i=x}^{x+r-1} G_i \ge w \right).   (5)
For the given n, r, w and set of m MEs with specified performance distributions, the only
factor influencing the SWS reliability is the allocation of its elements H. Therefore, the optimal
allocation problem can be formulated as follows:
Find vector H* maximizing the SWS reliability R:
H^*(r,m,n,w) = \arg\max_H \{R(H, r, m, n, w)\}.   (6)
3. SWS reliability estimation based on a universal generating function
The procedure used in this paper for SWS reliability evaluation is based on the universal z-transform (also called u-function or universal generating function) technique, which was
introduced in [9] and which proved to be very effective for reliability evaluation of different
types of multi-state systems [10-14]. The u-function technique can straightforwardly handle
cases in which any number of multistate elements are allocated in the same position while some
positions remain empty.
3.1. u-functions for individual MEs and group of MEs located at the same position
The u-function of a discrete random variable X is defined as a polynomial
u(z) = \sum_{k=1}^{K} q_k z^{x_k},   (7)
where the variable X has K possible values and qk is the probability that X is equal to xk. The u-functions can also be used for representing distributions of more complex mathematical objects
[10]. For example the u-function of a discrete random vector X can take the form (7) where xk is the
k-th realization of X. In this case the exponents are not scalar variables but vectors, and the u-functions (named vector-u-functions) are no longer polynomials.
In our case, the u-function can define a ME performance rate distribution, i.e. it represents
all of the possible states of the ME j by relating the probabilities of each state pjk to the
performance rate vjk of the ME in the form:
\varphi_j(z) = \sum_{k=1}^{K_j} p_{j,k} z^{v_{j,k}}.   (8)
Consider the i-th group of MEs (the group of MEs located at position Ci). The random performance
of this group, Gi, is equal to the cumulative performance of all of its MEs. In order to obtain the u-function ui(z) representing the distribution of the random value Gi, one can use a composition operator ⊗
that determines ui(z) using simple algebraic operations on the individual u-functions of the MEs
belonging to the i-th group. The composition operator for a pair of MEs ej and ef takes the form:

\otimes(\varphi_j(z), \varphi_f(z)) = \otimes\left( \sum_{k=1}^{K_j} p_{j,k} z^{v_{j,k}}, \sum_{s=1}^{K_f} p_{f,s} z^{v_{f,s}} \right) = \sum_{k=1}^{K_j} \sum_{s=1}^{K_f} p_{j,k} p_{f,s} z^{v_{j,k} + v_{f,s}}.   (9)
The resulting polynomial relates probabilities of each of the possible combinations of states
of the two independent MEs (obtained by multiplying the probabilities of corresponding states of
each ME) with cumulative performance of the pair of MEs in these states. One can see that the
operator ⊗ satisfies the following condition:

\otimes\{u_1(z), \dots, u_t(z), u_{t+1}(z), \dots, u_m(z)\} = \otimes\{\otimes\{u_1(z), \dots, u_t(z)\}, \otimes\{u_{t+1}(z), \dots, u_m(z)\}\}   (10)
for arbitrary t. Therefore, it can be applied recursively to obtain the u-function for an arbitrary
group of MEs. The i-th group of MEs can be now considered as a single ME with random
performance Gi. This performance takes values from the set {gi,1,…,gi,Si}. The total number of
possible values of group performance Si is usually much less than the product of numbers of
states of the elements belonging to the group since different combination of ME states can
produce the same cumulative performance. The distribution of Gi is represented by u-function
ui(z):
u_i(z) = \otimes_{j \in E_i} (\varphi_j(z)) = \sum_{k=1}^{S_i} \pi_{i,k} z^{g_{i,k}}.   (11)
Note that the absence of any ME at position Ci implies that the performance of the i-th group is
equal to 0 with probability 1. In this case, the corresponding u-function takes the form

u_i(z) = z^0.   (12)

Note that for any φj(z)

\otimes(z^0, \varphi_j(z)) = \varphi_j(z).   (13)
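Under a dictionary representation of a u-function (performance value mapped to probability), the composition of Eq. (9) is simply a product of polynomials with collection of like terms. A minimal illustrative Python sketch (the function name and representation are my own, not the paper's):

```python
def compose(u1, u2):
    """Composition operator of Eq. (9): the product of two u-functions.
    A u-function is stored as a dict {performance: probability}; like terms
    with the same cumulative performance are collected automatically."""
    out = {}
    for v1, p1 in u1.items():
        for v2, p2 in u2.items():
            out[v1 + v2] = out.get(v1 + v2, 0.0) + p1 * p2
    return out

# two identical two-state MEs with p = 0.9: u(z) = 0.1 z^0 + 0.9 z^1
u1 = {0: 0.1, 1: 0.9}
u2 = {0: 0.1, 1: 0.9}
group = compose(u1, u2)   # 0.01 z^0 + 0.18 z^1 + 0.81 z^2
# an empty position (Eq. 12) is the neutral element of the composition (Eq. 13)
assert compose({0: 1.0}, u1) == u1
```

Because the operator is associative (Eq. 10), the same function can be applied recursively to build the u-function of a whole group.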
3.2. u-function for a set of MEs located at r adjacent positions
In order to represent the performance distribution of the i-th set consisting of r groups (the set of
groups numbered from i to i+r-1), one has to modify the u-function by replacing the random
value X with the random vector Gi={Gi,…,Gi+r-1} consisting of the random performance values
corresponding to all the groups belonging to the set (this replacement produces a vector-u-function).
Each combination of states of the groups belonging to the set constitutes a state of the set. The
total number of different states of the i-th set is equal to the number of possible combinations of
the states of the individual groups belonging to the set. Since all the groups are statistically
independent and each group j has Sj states, the total number of states of the set is

N_i = \prod_{j=i}^{i+r-1} S_j.

Therefore the vector-u-function Ui(z) corresponding to the i-th set of r groups consists of Ni different
terms.
Consider a state k of the i-th set that corresponds to state sj of each individual group j, i≤j≤i+r-1.
The performance values of the groups of the i-th set in state k are represented by the realization
g_{i,k} of the random vector Gi: g_{i,k} = \{g_{i,s_i}, \dots, g_{i+r-1,s_{i+r-1}}\}. The probability of any state of the set is
equal to the product of the probabilities of the corresponding states of the individual groups.
For example, the vector-u-function for a set of two groups i and i+1 (for r=2) takes the form:

U_i(z) = \sum_{s_i=1}^{S_i} \sum_{s_{i+1}=1}^{S_{i+1}} q_{s_i,s_{i+1}} z^{\{g_{i,s_i},\, g_{i+1,s_{i+1}}\}},   (14)

where q_{s_i,s_{i+1}} is the probability of the event in which group i is in state si and group i+1 is in state
si+1. It can be easily seen that for statistically independent groups q_{s_i,s_{i+1}} = \pi_{i,s_i} \pi_{i+1,s_{i+1}}.
Therefore, the vector-u-function of the set can be obtained by applying the following operator ⊗
over the u-functions of the individual groups:

U_i(z) = \otimes(u_i(z), u_{i+1}(z)) = \otimes\left( \sum_{s_i=1}^{S_i} \pi_{i,s_i} z^{g_{i,s_i}}, \sum_{s_{i+1}=1}^{S_{i+1}} \pi_{i+1,s_{i+1}} z^{g_{i+1,s_{i+1}}} \right) = \sum_{s_i=1}^{S_i} \sum_{s_{i+1}=1}^{S_{i+1}} \pi_{i,s_i} \pi_{i+1,s_{i+1}} z^{\{g_{i,s_i},\, g_{i+1,s_{i+1}}\}}.   (15)
Applying the operator ⊗ over the u-functions of r consecutive groups one obtains the vector-u-function corresponding to the set containing these groups:

U_i(z) = \otimes(u_i(z), u_{i+1}(z), \dots, u_{i+r-1}(z)) = \sum_{s_i=1}^{S_i} \sum_{s_{i+1}=1}^{S_{i+1}} \dots \sum_{s_{i+r-1}=1}^{S_{i+r-1}} \pi_{i,s_i} \pi_{i+1,s_{i+1}} \dots \pi_{i+r-1,s_{i+r-1}} z^{\{g_{i,s_i}, g_{i+1,s_{i+1}}, \dots, g_{i+r-1,s_{i+r-1}}\}}   (16)
Simplifying this representation one obtains:

U_i(z) = \sum_{k=1}^{N_i} Q_{i,k} z^{g_{i,k}},   (17)
where Qi,k is the probability that the i-th set is in state k and vector gi,k consists of values of
performance rates of groups at state k. The obtained vector-u-function defines all of the possible
states of the i-th set of r groups. Having the vectors gi,k representing performance rates of groups
belonging to the i-th set in any state k one can obtain the sum of performance rates of all the
MEs belonging to this set:
r 1
sum(gi,k)=  g i  j,si  j .
j0
11
(18)
By summing the probabilities of all of the states in which this sum is less than the
demand w, one can obtain the probability of failure of the i-th set of r consecutive groups. To do
so one can use the following operator δ:

\delta(U_i(z)) = \sum_{k=1}^{N_i} Q_{i,k} \, 1(\mathrm{sum}(g_{i,k}) < w).   (19)
3.3. u-functions for all the sets of r consecutive groups
Note that the SWS considered contains exactly n-r+1 sets of r consecutive groups and each
group can belong to no more than r such sets. To obtain the u-function corresponding to all the
sets of r consecutive groups the following procedure is introduced:
1. Define u-function U1-r(z) as follows:

U_{1-r}(z) = z^{g_0},   (20)

where the vector g0 consists of r zeros.
2. Define the following operator Ψ over vector-u-function Ui(z) and the u-function of the individual
group ui+r(z):

\Psi(U_i(z), u_{i+r}(z)) = \Psi\left( \sum_{k=1}^{N_i} Q_{i,k} z^{g_{i,k}}, \sum_{s=1}^{S_{i+r}} \pi_{i+r,s} z^{g_{i+r,s}} \right) = \sum_{k=1}^{N_i} \sum_{s=1}^{S_{i+r}} Q_{i,k} \pi_{i+r,s} z^{\sigma(g_{i,k},\, g_{i+r,s})},   (21)
where the operator σ over an arbitrary vector y and value x shifts all the vector elements one position
left: y(s-1)=y(s) for 1<s≤r, and assigns the value x to the last element of y: y(r)=x (the first
element of vector y disappears after applying the operator). The operator Ψ removes the
performance value of the first group of the set and adds the performance value of the next (not
yet considered) group to the set, preserving the order of the groups belonging to the set. Therefore,
applying the operator Ψ over the vector-u-function representing the performance distribution of the i-th
set of r groups (represented by the random vector Gi), one obtains the vector-u-function
representing the performance distribution of the (i+1)-th set (represented by the random vector
Gi+1).
3. Using the operator Ψ in sequence as follows:

U_{i+1-r}(z) = \Psi(U_{i-r}(z), u_i(z))   (22)

for i=1,…,n one obtains vector-u-functions for all of the possible sets of r consecutive groups:
U1(z), …, Un-r+1(z). Note that the vector-u-function for the first set U1(z) is obtained after
applying the operator Ψ r times.
Applying the operator δ (19) to the vector-u-functions U1(z), …, Un-r+1(z) one can obtain the
failure probability for each set of r consecutive groups. Note that if for some combination of
ME states the i-th set of r groups fails, the entire SWS fails independently of the states of the MEs
that do not belong to this set of groups. Therefore the terms corresponding to the failure of the i-th
set can be removed from the vector-u-function Ui(z), since they should not participate in
determining further state combinations that cause system faults. This consideration lies at the
base of the following algorithm.
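The shift-and-compose mechanics of Eqs. (21)-(22) fit in a few lines under a dictionary representation of the vector-u-function (a sketch; the names shift and window_step are mine):

```python
def shift(y, x):
    """Shift operator of Eq. (21): drop the first element of the window
    vector y, move the rest one position left and append the new value x."""
    return y[1:] + (x,)

def window_step(U, u_next):
    """Operator Psi of Eq. (22): slide the vector-u-function U from the i-th
    set of r groups to the (i+1)-th set by absorbing the next group u_next."""
    out = {}
    for g, Q in U.items():
        for v, p in u_next.items():
            key = shift(g, v)
            out[key] = out.get(key, 0.0) + Q * p
    return out
```

For example, shift((1, 1, 0), 1) gives (1, 0, 1): the first group leaves the window and the new group enters at the right.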
3.4. Algorithm for SWS reliability evaluation for a given allocation of MEs H
1. Assign ui(z)=z^0 for each i=1,…,n.
2. According to the given vector H, for each 1≤j≤m determine φj(z) using Eq. (8) and
modify uh(j)(z):

u_{h(j)}(z) = \otimes(u_{h(j)}(z), \varphi_j(z)).

3. Assign F=0 and U_{1-r}(z) = z^{g_0}.
4. Repeat the following for i=1,…,n:
4.1. Obtain U_{i+1-r}(z) = \Psi(U_{i-r}(z), u_i(z)).
4.2. If i≥r, add the value \delta(U_{i+1-r}(z)) to F and remove all the terms with sum(g_{i+1-r,k})<w
from U_{i+1-r}(z).
5. Obtain the SWS reliability as R=1-F. Alternatively, the system reliability can be obtained
as the sum of the coefficients of the last vector-u-function U_{n+1-r}(z).
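The algorithm of section 3.4 can be sketched end-to-end in Python (an illustrative rendering, not the author's C implementation; the dict-based u-function representation and the function name are my own choices):

```python
def sws_reliability(H, me_dists, n, r, w):
    """SWS reliability for allocation H, following steps 1-5 of section 3.4.
    H[j] is the (1-based) position of ME j; me_dists[j] is the dict
    {performance: probability} of ME j."""
    # step 1: every group starts empty, u_i(z) = z^0 (Eq. 12)
    u = [{0: 1.0} for _ in range(n)]
    # step 2: compose each ME into its group (operator of Eq. 9)
    for j, dist in enumerate(me_dists):
        group, new = u[H[j] - 1], {}
        for v1, p1 in group.items():
            for v2, p2 in dist.items():
                new[v1 + v2] = new.get(v1 + v2, 0.0) + p1 * p2
        u[H[j] - 1] = new
    # step 3: F = 0 and the all-zero starting window (Eq. 20)
    F = 0.0
    U = {(0,) * r: 1.0}
    # step 4: slide the window over positions 1..n (operator of Eq. 21)
    for i in range(n):
        new = {}
        for gvec, Q in U.items():
            for v, p in u[i].items():
                key = gvec[1:] + (v,)        # shift left, append next group
                new[key] = new.get(key, 0.0) + Q * p
        U = new
        if i >= r - 1:                       # a full window of r groups
            # step 4.2: accumulate failure mass (Eq. 19), truncate failed terms
            F += sum(Q for g, Q in U.items() if sum(g) < w)
            U = {g: Q for g, Q in U.items() if sum(g) >= w}
    # step 5: R = 1 - F
    return 1.0 - F

p, q = 0.9, 0.1
me = [{0: q, 1: p}] * 4                      # four identical two-state MEs
RA = sws_reliability([1, 2, 3, 4], me, n=4, r=3, w=2)   # even allocation
RB = sws_reliability([2, 2, 3, 3], me, n=4, r=3, w=2)   # uneven allocation
```

For p=0.9 this reproduces the figures of the introduction: RA=0.9558, RB=0.9963, i.e. RB-RA = 5q^2p^2 = 0.0405.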
3.5. Example
In order to illustrate the procedure, we will obtain the reliability of the SWS presented in the
introduction (Fig. 2). In this SWS n=m=4, r=3, w=2, Kj=2, Pr{Vj=1}=pj, Pr{Vj=0}=1-pj=qj for
any ME ej, 1≤j≤4.
The u-functions of the individual MEs are:

φ1(z)=q1z^0+p1z^1, φ2(z)=q2z^0+p2z^1,
φ3(z)=q3z^0+p3z^1, φ4(z)=q4z^0+p4z^1.
First, consider case A. The ME allocation in this case is represented by the vector
H={1,2,3,4}. The u-functions representing the distributions of the random values G1, G2, G3 and G4 for
the groups of MEs allocated at the same positions are:

u1(z)=⊗(z^0, q1z^0+p1z^1)=q1z^0+p1z^1, u2(z)=⊗(z^0, q2z^0+p2z^1)=q2z^0+p2z^1,
u3(z)=⊗(z^0, q3z^0+p3z^1)=q3z^0+p3z^1, u4(z)=⊗(z^0, q4z^0+p4z^1)=q4z^0+p4z^1.

Following step 3 of the algorithm we assign F=0, U-2(z)=z^{(0,0,0)}.
Following step 4 of the algorithm we obtain:

U-1(z)=Ψ[U-2(z), u1(z)]=Ψ[z^{(0,0,0)}, q1z^0+p1z^1]=q1z^{(0,0,0)}+p1z^{(0,0,1)},
U0(z)=Ψ[U-1(z), u2(z)]=Ψ[q1z^{(0,0,0)}+p1z^{(0,0,1)}, q2z^0+p2z^1]=
q1q2z^{(0,0,0)}+p1q2z^{(0,1,0)}+q1p2z^{(0,0,1)}+p1p2z^{(0,1,1)},
U1(z)=Ψ[U0(z), u3(z)]=Ψ[q1q2z^{(0,0,0)}+p1q2z^{(0,1,0)}+q1p2z^{(0,0,1)}+p1p2z^{(0,1,1)}, q3z^0+p3z^1]=
q1q2q3z^{(0,0,0)}+p1q2q3z^{(1,0,0)}+q1p2q3z^{(0,1,0)}+p1p2q3z^{(1,1,0)}+
q1q2p3z^{(0,0,1)}+p1q2p3z^{(1,0,1)}+q1p2p3z^{(0,1,1)}+p1p2p3z^{(1,1,1)}.

After removing the terms of U1(z) with sum(g1,k)<2, U1(z) takes the form:

U1(z)=p1p2q3z^{(1,1,0)}+p1q2p3z^{(1,0,1)}+q1p2p3z^{(0,1,1)}+p1p2p3z^{(1,1,1)},
U2(z)=Ψ[U1(z), u4(z)]=Ψ[p1p2q3z^{(1,1,0)}+p1q2p3z^{(1,0,1)}+q1p2p3z^{(0,1,1)}+p1p2p3z^{(1,1,1)}, q4z^0+p4z^1]=
p1p2q3q4z^{(1,0,0)}+p1q2p3q4z^{(0,1,0)}+q1p2p3q4z^{(1,1,0)}+p1p2p3q4z^{(1,1,0)}+
p1p2q3p4z^{(1,0,1)}+p1q2p3p4z^{(0,1,1)}+q1p2p3p4z^{(1,1,1)}+p1p2p3p4z^{(1,1,1)}.

After removing the terms of U2(z) with sum(g2,k)<2, the vector-u-function takes the form:

U2(z)=q1p2p3q4z^{(1,1,0)}+p1p2p3q4z^{(1,1,0)}+p1p2q3p4z^{(1,0,1)}+p1q2p3p4z^{(0,1,1)}+q1p2p3p4z^{(1,1,1)}+p1p2p3p4z^{(1,1,1)}.

The SWS reliability is equal to the sum of the coefficients of the vector-u-function U2(z):
RA=q1p2p3q4+p1p2p3q4+p1p2q3p4+p1q2p3p4+q1p2p3p4+p1p2p3p4.
The ME allocation in case B is represented by the vector H={2,2,3,3}. The u-functions
representing the distributions of the random values G1, G2, G3 and G4 for the groups of MEs allocated at
the same positions are, after simplification:

u1(z)=z^0,
u2(z)=⊗(φ1(z), φ2(z))=⊗(q1z^0+p1z^1, q2z^0+p2z^1)=q1q2z^0+(p1q2+q1p2)z^1+p1p2z^2,
u3(z)=⊗(φ3(z), φ4(z))=⊗(q3z^0+p3z^1, q4z^0+p4z^1)=q3q4z^0+(p3q4+q3p4)z^1+p3p4z^2,
u4(z)=z^0.

Following step 3 of the algorithm we assign F=0, U-2(z)=z^{(0,0,0)}.
Following step 4 of the algorithm we obtain:

U-1(z)=Ψ[U-2(z), u1(z)]=Ψ[z^{(0,0,0)}, z^0]=z^{(0,0,0)},
U0(z)=Ψ[U-1(z), u2(z)]=Ψ[z^{(0,0,0)}, q1q2z^0+(p1q2+q1p2)z^1+p1p2z^2]=
q1q2z^{(0,0,0)}+(p1q2+q1p2)z^{(0,0,1)}+p1p2z^{(0,0,2)},
U1(z)=Ψ[U0(z), u3(z)]=Ψ[q1q2z^{(0,0,0)}+(p1q2+q1p2)z^{(0,0,1)}+p1p2z^{(0,0,2)}, q3q4z^0+(p3q4+q3p4)z^1+p3p4z^2]=
q1q2q3q4z^{(0,0,0)}+(p1q2+q1p2)q3q4z^{(0,1,0)}+p1p2q3q4z^{(0,2,0)}+
q1q2(p3q4+q3p4)z^{(0,0,1)}+(p1q2+q1p2)(p3q4+q3p4)z^{(0,1,1)}+p1p2(p3q4+q3p4)z^{(0,2,1)}+
q1q2p3p4z^{(0,0,2)}+(p1q2+q1p2)p3p4z^{(0,1,2)}+p1p2p3p4z^{(0,2,2)}.

After removing the terms of U1(z) with sum(g1,k)<2, U1(z) takes the form:

U1(z)=p1p2q3q4z^{(0,2,0)}+(p1q2+q1p2)(p3q4+q3p4)z^{(0,1,1)}+p1p2(p3q4+q3p4)z^{(0,2,1)}+
q1q2p3p4z^{(0,0,2)}+(p1q2+q1p2)p3p4z^{(0,1,2)}+p1p2p3p4z^{(0,2,2)},
U2(z)=Ψ[U1(z), u4(z)]=Ψ[p1p2q3q4z^{(0,2,0)}+(p1q2+q1p2)(p3q4+q3p4)z^{(0,1,1)}+p1p2(p3q4+q3p4)z^{(0,2,1)}+
q1q2p3p4z^{(0,0,2)}+(p1q2+q1p2)p3p4z^{(0,1,2)}+p1p2p3p4z^{(0,2,2)}, z^0]=
p1p2q3q4z^{(2,0,0)}+(p1q2+q1p2)(p3q4+q3p4)z^{(1,1,0)}+p1p2(p3q4+q3p4)z^{(2,1,0)}+
q1q2p3p4z^{(0,2,0)}+(p1q2+q1p2)p3p4z^{(1,2,0)}+p1p2p3p4z^{(2,2,0)}.

U2(z) does not contain terms with sum(g2,k)<2, therefore no terms are removed from this vector-u-function.
The SWS reliability is equal to the sum of the coefficients of the vector-u-function U2(z):

RB=p1p2q3q4+(p1q2+q1p2)(p3q4+q3p4)+p1p2(p3q4+q3p4)+
q1q2p3p4+(p1q2+q1p2)p3p4+p1p2p3p4=
p1p2q3q4+p1q2p3q4+p1q2q3p4+q1p2p3q4+q1p2q3p4+p1p2p3q4+p1p2q3p4+
q1q2p3p4+p1q2p3p4+q1p2p3p4+p1p2p3p4.
4. Optimization technique
Finding the optimal ME allocation in a SWS is a complicated combinatorial optimization
problem having n^m possible solutions. An exhaustive examination of all these solutions is not
realistic even for a moderate number of positions and elements, considering reasonable time
limitations. As in most combinatorial optimization problems, the quality of a given solution is
the only information available during the search for the optimal solution. Therefore, a heuristic
search algorithm is needed which uses only estimates of solution quality and which does not
require derivative information to determine the next direction of the search.
The recently developed family of genetic algorithms is based on the simple principle of
evolutionary search in solution space. GAs have been proven to be effective optimization tools
for a large number of applications. Successful applications of GAs in reliability engineering are
reported in [10,15-21].
It is recognized that GAs have the theoretical property of global convergence [22]. Although
convergence reliability and convergence speed are conflicting objectives, for most
practical, moderately sized combinatorial problems the proper choice of GA parameters allows
solutions close enough to the optimal one to be obtained in a short time.
4.1. Genetic Algorithm
Basic notions of GAs are originally inspired by biological genetics. GAs operate with
"chromosomal" representation of solutions, where crossover, mutation and selection procedures
are applied. "Chromosomal" representation requires the solution to be coded as a finite length
string. Unlike various constructive optimization algorithms that use sophisticated methods to
obtain a good singular solution, the GA deals with a set of solutions (population) and tends to
manipulate each solution in the simplest manner.
A brief introduction to genetic algorithms is presented in [23]. More detailed information on
GAs can be found in Goldberg’s comprehensive book [24], and recent developments in GA
theory and practice can be found in books [21, 22]. The steady state version of the GA used in
this paper was developed by Whitley [25]. As reported in [26] this version, named GENITOR,
outperforms the basic “generational” GA. The structure of steady state GA is as follows:
1. Generate an initial population of Ns randomly constructed solutions (strings) and evaluate
their fitness. (Unlike the “generational” GA, the steady state GA performs the evolution search
within the same population improving its average fitness by replacing worst solutions with better
ones).
2. Select two solutions randomly and produce a new solution (offspring) using a crossover
procedure that provides inheritance of some basic properties of the parent strings in the offspring.
The probability of selecting the solution as a parent is proportional to the rank of this solution.
(All the solutions in the population are ranked by increasing order of their fitness).
3. Allow the offspring to mutate with given probability Pm. Mutation results in slight changes
in the offspring structure and maintains diversity of solutions. This procedure avoids premature
convergence to a local optimum and facilitates jumps in the solution space. The positive changes
in the solution code created by the mutation can be later propagated throughout the population via
crossovers.
4. Decode the offspring to obtain the objective function (fitness) values. These values are a
measure of quality, which is used in comparing different solutions.
5. Apply a selection procedure that compares the new offspring with the worst solution in the
population and selects the one that is better. The better solution joins the population and the
worse one is discarded. If the population contains equivalent solutions following the selection
process, redundancies are eliminated and, as a result, the population size decreases. Note that
each time the new solution has sufficient fitness to enter the population, it alters the pool of
prospective parent solutions and increases the average fitness of the current population. The
average fitness increases monotonically (or, in the worst case, does not vary) during each genetic
cycle (steps 2-5).
6. Generate new randomly constructed solutions to replenish the population after repeating
steps 2-5 Nrep times (or until the population contains a single solution or solutions with equal
quality). Run the new genetic cycle (return to step 2). In the beginning of a new genetic cycle,
the average fitness can decrease drastically due to inclusion of poor random solutions into the
population. These new solutions are necessary to bring into the population new "genetic
material" which widens the search space and, like a mutation operator, prevents premature
convergence to the local optimum.
7. Terminate the GA after Nc genetic cycles.
The final population contains the best solution achieved. It also contains different near-optimal solutions, which may be of interest in the decision-making process.
4.2. Solution representation and basic GA procedures
To apply the genetic algorithm to a specific problem, one must define a solution
representation and decoding procedure, as well as specific crossover and mutation procedures.
As shown in section 2, any arbitrary m-length vector H with elements h(j) belonging
to the range [1,n] represents a feasible allocation of MEs. Such vectors can represent each one of
the n^m possible different solutions. The fitness of each solution is equal to the reliability of the
SWS with the allocation represented by the corresponding vector H. To estimate the SWS reliability for
the arbitrary vector H, one should apply the procedure presented in section 3.
The random solution generation procedure provides solution feasibility by generating
vectors of random integer numbers within the range [1,n]. It can be seen that the following
crossover and mutation procedures also preserve solution feasibility.
The crossover operator for given parent vectors P1, P2 and the offspring vector O is defined
as follows: first P1 is copied to O, then all numbers of elements belonging to the fragment
between positions a and b of the vector P2 (where a and b are random values, 1≤a<b≤m) are
copied to the corresponding positions of O. The following example illustrates the crossover
procedure for m=6, n=4:
P1=2 4 1 4 2 3
P2=1 1 2 3 4 2
O=2 4 2 3 4 3
The mutation operator moves a randomly chosen ME to the adjacent position (if such a
position exists) by modifying a randomly chosen element h(i) of H using rule h(i)=max{h(i)-1,1}
or rule h(i)=min{h(i)+1,n} with equal probability. The vector O in our example can take the
following form after applying the mutation operator:
O=2 3 2 3 4 3.
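The crossover and mutation procedures above can be sketched directly (an illustrative Python rendering; the fragment bounds a=3, b=5 are inferred from the offspring shown in the text and are otherwise chosen at random by the GA):

```python
import random

def crossover(P1, P2, a, b):
    """Crossover of section 4.2: copy P1 into the offspring, then overwrite
    positions a..b (1-based, inclusive) with the same fragment of P2."""
    O = list(P1)
    O[a - 1:b] = P2[a - 1:b]
    return O

def mutate(O, n):
    """Mutation: move a randomly chosen ME to an adjacent position while
    staying inside the range [1, n]."""
    i = random.randrange(len(O))
    O[i] = max(O[i] - 1, 1) if random.random() < 0.5 else min(O[i] + 1, n)
    return O

# the example from the text (m = 6, n = 4)
P1 = [2, 4, 1, 4, 2, 3]
P2 = [1, 1, 2, 3, 4, 2]
O = crossover(P1, P2, 3, 5)   # -> [2, 4, 2, 3, 4, 3]
```

Both procedures produce vectors with entries in [1,n], so solution feasibility is preserved, as noted in the text.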
5. Illustrative examples
5.1. SWS with identical MEs
Consider a SWS with n=10 positions in which m=10 two-state identical MEs are to be
allocated. The performance distribution of each ME is Pr(Vj=1)=0.9, Pr(Vj=0)=0.1 for 1≤j≤10.
Table 1 presents allocation solutions obtained for different r and w (number of identical
elements in each position). The reliability of the SWS corresponding to the obtained allocations
is compared with its reliability corresponding to the case when the MEs are evenly distributed
among the positions. One can see that the reliability improvement achieved by the free allocation
increases with the increase of r and w. On the contrary, the number of occupied positions in the
best obtained solutions decreases when r and w grow.
Fig. 3 presents the SWS reliability as a function of demand w for r=2, r=3 and r=4 for even
ME allocation and for unconstrained allocation obtained by the GA.
5.2. SWS with different MEs
Consider the ME allocation problem presented in [8], in which n=m=10, r=3 and w=1. The
performance distributions of MEs are presented in Table 2. The best ME allocation solutions
obtained by the GA are presented in Table 3 (list of elements located at each position).
The best even allocation solution obtained in [8] improves considerably when the even
allocation constraint is removed. One can see that the best unconstrained allocation solution
obtained by the GA, in which only 4 out of 10 positions are occupied by the MEs, provides a 42%
reliability increase over the even allocation. The system reliability as a function of demand for the
obtained even and unconstrained allocations is presented in Fig. 4.
Table 3 presents also the best allocations of the first m MEs from Table 2 (for m=9, m=8 and
m=7). Observe that free allocation of 9 MEs in the SWS still provides greater reliability than
does even allocation of 10 MEs.
5.3. Computational Effort and Algorithm Consistency
The C language realization of the algorithm was tested on a Pentium II PC. The chosen GA
parameters were NS=100, Nrep=2000, Nc=10 and Pm=1. The time taken to obtain the
best-in-population solution (the time of the last modification of the best obtained solution) did not
exceed 20 seconds for the optimization problems presented in the previous sections when r=3.
The parameter r has the greatest influence on the computational time, since with the growth of r
the number of possible states for each set of r groups increases dramatically. For instance,
solving the second optimization problem for r=4 takes 120 seconds. Note also that an increase of
the demand w reduces the computational time, owing to the more intensive truncation
of the u-functions in step 4.2 of the SWS reliability evaluation algorithm.
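The paper's step 4.2 operates on u-functions of groups of positions; the effect of truncation can be illustrated with a simplified scalar sketch (the dictionary representation and the FAIL marker below are our assumptions, not the paper's notation). Terms whose partial window performance can no longer reach the demand w are equivalent, so they are merged; the larger w is, the more terms collapse:

```python
def truncate(u, w, max_rest):
    """Merge u-function terms that already guarantee window failure.

    u is a dict {partial performance: probability}; max_rest is the
    largest performance the remaining window positions can add.  If
    perf + max_rest < w, the window must fail regardless of the exact
    value of perf, so all such terms share one state."""
    FAIL = -1  # hypothetical marker for 'window already lost'
    out = {}
    for perf, prob in u.items():
        key = perf if perf + max_rest >= w else FAIL
        out[key] = out.get(key, 0.0) + prob
    return out

# With demand w=4 and at most 2 units still to come, partial sums 0
# and 1 are merged: a higher demand collapses more states.
print(truncate({0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}, w=4, max_rest=2))
# {-1: 0.5, 2: 0.25, 3: 0.25}
```

Fewer distinct terms per u-function means fewer products at each composition step, which is why a higher demand w shortens the computation.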
To demonstrate the consistency of the suggested algorithm, the GA was repeated 25 times
with different starting solutions (initial populations) for the second problem. The coefficient of
variation (CV) was calculated over the fitness values of the best-in-population solutions obtained
during the genetic search by the different GA runs. The variation of this index during the GA
procedure is presented in Fig. 5. One can see that the standard deviation of the final solution
fitness does not exceed 0.6% of its average value.
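The CV itself is elementary to compute; a minimal sketch (the 25 fitness samples themselves are not reproduced here):

```python
import statistics

def coefficient_of_variation(values):
    """CV in percent: population standard deviation over the mean."""
    return 100.0 * statistics.pstdev(values) / statistics.mean(values)

# Applied to the final best-in-population fitness values of the 25
# runs, this is the index plotted in Fig. 5; it tends toward zero as
# the independent searches converge to near-identical solutions.
```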
References
[1] G. Levitin, Linear multi-state sliding window systems, IEEE Trans. Reliability, 52, 2003, pp.263-269.
[2] W. Griffith, On consecutive k-out-of-n failure systems and their generalizations, in A. P. Basu (ed.), Reliability
and Quality Control, Elsevier (North-Holland), 1986, pp. 157-165.
[3] Y. Tong, A rearrangement inequality for the longest run with an application in network reliability, J. Applied
Probability, vol. 22, 1985, pp. 386-393.
[4] B. Saperstein, The generalized birthday problem, J. Amer. Statistical Assoc., vol. 67, 1972, pp. 425-428.
[5] B. Saperstein, On the occurrence of n successes within N Bernoulli trials, Technometrics, vol. 15, 1973, pp. 809-818.
[6] J. Naus, Probabilities for a generalized birthday problem, J. Amer. Statistical Assoc., vol. 69, 1974, pp. 810-815.
[7] J. Nelson, Minimal-order models for false-alarm calculations on sliding windows, IEEE Trans. Aerospace
Electronic Systems, vol. AES-14, 1978, pp. 351-363.
[8] G. Levitin, "Optimal allocation of elements in linear multi-state sliding window system", Reliability Engineering
and System Safety, 76, 2002, pp. 245-254.
[9] I. A. Ushakov, Universal generating function, Sov. J. Computing System Science, vol. 24, No. 5, 1986, pp. 118-129.
[10] A. Lisnianski, G. Levitin, Multi-state system reliability. Assessment, Optimization and Applications, World
Scientific, 2003.
[11] G. Levitin, "Reliability evaluation for linear consecutively-connected systems with multistate elements and
retransmission delays", Quality and Reliability Engineering International, vol. 17, 2001.
[12] G. Levitin, "Evaluating correct classification probability for weighted voting classifiers with plurality voting",
European journal of operational research, vol. 141, pp. 596-607, 2002.
[13] G. Levitin, "Reliability evaluation for acyclic consecutively-connected networks with multistate elements",
Reliability Engineering & System Safety, vol. 73, pp. 137-143, 2001.
[14] G. Levitin, "Incorporating Common-Cause Failures into Non-Repairable Multi-State Series-Parallel System
Analysis", IEEE Transactions on Reliability, vol. 50, pp. 380-388, 2001.
[15] L. Painton and J. Campbell, "Genetic algorithm in optimization of system reliability", IEEE Trans. Reliability,
44, 1995, pp. 172-178.
[16] D. Coit and A. Smith, "Reliability optimization of series-parallel systems using genetic algorithm", IEEE
Trans. Reliability, 45, 1996, pp. 254-266.
[17] D. Coit and A. Smith, "Redundancy allocation to maximize a lower percentile of the system time-to-failure
distribution", IEEE Trans. Reliability, 47, 1998, pp. 79-87.
[18] Y. Hsieh, T. Chen, D. Bricker, "Genetic algorithms for reliability design problems", Microelectronics and
Reliability, 38, 1998, pp. 1599-1605.
[19] J. Yang, M. Hwang, T. Sung, Y. Jin, "Application of genetic algorithm for reliability allocation in nuclear
power plant", Reliability Engineering & System Safety, 65, 1999, pp. 229-238.
[20] M. Gen and J. Kim, "GA-based reliability design: state-of-the-art survey", Computers & Ind. Engng, 37, 1999,
pp. 151-155.
[21] M. Gen and R. Cheng, Genetic Algorithms and engineering design, John Wiley & Sons, New York, 1997.
[22] T. Back, Evolutionary Algorithms in Theory and Practice. Evolution Strategies. Evolutionary Programming.
Genetic Algorithms, Oxford University Press, 1996.
[23] S. Austin, "An introduction to genetic algorithms", AI Expert, 5, 1990, pp. 49-53.
[24] D. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison Wesley, Reading,
MA, 1989.
[25] D. Whitley, The GENITOR algorithm and selective pressure: why rank-based allocation of reproductive trials
is best, Proc. 3rd International Conf. on Genetic Algorithms, D. Schaffer (ed.), Morgan Kaufmann, 1989, pp. 116-121.
[26] G. Syswerda, "A study of reproduction in generational and steady-state genetic algorithms", in G.J.E. Rawlings
(ed.), Foundations of Genetic Algorithms, Morgan Kaufmann, San Mateo, CA, 1991.
Table 1. Solutions of the first allocation problem.

                                          r=2, w=1        r=3, w=2   r=4, w=3
MEs per occupied position (free)          2, 1, 3, 2, 2   5, 3, 2    5, 2, 3
Reliability, free allocation              0.951           0.941      0.983
Reliability, even allocation              0.920           0.866      0.828
Improvement                               3.4%            8.7%       18.7%
Table 2. MEs' performance distributions for the second allocation problem.

ME j   Performance distribution, pairs (p, V)
1      (0.03, 0.0), (0.22, 0.2), (0.75, 0.5)
2      (0.10, 0.0), (0.10, 0.1), (0.40, 0.2), (0.40, 0.4)
3      (0.17, 0.0), (0.83, 0.6)
4      (0.05, 0.0), (0.25, 0.3), (0.40, 0.5), (0.30, 0.6)
5      (0.08, 0), (0.20, 1), (0.15, 2), (0.45, 4), (0.12, 5)
6      (0.01, 0.0), (0.22, 0.4), (0.77, 0.5)
7      (0.20, 0.0), (0.10, 0.3), (0.10, 0.4), (0.60, 0.5)
8      (0.05, 0.0), (0.25, 0.4), (0.70, 0.6)
9      (0.20, 0.0), (0.10, 0.3), (0.15, 0.4), (0.55, 0.5)
10     (0.05, 0), (0.25, 2), (0.70, 6)
Table 3. Solutions of the second allocation problem (n=10, r=3, w=1).

m=10:
Position          C1  C2  C3        C4  C5    C6    C7  C8       C9  C10   Reliability
Even allocation   5   9   3         1   4     7     10  8        6   2     0.536
Free allocation   -   -   6, 7, 10  -   2, 5  1, 4  -   3, 8, 9  -   -     0.765

Best free allocations of the first m MEs from Table 2 (groups of MEs sharing a position):
m=9: {2, 5, 8, 9}, {3, 6}, {1}, {7}, {4}; reliability 0.653
m=8: {1, 4}, {5, 7, 8}, {3}, {6}, {2}; reliability 0.509
m=7: {7}, {3}, {6}, {1, 2}, {4, 5}; reliability 0.213
Figure 1: Example of SWS with r=3.
[Fig. 2: MEs e1-e4 over positions C1-C4; in panel A each position holds one ME, in panel B the pairs e1,e2 and e3,e4 share single positions.]
Figure 2: Two possible allocations of MEs in SWS with n=m=4.
[Fig. 3: reliability R versus demand w (0 to 5); curves for even and unconstrained allocations with r=2, r=3 and r=4.]
Figure 3: Reliability of SWS with identical MEs for different r and ME allocations.
[Fig. 4: reliability R versus demand w (0 to 1.6); curves for the even and unconstrained allocations.]
Figure 4: Reliability of SWS with different MEs for optimal even and uneven ME allocations.
[Fig. 5: coefficient of variation (%) versus number of crossovers (thousands, 0 to 10).]
Figure 5: CV of best-in-population solution fitness obtained by 25 different search processes as a
function of the number of crossovers.