Optimal allocation of elements in linear multi-state
sliding window system
Gregory Levitin
The Israel Electric Corporation Ltd., Haifa, Israel
E-mail: levitin@iec.co.il
Abstract
This paper proposes a new model that generalizes the consecutive k-out-of-r-from-n:F
system to the multi-state case. In this model (named linear multi-state sliding window system) the
system consists of n linearly ordered multi-state elements. Each element can have different
states: from complete failure up to perfect functioning. A performance rate is associated with
each state. The system fails if the sum of the performance rates of any r consecutive elements
is lower than a demand W.
An algorithm is suggested that finds the order of elements with different characteristics
within a linear multi-state sliding window system that provides the greatest possible system
reliability. The algorithm is based on a universal generating function technique for
system reliability evaluation. A genetic algorithm is used as the optimization tool. Illustrative
examples are presented.
Keywords: sliding window system, consecutive k-out-of-r-from-n:F system, multi-state
element, universal moment generating function, genetic algorithm.
Nomenclature
n    number of MEs in SWS
r    number of consecutive MEs in SWS sliding window
W    minimal allowable cumulative performance of a group of r consecutive MEs
F    SWS failure probability
R    SWS reliability
H_j    number of different states of ME j
g_{jh}    performance rate of ME j in state h
p_{jh}    probability of state h of ME j
Q_i    probability of state i of a group of r consecutive MEs
G_i    vector representing performance rates of a group of r consecutive MEs in state i
u_j(z)    u-function representing the PRD of the j-th ME
U_m(z)    vector-u-function representing the PRD of the m-th group of r consecutive MEs
φ(G)    operator producing 1 if the sum of the elements of vector G is less than W and 0 otherwise
ω, ζ    composition operators over u-functions
δ    operator producing the failure probability of a group from its vector-u-function
σ    shift operator over vector G
Abbreviations
SWS    linear multi-state sliding window system
ME    multi-state element
PRD    performance rate distribution
UMGF, u-function    universal moment generating function
1. Introduction
The linear consecutive k-out-of-r-from-n:F system has n ordered elements and fails if at
least k out of r consecutive elements fail. This system was formally introduced by Griffith [1],
but had been mentioned previously by Tong [2], Saperstein [3,4], Naus [5] and Nelson [6] in
connection with tests for non-random clustering, quality control and inspection procedures,
service systems, and radar problems. Different algorithms for evaluating the system reliability
were suggested in [7-10].
When r=n, one has the well-studied simple k-out-of-n:F system. When k=r, one has the
consecutive k-out-of-n:F system, which was introduced by Chiang and Niu [11], and
Bollinger [12,13]. The simple k-out-of-n:F system was generalized to the multi-state case by Wu
& Chen in [14], where system elements have two states but can have different integer values
of nominal performance rate. In [15] a general model is developed in which elements can
have an arbitrary number of real-valued performance levels. The multi-state generalization of
the consecutive k-out-of-n:F system was first suggested by Hwang & Yao [16] as a
generalization of linear consecutive-k-out-of-n:F system and linear consecutively-connected
system with 2-state elements, studied by Shanthikumar [17,18].
This paper considers a new model that generalizes the consecutive k-out-of-r-from-n:F
system to the multi-state case. In this model (named linear multi-state sliding window system)
the system consists of n linearly ordered multi-state elements (MEs). Each ME j can have Hj
different states: from complete failure up to perfect functioning. A performance rate is
associated with each state. The SWS fails if the sum of the performance rates of any r
consecutive MEs is lower than the demand W.
Note that the special case of SWS in which all the n MEs are identical and have two
states with performance rates 0 and g respectively is a k-out-of-r-from-n:F system where W=(r-k+1)g.
The introduction of the SWS model is motivated by the following examples.
Service system.
Consider a conveyor-type service system that processes r incoming tasks
simultaneously according to a first-in-first-out rule while sharing a common limited resource. Each
incoming task can be in different states, and the amount of the resource needed to process a
task differs for each state of each task. The total resource needed to process r consecutive
tasks should not exceed the available amount of the resource. If the available resource is
insufficient to process r tasks simultaneously, the system fails.
Manufacturing.
Consider a heating system that should provide a certain temperature along a line with
moving parts (Fig. 1). The temperature at each point of the line is determined by the cumulative
effect of the r closest heaters. Each heater consists of several electrical heating elements. The
heating effect of each heater depends on the availability of its heating elements and therefore
can vary discretely (if the heaters are different, the number of different levels of heat radiation
and the intensity of the radiation at each level are specific to each heater). In order to maintain
a temperature not less than some specified value at each point of the line, any r
adjacent heaters should be in states in which the sum of their radiation intensities is not less than
an allowed minimum W.
It can easily be seen that the order of task arrival in the service system (first example) or
the allocation of heaters along the line (second example) can strongly affect the entire system
reliability. Having a set of MEs, one can achieve considerable system reliability improvement
by choosing a proper element ordering. In [7] Papastavridis and Sfakianakis first considered
the optimal element allocation problem for consecutive k-out-of-r-from-n:F systems in which
different elements can have different reliability. In this paper, the optimal element allocation
problem is considered for the more general SWS model.
Section 2 of the paper presents the formal model description and the formulation of the
optimization problem. Section 3 describes the technique used for evaluating the reliability of
SWS with a given ME allocation. In the fourth section, the optimization technique is described.
Illustrative examples are presented in the fifth section.
2. Problem formulation
Assumptions
1. All n MEs of the SWS are mutually independent.
2. Each ME j can be in one of H_j different states. Each state h ∈ {1, 2, …, H_j} of ME j is characterized by its probability p_{jh} and performance rate g_{jh}, where Σ_{h=1}^{H_j} p_{jh} = 1.
3. Each one of the n MEs can be allocated at any one of the n linearly ordered positions. Each position must contain exactly one ME.
4. The SWS fails if the sum of the performance rates of the MEs located at any r consecutive positions is less than W.
In order to represent the ME allocation in the SWS one can use an allocation function
(vector) C={c(1),…,c(n)}, in which c(j) is equal to the number of the ME allocated at position j. One
can see that the total number of different allocation solutions (the number of different vectors C)
is equal to n! (the number of possible permutations of a string of n different numbers). For a
set of MEs with given performance rate distributions, the only factor affecting the entire
SWS reliability (for fixed r and W) is the ME allocation. Therefore, one can define the
following optimization problem.
Find the vector C that maximizes the system reliability R:

C(r, W) = arg{R(C, r, W) → max}.   (1)
Having an algorithm for evaluating the SWS reliability as a function of C, one can apply an
optimization procedure for solving problem (1).
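To make formulation (1) concrete, the following is a minimal brute-force sketch for a toy system. It only illustrates the objective: the paper's actual optimization tool is the GA of Section 4, and the reliability evaluation of Section 3 is far more efficient than the direct state enumeration used here. All data and names are illustrative assumptions.

```python
from itertools import permutations, product

# Toy example (illustrative data, not from the paper): four two-state MEs,
# each given as a list of (probability, performance rate) pairs.
mes = [
    [(0.1, 0), (0.9, 1)],
    [(0.2, 0), (0.8, 2)],
    [(0.3, 0), (0.7, 3)],
    [(0.1, 0), (0.9, 1)],
]

def reliability_by_enumeration(order, mes, r, W):
    """Exact SWS reliability for a given ME order, by enumerating all
    state combinations (exponential cost; for illustration only)."""
    elems = [mes[j] for j in order]
    rel = 0.0
    for states in product(*elems):
        prob = 1.0
        for p, _ in states:
            prob *= p
        perf = [g for _, g in states]
        # The SWS works iff every window of r consecutive MEs meets demand W.
        if all(sum(perf[i:i + r]) >= W for i in range(len(perf) - r + 1)):
            rel += prob
    return rel

# Problem (1): search over all n! allocations.
r, W = 2, 3
best = max(permutations(range(len(mes))),
           key=lambda order: reliability_by_enumeration(order, mes, r, W))
print(best, reliability_by_enumeration(best, mes, r, W))
```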
3. SWS reliability evaluation algorithm
The procedure used in this paper for SWS reliability evaluation is based on the
universal z-transform (also called u-function or universal moment generating function)
technique, which was introduced in [19] and has proved to be very effective for reliability
evaluation of different types of multi-state systems [15,20-23]. The u-function extends the
widely known ordinary moment generating function (OMGF). The essential difference
between the ordinary and universal generating functions is that the latter allows one to
evaluate probabilistic distributions of overall performance for a wide range of systems
characterized by different topology, different nature of interaction among system elements,
and different physical nature of the elements' performance measures. This can be done by
introducing different composition operators over UMGF (the only composition operator used
with OMGF is the product of polynomials).
3.1. Determination of u-functions for individual MEs and their groups
The u-function of a discrete random variable X is defined as a polynomial

u(z) = Σ_{k=1}^{K} q_k z^{X_k},   (2)

where the variable X has K possible values and q_k is the probability that X is equal to X_k.
In our case, the u-function can define a ME performance rate distribution, i.e. it represents all
the possible states of the ME j by relating the probabilities of each state p_{jh} to the performance
rate g_{jh} of the ME in the form:

u_j(z) = Σ_{h=1}^{H_j} p_{jh} z^{g_{jh}}.   (3)
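In an implementation, a u-function of the form (3) is conveniently stored as a mapping from performance rates to probabilities. The dictionary encoding below, used throughout the sketches in this paper's margin, is one possible choice (an assumption of these sketches, not something prescribed by the paper):

```python
# u-function of a single ME, Eq. (3): performance rate -> probability.
# Example: a three-state ME with p = (0.03, 0.22, 0.75), g = (0, 2, 5)
# (the parameters of ME 1 in Table 1).
u1 = {0: 0.03, 2: 0.22, 5: 0.75}

# By assumption 2 the coefficients of a valid u-function sum to one.
assert abs(sum(u1.values()) - 1.0) < 1e-12
```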
In order to represent the PRD of a group consisting of r MEs one has to modify the UMGF
by replacing the random value X with the random vector G={G(1),…,G(r)} consisting of
performance values corresponding to all the MEs belonging to the group. Vector element G(j)
is equal to the performance rate of the j-th one out of r consecutive MEs. For example, the UMGF
(vector-u-function) for a group of two MEs e and f (r=2) takes the form:

U(z) = Σ_{h_e=1}^{H_e} Σ_{h_f=1}^{H_f} q_{h_e h_f} z^{(g_{e h_e}, g_{f h_f})},   (4)

where q_{h_e h_f} is the probability of the event in which ME e is in state h_e and ME f is in state h_f. It
can easily be seen that for s-independent MEs q_{h_e h_f} = p_{e h_e} p_{f h_f}. Therefore, the UMGF of
the group can be obtained by applying the following operator ω over the u-functions of individual
MEs:

U(z) = ω(u_e(z), u_f(z)) = ω(Σ_{h_e=1}^{H_e} p_{e h_e} z^{g_{e h_e}}, Σ_{h_f=1}^{H_f} p_{f h_f} z^{g_{f h_f}}) = Σ_{h_e=1}^{H_e} Σ_{h_f=1}^{H_f} p_{e h_e} p_{f h_f} z^{(g_{e h_e}, g_{f h_f})}.   (5)
Applying the operator ω over the u-functions of r consecutive MEs one obtains the u-function corresponding to the group containing these MEs:

U(z) = ω(u_1(z), u_2(z), ..., u_r(z)) = Σ_{h_1=1}^{H_1} Σ_{h_2=1}^{H_2} ... Σ_{h_r=1}^{H_r} p_{1h_1} p_{2h_2} ... p_{rh_r} z^{(g_{1h_1}, g_{2h_2}, ..., g_{rh_r})}.   (6)
Simplifying this representation one obtains:

U(z) = Σ_{i=1}^{I} Q_i z^{G_i},   (7)

where I = Π_{i=1}^{r} H_i is the total number of different states of the group, Q_i is the
probability of the group state i, and the vector G_i consists of the values of the MEs' performance rates in
state i. Observe that the obtained u-function defines all the possible states of the group of r
MEs. By summing the probabilities of all the states in which the total performance of the
group is less than W, one can obtain the probability of failure of the group of r consecutive
MEs. To do so one can use the following operator δ:
δ(U(z), W) = Σ_{i=1}^{I} Q_i φ(G_i),   (8)

where φ(G_i) = 1 if Σ_{s=1}^{r} G_i(s) < W, and φ(G_i) = 0 otherwise.
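Under the same dictionary encoding, the operators ω of (6) and δ of (8) can be sketched as follows (function names are illustrative):

```python
from itertools import product

def omega(*u_funcs):
    """Operator of Eq. (6): vector-u-function of a group of r MEs.
    Each argument is a dict {performance rate: probability}."""
    U = {}
    for combo in product(*(u.items() for u in u_funcs)):
        G = tuple(g for g, _ in combo)      # performance vector G_i
        Q = 1.0
        for _, p in combo:                  # product of state probabilities
            Q *= p
        U[G] = U.get(G, 0.0) + Q
    return U

def delta(U, W):
    """Operator of Eq. (8): probability that the group performance
    is below the demand W."""
    return sum(Q for G, Q in U.items() if sum(G) < W)

# Two two-state MEs (r = 2) with performance rates 0/1 and 0/2:
U = omega({0: 0.1, 1: 0.9}, {0: 0.2, 2: 0.8})
print(delta(U, 3))   # 0.28: only the state (1, 2) meets the demand W = 3
```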
3.2. Determination of u-functions for all the groups of r consecutive MEs
Note that the SWS considered contains exactly n-r+1 groups of r consecutive MEs and
each ME can belong to no more than r groups. To obtain the UMGF corresponding to all the
groups of r consecutive MEs we introduce the following procedure:
1. Define the u-function U_{1-r}(z) as follows:

U_{1-r}(z) = z^{G_0},   (9)

where the vector G_0 consists of r zeros.
2. Define the following operator ζ over a vector-u-function U(z) (7) and the u-function of an
individual ME u_j(z) (3):

ζ(U(z), u_j(z)) = ζ(Σ_{i=1}^{I} Q_i z^{G_i}, Σ_{h=1}^{H_j} p_{jh} z^{g_{jh}}) = Σ_{i=1}^{I} Σ_{h=1}^{H_j} Q_i p_{jh} z^{σ(G_i, g_{jh})},   (10)

where the operator σ over an arbitrary vector G and value g shifts all the vector elements one
position left: G(s-1) = G(s) for 1 < s ≤ r, and places the value g in the rightmost position: G(r) = g
(observe that the first element of vector G disappears after applying the operator).
3. Since the MEs are allocated in the order c(1),…,c(n), using the operator ζ in sequence as
follows:

U_{f+1-r}(z) = ζ(U_{f-r}(z), u_{c(f)}(z))   (11)

for f=1,…,n one obtains vector-u-functions for all the possible groups of MEs located at r
consecutive positions: U_1(z), …, U_{n-r+1}(z). Note that the vector-u-function for the first group
U_1(z) is obtained after applying the operator ζ r times. In the vector-u-function U_m(z) (for
m>0), the value G(s) of the vector G corresponds to the random performance rate of ME c(m-1+s).
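A minimal sketch of the sliding composition ζ and the shift σ of Eq. (10), again under the dictionary encoding (the vector-u-function maps window vectors G to their probabilities Q):

```python
def zeta(U, u):
    """Operator of Eq. (10): slide the window by one ME. U maps window
    vectors G to probabilities Q; u is the u-function of the next ME."""
    next_U = {}
    for G, Q in U.items():
        for g, p in u.items():
            G_new = G[1:] + (g,)            # shift operator sigma of Eq. (10)
            next_U[G_new] = next_U.get(G_new, 0.0) + Q * p
    return next_U

U = {(0, 0, 0): 1.0}                        # Eq. (9) with r = 3
print(zeta(U, {0: 0.1, 1: 0.9}))            # {(0, 0, 0): 0.1, (0, 0, 1): 0.9}
```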
Consider a vector-u-function U_m(z). For each combination of values G(2),…,G(r) it
contains exactly H_{c(m)} different terms corresponding to different values of G(1), which takes
all the possible values of the performance rate of ME c(m). After applying the operator ζ, G(1)
disappears from the vector G, being replaced with G(2). This produces H_{c(m)} terms with the
same vector G in the vector-u-function U_{m+1}(z). Collecting these like terms, one obtains a
single term for each vector G. Therefore, the number of different terms in each vector-u-function U_m(z) is equal to Π_{i=m}^{m+r-1} H_{c(i)}.
Applying the operator δ to the vector-u-function U_m(z) one can obtain the probability that
group m consisting of MEs c(m),…,c(m+r-1) fails. Note that if for some combination of MEs
states the group m fails, the entire SWS fails independently of the states of the MEs that do
not belong to this group. Therefore the terms corresponding to the group failure can be
removed from the vector-u-function Um(z) since they should not participate in determining
further state combinations that cause system faults. This consideration lies at the base of the
following algorithm.
3.3. Algorithm for SWS reliability evaluation for a given allocation of MEs C
1. Initialization:
F = 0; U_{1-r}(z) = z^{G_0}.
Determine the u-functions of the individual MEs using (3).
2. Main loop. Repeat the following for j = 1, …, n:
2.1. Obtain U_{j+1-r}(z) = ζ(U_{j-r}(z), u_{c(j)}(z)).
2.2. If j ≥ r, add the value δ(U_{j+1-r}(z), W) to F and remove all the terms with φ(G_i) = 1 from U_{j+1-r}(z).
3. Obtain the SWS reliability as R = 1 - F.
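A possible Python rendering of this algorithm, under the dictionary encoding sketched above, is given below; truncating the failed terms in step 2.2 is what keeps the vector-u-functions small:

```python
def sws_reliability(mes, C, r, W):
    """Algorithm 3.3: SWS reliability for allocation C (a permutation of
    1-based ME numbers). Each ME is a list of (p, g) state pairs."""
    F = 0.0
    U = {(0,) * r: 1.0}                     # step 1: U_{1-r}(z) = z^{G_0}
    for j, me_no in enumerate(C, start=1):  # step 2: j = 1, ..., n
        next_U = {}
        for G, Q in U.items():              # step 2.1: operator of Eq. (10)
            for p, g in mes[me_no - 1]:
                G_new = G[1:] + (g,)
                next_U[G_new] = next_U.get(G_new, 0.0) + Q * p
        U = next_U
        if j >= r:                          # step 2.2: a full window exists
            F += sum(Q for G, Q in U.items() if sum(G) < W)
            U = {G: Q for G, Q in U.items() if sum(G) >= W}
    return 1.0 - F                          # step 3
```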
3.4. Example
Consider a SWS with n=5, r=3, W=4. Each ME has two states: total failure
(corresponding to performance rate 0) and functioning with a nominal performance rate. The
nominal performance rates of the MEs located at positions from 1 to 5 are 1, 2, 3, 1, 1
respectively.
The u-functions of the individual MEs are:

u_1(z) = p_{11}z^0 + p_{12}z^1, u_2(z) = p_{21}z^0 + p_{22}z^2, u_3(z) = p_{31}z^0 + p_{32}z^3, u_4(z) = p_{41}z^0 + p_{42}z^1, u_5(z) = p_{51}z^0 + p_{52}z^1.

F = 0, U_{-2}(z) = z^{(0,0,0)}.

Following step 2 of the algorithm we obtain:

U_{-1}(z) = ζ(U_{-2}(z), u_1(z)) = p_{11}z^{(0,0,0)} + p_{12}z^{(0,0,1)},

U_0(z) = ζ(U_{-1}(z), u_2(z)) = p_{11}p_{21}z^{(0,0,0)} + p_{12}p_{21}z^{(0,1,0)} + p_{11}p_{22}z^{(0,0,2)} + p_{12}p_{22}z^{(0,1,2)},

U_1(z) = ζ(U_0(z), u_3(z)) = p_{11}p_{21}p_{31}z^{(0,0,0)} + p_{12}p_{21}p_{31}z^{(1,0,0)} + p_{11}p_{22}p_{31}z^{(0,2,0)} + p_{12}p_{22}p_{31}z^{(1,2,0)} + p_{11}p_{21}p_{32}z^{(0,0,3)} + p_{12}p_{21}p_{32}z^{(1,0,3)} + p_{11}p_{22}p_{32}z^{(0,2,3)} + p_{12}p_{22}p_{32}z^{(1,2,3)}.
The first five terms of U_1(z) have a vector element sum less than W = 4, i.e. φ(G_i) = 1. Following step 2.2 of the algorithm we obtain

F = p_{11}p_{21}p_{31} + p_{12}p_{21}p_{31} + p_{11}p_{22}p_{31} + p_{12}p_{22}p_{31} + p_{11}p_{21}p_{32}.

After removing these terms, U_1(z) takes the form:

U_1(z) = p_{12}p_{21}p_{32}z^{(1,0,3)} + p_{11}p_{22}p_{32}z^{(0,2,3)} + p_{12}p_{22}p_{32}z^{(1,2,3)}.
U_2(z) = ζ(U_1(z), u_4(z)) = p_{12}p_{21}p_{32}p_{41}z^{(0,3,0)} + p_{11}p_{22}p_{32}p_{41}z^{(2,3,0)} + p_{12}p_{22}p_{32}p_{41}z^{(2,3,0)} + p_{12}p_{21}p_{32}p_{42}z^{(0,3,1)} + p_{11}p_{22}p_{32}p_{42}z^{(2,3,1)} + p_{12}p_{22}p_{32}p_{42}z^{(2,3,1)}.

The first term has φ(G_i) = 1, so that

F = p_{11}p_{21}p_{31} + p_{12}p_{21}p_{31} + p_{11}p_{22}p_{31} + p_{12}p_{22}p_{31} + p_{11}p_{21}p_{32} + p_{12}p_{21}p_{32}p_{41}.

After removing this term and collecting like terms (using p_{11} + p_{12} = 1), U_2(z) takes the form:

U_2(z) = p_{22}p_{32}p_{41}z^{(2,3,0)} + p_{12}p_{21}p_{32}p_{42}z^{(0,3,1)} + p_{22}p_{32}p_{42}z^{(2,3,1)}.
U_3(z) = ζ(U_2(z), u_5(z)) = p_{22}p_{32}p_{41}p_{51}z^{(3,0,0)} + p_{12}p_{21}p_{32}p_{42}p_{51}z^{(3,1,0)} + p_{22}p_{32}p_{42}p_{51}z^{(3,1,0)} + p_{22}p_{32}p_{41}p_{52}z^{(3,0,1)} + p_{12}p_{21}p_{32}p_{42}p_{52}z^{(3,1,1)} + p_{22}p_{32}p_{42}p_{52}z^{(3,1,1)}.

Again the first term has φ(G_i) = 1, so that

F = p_{11}p_{21}p_{31} + p_{12}p_{21}p_{31} + p_{11}p_{22}p_{31} + p_{12}p_{22}p_{31} + p_{11}p_{21}p_{32} + p_{12}p_{21}p_{32}p_{41} + p_{22}p_{32}p_{41}p_{51}.

R = 1 - F = 1 - p_{11}p_{21}p_{31} - p_{12}p_{21}p_{31} - p_{11}p_{22}p_{31} - p_{12}p_{22}p_{31} - p_{11}p_{21}p_{32} - p_{12}p_{21}p_{32}p_{41} - p_{22}p_{32}p_{41}p_{51} = 1 - p_{31} - p_{32}[p_{11}p_{21} + (p_{12}p_{21} + p_{22}p_{51})p_{41}].
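The example can be checked numerically with the sketch of Algorithm 3.3 given above. Taking, for illustration only, p_{j1} = 0.1 and p_{j2} = 0.9 for every ME (these probability values are an assumption, not from the paper), the algorithm and the closed-form expression agree:

```python
p1 = {j: 0.1 for j in range(1, 6)}   # p_{j1}: failure-state probabilities
p2 = {j: 0.9 for j in range(1, 6)}   # p_{j2}: working-state probabilities
g = {1: 1, 2: 2, 3: 3, 4: 1, 5: 1}   # nominal performance rates
mes = [[(p1[j], 0), (p2[j], g[j])] for j in range(1, 6)]

R_alg = sws_reliability(mes, [1, 2, 3, 4, 5], r=3, W=4)
# Closed-form expression derived above:
R_cf = 1 - p1[3] - p2[3] * (p1[1] * p1[2]
                            + (p2[1] * p1[2] + p2[2] * p1[5]) * p1[4])
assert abs(R_alg - R_cf) < 1e-12     # both give 0.8748
```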
4. Optimization technique
Finding the optimal ME allocation in SWS is a complicated combinatorial optimization
problem having n! possible solutions. An exhaustive examination of all these solutions is not
realistic even for a moderate number of MEs, considering reasonable time limitations. As in
most combinatorial optimization problems, the quality of a given solution is the only
information available during the search for the optimal solution. Therefore, a heuristic search
algorithm is needed which uses only estimates of solution quality and which does not require
derivative information to determine the next direction of the search.
The recently developed family of genetic algorithms is based on the simple principle of
evolutionary search in solution space. GAs have been proven to be effective optimization
tools for a large number of applications. Successful applications of GAs in reliability
engineering are reported in [15,22-30].
It is recognized that GAs have the theoretical property of global convergence [31].
Although convergence reliability and convergence speed are conflicting objectives,
for most practical, moderately sized combinatorial problems the proper choice of GA
parameters allows solutions close enough to the optimal one to be obtained in a short time.
4.1. Genetic Algorithm
Basic notions of GAs are originally inspired by biological genetics. GAs operate with
"chromosomal" representation of solutions, where crossover, mutation and selection
procedures are applied. "Chromosomal" representation requires the solution to be coded as a
finite length string. Unlike various constructive optimization algorithms that use sophisticated
methods to obtain a good singular solution, the GA deals with a set of solutions (population)
and tends to manipulate each solution in the simplest manner.
A brief introduction to genetic algorithms is presented in [32]. More detailed information
on GAs can be found in Goldberg’s comprehensive book [33], and recent developments in
GA theory and practice can be found in books [30, 31]. The steady state version of the GA
used in this paper was developed by Whitley [34]. As reported in [35] this version, named
GENITOR, outperforms the basic “generational” GA. The structure of the steady-state GA is as
follows:
1. Generate an initial population of Ns randomly constructed solutions (strings) and
evaluate their fitness. (Unlike the “generational” GA, the steady state GA performs the
evolution search within the same population improving its average fitness by replacing worst
solutions with better ones).
2. Select two solutions randomly and produce a new solution (offspring) using a crossover
procedure that provides inheritance of some basic properties of the parent strings in the
offspring. The probability of selecting the solution as a parent is proportional to the rank of
this solution. (All the solutions in the population are ranked by increasing order of their
fitness). Unlike the fitness-based parent selection scheme, the rank-based scheme reduces GA
dependence on the fitness function structure, which is especially important when constrained
optimization problems are considered [36].
3. Allow the offspring to mutate. Mutation results in slight changes in the offspring
structure and maintains diversity of solutions. This procedure avoids premature convergence
to a local optimum and facilitates jumps in the solution space. The positive changes in the
solution code created by the mutation can be later propagated throughout the population via
crossovers.
4. Decode the offspring to obtain the objective function (fitness) values. These values are
a measure of quality, which is used in comparing different solutions.
5. Apply a selection procedure that compares the new offspring with the worst solution in
the population and selects the one that is better. The better solution joins the population and
the worse one is discarded. If the population contains equivalent solutions following the
selection process, redundancies are eliminated and, as a result, the population size decreases.
Note that each time the new solution has sufficient fitness to enter the population, it alters the
pool of prospective parent solutions and increases the average fitness of the current
population. The average fitness increases monotonically (or, in the worst case, does not vary)
during each genetic cycle (steps 2-5).
6. Generate new randomly constructed solutions to replenish the population after
repeating steps 2-5 Nrep times (or until the population contains a single solution or solutions
with equal quality). Run the new genetic cycle (return to step 2). In the beginning of a new
genetic cycle, the average fitness can decrease drastically due to inclusion of poor random
solutions into the population. These new solutions are necessary to bring into the population
new "genetic material" which widens the search space and, like a mutation operator, prevents
premature convergence to the local optimum.
7. Terminate the GA after Nc genetic cycles.
The final population contains the best solution achieved. It also contains different near-optimal solutions, which may be of interest in the decision-making process.
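The genetic cycle described above can be sketched as follows. This is a schematic rendering of steps 1-7, not the GENITOR code itself; the mutation probability pm and several implementation details are assumptions:

```python
import random

def steady_state_ga(new_solution, fitness, crossover, mutate,
                    Ns=100, Nrep=2000, Nc=25, pm=0.3):
    """Schematic steady-state genetic cycle following steps 1-7 above.
    Ns, Nrep, Nc follow the paper; pm (mutation probability) is assumed."""
    population = [new_solution() for _ in range(Ns)]               # step 1
    for _ in range(Nc):                                            # step 7
        for _ in range(Nrep):
            ranked = sorted(population, key=fitness)               # worst first
            weights = range(1, len(ranked) + 1)                    # rank-based
            p1, p2 = random.choices(ranked, weights=weights, k=2)  # step 2
            child = crossover(p1, p2)
            if random.random() < pm:                               # step 3
                child = mutate(child)
            if fitness(child) > fitness(ranked[0]):                # steps 4-5
                population.remove(ranked[0])
                population.append(child)
            # eliminate duplicate solutions: the population may shrink
            seen, unique = set(), []
            for s in population:
                if tuple(s) not in seen:
                    seen.add(tuple(s))
                    unique.append(s)
            population = unique
        while len(population) < Ns:                                # step 6
            population.append(new_solution())
    return max(population, key=fitness)
```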
4.2. Solution representation and basic GA procedures
To apply the genetic algorithm to a specific problem, one must define a solution
representation and decoding procedure, as well as specific crossover and mutation procedures.
In our problem, each solution is represented by a string of length n containing integer numbers
ranging from 1 to n. To provide solution feasibility, each number should appear in the string
only once. The order in which the numbers appear determines the ME allocation function C.
For each integer string, the solution fitness, equal to R(C,r,W), can be estimated by applying the
algorithm of Section 3.3.
Crossover and mutation procedures should preserve feasibility of newly obtained
solutions given that parent solutions are feasible. A crossover procedure that was first
suggested in [37] and was proven to be highly efficient in [38] is used in this work. This
procedure first copies all the string elements from the first parent to the same positions of the
offspring. Then all the offspring elements belonging to the fragment, defined as a set of
adjacent positions between two randomly defined sites, are reallocated within this fragment in
the order they appear in the second parent. The following is an example of the crossover
procedure, in which the reallocated fragment occupies positions 3-7:
First parent: 1 2 3 4 5 6 7 8 9 10
Second parent: 7 8 9 2 4 5 1 3 6 10
Offspring: 1 2 7 4 5 3 6 8 9 10.
The mutation procedure used in our GA just swaps elements initially located in two
randomly chosen positions of the string. This procedure also preserves solution feasibility.
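Both procedures can be sketched as follows (the fragment-reorder crossover and swap mutation described above; names are illustrative):

```python
import random

def crossover(parent1, parent2):
    """Copy parent1, then reorder the elements inside a random fragment
    according to their order of appearance in parent2
    (preserves permutation feasibility)."""
    i, j = sorted(random.sample(range(len(parent1) + 1), 2))
    fragment = set(parent1[i:j])
    reordered = [x for x in parent2 if x in fragment]
    return parent1[:i] + reordered + parent1[j:]

def mutate(solution):
    """Swap mutation: exchange the MEs in two randomly chosen positions."""
    s = solution[:]
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

# Reproducing the crossover example above (fragment at positions 3-7):
p1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
p2 = [7, 8, 9, 2, 4, 5, 1, 3, 6, 10]
print(p1[:2] + [x for x in p2 if x in set(p1[2:7])] + p1[7:])
# -> [1, 2, 7, 4, 5, 3, 6, 8, 9, 10]

# Wiring the pieces together (illustrative):
# best = steady_state_ga(
#     new_solution=lambda: random.sample(range(1, 11), 10),
#     fitness=lambda C: sws_reliability(mes, C, r=3, W=6),
#     crossover=crossover, mutate=mutate)
```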
5. Numerical example
Consider a SWS with n=10. The parameters of the system MEs are presented in Table 1.
Observe that different MEs have different numbers of states. Three solutions were obtained
by the GA for the SWS with r=3 (for W=6, W=8 and W=10) and three solutions were obtained
for the same SWS with r=5 (for W=10, W=15, W=20). These solutions are presented in Table
2. The system reliability as a function of demand W is presented in Fig. 2 and Fig. 3 (for r=3
and r=5 respectively) for the obtained ME allocations. One can see that the greater r, the
greater the SWS reliability for the same W. This is natural because the growth of r provides
growing redundancy in each group.
Note also that a solution which provides the greatest SWS reliability for a certain W yields,
for other values of W, lower reliability than the solutions optimal for those values.
Indeed, the optimal allocation provides the greatest probability of meeting just the
specified demand at the price of reducing the probability of meeting greater demands.
Running time on a Pentium II PC for the problems considered was about 60 seconds for
GA with Ns=100, Nrep=2000, Nc=25 (for the tested problems with n=20 the running time was
about 800 seconds).
To demonstrate the consistency of the suggested algorithm, GA was repeated 10 times
with different starting solutions (initial population) for two test problems of reliability
maximization (for SWS with n=10 and n=20). The coefficient of variation was calculated for
fitness values of the best-in-population solutions obtained during the genetic search in the various
GA runs. Figure 4 shows the variation of this index during the progress of the GA.
One can see that GAs with different starting populations quickly converge to very close solutions.
References
[1] W. Griffith, On consecutive k-out-of-n failure systems and their generalizations, in: A. P. Basu (ed.), Reliability and Quality Control, Elsevier (North-Holland), 1986, pp. 157-165.
[2] Y. Tong, A rearrangement inequality for the longest run with an application in network reliability, J. Applied Probability, vol. 22, 1985, pp. 386-393.
[3] B. Saperstein, The generalized birthday problem, J. Amer. Statistical Assoc., vol. 67, 1972, pp. 425-428.
[4] B. Saperstein, On the occurrence of n successes within N Bernoulli trials, Technometrics, vol. 15, 1973, pp. 809-818.
[5] J. Naus, Probabilities for a generalized birthday problem, J. Amer. Statistical Assoc., vol. 69, 1974, pp. 810-815.
[6] J. Nelson, Minimal-order models for false-alarm calculations on sliding windows, IEEE Trans. Aerospace Electronic Systems, vol. AES-14, 1978, pp. 351-363.
[7] S. Papastavridis, M. Sfakianakis, Optimal-arrangement & importance of the components in a consecutive-k-out-of-r-from-n:F system, IEEE Transactions on Reliability, vol. 40, 1991, pp. 277-279.
[8] M. Sfakianakis, S. Kounias, A. Hillaris, Reliability of consecutive-k-out-of-r-from-n:F systems, IEEE Transactions on Reliability, vol. 41, 1992, pp. 442-447.
[9] J. Cai, Reliability of a large consecutive-k-out-of-r-from-n:F system with unequal component reliability, IEEE Transactions on Reliability, vol. 43, 1994, pp. 107-111.
[10] Z. Psillakis, A simulation algorithm for computing failure probability of a consecutive-k-out-of-r-from-n:F system, IEEE Transactions on Reliability, vol. 44, 1995, pp. 523-531.
[11] D. Chiang, S. Niu, Reliability of consecutive-k-out-of-n:F systems, IEEE Transactions on Reliability, vol. R-30, 1981, pp. 87-89.
[12] R. Bollinger, Direct computations for consecutive-k-out-of-n:F systems, IEEE Transactions on Reliability, vol. R-31, 1982, pp. 444-446.
[13] R. Bollinger, A. Salvia, Consecutive-k-out-of-n:F networks, IEEE Transactions on Reliability, vol. R-31, 1982, pp. 53-56.
[14] J. Wu, R. Chen, An algorithm for computing the reliability of weighted-k-out-of-n systems, IEEE Transactions on Reliability, vol. 43, 1994, pp. 327-328.
[15] G. Levitin, A. Lisnianski, A new approach to solving problems of multi-state system reliability optimization, Quality and Reliability Engineering International, vol. 17, 2001, pp. 93-104.
[16] F. Hwang, Y. Yao, Multistate consecutively-connected systems, IEEE Transactions on Reliability, vol. 38, 1989, pp. 472-474.
[17] J. Shanthikumar, A recursive algorithm to evaluate the reliability of a consecutive-k-out-of-n:F system, IEEE Transactions on Reliability, vol. R-31, 1982, pp. 442-443.
[18] J. Shanthikumar, Reliability of systems with consecutive minimal cutsets, IEEE Transactions on Reliability, vol. R-36, 1987, pp. 546-550.
[19] I. Ushakov, Universal generating function, Sov. J. Computing System Science, vol. 24, no. 5, 1986, pp. 118-129.
[20] G. Levitin, Evaluating correct classification probability for weighted voting classifiers with plurality voting, European Journal of Operational Research.
[21] G. Levitin, Reliability evaluation for acyclic consecutively-connected networks with multistate elements, Reliability Engineering & System Safety, vol. 73, 2001, pp. 137-143.
[22] G. Levitin, A. Lisnianski, Optimal separation of elements in vulnerable multi-state systems, Reliability Engineering & System Safety, vol. 73, 2001, pp. 55-66.
[23] G. Levitin, Redundancy optimization for multi-state system with fixed resource requirements and unreliable sources, IEEE Transactions on Reliability, vol. 50, 2000, pp. 52-58.
[24] L. Painton, J. Campbell, Genetic algorithm in optimization of system reliability, IEEE Transactions on Reliability, vol. 44, 1995, pp. 172-178.
[25] D. Coit, A. Smith, Reliability optimization of series-parallel systems using genetic algorithm, IEEE Transactions on Reliability, vol. 45, 1996, pp. 254-266.
[27] Y. Hsieh, T. Chen, D. Bricker, Genetic algorithms for reliability design problems, Microelectronics and Reliability, vol. 38, 1998, pp. 1599-1605.
[28] J. Yang, M. Hwang, T. Sung, Y. Jin, Application of genetic algorithm for reliability allocation in nuclear power plant, Reliability Engineering & System Safety, vol. 65, 1999, pp. 229-238.
[29] M. Gen, J. Kim, GA-based reliability design: state-of-the-art survey, Computers & Industrial Engineering, vol. 37, 1999, pp. 151-155.
[30] M. Gen, R. Cheng, Genetic Algorithms and Engineering Design, John Wiley & Sons, New York, 1997.
[31] T. Bäck, Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms, Oxford University Press, 1996.
[32] S. Austin, An introduction to genetic algorithms, AI Expert, vol. 5, 1990, pp. 49-53.
[33] D. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Reading, MA, 1989.
[34] D. Whitley, The GENITOR algorithm and selective pressure: why rank-based allocation of reproductive trials is best, in: D. Schaffer (ed.), Proc. 3rd International Conf. on Genetic Algorithms, Morgan Kaufmann, 1989, pp. 116-121.
[35] G. Syswerda, A study of reproduction in generational and steady-state genetic algorithms, in: G. J. E. Rawlings (ed.), Foundations of Genetic Algorithms, Morgan Kaufmann, San Mateo, CA, 1991.
[36] D. Powell, M. Skolnick, Using genetic algorithms in engineering design optimization with non-linear constraints, in: Proc. of the Fifth Int. Conf. on Genetic Algorithms, Morgan Kaufmann, 1993, pp. 424-431.
[37] J. Rubinovitz, G. Levitin, Genetic algorithm for assembly line balancing, Int. Journal of Production Economics, vol. 42, 1995, pp. 343-354.
[38] G. Levitin, S. Masal-Tov, D. Elmakis, Genetic algorithm for open loop distribution system design, Electric Power Systems Research, vol. 32, 1995, pp. 81-87.
Table 1. Parameters of SWS elements

              ME 1       ME 2       ME 3       ME 4       ME 5
No of state   p    g     p    g     p    g     p    g     p    g
1             0.03 0     0.10 0     0.17 0     0.05 0     0.08 0
2             0.22 2     0.10 1     0.83 6     0.25 3     0.20 1
3             0.75 5     0.40 2     -    -     0.40 5     0.15 2
4             -    -     0.40 4     -    -     0.30 6     0.45 4
5             -    -     -    -     -    -     -    -     0.12 5

              ME 6       ME 7       ME 8       ME 9       ME 10
No of state   p    g     p    g     p    g     p    g     p    g
1             0.01 0     0.20 0     0.05 0     0.20 0     0.05 0
2             0.22 4     0.10 3     0.25 4     0.10 3     0.25 2
3             0.77 5     0.10 4     0.70 6     0.15 4     0.70 6
4             -    -     0.60 5     -    -     0.55 5     -    -
5             -    -     -    -     -    -     -    -     -    -

Table 2. Parameters of solutions obtained by the GA

r   W    R       Allocation of SWS elements
3   6    0.931   2 1 6 5 4 8 7 10 3 9
3   8    0.788   5 1 8 9 6 4 7 3 10 2
3   10   0.536   5 9 3 1 4 7 10 8 6 2
5   10   0.990   2 5 1 4 6 8 10 3 7 9
5   15   0.866   9 7 3 10 1 6 8 4 5 2
5   20   0.420   2 5 4 8 3 6 10 7 1 9
Figure Captions
Figure 1: Example of SWS with r=3.
Figure 2: Reliability of best solutions obtained for W=6, W=8 and W=10 as function of
demand W (for r=3).
Figure 3: Reliability of best solutions obtained for W=10, W=15 and W=20 as function of
demand W (for r=5).
Figure 4: Coefficient of variation of the fitness values of best-in-population solutions during the genetic search (for SWS with n=10 and n=20).