OPTIMAL ALLOCATION OF MULTISTATE ELEMENTS IN LINEAR
CONSECUTIVELY-CONNECTED SYSTEMS WITH DELAYS
GREGORY LEVITIN
Reliability Department, Planning, Development and Technology Division,
Bait Amir, Israel Electric Corporation Ltd., P.O. Box 10, 31000 Haifa, Israel
E-mail: levitin@iec.co.il
Abstract: A linear consecutively-connected system consists of N+2 linear ordered positions.
The first position contains a source of a signal and the last one contains a receiver. M
statistically independent multistate elements (retransmitters) with different characteristics are
to be allocated at the N intermediate positions. The elements provide retransmission of the
received signal to the next few positions. Each element can have different states determined
by a number of positions that are reached by the signal generated by this element. The
probability of each state for any given element depends on the position where it is allocated.
The signal retransmission process is associated with delays. The system fails if the signal
generated by the source can not reach the receiver within a specified time period.
A problem of finding an allocation of the multistate elements that provides the maximal
system reliability is formulated. An algorithm based on the universal generating function
method is suggested for the system reliability determination. This algorithm can handle cases
where any number of multistate elements are allocated in the same position while some
positions remain empty. It is shown that such an uneven allocation can provide greater system
reliability than an even one. A genetic algorithm is used as an optimization tool in order to
solve the optimal element allocation problem.
Keywords: System reliability, multistate elements, retransmission delay, linear consecutively-connected systems, universal generating function, genetic algorithm.
Abbreviations
ME      multistate element
LCCS    linear consecutively-connected system
UGF     universal generating function
GA      genetic algorithm

Notation
R          reliability of LCCS
R*         desired level of LCCS reliability
D          expected delay in LCCS
T          random delay in LCCS (time of signal propagation from source to receiver), T ∈ {tj}, 1≤j≤J
tj         time of signal propagation from source to receiver when the LCCS is in state j
T*         maximal allowable delay
N          number of intermediate positions in LCCS
M          number of available ME's in LCCS
ei         ith ME
Si         random state of ith ME
τi         retransmission delay of ith ME
Cn         nth position of LCCS
E          set of all available ME's: E={e1,…,eM}
En         set of ME's located at position n
h(i)       number of the position where the ME ei is allocated: ei ∈ Eh(i)
H          vector representing allocation of ME's in LCCS: H={h(i), 1≤i≤M}
K          maximal number of different states of ME's
pik(n)     Pr{Si=k | ei ∈ En}
Pi(n)      vector representing probabilistic distribution of state of ith ME allocated at Cn: Pi(n)={pi1(n),…,piK(n)}
Vi         random vector representing arrival times of signal generated by ith ME
Vik        vector representing arrival times of signal generated by ith ME at state k
Ṽnk        vector representing arrival times of signal generated by group of ME's belonging to En at state k of the group
V̂j         vector representing LCCS delays at state j of the entire system
qnk        probability of state k of group of ME's belonging to En
Gn         number of different states of the group of ME's belonging to En
Qj         probability of state j of the entire LCCS
uin(z)     u-function for ith ME located at position n (i=0 corresponds to the absence of ME)
Un(z)      u-function for the group of ME's belonging to En (located at position n)
U1,…,n(z)  u-function for the subset of ME's ⋃_{i=1}^{n} Ei
Ω, δ, ψ    composition and simplification operators over u-functions
α(x)       α(x)=min{x,N+1}
1. Introduction
The linear consecutive k-out-of-n:F system, which was introduced by Chiang and Niu [1] and Bollinger [2], consists of n ordered elements and fails when any k consecutive elements fail. According to the system definition, each element can have two states: perfect functioning
fail. According to the system definition, each element can have two states: perfect functioning
and complete failure. In many practical applications, the system elements can have a number
of intermediate states characterized by different levels of the element performance.
Consider for example a set of radio relay stations with a signal source allocated at
position C0 and a receiver allocated at position CN+1. Each one of the consecutively ordered stations Cn (1≤n≤N) can have retransmitters generating signals that reach the next Si stations. Note that Si is a random value dependent on the power and availability of the retransmitter amplifiers. Therefore each retransmitter is a multistate element with random performance characterized by Si. The aim of the system is to provide propagation of a signal from the signal source to the receiver (the connection between the signal source and the first retransmitter is assumed to be fully reliable).
The extension of the linear consecutive k-out-of-n:F system to the multi-state case is
named linear consecutively-connected system (LCCS). The LCCS consists of N+2
consecutively ordered positions (nodes) Cn, n ∈ [0,N+1]. The first position C0 is the source and
the last one, CN+1, is the sink (receiver). At each position C1,…,CN multistate elements from a
set E={e1,…,eM} can be allocated. These elements provide connections between the position
in which they are allocated and a number of further positions. The source is always connected
with position C1.
Each element ei has K states, where the state Si of ei is a discrete random value with the
distribution:
Pr{Si = k} = pik,   Σ_{k=0}^{K-1} pik = 1.    (1)
Si=k for element ei allocated at position Cn implies that connections exist from Cn to each of Cn+1, Cn+2,…,Cα(n+k), where α(x)=min{x,N+1}. Si=0 implies the total failure state of ei (no
connections exist between Cn and other nodes). All the states Si are statistically independent.
Note that although different ME's can have a different number of states, one can define
the same number of states for all the ME's without a loss of generality. Indeed, if ME ei has Ki states and ME em has Km states (Ki≠Km), one can consider both ME's as having K=max{Ki,Km} states while assigning pik=0 for Ki≤k<K.
The LCCS with ME's was first introduced by Hwang & Yao [3] as a generalization of
linear consecutive-k-out-of-n:F system and linear consecutively-connected system with 2-state elements, studied by Shanthikumar [4,5]. Algorithms for reliability evaluation of LCCS
with ME's were developed by Hwang & Yao [3], Kossow & Preuss [6] and Zuo & Liang [7].
The problem of optimal ME allocation in LCCS was first formulated by Malinowski & Preuss
in [8]. In this problem, elements with different characteristics should be allocated in positions
C1,…,CN in such a manner that maximizes the LCCS reliability. A multi-start local search
algorithm was suggested for solving this problem.
In all of the above mentioned works, the issue of signal retransmission delay was not
addressed. However, in digital telecommunication systems the retransmission process is
usually associated with a certain delay. When this is so, the total time T of the signal
propagation from the source to the receiver can vary depending on the combination of states
of ME's (retransmitters). Note that even when the retransmission delays of the individual MEs
are constant values, the retransmission delay of the entire system is a random variable because
of stochastic properties of the multistate elements. The whole system is considered to be in
working condition if T is not greater than a certain specified level T*. Otherwise, the network
fails. In this paper, an algorithm is suggested for the evaluation of the reliability of linear
consecutively-connected systems consisting of ME's with fixed retransmission delays.
In the algorithm for optimal element allocation in LCCS presented in [8], only the
systems with M=N are considered where only one ME is located in each position. In many
cases, even for M=N, greater reliability can be achieved if some of the ME's are gathered in
the same position to provide redundancy (in hot standby mode) rather than having all of the
ME's evenly distributed among all the positions.
This work considers a generalization of the optimal allocation problem. In this
generalization, the number of ME's is not necessarily equal to the number of positions (M≠N). An
arbitrary number of elements can be allocated at each position (some positions may be
empty).
To evaluate the reliability of LCCS with an arbitrary allocation of ME's with fixed delays, a
procedure based on the use of a universal generating function is developed. A genetic
algorithm based on principles of evolution is used as an optimization engine. The integer
string solution encoding technique is adopted to represent element allocation in the GA.
Section 2 of the paper presents a formulation of the optimal allocation problem. Section 3
describes the technique used for evaluating the LCCS reliability for the given allocation of
different ME's with the specified delays and state distributions. Section 4 describes the
optimization approach used and its adaptation to the formulated problem. Section 5 presents
illustrative examples where the best-obtained allocation solutions are presented for two
different formulations of the optimal ME allocation problem.
2. Problem formulation
Consider a LCCS with a given allocation of ME's. Each ME ei receiving the signal can retransmit it to further positions after a fixed delay τi. The signal generated by ei located at some position reaches all of the Si next positions immediately. Since the states of each ME are
random values, the total time T of the signal propagation from C0 to CN+1 is also a random
value with the distribution:
Pr{T = tj} = Qj,   Σ_{j=1}^{J} Qj = 1,    (2)
where J is the total number of LCCS states characterized by different values of T (note that different combinations of individual ME states can result in the same T). One can see that
some combinations of ME's states lead to the absence of the connection between C0 and CN+1.
In this case tj=∞ relates to this specific state.
The system reliability R(T*) is defined as the probability that the signal generated in C0 can
be delivered to CN+1 at a time not greater than T*. Having the distribution (2) one can obtain
this reliability as:
R(T*) = Σ_{tj ≤ T*} Qj.    (3)
Another important measure of LCCS performance is the expected time of signal delivery D, which also can be obtained from (2) as:

D = Σ_{tj<∞} tj Qj / Σ_{tj<∞} Qj.    (4)
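For illustration, the following minimal Python sketch evaluates Eqs. (3) and (4) for a given delay distribution {(tj, Qj)}; the helper names and the toy numbers are purely illustrative, and a disconnection state (tj=∞) is stored as float('inf'):

INF = float("inf")

def reliability(dist, t_star):
    # Eq. (3): R(T*) is the total probability of states with t_j <= T*
    return sum(q for t, q in dist if t <= t_star)

def expected_delay(dist):
    # Eq. (4): expectation of t_j conditioned on the signal reaching the receiver
    finite = [(t, q) for t, q in dist if t < INF]
    return sum(t * q for t, q in finite) / sum(q for _, q in finite)

# toy distribution {(t_j, Q_j)}: three connected states and one disconnection state
dist = [(2.0, 0.5), (3.5, 0.2), (5.0, 0.1), (INF, 0.2)]
print(reliability(dist, 3.0))   # 0.5
print(expected_delay(dist))     # 2.75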
The ME's allocation problem can be considered as a problem of partitioning a set E of M elements into a collection of N mutually disjoint subsets En (1≤n≤N), i.e. such that

⋃_{n=1}^{N} En = E,    (5)

Ei ∩ Ej = ∅, i≠j.    (6)
Each set En, corresponding to LCCS position Cn, can contain from 0 to M elements. The
partition of the E set can be represented by the vector H={h(i), 1≤i≤M}, where h(i) is the number of the subset to which element i belongs.
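As a small illustration of this representation (the function name is hypothetical, not part of the formulation), an allocation vector H can be decoded into the subsets E1,…,EN as follows; positions that receive no ME simply get empty sets:

def decode_allocation(H, N):
    # H[i-1] = h(i): position index of ME e_i; returns {n: set of ME indices allocated at C_n}
    E = {n: set() for n in range(1, N + 1)}
    for i, n in enumerate(H, start=1):
        E[n].add(i)
    return E

# e.g. M=3, N=3, H={1,3,3} (the allocation used in the example of Section 3.6)
print(decode_allocation([1, 3, 3], 3))   # {1: {1}, 2: set(), 3: {2, 3}}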
In general, the probability of being at state k for any ME ei can depend on the position in which the ME is located (in the above example, if the stations are allocated unevenly and/or are located in areas with different signal propagation conditions, identical retransmitters placed at different stations should be in different states in order to provide connection to the same number of next stations). This can be taken into account by introducing the vector-function Pi(n)={pi1(n),…,piK(n)} for each ME ei located in position n, 1≤n≤N.
For the given set of ME's with specified vectors P1(n),…,PM(n) representing the probabilistic distributions of their states, the only factor influencing the LCCS reliability and expected delay is the allocation H of the elements. Therefore, one can consider two possible
formulations of the optimal ME allocation problem:
Formulation 1. Find vector H maximizing the LCCS reliability R for the given T*:
Max R(T*,H, P1(n),…, PM(n)).
(7)
Formulation 2. Find vector H minimizing the expected delay while providing the desired LCCS reliability R* for the given T*:

Min D(T*,H,P1(n),…,PM(n))
subject to R(T*,H,P1(n),…,PM(n)) ≥ R*.    (8)
3. LCCS reliability and expected delay estimation based on a universal generating
function
The procedure used in this paper for LCCS reliability evaluation is based on the universal
z-transform (also called u-function or universal generating function) technique, which was
introduced in [9] and which proved to be very effective for reliability evaluation of different
types of multi-state systems [10-14]. The u-function extends the widely known ordinary
moment generating function.
3.1. U-function of individual ME
The UGF (u-transform) of a discrete random variable X is defined as a polynomial
u(z) = Σ_{k=1}^{K} qk z^{Xk},    (9)

where the variable X has K possible values and qk is the probability that X is equal to Xk.
In order to represent the signal arrival time distribution in our LCCS, we modify the UGF
by replacing the random value X with the random vector Vi={vi(1),…,vi(N+1)}, so that vi(j) represents the random time of arrival at node Cj of the signal generated by the ith ME.
Consider a ME ei allocated at node Cn. In each state Si (1≤Si<K), the ME provides signal retransmission from Cn to the set of nodes {Cn+1,…,Cα(n+Si)} with delay τi. In order to represent the times taken to retransmit the signal that entered node Cn to the rest of the LCCS nodes at state Si, we determine the vector ViSi as follows:

viSi(j) = ∞ for 0 < j ≤ n,
viSi(j) = τi for n < j ≤ α(n+Si),    (10)
viSi(j) = ∞ for j > α(n+Si)

(in numerical applications ∞ can be represented by any number much greater than T*).
The polynomial

uin(z) = Σ_{k=0}^{K-1} pik(n) z^{Vik}    (11)
represents all of the possible states of the ME ei located at Cn by relating the probabilities of
each state Si to the value of a random vector Vi (representing signal arrival times) in this state.
Note that the absence of any ME at position Cn implies that no connections exist between
Cn and any other position. This means that any signal reaching Cn cannot be retransmitted in
this node (with probability 1). In this case, the corresponding u-function takes the form
u0n(z) = z^{V0},    (12)

where v0(j)=∞ for 1≤j≤N+1.
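The following Python sketch illustrates Eqs. (10)-(12). It stores a u-function as a dictionary that maps arrival-time vectors (tuples of length N+1) to probabilities, with ∞ represented by float('inf'); this data representation and the helper names are illustrative choices rather than part of the original description:

INF = float("inf")

def alpha(x, N):
    # alpha(x) = min{x, N+1}
    return min(x, N + 1)

def V_ik(n, k, tau, N):
    # Eq. (10): arrival-time vector of an ME with delay tau allocated at C_n in state k
    return tuple(tau if n < j <= alpha(n + k, N) else INF for j in range(1, N + 2))

def u_individual(n, p, tau, N):
    # Eq. (11): u-function of the ME, p = (p_i0, ..., p_i,K-1) at position C_n
    u = {}
    for k, pk in enumerate(p):
        v = V_ik(n, k, tau, N)
        u[v] = u.get(v, 0.0) + pk        # states yielding equal vectors are merged
    return u

def u_empty(N):
    # Eq. (12): u-function of an empty position (no retransmission at all)
    return {tuple(INF for _ in range(N + 1)): 1.0}

# ME e_1 of the example in Section 3.6: N=3, allocated at C_1, delay tau_1=1.0 (arbitrary numbers)
print(u_individual(1, (0.1, 0.2, 0.3, 0.4), tau=1.0, N=3))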
3.2. U-function for group of ME's allocated at the same position
Consider two ME's ei and ef allocated at the same position Cn. Assume that ei is in state Si and ef is in state Sf. The signal that reached Cn can be retransmitted by ei to each position j corresponding to viSi(j)=τi by time τi (and to each position j corresponding to viSi(j)=∞ by time ∞). The same signal can be retransmitted by ef to each position j by time vfSf(j). The time of arrival of the signal retransmitted at Cn to any position j>n can be obtained as min{viSi(j),vfSf(j)}. Therefore the combination of states Si and Sf of the pair of ME's ei and ef (which has probability piSi(n)pfSf(n)) corresponds to a signal delay distribution represented by the vector min{ViSi,VfSf}.
In order to obtain the u-function of a subsystem containing an arbitrary number of ME's located at the same position, a composition operator Ω is introduced. This operator determines the polynomial Un(z) for a group of ME's belonging to En using simple algebraic operations on the individual u-functions of the ME's. The composition operator for a pair of ME's ei and ef takes the form:
K 1
(u in (z), u fn (z))  (  p ik (n )z Vik ,
K 1 K 1
k 0
  pik (n )p fj (n )z
K 1
 p fj (n )z
Vfj
)
j 0
(13)
min{ Vik , Vfj}
k  0 j 0
The resulting polynomial relates the probabilities of each of the possible combinations of states of the two ME's (obtained by multiplying the probabilities of the corresponding states of each ME) to the vector of arrival delays of the signal retransmitted in position Cn to the next positions when the ME's are in the given states.
Note that for any uin(z)

Ω(u0n(z), uin(z)) = uin(z).    (14)
One can see that the operator Ω satisfies the following condition:

Ω{u1(z),…,uk(z),uk+1(z),…,uv(z)} = Ω{Ω{u1(z),…,uk(z)}, Ω{uk+1(z),…,uv(z)}}    (15)
for arbitrary k. Therefore, it can be applied in sequence to obtain the u-function for an
arbitrary group of ME's allocated at Cn:
Un(z) = Ω_{i∈En}(uin(z)) = Σ_{k=1}^{Gn} qnk z^{Ṽnk},    (16)
where Gn is the number of different states of the group of ME's allocated at Cn. One can
consider the group of ME's allocated at Cn as a single ME with delay distribution (16).
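Under the same illustrative dictionary representation, the operator Ω of Eqs. (13)-(16) amounts to an element-wise minimum of the arrival-time vectors combined with multiplication of the state probabilities; like terms are collected automatically by the dictionary. The sketch below assumes the u_empty helper from the sketch of Eq. (12):

from functools import reduce

def omega_pair(u1, u2):
    # Eq. (13): combine two u-functions of ME's located at the same position
    out = {}
    for v1, p1 in u1.items():
        for v2, p2 in u2.items():
            v = tuple(min(a, b) for a, b in zip(v1, v2))
            out[v] = out.get(v, 0.0) + p1 * p2     # collect like terms
    return out

def omega(u_list, N):
    # Eq. (16): u-function U_n(z) of the whole group at C_n; by Eq. (14) u_empty is a neutral element
    return reduce(omega_pair, u_list, u_empty(N))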
3.3. U-function for the entire LCCS
Assume that a signal generated at Cm in state Sm reaches Cn (which corresponds to vmSm(n)=τm). If the group of ME's located at Cn is in state Sn, the signal generated (retransmitted) at Cn reaches all of the nodes belonging to the set {Cn+1,…,Cα(n+Sn)} when the time τm+τn has passed since the signal arrived at Cm.
Note that if there are different ways of signal propagation to a certain position, which are characterized by different times, the minimal time determines the signal delay. For example, when the ME's allocated at Cm are in state 2 and the ME's allocated at Cm+1 are in state 1, the same signal can reach Cm+2 directly from em by time τm (from the moment it reached Cm) or by time τm+τm+1 after being retransmitted by em+1. In this case, the signal is received at Cm+2 after time τm since it reached Cm.
Therefore, in order to determine the delay distribution for a signal retransmitted by two groups of ME's allocated at two adjacent positions Cm and Cm+1 and being in states Sm and Sm+1 respectively, one can use the following function φ over the vectors VmSm and Vm+1Sm+1:

φ(VmSm, Vm+1Sm+1),

where for each individual element, 1≤j≤N+1,

φ(vmSm(j), vm+1Sm+1(j)) = min{vmSm(j), vmSm(m+1)+vm+1Sm+1(j)}.    (17)
In order to represent all the possible combinations of states of the two groups of ME's, one has to relate the corresponding probabilities to the values of the random vector φ(VmSm, Vm+1Sm+1) in these states. For this purpose, we introduce a composition operator δ over the u-functions of the groups of ME's allocated at Cm and Cm+1, which takes the following form:
Um,m+1(z) = δ(Um(z), Um+1(z)) = δ(Σ_{k=1}^{Gm} qmk z^{Ṽmk}, Σ_{j=1}^{Gm+1} qm+1,j z^{Ṽm+1,j}) = Σ_{k=1}^{Gm} Σ_{j=1}^{Gm+1} qmk qm+1,j z^{φ(Ṽmk, Ṽm+1,j)}    (18)
The resulting polynomial Um,m+1(z) represents the probabilistic distribution of the delay
times for a set of nodes receiving the signal from Cm directly or through Cm+1.
One can obtain the u-function for the entire LCCS (or, equivalently, the probabilistic distribution of delay times in the LCCS) by consecutively applying the equation

U1,…,m+1(z) = δ(U1,…,m(z), Um+1(z))    (19)

for m=1, 2, …, N-1. The resulting polynomial takes the form

U1,…,N(z) = Σ_{j=1}^{J} Qj z^{V̂j}.    (20)
This polynomial relates the probabilities Qj of all the possible LCCS states to the times of signal propagation from source to receiver tj = v̂j(N+1) (the last element of the vector V̂j) corresponding to these states. Since U1,…,N(z) determines the delay distribution (2), the LCCS reliability as well as the expected delay can be determined in accordance with (3) and (4) as

R(T*) = Σ_{j: v̂j(N+1) ≤ T*} Qj    (21)
and

D = Σ_{j: v̂j(N+1)<∞} v̂j(N+1) Qj / Σ_{j: v̂j(N+1)<∞} Qj.    (22)
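The function φ of Eq. (17) and the operator δ of Eqs. (18)-(19) admit an equally compact sketch in the same illustrative representation (vectors are 0-indexed here, so the arrival time at Cm+1 is entry m of the accumulated vector):

def phi(v_acc, v_next, m):
    # Eq. (17): v_acc describes arrivals via C_1..C_m, v_next belongs to the group at C_{m+1}
    via = v_acc[m]                       # arrival time at C_{m+1} (0-based entry m)
    return tuple(min(a, via + b) for a, b in zip(v_acc, v_next))

def delta(u_acc, u_next, m):
    # Eqs. (18)-(19): combine U_{1,...,m}(z) with the group u-function U_{m+1}(z)
    out = {}
    for v1, p1 in u_acc.items():
        for v2, p2 in u_next.items():
            v = phi(v1, v2, m)
            out[v] = out.get(v, 0.0) + p1 * p2
    return out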
3.4. Simplification of u-functions
Observe that when the u-function U1,…,m(z) is obtained, the values ṽ(1),…,ṽ(m) representing the delays of signal arrival at nodes C1,…,Cm are no longer used for determining U1,…,m+1(z) (and all the U1,…,n(z) for n>m) according to (17). Indeed, when
determining U1,…,m+1(z), we need to know only the probabilities that the signal reaches nodes
Cm+1,…,CN and the corresponding delays. It does not matter through which paths the signal
reaches these nodes. For example, if in different LCCS states the signal reaches C m+1 through
a number of different combinations of paths (represented by the same number of different
terms in U1,…,m (z)) resulting in the same delay, one does not have to distinguish among these
combinations. The only thing one has to know is the sum of the probabilities of the states in
which combinations of paths with the given minimal delay exist. This means that one can
collect the corresponding terms in U1,…,m(z) by replacing all the values ṽ(1),…,ṽ(m) in the vectors Ṽ of the polynomial with ∞ symbols and collecting the like terms.
If in some term of the polynomial U1,…,m(z) (corresponding to a certain state of the ME's) ṽ(m+1)=…=ṽ(N+1)=∞, the signal cannot reach any position from Cm+1 to CN+1 regardless of the states of the ME's located in these positions. Such a state does not contribute to signal propagation to the last node, and the corresponding term can be removed from the u-function U1,…,m(z).
Taking into account the above-mentioned considerations, one can drastically simplify the polynomials U1,…,m(z) for 1≤m≤N using the following operator ψ(U1,…,m(z)), which:
- assigns ∞ to ṽ(1),…,ṽ(m) in each term of U1,…,m(z);
- removes all the terms in which the vectors Ṽ contain only ∞ symbols;
- collects like terms in the resulting polynomial.
An example of the application of the operator ψ is presented in Section 3.6.
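A sketch of the operator ψ in the same illustrative representation: it blanks the first m entries of every vector, drops the terms that become all-∞, and merges like terms through the dictionary:

def psi(u, m):
    # simplification operator: arrivals at C_1..C_m will never be used again
    INF = float("inf")
    out = {}
    for v, p in u.items():
        w = tuple(INF if j < m else x for j, x in enumerate(v))   # blank v(1)..v(m)
        if all(x == INF for x in w):
            continue                       # the signal can propagate no further: drop the term
        out[w] = out.get(w, 0.0) + p       # collect like terms
    return out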
3.5. Algorithm for determination of LCCS reliability and expected delay
Using the UGF technique described above, one can obtain the LCCS reliability for the given set of parameters (τi, pik(n)), 1≤i≤M, 1≤n≤N, 0≤k<K, and the given ME allocation H by applying the following procedure, which is convenient for numerical implementation:
1. For each ME ei allocated at Cn (where n=h(i)) determine the vectors Vik corresponding to states 0≤k<K using rule (10).
2. For each ME ei allocated at Cn determine its u-function uin(z) using expression (11).
3. Assign Un(z)=u0n(z) for 1nN, where u0n(z) is determined by Eq. (12).
4. For each n (1≤n≤N) apply the expression Un(z) = Ω_{i∈En}(uin(z)) using operator (13).
5. Apply the expression U1,…,m+1(z) = δ(ψ(U1,…,m(z)), Um+1(z)) for m=1, 2, …, N-1 in sequence, using operator δ (18) and operator ψ described in the previous section.
6. Simplify the polynomial U1,…,N(z) using operator ψ and obtain the LCCS reliability R and expected delay D using equations (21) and (22) respectively.
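Assuming the illustrative helpers sketched earlier (u_individual, omega, delta, psi, reliability, expected_delay) are in scope, steps 1-6 reduce to the following short driver; this is a sketch of the procedure, not the author's implementation:

def lccs_measures(H, taus, probs, N, t_star):
    # H[i-1]=h(i), taus[i-1]=tau_i, probs[i-1]=(p_i0,...,p_i,K-1) for the occupied position
    groups = {n: [] for n in range(1, N + 1)}
    for i, n in enumerate(H, start=1):                          # steps 1-2
        groups[n].append(u_individual(n, probs[i - 1], taus[i - 1], N))
    U = {n: omega(groups[n], N) for n in range(1, N + 1)}       # steps 3-4
    acc = U[1]
    for m in range(1, N):                                       # step 5
        acc = delta(psi(acc, m), U[m + 1], m)
    acc = psi(acc, N)                                           # step 6
    dist = [(v[-1], q) for v, q in acc.items()]                 # t_j = v_hat_j(N+1)
    return reliability(dist, t_star), expected_delay(dist)

# e.g. for the example of Section 3.6: lccs_measures([1, 3, 3], taus, probs, N=3, t_star=...)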
3.6. Example of determination of LCCS reliability and expected delay
Consider for example a LCCS with N=3 and M=3. The delays of the ME's are τ1, τ2 and τ3 respectively. The maximal allowable delay is T*. The first ME is allocated at C1, the second and third ME's are allocated at C3, which corresponds to H={1,3,3}. Each ME can provide a connection with the further positions with the given probabilities: Pr{S1=k | e1∈E1}=p1k for 0≤k≤3, Pr{S2=k | e2∈E3}=p2k and Pr{S3=k | e3∈E3}=p3k for 0≤k≤1.
According to (10) and (11), the u-functions of the individual ME's e1, e2 and e3 are:

u11(z) = p10 z^(*,*,*,*) + p11 z^(*,τ1,*,*) + p12 z^(*,τ1,τ1,*) + p13 z^(*,τ1,τ1,τ1),
u23(z) = p20 z^(*,*,*,*) + p21 z^(*,*,*,τ2),
u33(z) = p30 z^(*,*,*,*) + p31 z^(*,*,*,τ3).

For 1≤n≤3, u0n(z) = z^(*,*,*,*). (In this example the symbol * stands for ∞.)
Following steps 3 and 4 of the algorithm we obtain

U1(z) = Ω(u01(z), u11(z)) = u11(z),
U2(z) = u02(z) = z^(*,*,*,*),
U3(z) = Ω(u03(z), u23(z), u33(z)) = p20p30 z^(*,*,*,*) + p21p30 z^(*,*,*,τ2) + p20p31 z^(*,*,*,τ3) + p21p31 z^(*,*,*,min{τ2,τ3}).
Following the consecutive procedure, we obtain:

U1(z) = u11(z),
ψ(U1(z)) = p11 z^(*,τ1,*,*) + p12 z^(*,τ1,τ1,*) + p13 z^(*,τ1,τ1,τ1),
U1,2(z) = δ(ψ(U1(z)), U2(z)) = p11 z^(*,τ1,*,*) + p12 z^(*,τ1,τ1,*) + p13 z^(*,τ1,τ1,τ1),
ψ(U1,2(z)) = p12 z^(*,*,τ1,*) + p13 z^(*,*,τ1,τ1),
U1,2,3(z) = δ(ψ(U1,2(z)), U3(z)) = p12 (p20p30 z^(*,*,τ1,*) + p21p30 z^(*,*,τ1,τ1+τ2) + p20p31 z^(*,*,τ1,τ1+τ3) + p21p31 z^(*,*,τ1,τ1+min{τ2,τ3})) + p13 (p20p30 z^(*,*,τ1,τ1) + p21p30 z^(*,*,τ1,τ1) + p20p31 z^(*,*,τ1,τ1) + p21p31 z^(*,*,τ1,τ1)).
The operator ψ applied to U1,2,3(z) first replaces the third element of each vector with *, then removes the term p12p20p30 z^(*,*,*,*) and collects the like terms as follows:

ψ(U1,2,3(z)) = p13 (p20p30 + p21p30 + p20p31 + p21p31) z^(*,*,*,τ1) + p12p21p30 z^(*,*,*,τ1+τ2) + p12p20p31 z^(*,*,*,τ1+τ3) + p12p21p31 z^(*,*,*,τ1+min{τ2,τ3}) = p13 z^(*,*,*,τ1) + β z^(*,*,*,τ1+τ2) + γ z^(*,*,*,τ1+τ3),

where

β = p12p21, γ = p12p20p31 if τ3 > τ2,
β = p12p21p30, γ = p12p31 if τ2 > τ3.
Now using (21) one obtains

R(T*) = 0 if T* < τ1,
R(T*) = p13 if τ1 ≤ T* < τ1 + min{τ2,τ3},
R(T*) = p13 + p12p31 if τ1 + τ3 ≤ T* < τ1 + τ2 (τ2 > τ3),
R(T*) = p13 + p12p21 if τ1 + τ2 ≤ T* < τ1 + τ3 (τ3 > τ2),
R(T*) = p13 + p12(p21 + p20p31) if τ1 + max{τ2,τ3} ≤ T*,

and using (22) one obtains

D = τ1 + (p12p21τ2 + p12p20p31τ3)/(p13 + p12(p21 + p20p31)) if τ3 > τ2,
D = τ1 + (p12p31τ3 + p12p21p30τ2)/(p13 + p12(p21 + p20p31)) if τ2 > τ3.
Note that the operator ψ reduces the number of terms in U1(z) from 4 to 3, in U1,2(z) from 3 to 2 and in U1,2,3(z) from 8 to 3.
4. Optimization technique
Finding the optimal ME allocation in LCCS is a complicated combinatorial optimization problem having N^M possible solutions. An exhaustive examination of all these solutions is not
realistic even for a moderate number of positions and elements, considering reasonable time
limitations. As in most combinatorial optimization problems, the quality of a given solution is
the only information available during the search for the optimal solution. Therefore, a
heuristic search algorithm is needed which uses only estimates of solution quality and which
does not require derivative information to determine the next direction of the search.
The recently developed family of genetic algorithms is based on the simple principle of
evolutionary search in solution space. GAs have been proven to be effective optimization
tools for a large number of applications. Successful applications of GAs in reliability
engineering are reported in [11-22].
It is recognized that GAs have the theoretical property of global convergence [23].
Although their convergence reliability and convergence speed are conflicting requirements, for most practical, moderately sized combinatorial problems the proper choice of GA
parameters allows solutions close enough to the optimal one to be obtained in a short time.
4.1. Genetic Algorithm
Basic notions of GAs are originally inspired by biological genetics. GAs operate with
"chromosomal" representation of solutions, where crossover, mutation and selection
procedures are applied. "Chromosomal" representation requires the solution to be coded as a
finite length string. Unlike various constructive optimization algorithms that use sophisticated
methods to obtain a good singular solution, the GA deals with a set of solutions (population)
and tends to manipulate each solution in the simplest manner.
A brief introduction to genetic algorithms is presented in [24]. More detailed information
on GAs can be found in Goldberg’s comprehensive book [25], and recent developments in
GA theory and practice can be found in books [22, 23]. The steady state version of the GA
used in this paper was developed by Whitley [26]. As reported in [27] this version, named
GENITOR, outperforms the basic “generational” GA. The structure of the steady-state GA is as
follows:
1. Generate an initial population of Ns randomly constructed solutions (strings) and
evaluate their fitness. (Unlike the “generational” GA, the steady state GA performs the
evolution search within the same population improving its average fitness by replacing worst
solutions with better ones).
2. Select two solutions randomly and produce a new solution (offspring) using a crossover
procedure that provides inheritance of some basic properties of the parent strings in the
offspring. The probability of selecting the solution as a parent is proportional to the rank of
this solution. (All the solutions in the population are ranked by increasing order of their
fitness). Unlike the fitness-based parent selection scheme, the rank-based scheme reduces GA
dependence on the fitness function structure, which is especially important when constrained
optimization problems are considered [28].
3. Allow the offspring to mutate with given probability Pm. Mutation results in slight
changes in the offspring structure and maintains diversity of solutions. This procedure avoids
premature convergence to a local optimum and facilitates jumps in the solution space. The
positive changes in the solution code created by the mutation can be later propagated
throughout the population via crossovers.
4. Decode the offspring to obtain the objective function (fitness) values. These values are
a measure of quality, which is used in comparing different solutions.
5. Apply a selection procedure that compares the new offspring with the worst solution in
the population and selects the one that is better. The better solution joins the population and
the worse one is discarded. If the population contains equivalent solutions following the
selection process, redundancies are eliminated and, as a result, the population size decreases.
Note that each time the new solution has sufficient fitness to enter the population, it alters the
pool of prospective parent solutions and increases the average fitness of the current
population. The average fitness increases monotonically (or, in the worst case, does not vary)
during each genetic cycle (steps 2-5).
6. Generate new randomly constructed solutions to replenish the population after
repeating steps 2-5 Nrep times (or until the population contains a single solution or solutions
with equal quality). Run the new genetic cycle (return to step 2). In the beginning of a new
genetic cycle, the average fitness can decrease drastically due to inclusion of poor random
solutions into the population. These new solutions are necessary to bring into the population
new "genetic material" which widens the search space and, like a mutation operator, prevents
premature convergence to the local optimum.
7. Terminate the GA after Nc genetic cycles.
The final population contains the best solution achieved. It also contains different near-optimal solutions, which may be of interest in the decision-making process.
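A compressed Python sketch of the steady-state cycle described above; random_solution, crossover, mutate and fitness are placeholders for the problem-specific procedures of Section 4.2, the parameter defaults follow Section 5.1, and the rank-based selection is implemented here in the simplest possible way:

import random

def steady_state_ga(random_solution, crossover, mutate, fitness,
                    Ns=100, Nrep=2000, Nc=50, Pm=1.0):
    pop = [random_solution() for _ in range(Ns)]
    for _ in range(Nc):                                   # genetic cycles (step 7 limits their number)
        while len(pop) < Ns:                              # step 6: replenish with random solutions
            pop.append(random_solution())
        for _ in range(Nrep):
            pop.sort(key=fitness)                         # rank solutions by increasing fitness
            p1, p2 = random.choices(pop, weights=range(1, len(pop) + 1), k=2)  # rank-based selection
            child = crossover(p1, p2)                     # step 2
            if random.random() < Pm:                      # step 3
                child = mutate(child)
            if fitness(child) > fitness(pop[0]):          # step 5: replace the worst solution
                pop[0] = child
            pop = [list(s) for s in {tuple(s) for s in pop}]   # eliminate duplicate solutions
            if len(pop) == 1:
                break
    return max(pop, key=fitness)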
4.2. Solution representation and basic GA procedures
To apply the genetic algorithm to a specific problem, one must define a solution
representation, a decoding procedure, as well as specific crossover and mutation procedures.
As was shown in Section 2, any arbitrary M-length vector H with elements h(i) belonging to the range [1,N] represents a possible allocation of ME's. Such vectors can represent each one of the N^M possible different solutions.
In order to let the genetic algorithm search for a solution that meets requirement (7) or (8), the following universal expression of solution quality (fitness) is used:

F = μ + λ·min{0, R - R*} - θ·D,    (23)

where μ and λ are constants far greater than the maximal possible value of the system expected delay. Note that θ=0 and R*=1 corresponds to formulation 1 of the problem (7), and θ=1 and 0<R*<1 corresponds to formulation 2 (8). To estimate the LCCS reliability and expected
delay for the arbitrary vector H, one should apply the procedure presented in section 3.5.
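With the penalty form of Eq. (23) written above, the fitness evaluation can be wrapped as in the following sketch, where evaluate(H) stands for the reliability and expected-delay procedure of Section 3.5 and the constant values are arbitrary placeholders far greater than any realistic delay:

def make_fitness(evaluate, r_star, theta, lam=1.0e4, mu=1.0e4):
    # evaluate(H) must return the pair (R(T*), D) for the allocation vector H
    def fitness(H):
        R, D = evaluate(H)
        return mu + lam * min(0.0, R - r_star) - theta * D   # Eq. (23)
    return fitness

# formulation 1: theta=0, r_star=1;  formulation 2: theta=1, 0 < r_star < 1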
The random solution generation procedure provides solution feasibility by generating
vectors of random integer numbers within the range [1,N]. It can be seen that the following
crossover and mutation procedures also preserve solution feasibility.
The crossover operator for given parent vectors P1, P2 and the offspring vector O is
defined as follows: first P1 is copied to O, then all numbers of elements belonging to the
fragment between a and b positions of the vector P2 (where a and b are random values,
1a<bM) are copied to the corresponding positions of O. The following example illustrates
the crossover procedure for M=6, N=4 (the positions defining the fragment are a=3, b=5):
P1 = 2 4 1 4 2 3
P2 = 1 1 2 3 4 2
O  = 2 4 2 3 4 3.
One can see that the crossover procedure produces ME allocation O in which each ME is
located either where it was according to allocation P1 or where it was according to allocation
P2.
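A sketch of this crossover in Python (0-based indices are used, so reproducing the example above with a=3, b=5 corresponds to indices 2..4):

import random

def crossover(P1, P2):
    # copy P1, then overwrite a random fragment [a, b] with the corresponding genes of P2
    M = len(P1)
    a, b = sorted(random.sample(range(M), 2))
    O = list(P1)
    O[a:b + 1] = P2[a:b + 1]
    return O

# with a=3, b=5 (1-based): crossing (2,4,1,4,2,3) with (1,1,2,3,4,2) gives [2, 4, 2, 3, 4, 3]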
The mutation operator moves a randomly chosen ME to an adjacent position (if such a position exists) by modifying a randomly chosen element h(i) of H using the rule h(i)=max{h(i)-1, 1} or the rule h(i)=min{h(i)+1, N} with equal probability. The vector O in our example can take the following form after applying the mutation operator:
O=2 3 2 3 4 3.
Note that the mutation operator does not cause drastic changes in ME allocation. It
provides a local search in the space of solutions.
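The corresponding mutation sketch (N is the number of positions; the function name is illustrative):

import random

def mutate(H, N):
    # move a randomly chosen ME to an adjacent position, staying within [1, N]
    O = list(H)
    i = random.randrange(len(O))
    O[i] = min(max(O[i] + random.choice((-1, 1)), 1), N)
    return O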
5. Illustrative example
Consider a LCCS consisting of 10 positions (N=8) and M=8 ME's. For the sake of simplicity, assume that the probabilistic distributions of ME states do not depend on the ME allocation. These distributions, as well as the delay time of each ME, are presented in Table 1.
The considered example corresponds to the problem of optimal allocation of 8
retransmitters with different technical characteristics among 8 potential positions with
identical signal propagation conditions.
The best ME allocation obtained for the first formulation of the problem for T*=3
(solution A) is presented in Fig. 1 (in each figure ME's are presented in their maximal
possible state corresponding to the maximal signal span). The probability that the system
delay is not greater than 3 for solution A is R(3)=0.808 and the expected delay is D=2.755. Note that in this solution some positions remain empty while others contain more than one ME. To
compare this solution with the best possible even ME allocation, the solution was obtained for
the constrained allocation problem in which allocation of no more than one ME in each
position is allowed (solution B). This solution is presented in Fig. 2. One can see that the
reliability of the even allocation solution, R(3)=0.753, is smaller than that of the free allocation
solution. Observe that while solution A has greater reliability than solution B, the latter has a
lower expected delay of D=2.657.
The solution of formulation 2 of the allocation problem for R*=0.75 and T*=3 is
presented in Fig. 3 (solution C). In this solution we obtained D=2.368 while R(3)=0.751>R*.
The even ME allocation solution obtained for formulation 2 coincides with solution B.
The LCCS reliability as a function of the maximal allowable delay is presented in Fig. 4
for all of the obtained solutions. The R(T) function represents the probabilistic delay distribution in the LCCS. One can see that the ME allocation affects not only the LCCS reliability for a given T*, but also the minimal possible delay (the minimal T* for which R(T*)>0) and the maximal finite delay (the maximal T* for which R(T*)<R(∞)).
The presented example shows that the proper allocation of retransmitters can improve the
reliability of the radio relay system without any additional investment in retransmitter
reliability improvement.
5.1. Computational Effort and Algorithm Consistency
The C language realization of the algorithm was tested on a Pentium II PC. The chosen
parameters of the GA were NS=100, Nrep=2000, Nc=50 and Pm=1. The time taken to obtain the best-in-population solution (time of the last modification of the best solution obtained) for problems with N≤10, M≤10 did not exceed 60 seconds.
To demonstrate the consistency of the suggested algorithm, we repeated the GA 20 times
with different starting solutions (initial population) for both problem formulations (solutions A
and C). The coefficient of variation was calculated for fitness values of best-in-population
solutions obtained during the genetic search by different GA search processes. The variation
of this index during the GA procedure is presented in Fig. 5. One can see that the standard
deviation of the final solution fitness does not exceed 0.1 % of its average value.
References
1 D. Chiang, S. Niu, "Reliability of consecutive-k-out-of-n:F systems", IEEE Transactions on
Reliability, vol. R-30, 1981, pp. 87-89
2 R. Bollinger, "Direct computations for consecutive-k-out-of-n:F systems", IEEE
Transactions on Reliability, vol. R-31, 1982, pp. 444-446.
3 F. Hwang, Y. Yao, "Multistate consecutively-connected systems ", IEEE Transactions on
Reliability, vol. 38, 1989, pp. 472-474.
4 J. Shanthikumar, "A recursive algorithm to evaluate the reliability of a consecutive-k-out-of-n:F system", IEEE Transactions on Reliability, vol. R-31, 1982, pp. 442-443.
5 J. Shanthikumar, "Reliability of systems with consecutive minimal cutsets ", IEEE
Transactions on Reliability, vol. R-36, 1987, pp. 546-550.
6 A. Kossow, W. Preuss, "Reliability of linear consecutively-connected systems with
multistate components", IEEE Transactions on Reliability, vol. 44, 1995, pp. 518-522.
7 M. Zuo, M. Liang, "Reliability of multistate consecutively-connected systems", Reliability
Engineering & System Safety, vol. 44, 1994, pp. 173-176.
8 J. Malinowski, W. Preuss, "Reliability increase of consecutive-k-out-of-n:F and related
systems through components' rearrangement", Microelectronics and Reliability, vol. 36, 1996,
pp. 1417-1423.
9 I. A. Ushakov, Universal generating function, Sov. J. Computing System Science, vol. 24,
No 5, 1986, pp. 118-129.
10 G. Levitin, A. Lisnianski, "Importance and sensitivity analysis of multi-state systems using
the universal generating function method", Reliability Engineering and System Safety, 65,
1999, pp. 271-282.
11 A. Lisnianski, G. Levitin, H. Ben Haim, "Structure optimization of multi-state system with
time redundancy", Reliability Engineering & System Safety, vol. 67, 2000, pp. 103-112.
12 G. Levitin, "Redundancy optimization for multi-state system with fixed resource
requirements and unreliable sources", to appear in IEEE Transactions on Reliability, vol. 49,
2000.
13 G. Levitin, A. Lisnianski, "Reliability optimization for weighted voting system",
Reliability Engineering & System Safety, vol. 71, 2001, pp. 131-138.
14 G. Levitin, A. Lisnianski, H. Ben-Haim, D. Elmakis, "Redundancy Optimization for
Series-parallel Multi-state Systems", IEEE Transactions on Reliability, vol. 47, 1998, pp.
165-172.
15 T. Yokota, M. Gen, K. Ida, "System reliability optimization problems with several failure
modes by genetic algorithm", Japanese Journal of Fuzzy Theory and Systems, Vol. 7, No. 1,
1995, pp. 119-132.
16 L. Painton and J. Campbell, "Genetic algorithm in optimization of system reliability",
IEEE Trans. Reliability, 44, 1995, pp. 172-178.
17 D. Coit and A. Smith, "Reliability optimization of series-parallel systems using genetic
algorithm", IEEE Trans. Reliability, 45, 1996, pp. 254-266.
18 D. Coit and A. Smith, "Redundancy allocation to maximize a lower percentile of the
system time-to-failure distribution", IEEE Trans. Reliability, 47, 1998, pp. 79-87.
19 Y. Hsieh, T. Chen, D. Bricker, "Genetic algorithms for reliability design problems",
Microelectronics and Reliability, 38, 1998, pp. 1599-1605.
20 J. Yang, M. Hwang, T. Sung, Y. Jin, "Application of genetic algorithm for reliability
allocation in nuclear power plant", Reliability Engineering & System Safety, 65, 1999, pp.
229-238.
21 M. Gen and J. Kim, "GA-based reliability design: state-of-the-art survey", Computers &
Ind. Engng, 37, 1999, pp. 151-155.
22 M. Gen and R. Cheng, Genetic Algorithms and engineering design, John Wiley & Sons,
New York, 1997.
23 T. Bck, Evolutionary Algorithms in Theory and Practice. Evolution Strategies.
Evolutionary Programming. Genetic Algorithms, Oxford University Press, 1996.
24 S. Austin, "An introduction to genetic algorithms", AI Expert, 5, 1990, pp. 49-53.
25 D. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison
Wesley, Reading, MA, 1989.
26 D. Whitley, The GENITOR Algorithm and Selective Pressure: Why Rank-Based
Allocation of Reproductive Trials is Best. Proc. 3rd International Conf. on Genetic
Algorithms. D. Schaffer, ed., pp. 116-121. Morgan Kaufmann, 1989.
27 G. Syswerda, “A study of reproduction in generational and steady-state genetic algorithms”,
in G.J.E. Rawlings (ed.), Foundations of Genetic Algorithms, Morgan Kaufmann, San Mateo,
CA, 1991.
28 D. Powell, M. Skolnick, “Using genetic algorithms in engineering design optimization with non-linear constraints”, Proc. of the Fifth Int. Conf. on Genetic Algorithms, Morgan Kaufmann, 1993, pp. 424-431.
Table 1. Parameters of ME's (state probability distribution and retransmission delay τi)

No of ME, i   Si=0   Si=1   Si=2   Si=3   Si=4   Si=5   τi
1             0.13   0.11   0.76   0.00   0.00   0.00   0.6
2             0.17   0.00   0.83   0.00   0.00   0.00   0.8
3             0.02   0.80   0.18   0.00   0.00   0.00   1.2
4             0.16   0.42   0.30   0.12   0.00   0.00   0.8
5             0.08   0.44   0.40   0.08   0.00   0.00   1.0
6             0.09   0.30   0.07   0.25   0.29   0.00   1.1
7             0.06   0.00   0.22   0.53   0.19   0.00   0.7
8             0.10   0.00   0.00   0.29   0.28   0.33   1.3
Figure Captions
Figure 1: ME allocation solution obtained for the first formulation of the problem.
Figure 2: ME allocation solution with even ME distribution among LCCS positions obtained
for the first formulation of the problem.
Figure 3: ME allocation solution obtained for the second formulation of the problem.
Figure 4: LCCS reliability as function of maximal allowable delay.
Figure 5: CV of best-in-population solution fitness obtained by 20 different search processes as a function of the number of crossovers.
About the Author:
Gregory Levitin received the BS and MS degrees in Electrical Engineering from Kharkov
Politechnical Institute (Ukraine) in 1982, the BS degree in Mathematics from Kharkov State
University in 1986 and the PhD degree in Industrial Automation from the Moscow Research Institute of Metalworking Machines in 1989. From 1982 to 1990 he worked as a software engineer and
research associate in the field of industrial automation. From 1991 to 1993 he worked at the
Technion-Israel Institute of Technology as a postdoctoral fellow at the faculty of Industrial
Engineering and Management. In 1993 he joined the Israel Electric Corporation. Dr. Levitin
is presently an engineer-expert at the Reliability Department of the I.E.C. and adjunct senior
lecturer at the Technion. His current interests are in operations research and artificial
intelligence applications in reliability and power engineering. He is a senior member of the IEEE.