REDUNDANCY OPTIMIZATION FOR MULTI-STATE SYSTEM
WITH FIXED RESOURCE REQUIREMENTS AND UNRELIABLE
SOURCES
Gregory Levitin, Senior Member IEEE
The Israel Electric Corporation Ltd., Haifa, Israel
E-mail: levitin@iec.co.il
Key Words - Redundancy optimization, Fixed resource consumption,
Universal generating function, Genetic algorithm.
Summary & Conclusions - This paper considers a redundancy optimization problem for a multi-state system consisting of elements that consume a fixed amount of resources to perform their task and of a number of resource generating subsystems. The presented algorithm finds the optimal system structure subject to availability constraints by choosing system elements from a list of available equipment. Each element is characterized by its productivity, availability and cost. Elements of the main producing subsystem are additionally characterized by their fixed resource consumption. The objective is to minimize the total investment cost while satisfying demand, represented by a cumulative demand curve, with a given probability. A genetic algorithm is used as the optimization tool. A procedure based on the universal generating function evaluates system availability under the assumption that the working elements of the main producing subsystem are chosen in such a way that the total system performance rate is maximal under the given resource constraints. Illustrative examples demonstrate how the optimal structures of a simple two-level system are obtained for different availability constraints.
1. INTRODUCTION
Acronyms¹

PRD          performance rate distribution
MPS          main producing subsystem
RGS          resource generating subsystem
u-function   universal moment generating function
GA           genetic algorithm

¹ The singular and plural of acronyms are always spelled the same.
The problem of minimizing total investment cost subject to reliability or availability constraints is well known as the redundancy optimization problem. It has been addressed in a number of studies, e.g. [1], where binary-state reliability was considered. When applied to a wide variety of systems (production, power generation, data transmission, etc.), reliability is considered a measure of the ability of the system to meet the demand; the effect of an outage differs for units with different nominal generating or transmitting capacity and also depends on the demand distribution. Therefore, the capacities of system components should be taken into account, as well as the demand distribution curve. A nonhomogeneous system containing elements with different capacities may be considered a multi-state system because its components can have different performance levels depending on the state of the elements they contain. For such a system, each component can be characterized by its PRD. The redundancy optimization problem for such a system is, in fact, a problem of system structure optimization. This problem for a series-parallel multi-state system was formulated in [2]. An algorithm for system structure optimization subject to availability constraints was suggested in [3]. In this algorithm
the appropriate versions of system elements, as well as the number of parallel elements, are to be chosen from a list of available products for each type of component. Each element is characterized by its capacity (productivity), availability and cost. The objective is to meet the demand (represented by a demand distribution curve) with the desired level of system availability while minimizing the total system cost. This approach allows the reliability engineer to solve practical problems in which a variety of products exist and in which analytical dependencies are unavailable for the cost of system components.
The extension of the algorithm presented in [4] solves the system structure optimization problem without limiting the diversity of the versions of elements connected in parallel; hence, both series-parallel and parallel-series systems (according to the classification given in [1]) can be optimized.
While the suggested algorithms cover a wide range of series-parallel systems, they are restricted to systems with continuous flows. Such systems are comprised of elements that can process any amount of product (resource) within their capacity (productivity) limits. In this case, the minimal amount of product which can flow through the system is not limited.
In practice, there are technical elements that can work only if the amount of some resource is not lower than a specified limit. If this requirement is not met, the element fails to work. An example of such a situation is a control system that stops the controlled process if a decrease in its computational resources does not allow the necessary information to be processed within the required cycle time. Another example is a metalworking machine that cannot perform its task if the flow of supplied coolant is less than required. In both examples, the amount of resource necessary to provide the normal operation of a given composition of main producing units (controlled processes or machines) is fixed. Any deficit of the resource makes it impossible for all the units in the composition to operate together (in parallel), because no unit can reduce the amount of resource it consumes. Therefore, any resource deficit leads to turning off some producing units.
This paper considers systems containing producing elements with fixed resource consumption. The systems consist of a number of RGS that supply different resources to the MPS. Each subsystem consists of different elements connected in parallel. Each element of the MPS can perform only by consuming a fixed amount of resources. If, following failures in the RGS, there are not enough resources to allow all the available producing elements to work, some of these elements must be turned off. We assume that the choice of the working MPS elements is made in such a way as to maximize the total performance rate of the MPS under the given resource constraints.
The problem is to find the minimal cost RGS and MPS structure that provides the desired level of the entire system's ability to meet the demand.
Although only a two-level RGS-MPS hierarchy is considered in this paper, the method can easily be extended to systems with a multilevel hierarchy. When solving the problem for multilevel systems, the entire RGS-MPS system (with the PRD defined by its structure) may in turn be considered as one of the RGS for a higher-level MPS.
As in [3] and [4], a genetic algorithm is used to solve the combinatorial optimization problem; it operates only with values of solution quality and does not require derivative information. The solution quality index comprises both availability and cost estimates.
Illustrative examples are presented in which the optimal structures of a simple two-level system are obtained for different availability constraints.
Notation

M       number of different resources
Hm      maximal permissible number of parallel elements in m-th RGS (1 ≤ m ≤ M)
H0      maximal permissible number of parallel elements in MPS
Qm      number of available versions for elements of m-th RGS (1 ≤ m ≤ M)
Q0      number of available versions for elements of MPS
Amj     availability of element of version j belonging to m-th RGS (1 ≤ m ≤ M)
A0j     availability of MPS element of version j
gmj     generating capacity of element of version j belonging to m-th RGS (1 ≤ m ≤ M)
Gmj     amount of resource m required for MPS element of version j
wj      productivity of MPS element of version j
hm(j)   number of version of j-th element belonging to m-th RGS (1 ≤ m ≤ M, 1 ≤ j ≤ Hm)
Lm      number of different possible levels of resource m generation
Bm      discrete random variable which represents the available amount of resource m and can have Lm possible different values βmi (1 ≤ i ≤ Lm)
γmi     probability that Bm = βmi
Cmj     cost of element of version j belonging to m-th RGS (1 ≤ m ≤ M)
C0j     cost of MPS element of version j
K       number of demand levels considered
W       vector of demand levels, W = {Wk}, 1 ≤ k ≤ K
T       vector of intervals corresponding to the demand levels, T = {Tk}, 1 ≤ k ≤ K
Wsys    discrete random variable representing the entire system productivity
E       system availability
E0      minimal permissible system availability
2. PROBLEM FORMULATION AND DESCRIPTION OF SYSTEM MODEL
A system consisting of an MPS and M different RGS is considered (Fig. 1). The MPS can have up to H0 different elements connected in parallel. Each producing element consumes resources supplied by the RGS and produces a final product. To distinguish among elements with different characteristics, the notion of element version is introduced. There are Q0 versions of producing elements available. Each version j (1 ≤ j ≤ Q0) is characterized by its availability A0j, performance rate (productivity or capacity) wj, cost C0j and vector of required resources Gj = {Gmj}, 1 ≤ m ≤ M. An MPS element of version j can work only if it receives the amount of each resource defined by the vector Gj.
Each resource m is generated by the corresponding RGS, which can contain up to Hm parallel resource generating elements of different versions. Each version of an element of the RGS supplying the m-th resource is characterized by its availability, productivity and cost.
All the properties of the j-th element of the m-th subsystem can be obtained from a list of MPS and RGS elements available in the market according to the version number hm(j) chosen for this element. The structure of subsystem m is defined by the version numbers hm(j) (1 ≤ j ≤ Hm) of the elements chosen to constitute this subsystem. The vector h = {hm(j)}, (0 ≤ m ≤ M, 1 ≤ j ≤ Hm), defines the entire system structure. In order to allow the number of elements in each subsystem to vary, we introduce "dummy" elements of version 0. Such elements have productivity and cost equal to 0. Therefore, all the elements of the vector h may vary in the range 0 ≤ hm(j) ≤ Qm.
For a given vector h, the total cost of the system can be calculated as

C = \sum_{m=0}^{M} \sum_{j=1}^{H_m} C_{m h_m(j)}.    (1)
The overall probability E that the demand will be met is used as a measure of the entire system availability. If the operation period is divided into K intervals, each with duration Tk and demand level Wk (k = 1,...,K), then the value of the E index can be calculated as

E = \left( \sum_{k=1}^{K} T_k \right)^{-1} \sum_{k=1}^{K} \Pr(W_{sys} \geq W_k) \, T_k,    (2)

where Pr(Wsys ≥ Wk) is the probability that the total system performance rate Wsys is not lower than the demand level Wk. The vectors W = {Wk} and T = {Tk}, 1 ≤ k ≤ K, define the cumulative demand curve.
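As a worked instance of (2), consider the cumulative demand curve of Table 3 in Section 5. Since the intervals Tk there sum to 100% of the operation period, the index reduces to a weighted sum of four probabilities:

E = 0.60 \Pr(W_{sys} \geq 65) + 0.10 \Pr(W_{sys} \geq 48) + 0.10 \Pr(W_{sys} \geq 25) + 0.20 \Pr(W_{sys} \geq 8).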
Now we can formulate the problem of system structure optimization as follows: find the minimal cost system configuration h* that provides the required availability level E0:

h^* = \arg \{ C(h) \to \min \mid E(h, W, T) \geq E_0 \}.    (3)
3. SYSTEM AVAILABILITY ESTIMATION METHOD
The procedure used in this paper for evaluating system availability is based on
the universal moment generating function technique, which was introduced in [5] and
was proven to be very effective for high dimension combinatorial problems. The
detailed description of universal z-transform applied to system availability estimation
is presented in [3]. A brief introduction to this technique is given here:
The universal moment generating function (u-transform) of a discrete variable X is defined as a polynomial

u(z) = \sum_{j=1}^{J} p_j z^{x_j},    (4)

where the discrete random variable X has J possible values and pj is the probability that X is equal to xj. In our case, the polynomial u(z) can define a PRD, i.e. it represents all the possible states of the system (or element) by relating the probability pj of each state to the performance xj of the system in that state.
To evaluate the probability that the random variable X is not less than some value X*, the coefficients of the polynomial u(z) should be summed over every term with xj ≥ X*:

\Pr(X \geq X^*) = \sum_{x_j \geq X^*} p_j.    (5)

This can be done using the following δ operator over u(z):

\delta(u(z), X^*) = \delta\Big( \sum_{j=1}^{J} p_j z^{x_j}, X^* \Big) = \sum_{j=1}^{J} \delta(p_j z^{x_j}, X^*),    (6)

where, for an individual term p_j z^{x_j},

\delta(p_j z^{x_j}, X^*) = p_j \cdot 1(x_j \geq X^*), \qquad 1(\mathrm{True}) = 1, \quad 1(\mathrm{False}) = 0.    (7)
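As an illustration, a minimal C sketch of the δ operator follows; it assumes a u-function stored as parallel arrays of term probabilities and performance values (this representation and the names are illustrative, not taken from the paper's C implementation).

/* delta operator (eqs. 5-7): sum the probabilities of all terms of u(z)
   whose performance value is not less than the threshold X*.           */
double delta_op(const double *p, const double *x, int J, double x_star)
{
    double sum = 0.0;
    for (int j = 0; j < J; j++)
        if (x[j] >= x_star)        /* indicator 1(x_j >= X*) */
            sum += p[j];
    return sum;                    /* = Pr(X >= X*)          */
}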
Consider single elements with total failures. Since the j-th element belonging to the m-th RGS has nominal performance gmhm(j) and availability Amhm(j) (corresponding to the chosen version hm(j)),

\Pr(X = g_{m h_m(j)}) = A_{m h_m(j)}, \qquad \Pr(X = 0) = 1 - A_{m h_m(j)}.    (8)

The u-function of such an element has only two terms and can be defined as

u_{mj}(z) = (1 - A_{m h_m(j)}) z^0 + A_{m h_m(j)} z^{g_{m h_m(j)}}.    (9)
(Note that the u-function of a "dummy" element with performance 0 does not depend on its availability and is equal to z^0 = 1.)
The function um(z) for the entire subsystem m should represent the probabilistic distribution of the amount Bm of the m-th resource which can be supplied to the MPS: each amount βmi can be supplied with probability γmi.
If subsystem m contains a single element with availability am1 and capacity gm1, the amount of resource m can have two different levels. The distribution of Bm is the same as the distribution of the element capacity:

\beta_{m1} = 0, \quad \beta_{m2} = g_{m1}, \qquad \gamma_{m1} = \Pr(B_m = \beta_{m1}) = 1 - a_{m1}, \quad \gamma_{m2} = \Pr(B_m = \beta_{m2}) = a_{m1}.    (10)
If subsystem m contains elements connected in parallel, its total capacity at each moment is equal to the sum of the capacities of its elements. For example, if the first element has capacity gm1 with probability pm1 and the second element has capacity gm2 with probability pm2, the total capacity of the component containing these two elements will be gm1 + gm2 with probability pm1 pm2, which corresponds to the term p_{m1} p_{m2} z^{g_{m1} + g_{m2}} in the u-function representing the entire component capacity. In the general case, the u-function of elements connected in parallel can be defined using the Ω operator:

\Omega(u_1(z), u_2(z), \ldots, u_n(z)) = \prod_{i=1}^{n} u_i(z),    (11)

where the Ω operator is the product of the polynomials representing the individual u-functions.
Consider, for example, subsystem m consisting of two elements with individual availabilities am1 and am2 and capacities gm1 and gm2, respectively. Having the u-functions representing the capacity distributions of the individual elements,

u_{m1}(z) = (1 - a_{m1}) z^0 + a_{m1} z^{g_{m1}}, \qquad u_{m2}(z) = (1 - a_{m2}) z^0 + a_{m2} z^{g_{m2}},    (12)

one can obtain the u-function of the entire subsystem as

u_m(z) = \Omega(u_{m1}(z), u_{m2}(z)) = [(1 - a_{m1}) z^0 + a_{m1} z^{g_{m1}}] [(1 - a_{m2}) z^0 + a_{m2} z^{g_{m2}}]
       = (1 - a_{m1})(1 - a_{m2}) z^0 + a_{m1} (1 - a_{m2}) z^{g_{m1}} + a_{m2} (1 - a_{m1}) z^{g_{m2}} + a_{m1} a_{m2} z^{g_{m1} + g_{m2}},    (13)

which corresponds to the following distribution of Bm:

\beta_{m1} = 0, \quad \beta_{m2} = g_{m1}, \quad \beta_{m3} = g_{m2}, \quad \beta_{m4} = g_{m1} + g_{m2},
\gamma_{m1} = (1 - a_{m1})(1 - a_{m2}), \quad \gamma_{m2} = a_{m1} (1 - a_{m2}), \quad \gamma_{m3} = a_{m2} (1 - a_{m1}), \quad \gamma_{m4} = a_{m1} a_{m2}.    (14)
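The Ω operator of (11) is a plain polynomial product. The following C sketch multiplies two u-functions held as parallel arrays and, in main(), reproduces the four terms of (13) using the availability and capacity values of RGS 1, versions 1 and 2, from Table 2 (Section 5). Collecting terms with equal exponents is omitted for brevity, and all names are illustrative assumptions.

#include <stdio.h>

/* Omega operator (eq. 11) for two u-functions stored as parallel arrays:
   probabilities multiply, capacities of parallel elements add.          */
int omega_op(const double *p1, const double *x1, int n1,
             const double *p2, const double *x2, int n2,
             double *pr, double *xr)            /* result arrays, size n1*n2 */
{
    int k = 0;
    for (int i = 0; i < n1; i++)
        for (int j = 0; j < n2; j++) {
            pr[k] = p1[i] * p2[j];
            xr[k] = x1[i] + x2[j];
            k++;
        }
    return k;
}

int main(void)
{
    /* two parallel elements of RGS 1, versions 1 and 2 (Table 2) */
    double p1[2] = {1.0 - 0.980, 0.980}, x1[2] = {0.0, 1.8};
    double p2[2] = {1.0 - 0.977, 0.977}, x2[2] = {0.0, 1.0};
    double pr[4], xr[4];
    int n = omega_op(p1, x1, 2, p2, x2, 2, pr, xr);
    for (int i = 0; i < n; i++)
        printf("%.6f * z^%.2f\n", pr[i], xr[i]);   /* the four terms of (13) */
    return 0;
}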
The u-function representing the PRD of the m-th RGS containing Hm elements with individual u-functions umj(z), 1 ≤ j ≤ Hm, can be obtained as follows:

u_m(z) = \Omega\big(u_{m1}(z), \ldots, u_{m H_m}(z)\big) = \prod_{j=1}^{H_m} \Big[ (1 - A_{m h_m(j)}) z^0 + A_{m h_m(j)} z^{g_{m h_m(j)}} \Big] = \sum_{i=1}^{L_m} \gamma_{mi} z^{\beta_{mi}},    (15)

where Lm is the number of different levels of resource m generation.
The same Ω operator can be used to obtain the u-function u0(z) representing the maximal PRD of the MPS. In this case, L0 is the number of different levels of MPS productivity.
The function u0(z) represents the distribution of system productivity defined only by the availability of the MPS elements. This distribution corresponds to situations in which there are no limitations on the required resources.
3.1. PRD of System Containing Identical Elements in MPS.
If the producing subsystem contains only identical elements, the number of elements that can work in parallel when the available amount of the m-th resource is βmi is ⌊βmi/Gm⌋, which corresponds to the total system productivity θmi = w⌊βmi/Gm⌋, where w and Gm are, respectively, the productivity of a single MPS element and the amount of resource m required by this element (wj = w, Gmj = Gm for 1 ≤ j ≤ H0). Note that θmi represents the total theoretical productivity which can be achieved using the available resource m by an unlimited number of producing elements. In terms of the entire system output, the u-function of the m-th RGS can be obtained in the following form:

u^*_m(z) = \sum_{i=1}^{L_m} \gamma_{mi} z^{\theta_{mi}} = \sum_{i=1}^{L_m} \gamma_{mi} z^{w \lfloor \beta_{mi} / G_m \rfloor}.    (16)
The RGS that can support the work of the smallest number of producing units becomes the bottleneck of the system. Therefore, this RGS defines the total system capacity. To calculate the u-function for a system containing two different RGS, the ψ operator should be used. This operator for a pair of RGS is defined as follows:

\psi(u^*_1(z), u^*_2(z)) = \psi\Big( \sum_{i=1}^{L_1} \gamma_{1i} z^{\theta_{1i}}, \sum_{j=1}^{L_2} \gamma_{2j} z^{\theta_{2j}} \Big) = \sum_{i=1}^{L_1} \sum_{j=1}^{L_2} \gamma_{1i} \gamma_{2j} z^{\min\{\theta_{1i}, \theta_{2j}\}}.    (17)

Successively applying the ψ operator using the rule

\psi(u^*_1(z), u^*_2(z), \ldots, u^*_M(z)) = \psi\big( \psi(u^*_1(z), u^*_2(z)), \ldots, u^*_M(z) \big),    (18)

one can obtain the u-function ur(z) for all the resource generating subsystems. The function ur(z) represents the entire system PRD for the case of an unlimited number of producing elements.
The entire system productivity is equal to the minimum of the total theoretical productivity which can be achieved using the available resources and the total productivity of the available producing elements. To obtain the PRD of the entire system, taking into account the availability of the MPS elements, the same ψ operator should be applied:

u_{sys}(z) = \psi(u_0(z), u_r(z)) = \psi(u_0(z), u^*_1(z), u^*_2(z), \ldots, u^*_M(z)).    (19)
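A C sketch of the ψ operator, again over the parallel-array representation assumed above; the bottleneck behaviour appears as a minimum over the performance values:

/* psi operator (eq. 17): combine two u-functions by multiplying the term
   probabilities and taking the minimum of the performance values.        */
int psi_op(const double *p1, const double *x1, int n1,
           const double *p2, const double *x2, int n2,
           double *pr, double *xr)              /* result arrays, size n1*n2 */
{
    int k = 0;
    for (int i = 0; i < n1; i++)
        for (int j = 0; j < n2; j++) {
            pr[k] = p1[i] * p2[j];
            xr[k] = (x1[i] < x2[j]) ? x1[i] : x2[j];   /* min{theta_1i, theta_2j} */
            k++;
        }
    return k;
}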
3.2. PRD of System Containing Different Elements in MPS.
If the MPS has N different elements, there are 2^N possible states of the element availability composition. Each state may be characterized by the set Sn (1 ≤ n ≤ 2^N) of available elements. The probability of state n can be evaluated as follows:

p_n = \prod_{j \in S_n} A_{0j} \prod_{i \notin S_n} (1 - A_{0i});    (20)

the maximal possible productivity of the MPS and the corresponding resource consumption in state n are \sum_{j \in S_n} w_j and \sum_{j \in S_n} G_{mj} (1 ≤ m ≤ M), respectively.
The amount of resources generated by the RGS is defined by their PRD. It is not always sufficient to provide the maximal possible productivity of the MPS in state n. In order to define the maximal possible productivity ρ of the MPS under the resource constraints, one has to solve the following integer linear programming problem:

\rho(\beta_1, \beta_2, \ldots, \beta_M, S_n) = \max \sum_{j \in S_n} w_j y_j
\text{subject to} \quad \sum_{j \in S_n} G_{mj} y_j \leq \beta_m, \; 1 \leq m \leq M, \qquad y_j \in \{0, 1\},    (21)

where βm is the available amount of the m-th resource, yj = 1 if element j works (producing wj units of the main product and consuming Gmj of each resource m, 1 ≤ m ≤ M) and yj = 0 if the element is turned off.
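In the illustrative example the MPS has at most six elements, so (21) can be solved by enumerating all on/off vectors y. The following C sketch does exactly that; the flattened layout of G and every name are assumptions for illustration, not the exact algorithm or heuristic used in the paper.

/* Eq. (21): maximal MPS productivity under resource constraints,
   solved by exhaustive enumeration of the binary vector y.        */
double max_productivity(int n_elem, int M,
                        const double *w,     /* w[j]: productivity of element j         */
                        const double *G,     /* G[m*n_elem + j]: resource m needed by j */
                        const double *beta)  /* beta[m]: available amount of resource m */
{
    double best = 0.0;
    for (unsigned mask = 1; mask < (1u << n_elem); mask++) {
        int feasible = 1;
        for (int m = 0; m < M && feasible; m++) {
            double used = 0.0;
            for (int j = 0; j < n_elem; j++)
                if (mask & (1u << j))
                    used += G[m * n_elem + j];
            if (used > beta[m])              /* resource constraint violated */
                feasible = 0;
        }
        if (!feasible)
            continue;
        double total = 0.0;
        for (int j = 0; j < n_elem; j++)
            if (mask & (1u << j))
                total += w[j];
        if (total > best)
            best = total;
    }
    return best;
}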
The PRD of the entire system can be defined by evaluating all the possible combinations of available resources generated by the RGS and states of the MPS. For each combination, the solution of the above-formulated optimization problem defines the system productivity. The u-function representing the PRD of the entire system can be defined as follows:

u_{sys}(z) = \sum_{i_1=1}^{L_1} \sum_{i_2=1}^{L_2} \cdots \sum_{i_M=1}^{L_M} \sum_{n=1}^{2^N} \Big( \prod_{m=1}^{M} \gamma_{m i_m} \Big) p_n \, z^{\rho(\beta_{1 i_1}, \beta_{2 i_2}, \ldots, \beta_{M i_M}, S_n)}.    (22)
To evaluate the E index for the entire system with its PRD, the probability that the total productivity of the system is not less than a specified demand level W must be calculated:

\Pr(W_{sys} \geq W) = \delta(u_{sys}(z), W).    (23)
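Combining (23) with (2), the availability index can be computed directly from the terms of usys(z) and the cumulative demand curve. A C sketch, under the same illustrative array representation:

/* Eqs. (23) and (2): availability index E from the terms of u_sys(z)
   (probabilities pr[], productivities xr[]) and the demand curve (W[], T[]). */
double system_availability(const double *pr, const double *xr, int n_terms,
                           const double *W, const double *T, int K)
{
    double num = 0.0, denom = 0.0;
    for (int k = 0; k < K; k++) {
        double p_meet = 0.0;               /* delta(u_sys(z), W_k) */
        for (int i = 0; i < n_terms; i++)
            if (xr[i] >= W[k])
                p_meet += pr[i];
        num   += p_meet * T[k];
        denom += T[k];
    }
    return num / denom;
}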
To obtain the system PRD, its productivity should be determined for each unique combination of available resources and for each unique state of the MPS. From equation (22) one can see that, in the general case, the total number of integer linear programs to be solved to obtain u_{sys}(z) is 2^N \prod_{m=1}^{M} L_m. In practice, the number of programs to be solved can be drastically reduced using the following rules.
1. If for the given vector (β1,...,βm,...,βM) and for the given set Sn of MPS elements there exists m for which βm < min_{j∈Sn} Gmj, the system productivity ρ(β1,...,βM, Sn) is equal to 0.
2. If for each element j from Sn there exists m for which βm < Gmj, the system productivity ρ(β1,...,βM, Sn) is equal to 0.
3. If there exists an element j∈Sn for which βm < Gmj for some m, then in program (21) yj must be zeroed. In this case the dimension of the integer program can be reduced by removing all such elements from Sn.
4. If for the given vector (β1,...,βm,...,βM) and for the given set Sn the solution of the integer program (21) determines the subset S*n of turned-on MPS elements (j∈S*n if yj = 1), the same solution must be optimal for the MPS states characterized by any set S'n such that S*n ⊆ S'n ⊆ Sn. This allows one to avoid solving many integer programs by assigning the value of ρ(β1,...,βM, Sn) to all ρ(β1,...,βM, S'n).
It should be noted that for systems with a large number of elements and/or resources, the computational effort required to solve the redundancy optimization problem can be unaffordable even when the presented complexity reduction technique is applied. In this case, the use of fast heuristics for solving the integer programs is recommended instead of exact algorithms.
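Rules 1-3 amount to a cheap feasibility pre-check performed before the integer program is solved. A C sketch, with the active-flag representation and all names being illustrative assumptions:

/* Reduction rules 1-3: remove elements that cannot be turned on under the
   available resources (rule 3); if none remains, the productivity is 0
   (rules 1 and 2) and program (21) need not be solved.                   */
int prune_state(int n_elem, int M,
                const double *G,       /* G[m*n_elem + j]            */
                const double *beta,    /* available resource amounts */
                int *active)           /* in/out: 1 if element j is still a candidate */
{
    int remaining = 0;
    for (int j = 0; j < n_elem; j++) {
        if (!active[j])
            continue;
        for (int m = 0; m < M; m++)
            if (beta[m] < G[m * n_elem + j]) {   /* element j cannot work */
                active[j] = 0;
                break;
            }
        remaining += active[j];
    }
    return remaining == 0;   /* 1 => system productivity is 0 for this state */
}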
4. OPTIMIZATION TECHNIQUE
To solve the optimization problem formulated in (3), we use the same approach as in [3,4]. It is based on a GA, a technique inspired by the principle of evolution. A comprehensive description of GA theory and its applications in engineering can be found in [6,7]. Applications of GA in reliability optimization are reported in [3,4,7-26].
Unlike various constructive optimization algorithms, which use sophisticated methods to obtain a good single solution, the GA deals with a set of solutions (a population) and tends to manipulate each solution in the simplest way.
"Chromosomal" representation requires the solution to be coded as a finite length
string. The steady state version of the GA used in this paper was developed by Whitley
and Kouth [27]. As reported in [28] this version, named GENITOR, outperforms the
basic “generational” GA. The structure of steady state GA is as follows:
Algorithm GENITOR
1. Generate an initial population of Ns randomly constructed solutions (strings) and evaluate their fitness. (Unlike the "generational" GA, the steady-state GA performs the evolution search within the same population, improving its average fitness by replacing the worst solutions with better ones.)
2. Select two solutions randomly and produce a new solution (offspring) using a crossover procedure that provides inheritance of some basic properties of the parent strings in the offspring. The probability of selecting a solution as a parent is proportional to the rank of this solution. (All the solutions in the population are ranked in order of increasing fitness.) Unlike the fitness-based parent selection scheme, the rank-based scheme reduces the GA's dependence on the fitness function structure, which is especially important when constrained optimization problems are considered [29].
The same double-point crossover technique as used in [3,4] is adopted in this work. In this technique, a fragment of the string is randomly chosen as a set of adjacent positions. All the elements allocated within the fragment are copied into the child solution string from its first parent, and the rest of the elements are copied from the second one.
The following example illustrates the crossover procedure in which the
offspring O is obtained from the two parent strings Q and V of length 8 (the fragment
is between positions 3 and 7):
Q = q1 q2 q3 q4 q5 q6 q7 q8
V = v1 v2 v3 v4 v5 v6 v7 v8
O = v1 v2 q3 q4 q5 q6 q7 v8 .
3. Allow the offspring to mutate. Mutation results in slight changes in the
offspring structure and maintains diversity of solutions. This procedure avoids
premature convergence to a local optimum and facilitates jumps in the solution space.
In our GA, the mutation procedure just swaps elements initially located in two
randomly chosen positions of the string.
4. Decode the offspring to obtain the objective function (fitness) values. These values are a measure of quality which is used to compare different solutions.
5. Apply a selection procedure that compares the new offspring with the worst solution in the population and selects the one that is better. The better solution joins the population and the worse one is discarded. If the population contains equivalent solutions following the selection process, the redundancies are eliminated and, as a result, the population size decreases.
6. Generate new randomly constructed solutions to replenish the population after repeating steps 2-5 Nrep times (or until the population contains a single solution or solutions with equal quality). Run a new genetic cycle (return to step 2).
7. Terminate the GA after Nc genetic cycles.
End Algorithm
The final population contains the best solution achieved. It also contains different near-optimal solutions, which may be of interest in the decision-making process.
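For concreteness, a C sketch of the two variation operators described above (double-point crossover and swap mutation) over integer strings of length F; the rand()-based randomness and the function names are illustrative assumptions rather than the paper's implementation.

#include <stdlib.h>

/* Double-point crossover: copy a random fragment [a, b] from the first
   parent and the remaining positions from the second parent.           */
void crossover(const int *q, const int *v, int *child, int F)
{
    int a = rand() % F, b = rand() % F;
    if (a > b) { int t = a; a = b; b = t; }
    for (int i = 0; i < F; i++)
        child[i] = (i >= a && i <= b) ? q[i] : v[i];
}

/* Swap mutation: exchange the contents of two randomly chosen positions. */
void mutate(int *s, int F)
{
    int a = rand() % F, b = rand() % F;
    int t = s[a]; s[a] = s[b]; s[b] = t;
}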
4.1. Solution Representation and Decoding in Genetic Algorithm.
To apply the genetic algorithm to a specific problem, one has to define the solution representation as well as the decoding procedure. Our GA deals with integer strings of length F, where F is the maximal number of elements that the entire system may contain:

F = \sum_{m=0}^{M} H_m.    (24)
Each solution is represented by the string D = {d1, d2,..., dF}, where for each

n = \sum_{i=0}^{m-1} H_i + j,    (25)

dn corresponds to the number of the version chosen for the j-th element of subsystem m (note that m = 0 corresponds to the MPS and 1 ≤ m ≤ M to the m-th RGS).
To allow the number of elements included in the subsystems to vary by using "dummy" elements, the solution generation procedure is designed in the following way: for each 1 ≤ i ≤ F, a random value from the range (1, max_{0≤m≤M} Qm) is assigned to the i-th element of the string with probability p, and a value of 0 is assigned with probability 1 - p. (It was found experimentally that the choice p = 0.8 provides the fastest GA convergence to the best solution.)
In order to allow each randomly generated string D to represent a feasible solution, the decoding procedure obtains the number of the version chosen for the j-th element of subsystem m using the following transform:

h_m(j) = \begin{cases} \mathrm{mod}_{Q_m}(d_n) + 1, & d_n \neq 0, \\ 0, & d_n = 0, \end{cases}    (26)

where n is calculated using (25).
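A C sketch of the decoding transform (25)-(26); the arrays H and Q hold the per-subsystem element limits and version counts, and all names are illustrative assumptions.

/* Decoding (eqs. 25-26): position n of string D holds the code d_n of the
   j-th element of subsystem m; map it to a feasible version number.      */
int decode_version(const int *D, const int *H, const int *Q,
                   int m, int j)          /* subsystem m (0..M), element j (1-based) */
{
    int n = 0;
    for (int i = 0; i < m; i++)           /* n = sum_{i<m} H_i + j, eq. (25) */
        n += H[i];
    n += j;
    int d = D[n - 1];                     /* the string is stored 0-based    */
    return (d == 0) ? 0 : d % Q[m] + 1;   /* eq. (26)                        */
}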
To obtain the u-function of the m-th RGS, represented by the elements of the string with position numbers from \sum_{i=0}^{m-1} H_i + 1 to \sum_{i=0}^{m} H_i, the decoding algorithm uses expression (15). In the case of identical MPS elements, it also obtains the MPS u-function u0(z) using the same expression, transforms the u-functions of the RGS to the form (16), and obtains the entire system u-function usys(z) using rules (17)-(19).
In the case of different MPS elements, the algorithm forms the set of versions of elements included in the MPS, S = {h0(j) | 1 ≤ j ≤ H0, h0(j) ≠ 0}, and for all its different subsets Sn (1 ≤ n ≤ 2^N, where N = |S|) corresponding to MPS states, the state probability is determined using (20). It then solves the optimization problem (21) for each composition of available resources and, finally, obtains the entire system u-function usys(z) using expression (22).
To obtain the probability that the entire system output rate is not less than a given demand level Wk, the decoding procedure applies the δ operator (6),(7) to usys(z):

\Pr(W_{sys} \geq W_k) = \delta(u_{sys}(z), W_k).    (27)

Finally, the total E index for all the demand levels is calculated using (2).
In order to let the genetic algorithm look for the solution with minimal total cost and with E not less than the required value E0, the solution quality (fitness) is evaluated as follows:

F = \lambda \, \varphi(E_0 - E) + \sum_{m=0}^{M} \sum_{j=1}^{H_m} C_{m h_m(j)},    (28)

where Cm0 = 0 for any m, which corresponds to the "dummy" elements,

\varphi(x) = \begin{cases} 1 + x, & x \geq 0, \\ 0, & x < 0, \end{cases}    (29)

and λ is a sufficiently large penalty coefficient.
For solutions meeting the requirement E > E0, the fitness of a solution is equal to its total cost.
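A C sketch of the fitness evaluation (28)-(29); the penalty constant and the names are illustrative assumptions.

/* Fitness (eqs. 28-29): total cost plus a large penalty whenever the
   availability requirement is violated (E below E0).                 */
double fitness(double total_cost, double E, double E0, double penalty)
{
    double x = E0 - E;
    double phi = (x >= 0.0) ? 1.0 + x : 0.0;   /* eq. (29) */
    return penalty * phi + total_cost;         /* eq. (28) */
}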
5. ILLUSTRATIVE EXAMPLE
5.1. Description of the System to be Optimized.
The main producing component of the system may have up to six producing elements (chemical reactors) working in parallel. To perform their task, the producing elements require three different resources:
1. Power, generated by the energy supply subsystem (a group of converters);
2. Computational resource, provided by the control subsystem (a group of controllers);
3. Cooling water, provided by the water supply subsystem (a group of pumps).
Each of these resource generating subsystems can have up to five parallel elements. Both producing units and resource generating units may be chosen from the list of
products available in the market. Each producing unit is characterized by its
availability, productivity, cost and amount of resources required for its work. The
characteristics of available producing units are presented in Table 1. The resource
generating units are characterized by their availability, generating capacity
(productivity) and cost. The characteristics of available resource generating units are
presented in Table 2. Each element of the system is considered to be a unit with total
failures.
The demand for final product varies with time. The demand distribution is
presented in Table 3 in the form of a cumulative demand curve.
5.2. Optimization Results.
Table 4 contains the minimal cost solutions for different required levels of system availability E0. The structure of each subsystem is presented as the list of the version numbers of the elements included in the subsystem. The actual estimated availability of the system and its total cost are also presented in the table for each solution.
For comparison, Table 5 presents the solutions of the system structure optimization problem when the main producing subsystem can contain only identical elements. Note that when the MPS is composed of elements of different types, the same system availability can be achieved at a much lower cost. Indeed, using elements with different availability and capacity (productivity) provides much greater flexibility for optimizing the entire system performance in different states. Therefore, the algorithm for solving the problem with different MPS elements, which requires much greater computational effort, usually yields better solutions than the one for identical elements.
5.3. Computational Effort and Algorithm Consistency.
The C-language realization of the algorithm was tested on a DEC station 5000/240. The GA parameters were chosen as NS = 100, Nrep = 2000, Nc = 550. For the time-consuming optimization problem in which the MPS may have different elements, the time taken to obtain the best-in-population solution (the time of the last modification of the best solution obtained) did not exceed 45 minutes. The average time for arriving at the best solutions for the solved problems of this type was 27 minutes. The corresponding time for the problems with identical MPS elements was less than 1 minute.
To demonstrate the consistency of the suggested algorithm, the GA was run 5 times with different starting solutions (initial populations) for the system structure optimization problems with E0 = 0.97 and E0 = 0.999. The coefficient of variation was calculated for the fitness values of the best-in-population solutions obtained during the genetic search by the different GA search processes. The variation of this index during the GA run is presented in Fig. 2. One can see that all the processes converge to the same solution fitness values.
REFERENCES
[1] I. Ushakov, Handbook of Reliability Engineering, John Wiley & Sons, 1994.
[2] I. A. Ushakov, "Optimal standby problems and a universal generating function", Sov. J. Comput. Syst. Sci., vol. 25, no. 4, 1987, pp. 79-82.
[3] G. Levitin, A. Lisnianski, H. Ben-Haim, D. Elmakis, "Redundancy optimization for series-parallel multi-state systems", IEEE Trans. Reliability, vol. 47, no. 2, 1998 June, pp. 165-172.
[4] G. Levitin, A. Lisnianski, D. Elmakis, "Structure optimization of power system with different redundant elements", Electric Power Systems Research, vol. 43, 1997, pp. 19-27.
[5] I. A. Ushakov, "Universal generating function", Sov. J. Comput. Syst. Sci., vol. 24, no. 5, 1986, pp. 118-129.
[6] D. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison Wesley, Reading, MA, 1989.
[7] M. Gen, R. Cheng, Genetic Algorithms & Engineering Design, John Wiley & Sons, New York, 1997.
[8] M. Gen, R. Cheng, "Optimal design of system reliability using interval programming and genetic algorithm", Computers ind. Engng, vol. 31, no. 1/2, 1996, pp. 237-240.
[9] M. Gen, J. Kim, "GA-based reliability design: state-of-the-art survey", Computers ind. Engng, vol. 37, no. 1/2, 1999, pp. 151-155.
[10] T. Taguchi, T. Yokota, M. Gen, "GA-based method for fuzzy optimal design of system reliability with incomplete FDS", Proc. of the Second Int. Conf. on Knowledge-Based Intelligent Electronic Systems, 1998, pp. 272-277.
[11] T. Taguchi, T. Yokota, M. Gen, "Reliability optimal design problem with interval coefficients using hybrid genetic algorithms", Computers ind. Engng, vol. 35, no. 1/2, 1998, pp. 373-376.
[12] T. Yokota, M. Gen, K. Ida, "System reliability optimization problems with several failure models by genetic algorithm", Jpn. J. Fuzzy Theory Syst., vol. 7, no. 1, 1995, pp. 119-132.
[13] M. Sasaki, T. Yokota, M. Gen, "A method for solving fuzzy optimal reliability design problems by genetic algorithm", Jpn. J. Fuzzy Theory Syst., vol. 7, no. 5, 1995, pp. 681-694.
[14] L. Painton, J. Campbell, "Genetic algorithm in optimization of system reliability", IEEE Trans. Reliability, vol. 44, no. 2, 1995 June, pp. 172-178.
[15] D. Coit, A. Smith, "Reliability optimization of series-parallel systems using genetic algorithm", IEEE Trans. Reliability, vol. 45, no. 2, 1996 June, pp. 254-266.
[16] D. Coit, A. Smith, "Penalty guided genetic search for reliability design optimization", Computers ind. Engng, vol. 30, no. 4, 1996, pp. 895-904.
[17] D. Coit, A. Smith, "Solving the redundancy allocation problem using a combined neural network/genetic algorithm approach", Computers Ops. Res., vol. 23, no. 6, 1996, pp. 515-526.
[18] V. Ramachandran, V. Sivakumar et al., "Genetic based redundancy optimization", Microelectronics and Reliability, vol. 37, no. 4, 1997, pp. 661-663.
[19] A. Lisnianski, G. Levitin, H. Ben-Haim, D. Elmakis, "Power system structure optimization subject to reliability constraints", Electric Power Systems Research, vol. 39, 1996, pp. 145-152.
[20] G. Levitin, A. Lisnianski, "Structure optimization of power system with bridge topology", Electric Power Systems Research, vol. 45, 1998, pp. 201-208.
[21] A. Lisnianski, G. Levitin, H. Ben-Haim, "Structure optimization of multi-state system with time redundancy", Reliability Engineering & System Safety, vol. 67, 2000, pp. 103-112.
[22] G. Levitin, A. Lisnianski, "Joint redundancy and maintenance optimization for multi-state series-parallel systems", Reliability Engineering & System Safety, vol. 64, 1999, pp. 33-42.
[23] G. Levitin, A. Lisnianski, "Optimal multistage modernization of power system subject to reliability and capacity requirements", Electric Power Systems Research, vol. 50, no. 3, 1999, pp. 183-190.
[24] Y. Hsieh, T. Chen, D. Bricker, "Genetic algorithms for reliability design problems", Microelectronics and Reliability, vol. 38, no. 10, 1998, pp. 1599-1605.
[25] J. Yang, M. Hwang, T. Sung, Y. Jin, "Application of genetic algorithm for reliability allocation in nuclear power plant", Reliability Engineering & System Safety, vol. 65, no. 3, 1999, pp. 229-238.
[26] Y. Kohama, T. Takada, N. Kozawa, A. Miyamura, "Minimum weight design of rigid frames subject to system reliability by genetic algorithm", Proc. of the First Int. Conf. on Engineering Computational Technology, Edinburgh, 1998, pp. 103-109.
[27] D. Whitley, J. Kauth, "GENITOR: a different genetic algorithm", Tech. Rep. CS-88-101, Colorado State University, Fort Collins, 1988.
[28] G. Syswerda, "A study of reproduction in generational and steady-state genetic algorithms", in G. J. E. Rawlings (ed.), Foundations of Genetic Algorithms, Morgan Kaufmann, San Mateo, CA, 1991.
[29] D. Powell, M. Skolnik, "Using genetic algorithms in engineering design optimization with non-linear constraints", Proc. of the Fifth Int. Conf. on Genetic Algorithms, Morgan Kaufmann, 1993, pp. 424-431.
Figure Captions
Figure 1: Structure of a simple M RGS - MPS system.
Figure 2: Coefficient of variation of the best-in-population solution fitness obtained by 5 different search processes as a function of the number of crossovers.
Table 1. Parameters of the MPS units available

No of     Cost   Performance   Availability   Resources required G
version    C     rate w        A              Resource 1   Resource 2   Resource 3
   1       9.9     30.0          0.970            2.8          2.0          1.8
   2       8.1     25.0          0.954            0.2          0.5          1.2
   3       7.9     25.0          0.960            1.3          2.0          0.1
   4       4.2     13.0          0.988            2.0          1.5          0.1
   5       4.0     13.0          0.974            0.8          1.0          2.0
   6       3.0     10.0          0.991            1.0          0.6          0.7
Table 2. Parameters of the RGS units available

Type of resource   No of version   Cost C   Performance rate g   Availability A
       1                 1          0.590          1.8                0.980
       1                 2          0.535          1.0                0.977
       1                 3          0.370          0.75               0.982
       1                 4          0.320          0.75               0.978
       2                 1          0.205          2.00               0.995
       2                 2          0.189          1.70               0.996
       2                 3          0.091          0.70               0.997
       3                 1          2.125          3.00               0.971
       3                 2          2.720          2.60               0.973
       3                 3          1.590          2.40               0.971
       3                 4          1.420          2.20               0.976
Table 3. Parameters of the cumulative demand curve

Wk       65.0   48.0   25.0   8.0
Tk (%)     60     10     10    20
Table 4. Parameters of the optimal solutions

                                          System structure
E0      E       C        MPS           RGS 1       RGS 2     RGS 3
0.950   0.951   27.790   3,6,6,6,6     1,1,1,1,1   1,1,2,3   1,1
0.970   0.973   30.200   3,6,6,6,6,6   1,1,1,1     1,1,2,3   1,1
0.990   0.992   33.690   3,3,6,6,6,6   1,1,1,1     1,1,2,3   4,4
0.999   0.999   44.613   2,2,3,3,6,6   1,1,1       1,2,2     4,4,4
Table 5. Parameters of the optimal solutions (MPS restricted to identical elements)

                                          System structure
E0      E       C        MPS           RGS 1       RGS 2     RGS 3
0.950   0.951   34.752   4,4,4,4,4,4   1,4,4,4,4   2,2,2     1,3,3,3,4
0.970   0.972   35.161   4,4,4,4,4,4   1,1,1,4     2,2,2,2   1,3,3,3,4
0.990   0.991   37.664   2,2,2,2       4,4         3,3,3,3   4,4,4
0.999   0.999   47.248   2,2,2,2,2     3,4         2,2       4,4,4,4
AUTHOR
Dr. Gregory Levitin; Reliability & Equip't Dep't; PD&T Div; Israel Electric Corp. Ltd.; PO Box 10; Haifa 31000 ISRAEL.
Internet (e-mail): levitin@iec.co.il
Gregory Levitin received the M.S. (1982, with honors) in Electrical Engineering from the Kharkov (Ukraine) Polytechnical Institute, the B.S. (1986) in Mathematics from Kharkov State University, and the Ph.D. (1989) in Industrial Automation from the Moscow Research Institute of Metalworking Machines. From 1982 to 1990 he was a software engineer and research associate in industrial automation at Ukrainian and Russian research institutes. From 1991 to 1993 he worked at the Technion - Israel Institute of Technology as a post-doctoral fellow in the Faculty of Industrial Engineering & Management. In 1993 Dr. Levitin joined the R&D Division of The Israel Electric Corporation, where he is an engineer-expert in the Reliability Department; he is also an adjunct lecturer at the Technion. His current research is in artificial-intelligence and operations-research applications in reliability and power engineering. He is a Senior Member of the IEEE.