See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/275019578
Solution Manual
Research · April 2015
DOI: 10.13140/RG.2.1.1326.0320
Ilya B. Gertsbakh, Ben-Gurion University of the Negev
Yoseph Shpungin, Shamoon College of Engineering, Beer Sheva, Israel
All content following this page was uploaded by Ilya B. Gertsbakh on 15 April 2015.
SOLUTION MANUAL
for Models of Network Reliability:
Analysis, Combinatorics and Monte Carlo
I. Gertsbakh and Y. Shpungin
August 29, 2009
Contents
1 Solutions to Chapter 1
2 Solutions to Chapter 2
3 Solutions to Chapter 3
4 Solutions to Chapter 4
5 Solutions to Chapter 5
6 Solutions to Chapter 6
7 Solutions to Chapter 7
8 Solutions to Chapter 8
9 Solutions to Chapter 9
10 Solutions to Chapter 10
11 Solutions to Chapter 11
1 Chapter 1.
1. First, let us explain how the Mathematica program given in Section 1.5 works.
The first line loads the Statistics package and the continuous distribution generators. dist calls for a uniform distribution with support [0,2], Q is the number of samples, and sig counts the number of cases when the true mean lies inside the confidence interval. x1, x2, x3, x4 are replicas of a random variable uniformly distributed in [10,12], representing one sample of four items. xbar and stdev are the average and the standard deviation of a single sample, according to the formulas given in Problem 1. left and right are the limits of the confidence interval, according to (1.5.2).
A bit tricky is the operator alpha, which produces 2 if left < 11 < right and zero otherwise. (Why?) So, each time the true mean lies inside the confidence interval, the counter sig increases by 1.
Operator For[....] repeats all the above Q times. Finally, conf gives the
coverage proportion in decimals, and Print types the result.
To repeat the experiment for the exponential distribution with mean 11, change the operator to dist = ExponentialDistribution[1].
The probability of coverage is 0.88 for the Uniform case and 0.83 for the
Exponential case.
We doubt that your boss will accept your proposal to use the normal
confidence interval for the population mean.
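The same experiment is easy to reproduce outside Mathematica. Below is a Python sketch of the program described above; the sample size n = 4 and the 1.96 normal quantile are our assumptions about formula (1.5.2), not details confirmed by the text.

```python
import math
import random

def coverage(sample_gen, true_mean, Q=20000, n=4, z=1.96):
    """Fraction of samples whose normal-theory interval covers true_mean."""
    sig = 0
    for _ in range(Q):
        xs = [sample_gen() for _ in range(n)]
        xbar = sum(xs) / n
        stdev = math.sqrt(sum((x - xbar) ** 2 for x in xs) / (n - 1))
        half = z * stdev / math.sqrt(n)
        if xbar - half < true_mean < xbar + half:
            sig += 1
    return sig / Q

random.seed(1)
conf = coverage(lambda: random.uniform(10, 12), 11)
print(round(conf, 3))   # well below the nominal 0.95; the text reports ~0.88
```

Swapping the generator for an exponential one with mean 11 reproduces the second experiment.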
2.
Search algorithm for optimal schedule
a. Set Max=100, Q=10,000. Put N=0.
Define a matrix A with 10 rows and 20 columns; denote by a(i, j) the j-th element of the i-th row. Put initially all a(i, j) = 0.
b. For i = 1 to i = 10, do the following: generate X(i) uniformly distributed in the interval [−δl(i), δr(i)].
The actual beginning and completion of job i will be b(i) + X(i) = beg(i) and c(i) + X(i) = comp(i), respectively.
Determine the actual schedule S: set a(i, j) = 1 for j = beg(i), ..., comp(i), for i = 1, ..., 10. a(i, j) will be equal to one for the time points occupied by job i, and zero otherwise.
Calculate K = max_{j=1,...,20} ∑_{i=1}^{10} a(i, j).
If K is less than Max, put Max := K and remember the respective schedule S for all jobs. In this way, the program remembers the schedule which has produced the best result so far.
Set N := N + 1. If N = Q, stop; else put all a(i, j) = 0 and GOTO b.
There is little need to explain this algorithm. The main step b randomly locates all jobs in the time matrix. K is the maximal number of jobs carried out simultaneously, which, by Dilworth's theorem, equals the minimal number of machines. The program remembers the number of machines and the corresponding schedule each time a better result is achieved.
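The search above can be sketched in Python; the job data (b, c, δl, δr) below are illustrative rather than the problem's, and integer shifts are used so that time-slot indices stay whole.

```python
import random

def search_schedule(jobs, Q=2000, horizon=20):
    # jobs: list of (b, c, dl, dr) with planned begin b, completion c,
    # and an allowed shift in [-dl, dr]
    best_k, best_schedule = float("inf"), None
    for _ in range(Q):
        a = [[0] * horizon for _ in jobs]
        for i, (b, c, dl, dr) in enumerate(jobs):
            x = random.randint(-dl, dr)        # random shift of job i
            for j in range(b + x, c + x + 1):  # occupied time slots
                a[i][j] = 1
        # maximal number of jobs running simultaneously
        k = max(sum(a[i][j] for i in range(len(jobs))) for j in range(horizon))
        if k < best_k:
            best_k, best_schedule = k, a       # remember the best schedule
    return best_k, best_schedule

random.seed(2)
jobs = [(2, 5, 2, 2), (3, 6, 2, 2), (1, 4, 1, 3), (8, 12, 2, 2), (9, 13, 2, 2)]
k, _ = search_schedule(jobs)
print(k)   # the minimal number of machines found
```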
3.
Below is a Mathematica program which counts the lucky numbers in R = 100,000 experiments. Random[ ] produces a random variable uniformly distributed in [0,1]. When multiplied by 10, this becomes a random variable U(0, 10), and the operator Floor takes its integer part. Therefore x is the sum of the 6 digits in the lottery. A bit tricky is the expression f: it equals zero if x is a lucky number (4, 9, 16, 25, 36 or 49), and otherwise f > 0. The operator Sign is zero if the argument is zero and 1 if it is positive. Thus the counter sig counts the number of cases when the sum is not a lucky number. r is the desired result in decimal form.
Algorithm LuckyNumbers.
<< Statistics`ContinuousDistributions`
sig = 0;
R = 100000;
For[i = 1, i < R + 1, i++,
  x = Floor[Random[ ]*10] + Floor[Random[ ]*10] + Floor[Random[ ]*10] +
      Floor[Random[ ]*10] + Floor[Random[ ]*10] + Floor[Random[ ]*10];
  f = ((x - 4)(x - 9)(x - 16)(x - 25)(x - 36)(x - 49))^2;
  y = Sign[f]; sig = sig + y];
r = N[1 - sig/R];
Print[r];
Our result is that the probability of getting a lucky number is about
0.099.
Our casino collects $120 in each game and pays out $1000 with probability 0.099. In the long run, the expected profit is 120 − 1000 · 0.099 = $21 per game.
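The probability can also be computed exactly, which confirms the Monte Carlo estimate: convolve the distribution of six independent digits, each uniform on 0..9, and sum the probabilities of the lucky values (a Python sketch):

```python
def digit_sum_distribution(n_digits=6):
    dist = [1.0]                          # distribution of the empty sum
    for _ in range(n_digits):
        new = [0.0] * (len(dist) + 9)
        for s, p in enumerate(dist):
            for d in range(10):           # each digit is uniform on 0..9
                new[s + d] += p / 10.0
        dist = new
    return dist

dist = digit_sum_distribution()
p_lucky = sum(dist[s] for s in (4, 9, 16, 25, 36, 49))
print(round(p_lucky, 4))   # 0.0991, matching the Monte Carlo estimate ~0.099
```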
4. Since a single experiment gives reward a_k with probability p_k, the mean reward of one experiment is ∑_{k=1}^{n} a_k · p_k, and the mean reward of N experiments is N times this sum.
5.
Follow the example with three outcomes shown in Section 1.4. Define

W_i = (X_{i,1}, X_{i,2}, ..., X_{i,n}),   (1.1)

where W_i = (1, 0, 0, ..., 0) if the first outcome takes place, W_i = (0, 1, 0, ..., 0) if the second outcome takes place, and W_i = (0, 0, ..., 0, 1) if the n-th outcome takes place. Obviously, X_{i,1} + ... + X_{i,n} = 1 and

P(X_{i,1} = 1) = f_1, P(X_{i,2} = 1) = f_2, ..., P(X_{i,n} = 1) = f_n,  with ∑_{i=1}^{n} f_i = 1.

Now represent the reward in N experiments as

R = ∑_{i=1}^{N} (a_1 X_{i,1} + a_2 X_{i,2} + ... + a_n X_{i,n}).   (1.2)
Since the outcomes of the j-th and m-th experiments for m ≠ j are independent, the variance of the sum is the sum of the variances, and

Var[R] = ∑_{i=1}^{N} Var[a_1 X_{i,1} + a_2 X_{i,2} + ... + a_n X_{i,n}].   (1.3)

Now note that X_{i,m}^2 = X_{i,m} and E[X_{i,m}] = f_m. Note also that X_{i,m} · X_{i,j} = 0 for m ≠ j. Now

Var[a_1 X_{i,1} + ... + a_n X_{i,n}] = E[(a_1 X_{i,1} + ... + a_n X_{i,n})^2] − E^2[a_1 X_{i,1} + ... + a_n X_{i,n}].   (1.4)

After simple algebra we arrive at the following expression:

Var[R] = N (∑_{i=1}^{n} a_i^2 f_i (1 − f_i) − 2 ∑_{j<m} a_j a_m f_j f_m).
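The closed-form expression can be checked numerically for N = 1 and a small n; the rewards and probabilities below are illustrative:

```python
a = [2.0, 5.0, 11.0]          # rewards, illustrative
f = [0.2, 0.3, 0.5]           # outcome probabilities, sum to 1

# direct computation: the reward of one experiment is a[k] with probability f[k]
mean = sum(ak * fk for ak, fk in zip(a, f))
var_direct = sum(ak**2 * fk for ak, fk in zip(a, f)) - mean**2

# the closed-form expression derived above (N = 1)
var_formula = (sum(ak**2 * fk * (1 - fk) for ak, fk in zip(a, f))
               - 2 * sum(a[j] * a[m] * f[j] * f[m]
                         for j in range(3) for m in range(j + 1, 3)))

print(round(var_direct, 6), round(var_formula, 6))   # both 14.04
```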
6. Note first that

Var[R] < N ∑ a_i^2 f_i < N · (max f_i) · ∑ a_i^2,   (1.5)

E[R] > N · (min f_i) · ∑ a_i.

Then

(Var[R])^{0.5} / E[R] < (max f_i)^{0.5} · (∑ a_i^2)^{0.5} / (N^{0.5} · (min f_i) · ∑ a_i).

The result follows since ∑ a_i^2 < (∑ a_i)^2.
7. By definition, E[X] = ∫_0^1 t · f_X(t) dt = 1/2. E[X] is the center of mass with respect to the density function. By definition, Var[X] = ∫_0^1 (t − 0.5)^2 · f_X(t) dt = [after elementary calculations] = 1/12. F_X(t) = 0 for t < 0 and obviously equals 1 for t > 1. For t ∈ [0, 1], P(X ≤ t) = ∫_0^t f_X(v) dv = t.
c.v.[X] = (1/12)^{0.5}/0.5 = 1/√3.
8. We present an intuitive solution. Let X ∼ U (0, 1). The probability mass
is uniformly spread in the interval [0, 1]. Suppose we multiply X by C. Then
the probability mass of the random variable CX will be uniformly spread in
the interval [0, C]. After adding constant D, the probability mass is shifted
by D and will be uniformly spread in the interval [D, D + C]. To guarantee
that the integral of this mass equals 1, the density of Z = CX + D must be
1/C and zero outside this interval. So, Z ∼ U (D, D + C).
By using the rules of computing the mean and variance (see Section 1.4),
E[Z] = C · E[X] + D = C/2 + D, V ar[Z] = C 2 · V ar[X] = C 2 /12.
FZ (t) = 0 for t < D and FZ (t) = 1 for t > D + C. For t ∈ [D, D + C],
FZ (t) = (t − D)/C.
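A seeded simulation sketch in Python confirms these moments and the support (C, D and the sample size are arbitrary choices):

```python
import random

random.seed(3)
C, D = 4.0, 2.0
zs = [C * random.random() + D for _ in range(200000)]   # Z = C*X + D, X ~ U(0,1)

mean = sum(zs) / len(zs)
var = sum((z - mean) ** 2 for z in zs) / len(zs)

# E[Z] = C/2 + D = 4, Var[Z] = C^2/12 = 4/3, support [2, 6]
print(round(mean, 2), round(var, 2), round(min(zs), 1), round(max(zs), 1))
```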
9. Y = 2 · Random [ ] − 1, where Random[ ] produces U (0, 1).
10. The key to the solution is the following fact. If the maximum Y of X_1, X_2, ..., X_n is less than or equal to t, then all X_i, i = 1, ..., n, must be less than or equal to t. Since the X_i are i.r.v.'s, P(Y ≤ t) = ∏_{i=1}^{n} P(X_i ≤ t). For t ∈ [0, 1], this gives F_Y(t) = t^n. For t < 0, F_Y(t) = 0 and for t > 1, F_Y(t) = 1. The density function of Y is f_Y(t) = n t^{n−1} for t ∈ [0, 1], and zero otherwise. By integration, we find that E[Y] = ∫_0^1 v · n v^{n−1} dv = n/(n + 1).
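Both F_Y(t) = t^n and E[Y] = n/(n + 1) are easy to check by a seeded simulation (n = 5 here is an arbitrary choice):

```python
import random

random.seed(4)
n, N = 5, 200000
ys = [max(random.random() for _ in range(n)) for _ in range(N)]

mean = sum(ys) / N                         # should approach n/(n+1) = 5/6
frac_below = sum(y <= 0.8 for y in ys) / N # should approach 0.8**5 = 0.32768
print(round(mean, 3), round(frac_below, 3))
```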
11. The probability to have k successes in the first k experiments and no successes afterwards is, obviously, p^k (1 − p)^{n−k}. But there are n!/(k!(n − k)!) different ways of positioning k successes in n experiments. This explains the result.
2 Chapter 2.
1. From Example 2.3.1-continued we obtain P (P1 ∪ P2 ∪ P3 ) = p1 p2 + p1 p3 +
p2 p3 − 2p1 p2 p3 .
Replacing in (2.3.3) q_i = 1 − p_i, we arrive at the expression p_1 p_2 (1 − p_3) + p_1 p_3 (1 − p_2) + p_2 p_3 (1 − p_1) + p_1 p_2 p_3, which after simple algebra is identical to the previous expression.
2. There are two minimal cut sets of size 2, (1, 7), (7, 4) and (3, 6), (6, 5).
They isolate nodes 6 and 7. The cut set (1, 2), (2, 4), (2, 3) isolates node 2.
All other minimal cut sets contain 4 or more edges.
3. Associate with each edge e the weight w(e) = ln p_e. Then use Kruskal's algorithm to construct the maximal spanning tree. It will produce a tree with the maximal value of ln P(ST), i.e. with maximal P(ST).
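A sketch of this solution in Python: with weights w(e) = ln p_e, Kruskal's algorithm taking edges in decreasing order of weight yields the spanning tree with maximal ∏ p_e. The graph and probabilities below are illustrative.

```python
import math

def max_spanning_tree(n, edges):
    # edges: list of (u, v, p); Kruskal with weight w(e) = ln p taken in
    # decreasing order, using a union-find over the n nodes
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = []
    for u, v, p in sorted(edges, key=lambda e: -math.log(e[2])):
        ru, rv = find(u), find(v)
        if ru != rv:                 # edge joins two components: keep it
            parent[ru] = rv
            tree.append((u, v, p))
    return tree

edges = [(0, 1, 0.9), (1, 2, 0.5), (0, 2, 0.8), (2, 3, 0.95), (1, 3, 0.6)]
tree = max_spanning_tree(4, edges)
prob = math.prod(p for _, _, p in tree)
print(sorted(tree), round(prob, 4))   # keeps (0,1), (0,2), (2,3); product 0.684
```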
4. The upper bound is R_U = 1 − (1 − p^2)^2 (1 − p^3)^2. The lower bound is R_L = [1 − (1 − p)^2]^2 [1 − (1 − p)^3]^2. The numerical results are:
p = 0.90: R_L = 0.978141, R_U = 0.997349.
p = 0.95: R_L = 0.994758, R_U = 0.999807.
p = 0.99: R_L = 0.999798, R_U = 1.000000.
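The bounds can be recomputed directly (a Python check of the numbers above):

```python
for p in (0.90, 0.95, 0.99):
    RU = 1 - (1 - p**2) ** 2 * (1 - p**3) ** 2        # upper bound
    RL = (1 - (1 - p) ** 2) ** 2 * (1 - (1 - p) ** 3) ** 2  # lower bound
    print(p, round(RL, 6), round(RU, 6))
```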
5.
The explanation of (2.3.4) is given in the text. We give here the explanation of (2.3.5) in a similar manner. It is easy to see that each DOWN state contains some minimal cut in the sense that all elements of the cut are down. Suppose, for example, that all edges of the network in Fig. 2.7 are down. Then this DOWN state contains 3 minimal cuts, each consisting of some pair of edges. On the other hand, each minimal cut defines some DOWN state. So, the necessary and sufficient condition for N being DOWN is that all elements of at least one min cut are down.
6. For a series system,
ϕ(x) = ∏_{i=1}^{n} x_i = min_{i=1,...,n} x_i.
The proof is simple: the left-hand side is 1 if and only if all x_i = 1.
For a parallel system,
ϕ(x) = 1 − ∏_{i=1}^{n} (1 − x_i) = max_{i=1,...,n} x_i.
The proof is similar: the left-hand side is 0 if and only if all x_i = 0.
7. If any pair of edges is up, then the network is UP, and the corresponding term in the parentheses will be zero. This makes ϕ(x) = 1.
8. (i) means that the system is DOWN if all its components are down and
is UP if all its components are up. (ii) means that if at least one component
changes its state from down to up, the system state can not get worse.
9. The bridge has four minimal path sets: (1, 4), (3, 5), (1, 2, 5) and (3, 2, 4).
Each of these sets is a path without redundant components. The set (1, 2, 3, 4)
is a path set but not a minimal path set since components 2,3 are redundant.
10. Assume that there is at least one minimal path set P_1, all elements of which are up. Then ∏_{i∈P_1} x_i = 1, which leads to ϕ(x) = 1. Suppose now that the system is UP. Then there must be at least one minimal path set having all its elements in the up state, and the right-hand side of the formula is 1. Therefore, ϕ(x) = 1 if and only if there is at least one minimal path set having all its elements in the up state. This completes the proof.
11. The collection of all min cuts is (1, 3), (4, 5), (2, 3, 4), (1, 2, 5). (1, 2, 3) is a cut set, but not a minimal cut set.
12. Any monotone system can be represented in two equivalent ways: as a series connection of parallel subsystems, each being a minimal cut set, or as a parallel connection of series subsystems, each being a minimal path set. (2.5.3) is the structure function corresponding to the first representation. Indeed, ϕ(x) = 0 if and only if one of the factors in the product is zero. This happens if all x_i in one of the min cuts are zero, which means that all components of one min cut are down.
13. Represent the system as a series connection of four parallel subsystems
corresponding to the given minimal cut sets. Imagine a source s before the
first min cut and the terminal t after the last min cut. Let us find out all
minimal paths leading from s to t. Each such path must go via element 1
or element 2. If the path goes via 1, then there are four minimal paths:
(1, 2, 3), (1, 2, 4), (1, 5, 3), (1, 5, 4). If element 1 is down, each path must contain element 2, and there are two minimal path sets: (2, 4, 5), (2, 5, 3). In
total, there are 6 minimal paths.
14. For a binary random variable, the mean equals the probability that this variable equals 1. For a series system, ϕ(X) = ∏_{i=1}^{n} X_i. Since the binary variables are independent, the mean of their product equals the product of their means. Therefore,
R = E[ϕ(X)] = ∏_{i=1}^{n} p_i.
This gives the reliability of a series system: R_ser = ∏_{i=1}^{n} p_i. Similarly, for a parallel system, R_par = 1 − ∏_{i=1}^{n} (1 − p_i).
15. By the result of Problem 10, ϕ(X) = 1 − (1 − X_1 X_2 X_3)(1 − X_2 X_4). Since the minimal path sets are not disjoint (component 2 appears in both of them), X_1 X_2 X_3 and X_2 X_4 are not independent, and we must first open the brackets, simplify the expression for ϕ(X), and only then take the expectation. Note that for binary variables X_i^2 = X_i. This gives the result ϕ(X) = X_1 X_2 X_3 + X_2 X_4 − X_1 X_2 X_3 X_4 and R = p_1 p_2 p_3 + p_2 p_4 − p_1 p_2 p_3 p_4.
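The expectation step can be verified by brute-force enumeration over all 2^4 component states; the probabilities below are illustrative:

```python
from itertools import product

p = [0.9, 0.8, 0.7, 0.6]               # p1..p4, illustrative

def phi(x):                            # structure function, x = (x1, x2, x3, x4)
    return 1 - (1 - x[0] * x[1] * x[2]) * (1 - x[1] * x[3])

R_enum = 0.0
for x in product((0, 1), repeat=4):
    pr = 1.0
    for xi, pi in zip(x, p):
        pr *= pi if xi else 1 - pi     # probability of this state vector
    R_enum += phi(x) * pr

R_formula = p[0]*p[1]*p[2] + p[1]*p[3] - p[0]*p[1]*p[2]*p[3]
print(round(R_enum, 4), round(R_formula, 4))   # both 0.6816
```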
16. LB is the reliability of a parallel connection of series subsystems corresponding to non-overlapping minimal paths, i.e. the probability that at least one of these paths is operational. This implies that the system is operational, and LB ≤ R. If the system is UP, then all its minimal cuts are UP, and thus all non-overlapping minimal cuts are UP. Thus R ≤ UB.
(By non-overlapping paths we mean paths that have no common elements; similarly, non-overlapping cuts have no common elements.)
Remark. Since there might be several sets of non-overlapping paths and non-overlapping cuts, it is possible to construct several lower bounds and use the largest of them. Similarly, one can construct several upper bounds and use the smallest of them. This method is in fact the original Ushakov-Litvak suggestion.
17. It is easy to check that zi ≥ xi , zi ≥ yi ⇒ zi ≥ max(xi , yi ) ⇒ ϕ(z) ≥
max(ϕ(x), ϕ(y)) = 1 − (1 − ϕ(x))(1 − ϕ(y)), Q.E.D.
18. Option (i) corresponds to the structure function ϕ(z). Option (ii)
corresponds to the structure function 1 − (1 − ϕ(x))(1 − ϕ(y)). The result of
the previous problem implies that the first option is preferable. In words: it
is always better to reinforce each element by a parallel one than to reinforce
the whole system by a similar one in parallel.
19. The new structure function will be
h(ϕ(y (1) ), ϕ(y (2) ), ..., ϕ(y (n) )).
A particular case of the above described situation is replacement of each
component of the system by a subsystem identical to the original system.
Such systems are called recurrent.
20. The following Mathematica program BLOCK-1 generates a variable A which is positive if the graph is connected and zero otherwise.
BLOCK-1
m = {{1, 1, 0, 0, 0}, {1, 0, 1, 0, 0},
 {0, 0, 0, 1, 1}, {0, 0, 1, 1, 0}, {0, 1, 0, 0, 1}, {0, 1, 0, 1, 0}}
For[j = 1, j < 7, j++, For[i = 1, i < 7, i++,
 If[m[[j]].m[[i]] ≤ 0,
  m[[j]] = m[[j]], m[[j]] = m[[j]] + m[[i]]]]]
A = Sum[Product[m[[i, j]], {j, 1, 5}], {i, 1, 6}]
Comments. m is a matrix with 5 columns and 6 rows. The edge vectors are the rows of this matrix; m[[i]] calls the i-th edge vector. The If operator checks the scalar product of the j-th and i-th vectors. If it is zero, the j-th vector remains unchanged; otherwise, the j-th vector is replaced by the sum of the j-th and i-th vectors. The internal For does this operation for all i running from 1 to 6, and the external For repeats it for all j running from 1 to 6. Finally, Product multiplies all coordinates in a row, and Sum gives the sum of all these products. If there is a row vector with all nonzero elements, at least one of the products is nonzero and A is positive. If, on the contrary, all row vectors contain at least one zero element, all products will be zero and A = 0.
From the programming point of view, the above program is far from efficient: it takes n² If comparisons to analyze a network with n edges.
Students with a sufficient programming background are strongly advised to implement the DDS described in Section 2.2.
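For reference, here is a minimal union-find (disjoint-set) connectivity check in Python, in the spirit of that suggestion; the edge list encodes the same incidence rows as BLOCK-1, with nodes renumbered 0..4:

```python
def connected(n_nodes, edges):
    # union-find: merge the endpoints of every edge, then count components
    parent = list(range(n_nodes))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(i) for i in range(n_nodes)}) == 1

edges = [(0, 1), (0, 2), (3, 4), (2, 3), (1, 4), (1, 3)]   # BLOCK-1's graph
print(connected(5, edges))             # True: the graph is connected
print(connected(5, [(0, 1), (0, 2)]))  # False: nodes 3 and 4 are isolated
```

Each connectivity query costs nearly linear time in the number of edges, against the n² comparisons of BLOCK-1.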
21. A Mathematica Monte Carlo program for computing the probability of network connectivity.
Define the vector p = {p1, p2, p3, p4, p5, p6} of edge up probabilities.
Define the vector b = {b1, b2, b3, b4, b5, b6} of six constants.
Define the matrix m as follows:
m = {b[[1]]*{1, 1, 0, 0, 0}, b[[2]]*{1, 0, 1, 0, 0},
 b[[3]]*{0, 0, 0, 1, 1}, b[[4]]*{0, 0, 1, 1, 0}, b[[5]]*{0, 1, 0, 0, 1}, b[[6]]*{0, 1, 0, 1, 0}}.
Now denote the whole program given above in the solution of Problem 20, with m modified as above, as BLOCK-1. The whole program is as follows:
p = {p1, p2, p3, p4, p5, p6}
b = {b1, b2, b3, b4, b5, b6}
For[Q = 0; r = 1, r < 1001, r++,
 For[i = 1, i < 7, i++, If[Random[ ] > p[[i]],
  b[[i]] = 0, b[[i]] = 1]];
 BLOCK-1;
 If[A > 0, Q = Q + 1, Q = Q]]
Rel = Q/1000;
Print["Rel=", Rel]
Comments. The external For repeats the connectivity evaluation 1000 times; Q is the counter. The first internal For checks the presence of each of the 6 edges. With probability p_i edge e_i exists, and the multiplier b[[i]] at the corresponding edge vector is 1. If the edge is erased, the multiplier becomes zero. Then BLOCK-1 is carried out and produces the parameter A. The last If adds 1 to the counter iff A is positive. Rel is the reliability estimate.
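A Python sketch of the same Monte Carlo scheme, together with an exact enumeration over all 2^6 edge states for comparison; the edge up probabilities are illustrative:

```python
import random
from itertools import product

def connected(n_nodes, up_edges):
    # union-find connectivity test over the surviving edges
    parent = list(range(n_nodes))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in up_edges:
        parent[find(u)] = find(v)
    return len({find(i) for i in range(n_nodes)}) == 1

edges = [(0, 1), (0, 2), (3, 4), (2, 3), (1, 4), (1, 3)]  # BLOCK-1's graph
p = [0.9] * 6                                             # illustrative

# Monte Carlo estimate, as in the program above
random.seed(5)
M = 20000
hits = sum(connected(5, [e for e, pe in zip(edges, p) if random.random() < pe])
           for _ in range(M))
rel = hits / M

# exact value by summing over all 2^6 edge-state vectors
exact = 0.0
for states in product((0, 1), repeat=6):
    pr = 1.0
    for s, pe in zip(states, p):
        pr *= pe if s else 1 - pe
    exact += pr * connected(5, [e for e, s in zip(edges, states) if s])

print(round(rel, 3), round(exact, 3))
```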
22.
In addition to the arrays defined for Kruskal's algorithm, let us define one more. EdgeState[1, ..., m] is the array of edge states, so that EdgeState[i] = 1 if edge i is up, and EdgeState[i] = 0 otherwise. In fact, there is no need to construct the minimal spanning tree; it is sufficient to check whether all terminals are connected by the edges that are up. Below we give the appropriate pseudocode. It uses a slightly modified Kruskal's algorithm.
Procedure CMC
R̂ ← 0
// The main cycle
for j ← 1 to M
    // DSS initialization
    for i ← 1 to n
        H[i] ← 0
        Comp[i] ← i
        Tcomp[i] ← 0
    for i ← 1 to k
        t ← T[i]
        Tcomp[t] ← 1
    // iTcomp - the current maximal number
    // of terminals in one component
    iTcomp ← 1
    for i ← 1 to m
        simulate the state of edge i
        if edge i is up
            then EdgeState[i] ← 1
            else EdgeState[i] ← 0
    i ← 0
    repeat
        i ← i + 1
        if (EdgeState[i] = 1)
            then
                u ← Fnode[i]
                v ← Snode[i]
                ucomp ← find2(u)
                vcomp ← find2(v)
                // check if the nodes of edge i
                // belong to different components
                if (ucomp ≠ vcomp)
                    then
                        r ← merge2(ucomp, vcomp)
                        // the number of terminals in the
                        // resulting component after merging
                        t ← Tcomp[ucomp] + Tcomp[vcomp]
                        Tcomp[r] ← t
                        if (t > iTcomp)
                            then iTcomp ← t
    until (iTcomp = k || i = m)
    if (iTcomp = k)
        then R̂ ← R̂ + 1
R̂ ← R̂/M
3 Chapter 3.
1. Denote by τ_i the random lifetime of component i. We have to find P(3, 2, 1) = P(τ_3 < τ_2 < τ_1).
Suppose we know that τ_2 = x. Then, given this fact, the result is
P(τ_3 < x < τ_1) = [since τ_3 and τ_1 are independent] = (1 − e^{−λ_3 x}) e^{−λ_1 x} = e^{−λ_1 x} − e^{−(λ_1 + λ_3)x}.
Now, by the total probability formula, the result is obtained by integrating this expression with respect to the density function λ_2 e^{−λ_2 x}:

P(3, 2, 1) = ∫_0^∞ [e^{−λ_1 x} − e^{−(λ_1 + λ_3)x}] λ_2 e^{−λ_2 x} dx = λ_2/(λ_1 + λ_2) − λ_2/(λ_1 + λ_2 + λ_3).   (3.1)
2. The failure rate is defined only for t ∈ [0, 1). In this interval the density is 1 and the CDF of X is t. So, the result is
h(t) = 1/(1 − t), t ∈ [0, 1).
3.
∫_0^t h(x) dx = ∫_0^t f(x)/(1 − F(x)) dx = − ∫_0^t d(1 − F(x))/(1 − F(x)) = − ln(1 − F(t)).   (3.2)
The result follows since exp(ln a) = a.
4.
By the definition, X represents a sum of n exponentially distributed
i.r.v.’s with parameter λ. Similarly, Y is a sum of m such variables. Therefore Z is a sum of n + m exponentials, Z ∼ Gamma(n + m, λ).
5. For t > 0,
P(Y ≤ t) = P(−λ^{−1} log X ≤ t) = P(log X ≥ −λt) = P(X ≥ e^{−λt}) = 1 − e^{−λt}.   (3.3)
Thus to generate Y ∼ Exp(λ), we have to generate X ∼ U (0, 1) and
take Y = −λ−1 log X.
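A seeded Python check of this inverse-transform recipe (we use 1 − X inside the logarithm so the argument stays in (0, 1]):

```python
import math
import random

def exp_sample(lam):
    # inverse-transform: -log(U)/lam with U ~ U(0,1]
    return -math.log(1.0 - random.random()) / lam

random.seed(6)
lam = 2.0
xs = [exp_sample(lam) for _ in range(200000)]
mean = sum(xs) / len(xs)
print(round(mean, 3))   # close to 1/lam = 0.5
```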
6. Let τ_i ∼ F_i(t) for i = 1, 2; τ_1 and τ_2 are positive i.r.v.'s. The event τ_1 + τ_2 ≤ t is the union over x ∈ [0, t] of the events A_x ∩ B_x, where A_x = (τ_1 ≤ t − x) and B_x = (τ_2 ∈ [x, x + dx]), with P(B_x) ≈ f_2(x) dx. Therefore,
P(τ_1 + τ_2 ≤ t) = F^{(2)}(t) = ∫_0^t F_1(t − x) · f_2(x) dx.
7.
A = P(τ_1 ≤ min(τ_2, ..., τ_n)) = ∫_0^∞ P(τ_1 ∈ [x, x + dx]) · P(τ_2 > x, ..., τ_n > x).   (3.4)
Since the τ_i are i.r.v.'s,
P(τ_2 > x, ..., τ_n > x) = P(τ_2 > x) · ... · P(τ_n > x) = exp[−(λ_2 + ... + λ_n)x].
Now the integral becomes
A = ∫_0^∞ λ_1 exp[−∑_{i=2}^{n} λ_i x] exp(−λ_1 x) dx = λ_1 / ∑_{i=1}^{n} λ_i.   (3.5)
8. Suppose we perform a series of independent identical binary experiments and stop at the first success. The probability of success is p. X is defined as the number of the experiment at which the success took place. Obviously, P(X = k) = q^{k−1} p.
P(X > M) = ∑_{k=M+1}^{∞} q^{k−1} p = p q^M (1 + q + q^2 + ...) = q^M.
The memoryless property:
P(X > M + N | X > M) = P((X > M + N) ∩ (X > M)) / P(X > M) = P(X > M + N)/P(X > M) = q^N = P(X > N).
9.
9.1. Obviously, the system lifetime is τ = min(τ(e_1), τ(e_2)). Then P(τ > t) = P(τ(e_1) > t, τ(e_2) > t). Because of the independence of the edge lifetimes we obtain 1 − F(t) = (1 − F_1(t))(1 − F_2(t)).
9.2.
f(t) = dF(t)/dt = f_1(t) + f_2(t) − f_1(t)F_2(t) − f_2(t)F_1(t) = f_1(t)(1 − F_2(t)) + f_2(t)(1 − F_1(t)).
Dividing f(t) by (1 − F(t)) gives the system failure rate h(t) = h_1(t) + h_2(t), where h_i(t) = f_i(t)/(1 − F_i(t)), i = 1, 2.
10. Since all lifetimes have the same distribution and are independent, all
different orders of failure appearance have the same probability. Since there
are 3!=6 different orders, each one of them has probability 1/6.
11. Note that the lifetime τ equals min(τ_1, τ_2, ..., τ_k), where τ_i is the lifetime of component i. Then, similarly to the solution of Problem 9.1,
P(τ > t) = 1 − F_τ(t) = ∏_{i=1}^{k} (1 − F_i(t)).
We arrive at the desired result since F_i(t) = 1 − e^{−λ_i t}.
To prove that component j fails first, repeat the reasoning given in the solution of Problem 7 and replace λ_1 by λ_j.
12. Let us note first that by (3.4.2), the mean value of τ ∼ Exp(λ) equals
μ = ∫_0^∞ e^{−λx} dx = 1/λ.
The time to the first failure of the system has exponential distribution with parameter k · λ. The mean time to the first failure is, therefore, 1/(kλ). After the first failure, the system will consist of (k − 1) components with independent exponential remaining lifetimes (the memoryless property!), and therefore the mean time between the first and the second failure will be 1/((k − 1)λ). Continuing this reasoning, we arrive at the result:
(1/k + 1/(k − 1) + 1/(k − 2) + ... + 1)/λ.
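A seeded simulation check of this harmonic-sum result for k = 3 and λ = 1: the time until all components fail is the maximum of the k lifetimes.

```python
import math
import random

random.seed(7)
k, lam, N = 3, 1.0, 200000
sims = [max(-math.log(1.0 - random.random()) / lam for _ in range(k))
        for _ in range(N)]
mean = sum(sims) / N
harmonic = sum(1.0 / i for i in range(1, k + 1)) / lam
print(round(mean, 3), round(harmonic, 3))   # both close to 11/6 ≈ 1.833
```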
13. The result follows from the fact that the mean of the sum of r.v.’s is
the sum of the means of these r.v.’s.
14. Let us use (3.4.1): ∫_0^t αx^{β−1} dx = (αt^β)/β. Thus
F_X(t) = 1 − e^{−αβ^{−1} t^β},  f_X(t) = αt^{β−1} exp[−αβ^{−1} t^β].   (3.6)
(3.6)
15. By the result of Problem 9.2, the series system will have failure rate h(t) = (α_1 + α_2)t^{β−1}, which again is the failure rate of a Weibull distribution with parameters (α_1 + α_2, β). β is called the shape parameter. So, the Weibull family with fixed β is closed under the minimum operation.
16. Here F(t) = 1 − e^{−αβ^{−1} t^β} (see Problem 14). Solving F(Y) = X with respect to Y produces
Y = [−(β/α) log(1 − X)]^{1/β}.   (3.7)
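Formula (3.7) admits a deterministic round-trip check: for any x ∈ (0, 1), F applied to Y(x) must return x. A Python sketch (the values of α and β are arbitrary):

```python
import math

alpha, beta = 2.0, 1.5

def F(t):
    # Weibull CDF in the parametrization of Problem 14
    return 1 - math.exp(-(alpha / beta) * t ** beta)

def Y(x):
    # inverse transform (3.7)
    return (-(beta / alpha) * math.log(1 - x)) ** (1 / beta)

for x in (0.1, 0.5, 0.9, 0.99):
    assert abs(F(Y(x)) - x) < 1e-12
print("round trip ok")
```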
17. The event Ak = [τ0 + ... + τk ≤ t0 ] is equivalent to the event
Bk = [ξ(t0 ) ≥ k].
Similarly, the event Ak+1 = [τ0 + ... + τk + τk+1 ≤ t0 ] is equivalent to the
event
Bk+1 = [ξ(t0 ) ≥ k + 1].
Therefore,
P(A_k) − P(A_{k+1}) = P(ξ(t_0) = k),
Q.E.D.
4 Chapter 4.
1. Solve Problem 2 and set λ = 1.
2. Let t > 0.
P(−λ^{−1} log(1 − ξ) ≤ t) = P(log(1 − ξ) ≥ −λt) = P(1 − ξ ≥ exp(−λt)) = P(ξ ≤ 1 − e^{−λt}) = 1 − e^{−λt}.   (4.1)
3. Let t ∈ (0, 1).
P(Y ≤ t) = P(F(X) ≤ t) = P(X ≤ F^{−1}(t)) = F(F^{−1}(t)) = t.   (4.2)
The inverse function F^{−1}(·) exists since F(·) is continuous and strictly increasing.
4. Suppose Random[ ] generates X ∼ U(0, 1).
Set X = Random[ ];
Set Y = Random[ ] + 1;
Set I = 1 if Random[ ] ≤ p, and I = 0 otherwise;
Set Z = I · X + (1 − I) · Y.
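A Python sketch of this mixture generator (the value of p and the sample size are arbitrary); the mean must be p · 0.5 + (1 − p) · 1.5:

```python
import random

def mixture_sample(p):
    # with probability p draw X ~ U(0,1), otherwise Y ~ U(1,2)
    if random.random() <= p:
        return random.random()
    return random.random() + 1

random.seed(8)
p = 0.3
zs = [mixture_sample(p) for _ in range(200000)]
mean = sum(zs) / len(zs)
print(round(mean, 3))   # close to 0.3*0.5 + 0.7*1.5 = 1.2
```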
5. By the description of the network, it is UP if all four edges are up, the probability of which is R = 0.95^2 · 0.97^2. If an edge having up probability p is reinforced by another one in parallel with up probability p_1, the reinforced edge will have up probability r = 1 − (1 − p)(1 − p_1). If we reinforce a "weak" edge, it will have up probability r = 1 − 0.05 · 0.1 = 0.995. Similarly, if we reinforce a "strong" edge, its reliability will be r = 1 − 0.03 · 0.1 = 0.997.
So, we have to compare R_1 = 0.95 · 0.995 · 0.97^2 with R_2 = 0.95^2 · 0.97 · 0.997. Obviously R_1 > R_2. So, the "weak" edge should be reinforced, which is intuitively obvious.
6. The system UP probability is R = ∏_{i=1}^{4} p_i. A reinforced edge i will have up probability p_i^o = 1 − (1 − p_i) · 0.1 = 0.9 + 0.1 p_i. The ratio a = R/R_i^o, where R_i^o is the reliability of the system with the i-th edge reinforced, equals p_i/(0.9 + 0.1 p_i). This expression is an increasing function of p_i. Thus a is minimal, i.e. the reinforcement is optimal, if p_i = min(p_1, p_2, p_3, p_4). So, we must always reinforce the least reliable edge. We note that in a series system the least reliable edge is the most important.
The BIM of edge i equals ∏_{k≠i} p_k = R/p_i. Thus the edge with the smallest up probability has the largest BIM.
7. By (4.3.1), each of the edges has stationary availability p = 100/(100 + 5) = 0.952, which equals the edge stationary up probability. The network stationary UP probability is then Av = 0.952^4 = 0.823.
8. The network has five 3-edge minimal-size min cuts; each one of them cuts off one of the nodes a, b, c, d, e. If we position the more reliable edges on the periphery and the less reliable ones as the radial edges, each min cut will have failure probability g = 0.01^2 · 0.05 = 5 · 10^{−6}, and the failure probability of the network will be, by the BP approximation, F ≈ 25 · 10^{−6}.
If one of the less reliable edges is put on the periphery, there will be at least one min cut of size three with two edges having failure probability 0.05 and one having failure probability 0.01. This single cut already has failure probability 0.05^2 · 0.01 = 2.5 · 10^{−5} = F, and the presence of other cuts will only increase this probability. Thus the initial location is optimal.
9. Let us differentiate both sides of (4.6.2) with respect to p. It follows from (4.6.1) that the left-hand side has a sign change for p ∈ [0, 1], see Fig. 4.5. The right-hand side is a sum of derivatives of the importance functions. If the left-hand side is negative, the derivatives on the right-hand side cannot all be positive. Therefore, there must be a p value for which at least one of the importance function derivatives is negative, and so it is impossible that all importance functions are increasing functions of p. In practical terms, this means that the BIM of a component is not always an increasing function of p in the interval p ∈ [0, 1].
For example, let us check a system which is a series connection of two two-component parallel subsystems. The reliabilities of the first subsystem's components are p_1, p_2, and of the second subsystem's, p_3, p_4. It is easy to check that the system reliability equals
R = (1 − (1 − p_1)(1 − p_2))(1 − (1 − p_3)(1 − p_4)).
Taking the derivative with respect to p_1 and setting p_i ≡ p, we obtain Imp_1 = (1 − p)(2p − p^2) = 2p − 3p^2 + p^3. Its derivative with respect to p is 2 − 6p + 3p^2, which equals 2 at p = 0 and −1 at p = 1. Hence Imp_1 is not an increasing function of p in [0, 1].
10. For a parallel system, R = 1 − ∏_{i=1}^{n} (1 − p_i). It is easy to establish that
∂R/∂p_j = ∏_{i≠j} (1 − p_i) = (1 − R)/(1 − p_j).   (4.3)
Thus the component with maximal p_j, i.e. the most reliable one, has the maximal importance.
For a series system, R = ∏_{i=1}^{n} p_i and
∂R/∂p_j = R/p_j.   (4.4)
For this system, the most important component is the least reliable one.
11. a) The structure function of the system is
ϕ(x) = x_1 [1 − (1 − x_2 x_3)(1 − x_4 x_5)].
b) Opening the brackets we obtain
ϕ(x) = x_1 x_2 x_3 + x_1 x_4 x_5 − x_1 x_2 x_3 x_4 x_5. Replacing each x_i by a binary variable X_i with P(X_i = 1) = p_i and taking the expectation, we find that the system reliability is
r(p) = E[ϕ(X)] = p_1 p_2 p_3 + p_1 p_4 p_5 − p_1 p_2 p_3 p_4 p_5.
c) Imp_1 = p_2 p_3 + p_4 p_5 − p_2 p_3 p_4 p_5 = [p_i = p] = 2p^2 − p^4.
Imp_2 = p_1 p_3 − p_1 p_3 p_4 p_5 = [p_i = p] = p^2 − p^4.
It is easy to see that components 2, 3, 4, 5 have equal importance measures. Obviously, the most important is component 1.
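The reliability polynomial (and hence the importance calculations built on it) can be verified by enumerating all 2^5 states; the probabilities below are illustrative:

```python
from itertools import product

def phi(x):
    # structure function of Problem 11, x = (x1, ..., x5)
    return x[0] * (1 - (1 - x[1] * x[2]) * (1 - x[3] * x[4]))

def reliability(p):
    r = 0.0
    for x in product((0, 1), repeat=5):
        pr = 1.0
        for xi, pi in zip(x, p):
            pr *= pi if xi else 1 - pi   # probability of this state vector
        r += phi(x) * pr
    return r

p = [0.9, 0.8, 0.7, 0.6, 0.5]            # illustrative
r_formula = p[0]*p[1]*p[2] + p[0]*p[3]*p[4] - p[0]*p[1]*p[2]*p[3]*p[4]
print(round(reliability(p), 4), round(r_formula, 4))   # both 0.6228
```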
12. It follows from (4.5.1) that the system reliability function can be obtained by replacing each p_i in the expression for R by the component i reliability R_i(t) = 1 − F_i(t). In our case, p_i must be replaced by exp[−λ_i t]. Thus we arrive at the expression
R_s(t) = exp[−(λ_1 + λ_2 + λ_3)t] + exp[−(λ_1 + λ_4 + λ_5)t] − exp[−(λ_1 + λ_2 + λ_3 + λ_4 + λ_5)t].   (4.5)
13. The radar system is a so-called two-out-of-three system. It is UP if two stations are up and one is down, or all three stations are up. Denote the station lifetime CDF by F(t) = 1 − exp[−λt]. Then
R_s(t) = 3(1 − F(t))^2 F(t) + (1 − F(t))^3 = 3e^{−2λt} − 2e^{−3λt}.   (4.6)
Using formula (4.2.4) for the system mean UP time, we obtain E[UP] = 5/(6λ).
The stationary availability is, therefore,
(5/(6λ))/(5/(6λ) + 0.1/λ) = 0.893.
14. The system is not monotone since ϕ(0, 0, 0) = 0, ϕ(1, 0, 0) = 1, ϕ(0, 1, 0) =
1, ϕ(0, 0, 1) = 1, ϕ(1, 1, 0) = 1, ϕ(0, 1, 1) = 1, ϕ(1, 0, 1) = 1 but ϕ(1, 1, 1) = 0.
15. This network has 6 minimal path sets: (1, 3, 7), (1, 4, 8), (2, 5, 7), (2, 6, 8) and (1, 3, 5, 6, 8), (2, 6, 4, 3, 7). (To check it, sketch the network!) The network is UP if at least one of the minimal paths is UP. If all min paths are DOWN, the network is DOWN.
Below is the algorithm. We denote by X_k the U(0, 1) random variables generated via the operator Random[ ], k = 1, 2, ..., 8, and by Y_k the binary state of edge k. Define
I(1) = Y_1 Y_3 Y_7, I(2) = Y_1 Y_4 Y_8, I(3) = Y_2 Y_5 Y_7, I(4) = Y_2 Y_6 Y_8,
I(5) = Y_1 Y_3 Y_5 Y_6 Y_8, I(6) = Y_2 Y_6 Y_4 Y_3 Y_7.
Algorithm: small channel network reliability.
1. Set Q := 0; N := 10,000; j := 1.
2. For k = 1 to k = 8, set Y_k = 1 if X_k < p_k, and Y_k = 0 otherwise.
3. For k = 1 to k = 6, calculate I(k).
4. Calculate I = I(1) + ... + I(6).
5. If I ≥ 1, set Q := Q + 1.
Comment: If I ≥ 1, there is at least one operational path.
6. Set j := j + 1. If j ≤ N, GOTO 2.
7. R = Q/N; Print R.
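A Python version of this algorithm, together with an exact enumeration over all 2^8 edge states for comparison (the edge up probabilities are illustrative):

```python
import random
from itertools import product

# the 6 minimal path sets of the small channel network
PATHS = [(1, 3, 7), (1, 4, 8), (2, 5, 7), (2, 6, 8),
         (1, 3, 5, 6, 8), (2, 6, 4, 3, 7)]

def up(state):
    # state[i] is truthy iff edge i+1 is up; UP iff some min path is fully up
    return any(all(state[e - 1] for e in path) for path in PATHS)

p = [0.9] * 8                              # illustrative

# exact reliability by enumeration
exact = 0.0
for state in product((0, 1), repeat=8):
    pr = 1.0
    for s, pe in zip(state, p):
        pr *= pe if s else 1 - pe
    exact += pr * up(state)

# seeded Monte Carlo estimate, as in the algorithm above
random.seed(9)
N = 20000
est = sum(up([random.random() < pe for pe in p]) for _ in range(N)) / N
print(round(exact, 4), round(est, 4))
```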
16. For a series system, the structure function is ϕ(X) = ∏_{i=1}^{n} X_i. Taking the expectation, we obtain R_ser = E[ϕ(X)] = ∏_{i=1}^{n} p_i. Replacing p_i by e^{−λ_i t}, we obtain R_ser = exp[−∑_{i=1}^{n} λ_i t].
For a parallel system, the structure function is ϕ(X) = 1 − ∏_{i=1}^{n} (1 − X_i). Taking the expectation, we obtain R_par = E[ϕ(X)] = 1 − ∏_{i=1}^{n} (1 − p_i). Replacing p_i by e^{−λ_i t}, we obtain R_par = 1 − ∏_{i=1}^{n} (1 − exp[−λ_i t]).
17. R_s(t_0) is the probability that the system is UP at the instant t_0. If the system is not monotone, it could happen that the system went DOWN for the first time at some instant t_1, returned to UP at some instant t_2, and remained UP till t_0, with t_1 < t_2 < t_0. Therefore, it is not always true that the system lifetime is greater than t_0.
18. The right-hand side of (4.5.4) is the probability that all components
are up. Therefore the system is UP and vice versa. The right-hand side of
(4.5.5) is 1 minus the probability that all components are down. Therefore
it is the probability that at least one component is up and the whole system
is UP.
5 Chapter 5.
1. The left-hand side of (5.2.4) is
(∂R/∂p_1) q_1 λ_1 + (∂R/∂p_2) q_2 λ_2 + (∂R/∂p_3) q_3 λ_3.   (5.1)
The coefficient of λ_1 in the given expression is ψ_1 + ψ_3. Comparing the coefficients of λ_1, we obtain that
∂R/∂p_1 = (ψ_1 + ψ_3)/q_1.
Similarly,
∂R/∂p_2 = (ψ_1 + ψ_2)/q_2 and ∂R/∂p_3 = (ψ_2 + ψ_3)/q_3.
2. The network is UP if at least two edges are up. This gives the following
formula
P (UP ) = p1 p2 (1 − p3 ) + p1 p3 (1 − p2 ) + p2 p3 (1 − p1 ) + p1 p2 p3 .
Imp1 = ∂R/∂p1 = p2 + p3 − 2p2 p3
Imp2 = ∂R/∂p2 = p1 + p3 − 2p1 p3
Imp_1 − Imp_2 = (p_2 − p_1)(1 − 2p_3) > 0 since both factors are negative. Similarly we conclude that Imp_2 − Imp_3 > 0. Therefore, the most important edge is (a, b) and the least important is (a, c).
3. Let us number the components: the components in parallel have numbers
2, 3, 4 and the single-block component has number 1. The border states are
the DOWN states which turn into an UP state when a single down component
is replaced by an up one.
The list of all DOWN states is the following:
(0, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1),
(0, 1, 1, 0), (0, 1, 0, 1), (0, 0, 1, 1), (0, 1, 1, 1), (1, 0, 0, 0).
All these states, except (0, 0, 0, 0), are border states: the states with zero
in the first position turn into an UP state by replacing component 1 by an up
one, and state (1, 0, 0, 0) ⇒ UP if any one of the components 2, 3 or 4 becomes
up.
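The border-state count can be checked by brute force; a minimal Python sketch (enumeration over all 2^4 states, using the structure function of Problem 4 with the component numbering above):

```python
from itertools import product

def phi(x):
    # Structure function: component 1 in series with the
    # parallel block of components 2, 3, 4.
    return x[0] * (1 - (1 - x[1]) * (1 - x[2]) * (1 - x[3]))

down = [x for x in product([0, 1], repeat=4) if phi(x) == 0]

def is_border(x):
    # A border state is a DOWN state that turns UP when a single
    # down component is replaced by an up one.
    for i in range(4):
        if x[i] == 0:
            y = list(x); y[i] = 1
            if phi(tuple(y)) == 1:
                return True
    return False

border = [x for x in down if is_border(x)]
print(len(down), len(border))   # 9 DOWN states, 8 of them border states
```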
4. Obviously, system reliability is
R = p1 [1 − (1 − p2 )(1 − p3 )(1 − p4 )].
Then
∂R/∂p1 = Imp1 = [taking the derivative and setting pi = p]
= 1 − (1 − p)^3 = 3p − 3p^2 + p^3.
Similarly,
∂R/∂p2 = p(1 − p)^2.
After simple algebra, we see that
Imp1 − Imp2 ≥ 0.
Thus the first component is more important than the second. The proofs
for the third and fourth components are similar. By symmetry, the result is
the same. So, 1 is the most important component, the remaining components
have equal importance.
5. The states with exactly two components up (we list only the up edges)
are:
[(s, a), (b, t)], [(s, a), (s, b)],[(a, t), (b, t)], [(a, t), (s, b)].
6. We found in Problem 4 that R = p1 [1 − (1 − p2)(1 − p3)(1 − p4)].
(Argument t is omitted at each pi(t)). It follows that
∂R/∂p1 = 1 − (1 − p2)(1 − p3)(1 − p4). Now from the list of all system
border states (Problem 3) we see that the border states turning into the UP
state by means of "activating" component 1 are the following:
(0, 1, 1, 1), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1), (0, 0, 1, 1), (0, 1, 0, 1) and (0, 1, 1, 0).
The probability to be in one of these states is
(1 − p1)[1 − (1 − p2)(1 − p3)(1 − p4)]. If we multiply it by λ1, we get
exactly the first component of the ∇R vector multiplied by λ1(1 − p1).
Similarly, ∂R/∂p2 = p1(1 − p3)(1 − p4). Multiply it by (1 − p2) and we
get the probability of the unique border state (1, 0, 0, 0) which goes into UP
by "activating" component 2.
7. If (s, t) is absolutely reliable, the network is UP and its reliability is
1. If (s, t) is eliminated, then the network has four min cuts of size 2. By the
B-P approximation, the failure probability is F ≈ 4α^2 and the importance
of (s, t) is Imp(s, t) ≈ 1 − (1 − 4α^2) = 4α^2. If edge (s, a) is absolutely
reliable, we can compress s and a, and our network will have 2 min cuts of
size 3 and reliability R1 ≈ 1 − 2α^3. If edge (s, a) is eliminated, we have two
min cuts of size 2 and the reliability R2 ≈ 1 − 2α^2. Thus, Imp(s, a) ≈
1 − 2α^3 − (1 − 2α^2) = 2α^2 − 2α^3. Obviously Imp(s, t) > Imp(s, a).
8. Let 1, 2, 3 be the numbers of the first series subsystem's components, and 4, 5
those of the second. Denote by pi the reliability of component i. System reliability
is
R = [1 − (1 − p1 )(1 − p2 )(1 − p3 )] · [1 − (1 − p4 )(1 − p5 )] = R1 · R2 .
The importance of component 1 is
Imp1 =
∂R1
· R2 .
∂p1
After setting all pi ≡ p we obtain
Imp1 = (1 − p)^2 [1 − (1 − p)^2].
Components 1, 2, 3 have the same BIMs. In a similar way we obtain that
Imp4 = Imp5 = (1 − p)[1 − (1 − p)^3].
Obviously, Imp1 < Imp4 . So, the components of the smaller system are
more important.
6
Chapter 6.
1. Bridge network has four nodes (s, a, b, t) and five edges e1 = (s, a),
e2 = (s, b), e3 = (a, t), e4 = (b, t), e5 = (a, b). s, t are terminals, a, b do not
fail, the edges do fail, and their lives are i.i.d. r.v.’s. Network fails if the
terminals become disconnected. In the process of destruction, the network
can not fail with the failure of one edge. Thus f1 = 0. Obviously, it can not
survive failure of four edges. Thus f5 = 0. If the first two failures are of
edges 1, 2 or 3, 4 then the network fails with the second failure. There are
2 · 2! · 3! = 24 permutations leading to the failure on the second step. The
total number of permutations is 5! = 120 and thus f2 = 1/5. The network
survives three failures and fails on the fourth if the edges fail, for example,
in the following order: 1, 3, 5, 2, 4 or 2, 5, 4, 1, 3. There are 3! · 2! · 2 = 24
permutations of this type. Thus f4 = 24/120 = 1/5 and therefore f3 = 3/5.
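These spectrum values can be confirmed by enumerating all 5! = 120 destruction permutations; a small Python sketch of the bridge (edge numbering as above):

```python
from itertools import permutations

# Bridge: edges 1..5 as in the solution; terminals s and t.
edges = {1: ('s', 'a'), 2: ('s', 'b'), 3: ('a', 't'), 4: ('b', 't'), 5: ('a', 'b')}

def st_connected(alive):
    # Simple reachability from s over the alive edges.
    reach, frontier = {'s'}, ['s']
    while frontier:
        v = frontier.pop()
        for e in alive:
            a, b = edges[e]
            for u, w in ((a, b), (b, a)):
                if u == v and w not in reach:
                    reach.add(w); frontier.append(w)
    return 't' in reach

counts = [0] * 6                      # counts[r]: failures at the r-th step
for perm in permutations(edges):
    alive = set(edges)
    for step, e in enumerate(perm, start=1):
        alive.remove(e)
        if not st_connected(alive):
            counts[step] += 1
            break

spectrum = [c / 120 for c in counts[1:]]
print(spectrum)                       # [0.0, 0.2, 0.6, 0.2, 0.0]
```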
2. Define a matrix A = [a(i, j)], with 15 rows and 10 columns. i denotes
the row, j -the column. Suppose you have an exponential random number
generator. For example, in Mathematica it is ExponentialDistribution[λ].
Set λ = 1.
a. For fixed i, generate 10 replicas of the random variable X ∼ Exp(λ),
denote them as x(i, 1), ..., x(i, 10) and set a(i, j) := x(i, j).
b. Order the values in the i-th row in ascending order.
c. Repeat a, b 15 times, for i = 1 to i = 15.
The desired sample will be the values a(i, 7), i = 1, 2, ..., 15.
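A Python analogue of steps a–c (random.expovariate plays the role of the exponential generator; the seed is an arbitrary choice):

```python
import random

random.seed(1)
N_ROWS, N_COLS, R = 15, 10, 7     # 15 samples of size 10; 7th order statistic

sample = []
for _ in range(N_ROWS):
    # One row: 10 replicas of X ~ Exp(1), sorted in ascending order.
    row = sorted(random.expovariate(1.0) for _ in range(N_COLS))
    sample.append(row[R - 1])     # the 7th smallest value in the row

print(sample)
```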
3. By the definition of the r-th order statistic in a sample of n, in order
to satisfy the inequality X(r) ≤ t there must be at least r values of Xi
smaller than or equal to t. Let r ≤ j ≤ n. Fix any value of j. j replicas of Xi must
be smaller than or equal to t, and the remaining (n − j) replicas must be greater than
t. For fixed candidates of these replicas, the probability of this event is
[F(t)]^j · [1 − F(t)]^{n−j}. There are n!/(j!(n − j)!) different ways to choose j
candidates from a collection of n. This explains (6.1.2).
The smallest value in the sample is larger than t if and only if all values
in the sample exceed t. This explains (6.1.3). The largest value in the
sample is ≤ t if and only if all values in the sample are ≤ t. This explains
(6.1.5).
To prove (6.1.6), note that
P(Xmax ≤ t) = P(all Xi ≤ t) = ∏_{i=1}^n Fi(t),
because all the Xi are independent random variables.
4. For n = 4 and r = 2,
F(2)(t) = 6q^2 p^2 + 4q^3 p + q^4; F(3)(t) = 4q^3 p + q^4; F(4)(t) = q^4.
By (6.3.2),
Fs(t) = F(2)(t)f2 + F(3)(t)f3 = 4q^2 (1 − q)^2 + 4q^3 (1 − q) + q^4.
After some boring algebra, it gives the desired result Fs(t) = (1 − p^2)^2.
Now consider a parallel connection of two two-component series systems.
The system structure function is ϕ(X) = 1 − (1 − X1 X3)(1 − X2 X4), where P(Xi =
1) = p. Therefore, system reliability R = 1 − (1 − p^2)^2. Finally, Fs(t) =
1 − R = (1 − p^2)^2.
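The identity Fs(t) = (1 − p^2)^2 can also be checked numerically; the algebra above uses the spectrum values f2 = 2/3, f3 = 1/3 (cf. Problem 8). A small Python check:

```python
from math import comb, isclose

def F_r(r, n, q):
    # CDF of the r-th order statistic, eq. (6.1.2), with F(t) = q
    return sum(comb(n, j) * q**j * (1 - q)**(n - j) for j in range(r, n + 1))

for p in [0.1, 0.3, 0.5, 0.9]:
    q = 1 - p
    Fs = F_r(2, 4, q) * (2 / 3) + F_r(3, 4, q) * (1 / 3)   # f2 = 2/3, f3 = 1/3
    assert isclose(Fs, (1 - p**2) ** 2)
```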
5. If the system remains UP when the first r − 1 failures take place and goes
DOWN with the r-th failure, the random instant when this happens is the
r-th order statistic in the sample of n. (6.3.3) follows from the formula of the
total probability. The event τs ≤ t may happen under one of n conditions,
P(τs ≤ t | system fails with the r-th failure) = F(r)(t), r = 1, 2, ..., n.
6. Suppose that the network has n edges numbered 1, 2, ..., n and the order
in which they are born is given by some permutation of these numbers
i1 , i2 , ..., ir , ..., in . Suppose that the network is DOWN at the instant of the
first r − 1 births and gets UP at the r-th birth. Denote by br the probability
of this event. Then the collection {b1 , b2 , ..., bn } is called the C-spectrum
(construction spectrum) of the network. For example, if in the network on
Figure 6.3 the births appear in the order 3, 2, 1, 4 the system gets UP on
the third birth. Denote by θ the random time of system birth. Given that
the system birth takes place at the r-th birth of an edge, the system birth
time coincides with the value of the r-th order statistic of the birth times.
Denote by Bi ∼ G(T ) the random time of the birth of edge i and by B(r) (t)
the corresponding r-th order statistic in the sample of {Bi , i = 1, 2, ..., n}.
Thus, applying the total probability formula, we arrive at the expression
P(θ ≤ t) = ∑_{r=1}^n br B(r)(t).
7. It follows from the problem description that the evaluation completion
time τ is distributed as the third order statistic in the population of 5 from
the CDF F(t). Therefore
P(τ ≤ t) = ∑_{r=3}^5 (5!/(r!(5 − r)!)) [F(t)]^r [1 − F(t)]^{5−r}.
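A quick numerical sanity check of this sum (for F(t) = 1/2 the answer is 1/2, by the symmetry of Bin(5, 1/2)):

```python
from math import comb
from fractions import Fraction

def third_os_cdf(Ft):
    # P(tau <= t) = sum_{r=3}^{5} C(5,r) F(t)^r (1-F(t))^{5-r}
    return sum(comb(5, r) * Ft**r * (1 - Ft)**(5 - r) for r in range(3, 6))

print(third_os_cdf(Fraction(1, 2)))   # 1/2, by symmetry of Bin(5, 1/2)
```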
8. Three nodes b, c, e can fail in 6 different orders:
(b, c, e), (b, e, c), (c, b, e), (c, e, b), (e, b, c), (e, c, b).
In the first and the third sequences the disconnection appears with the third
failure. In the other cases it appears on the second. So, f2 = 2/3, f3 = 1/3.
9. The C-spectrum is b2 = 1/3, b3 = 2/3.
Consider, for example, permutation (1, 2, 3, 4). Suppose it gives the order
of edge failures. s − t becomes disconnected with the second failure. Now
read this permutation from right to left and suppose it determines the order
of the edges' births. Obviously, the s − t connection appears on the third step.
Therefore, b3 = f2 , b2 = f3 . In general fk = bn+1−k .
10. The lifetime of the first block has CDF F1(t) = F(2)(t), the CDF of the
second order statistic in a sample of n = 3. The lifetime of the second block
has CDF F2(t) = F(2)(t), the second order statistic in a sample of
n = 4. Since the system is a series connection of these blocks,
P(τs ≤ t) = [see (6.1.4)] = 1 − (1 − F1(t))(1 − F2(t)).
7
Chapter 7.
1. X has density 1/T in the interval [0, T]. Thus
P(X ≤ t) = ∫_0^t (1/T) dx = t/T, t ∈ [0, T].
For t ∈ [0, T1],
P(X ≤ t | X ≤ T1) = P((X ≤ t) ∩ (X ≤ T1))/P(X ≤ T1) = t/T1,
which is the CDF of X⋆ ∼ U(0, T1).
2. It is easier to prove that for t ∈ [0, T]
P(X1 > t) = (Exp[−λ1 t] − Exp[−λ1 T]) / (1 − Exp[−λ1 T]).
Indeed,
P(X1 > t) = P(1 − ξ(1 − e^{−λ1 T}) ≤ e^{−λ1 t}) =
P(ξ > (1 − e^{−λ1 t})/(1 − e^{−λ1 T})) =
1 − (1 − e^{−λ1 t})/(1 − e^{−λ1 T}).   (7.1)
Now
P(X1 > t) = 1 − P(X1 ≤ t) = R1(t) and fX1(t) = −dR1(t)/dt.
This gives the left-hand side of (7.4.2).
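The derivation corresponds to the truncated-exponential generator X1 = −λ1^{−1} log(1 − ξ(1 − e^{−λ1 T})), ξ ∼ U(0, 1); a Python sketch comparing the empirical tail with the formula (the parameter values below are arbitrary illustrative choices):

```python
import math, random

def trunc_exp(lam, T, rng=random):
    # X1 = -(1/lam) log(1 - xi (1 - e^{-lam T})): Exp(lam) truncated to [0, T]
    xi = rng.random()
    return -math.log(1.0 - xi * (1.0 - math.exp(-lam * T))) / lam

random.seed(7)
lam, T, t = 2.0, 1.5, 0.6
n = 200_000
emp = sum(trunc_exp(lam, T) > t for _ in range(n)) / n
exact = (math.exp(-lam * t) - math.exp(-lam * T)) / (1.0 - math.exp(-lam * T))
assert abs(emp - exact) < 0.01        # empirical tail matches (7.4.2)
```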
3.
P(X ≤ t) = P(F^{−1}(V) ≤ t) = [F is monotone] = P(F(F^{−1}(V)) ≤ F(t)) = F(t).
4.
P(Z ≤ t) = P(Y^β ≤ t) = P(Y ≤ t^{1/β}) = 1 − e^{−λt},
Q.E.D.
5. We already know that if X ∼ U(0, 1), then Y = −λ^{−1} log(1 − X) ∼
Exp(λ). Using the result of the previous problem we conclude that Y^{1/β} ∼
W(λ, β). Therefore the generator is the following:
Z = [−λ^{−1} log(1 − Random[ ])]^{1/β}.
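A Python version of this generator, with a deterministic inverse-transform check: plugging the u-quantile back into the CDF 1 − exp(−λ t^β) must return u.

```python
import math, random

def weibull_rv(lam, beta, rng=random):
    # Z = [-(1/lam) log(1 - U)]^{1/beta}, U ~ U(0,1), so P(Z <= t) = 1 - exp(-lam t^beta)
    return (-math.log(1.0 - rng.random()) / lam) ** (1.0 / beta)

# Inverse-transform sanity check with illustrative parameters.
lam, beta = 0.5, 2.0
for u in (0.1, 0.5, 0.9):
    z = (-math.log(1.0 - u) / lam) ** (1.0 / beta)
    assert abs((1.0 - math.exp(-lam * z**beta)) - u) < 1e-12
```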
6. Set I = 0, N = 10, 000.
a. For i = 1 to i = 3 generate
Xi = [− log(1 − Random[ ])]^{1/β}.
b. Compute S = X1 + X2 + X3 .
c. If S ≤ T , set I := I + 1.
Repeat steps a, b, c N times.
Estimate P̂ (X1 + X2 + X3 ≤ T ) = I/N.
Estimate Var[P̂] = P̂(1 − P̂)/N.
Comment. Note that I ∼ B(N, P ), V ar[I] = N · P · (1 − P ) and
V ar[I/N ] = P (1 − P )/N .
Replacing P by its estimate P̂ = I/N we obtain the estimate of the
variance.
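A Python sketch of steps a–c. To make the answer verifiable we take β = 1 (an illustrative choice, not part of the problem), so that the sum is Erlang(3) with a known CDF:

```python
import math, random

random.seed(3)
beta, T, N = 1.0, 2.0, 100_000     # beta = 1 makes each Xi ~ Exp(1), so the
                                   # sum is Erlang(3) and the answer is known
I = 0
for _ in range(N):
    S = sum((-math.log(1.0 - random.random())) ** (1.0 / beta) for _ in range(3))
    if S <= T:
        I += 1

P_hat = I / N
var_hat = P_hat * (1.0 - P_hat) / N                    # estimated Var[P-hat]
exact = 1.0 - math.exp(-T) * (1.0 + T + T * T / 2.0)   # Erlang(3,1) CDF at T
assert abs(P_hat - exact) < 4 * math.sqrt(var_hat) + 0.005
```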
7. The question is, in fact, how many operations we have to perform in order
to compute one replica of the estimator Bm (T ). Suppose we are on the j-th
step. We already know the realizations of X1 = v1 , X2 = v2 , ..., Xj−1 = vj−1
and have the product of first j terms in (7.3.5). To get one replica vj of Xj
we must generate one replica of the r.v. with density (7.3.4). Suppose it
takes H arithmetic operations. Then we must find Fj (T −v1 −...−vj−1 ) and
multiply by this quantity the expression (7.3.5). Suppose it takes another L
operations. Thus, for simulating one replica of the unbiased estimator of r
convolutions we have to perform r(H + L) = O(r) operations. Our method,
therefore, is linear in r.
8
Chapter 8.
1. a). We will call the edges by their lifetimes. So, the edge connecting the
left terminal with v1 will be called edge 8. Following the Kruskal algorithm, we
construct the maximal spanning tree. It consists of edges 13, 12, 11, 10, 9,
7 (note that edge 8 is omitted!) and 4. Edges 9, 10 and 4 are "hanging"
edges with respect to the terminals. Eliminate them. The remaining tree
has minimal edge 7, and the network lifetime is 7. When this edge fails, the
terminal in the middle gets isolated.
b). Because of the node v1 failure, edges 8 and 9 get lifetime 6.5. The node v2
failure changes the lifetimes of edges 12 and 11 to 10.5. Now the maximal spanning
tree has edges 13, 10.5, 10.5, 10, 7, 6.5 and 4. Edges 4, 6.5 and 10 are
”hanging” and must be erased. The minimal edge in the remaining tree is
7. So, the network fails at time instant t = 7.
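The maximal-spanning-tree method can be illustrated in Python on a small hypothetical graph (not the network of the figure, and with no terminal pruning): for all-terminal connectivity, the network lifetime equals the minimal edge lifetime in the maximal spanning tree, which we check against direct destruction.

```python
def find(parent, v):
    # Union-find with path halving.
    while parent[v] != v:
        parent[v] = parent[parent[v]]
        v = parent[v]
    return v

def max_tree_lifetime(n, edges):
    # Kruskal on lifetimes in descending order; the bottleneck (minimal
    # accepted lifetime) is the network lifetime.
    parent = list(range(n))
    bottleneck = float('inf')
    for life, a, b in sorted(edges, reverse=True):
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[ra] = rb
            bottleneck = min(bottleneck, life)
    return bottleneck

def brute_lifetime(n, edges):
    # Remove edges in increasing lifetime order; the network dies at the
    # lifetime whose removal first disconnects the graph.
    alive = sorted(edges)
    while alive:
        life, _, _ = alive[0]
        rest = alive[1:]
        parent = list(range(n))
        for _, a, b in rest:
            parent[find(parent, a)] = find(parent, b)
        if len({find(parent, v) for v in range(n)}) > 1:
            return life
        alive = rest

# Hypothetical 5-node network: (lifetime, node, node)
edges = [(8, 0, 1), (13, 1, 2), (4, 2, 3), (11, 3, 4), (7, 4, 0), (9, 1, 3)]
assert max_tree_lifetime(5, edges) == brute_lifetime(5, edges)
```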
2. Replace the numerator by the first term in (8.4.6) and delete the factors
(1 − fr). The variance will only increase. Now replace fr by the largest value
of {fr}, denote it maxk fk. This will only increase the numerator. In the
denominator, replace all fr by their smallest value minr fr. In the numerator
we have (∑[F(r)]^2)^{0.5}. It is smaller than ∑ F(r). So, the fraction will
only increase if the ratio of these sums is replaced by 1. This completes the
proof.
3. Take the expression (6.1.2) for the r-th order statistic and set there
F(t) = α. Afterwards, substitute F(r)(t) into (6.3.3). You will observe that
α(1 − α)^{m−1} m!/(1!(m − 1)!) is multiplied only by f1,
α^2 (1 − α)^{m−2} m!/(2!(m − 2)!) is multiplied by (f1 + f2),
α^3 (1 − α)^{m−3} m!/(3!(m − 3)!) is multiplied by (f1 + f2 + f3),
and so on. In general,
α^x (1 − α)^{m−x} m!/(x!(m − x)!) has the multiple (f1 + f2 + ... + fx).
On the other hand, there is an equivalent expression for system failure
probability (8.5.5). Here the term αx (1−α)m−x is multiplied by the number
of system failure states with exactly x components being down and (m − x)
being up. From this follows (8.5.4).
For the upper network in Fig. 8.3,c we have m = 15, x = 7 and f5 =
0.00205, f6 = 0.010155, f7 = 0.02982.
C(7) = (f5 + f6 + f7 )15!/(7!8!) = 270.43.
We conclude that the network has 270 failure states with 7 edges down and
8 edges up.
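The count C(7) is a one-liner to reproduce:

```python
from math import comb

f5, f6, f7 = 0.00205, 0.010155, 0.02982
C7 = (f5 + f6 + f7) * comb(15, 7)     # 15!/(7! 8!) = 6435
print(round(C7, 2))                   # 270.43
```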
4. Similar to (8.5.2), the probability to reveal in a single experiment a min
cut of size 3 equals ϵ = 3!·87!/90! = 6/(90 · 89 · 88) = 8.512 · 10^{−6}.
For large M, (1 − x/M)^M ≈ e^{−x}. If ϵ is the probability to reveal the min
cut in a single experiment, the probability to miss it in M experiments is
(1 − ϵ)^M. It must be less than 0.001. From e^{−x} = 0.001 it follows that x =
− log[0.001] = 6.91. So, from x/M = ϵ we obtain M = 6.91/ϵ ≈ 812,000.
Thus we must simulate at least 812,000 permutations.
5. a. There are 24 = 4! various orders in which 4 nodes can fail. Failure
of any single node does not lead to network failure. Thus f1 = 0. The network fails
at the instant of the second failure if node 1 fails first, or anyone of nodes
2, 3, 4 fail first and node 1 fails next. This amounts to 12 permutations and
thus f2 = 12/4! = 1/2. The network fails with the fourth failure if node 1
fails last. This gives 3!=6 permutations and f4 = 1/4. Therefore f3 = 1/4.
Thus, Sp = [0, 0.5, 0.25, 0.25].
b. Following (8.4.4) we find, after routine algebra, that FN = 3F^2 − 3F^3 +
F^4. (In (8.4.3) we set m = 4, r = 2, 3, 4.)
To find the maximal permissible value of F we can use the operator
FindRoot of Mathematica and solve the equation FN = 0.05. The result
is F = 0.139. If node failure probability exceeds this value, network failure
probability exceeds 0.05.
Note also that the same formula for FN could be derived from the fact
that the network is a parallel connection of two series systems, one with one
component, another one - with three components.
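FindRoot can be replaced by plain bisection, since FN is increasing on [0, 1]; a Python sketch:

```python
def FN(F):
    # Network failure probability from Problem 5b.
    return 3 * F**2 - 3 * F**3 + F**4

lo, hi = 0.0, 1.0
for _ in range(60):                 # bisection for FN(F) = 0.05
    mid = (lo + hi) / 2
    if FN(mid) < 0.05:
        lo = mid
    else:
        hi = mid

print(round(lo, 3))                 # 0.139
```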
6.
Suppose that the operator DiscUnif[m] generates a random integer from the
set of integers 1, ..., m.
a. Define a matrix A[i,j] with 6 rows and n columns, n > 3.
Set A[1,1]=1, A[1,2]=2, A[1,3]=3,
Set A[2,1]=1, A[2,2]=3, A[2,3]=2,
Set A[3,1]=2, A[3,2]=1, A[3,3]=3,
Set A[4,1]=2, A[4,2]=3, A[4,3]=1,
Set A[5,1]=3, A[5,2]=1, A[5,3]=2,
Set A[6,1]=3, A[6,2]=2, A[6,3]=1.
b. Let x=DiscUnif[6].
c. Set j=4.
d. Define A[x,j]=j.
e. Set y=DiscUnif[j]; a=A[x,y]; b=A[x,j].
f. Set A[x,y]=b; A[x,j]=a.
g. j:=j+1.
h. If j > n STOP ELSE GOTO d.
i. Read the permutation from the x-th row of A.
This algorithm creates a random permutation of n > 3 integers in a
recurrent way, starting from a random permutation of the 3 integers 1, 2, 3. These
permutations are positioned in the first 6 rows of matrix A.
b. chooses randomly the x-th row of A containing one of these six
permutations.
c, d. for the first time (j=4) put 4 on the last position in the x-th
row.
e, f pick a random element among the first j elements of the permutation, remember it (a=A[x,y]) and exchange it with the last element
in the x-th row of matrix A. (Note that y must be uniform on 1, ..., j: with
probability 1/j the new element j stays in place; otherwise the resulting
permutation would not be uniform.)
g, h create a loop.
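A Python sketch of this recurrent construction (Python's list shuffle of {1, 2, 3} stands in for the table of six base rows; y is drawn uniformly from 1, ..., j, so with probability 1/j the new element stays in place, which is what makes the output uniform). The 5000-sample check only verifies that all 24 permutations of n = 4 are reachable:

```python
import random

def random_perm(n, rng):
    # Inside-out construction: start from a random permutation of {1,2,3},
    # then insert j = 4..n at the end and swap it with a uniformly chosen
    # position among the first j (y = j leaves j in place).
    a = [1, 2, 3]
    rng.shuffle(a)                  # stands in for the table of 6 base rows
    for j in range(4, n + 1):
        a.append(j)
        y = rng.randint(1, j)       # DiscUnif[j]
        a[y - 1], a[j - 1] = a[j - 1], a[y - 1]
    return tuple(a)

rng = random.Random(11)
seen = {random_perm(4, rng) for _ in range(5000)}
assert len(seen) == 24              # all 4! permutations are reachable
```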
7. It is seen from the figure that there are 5 min cuts of size 4. The same
result is provided by (8.5.1), with r = 4, m = 10. The B-P approximation
is F ≈ 5ϵ^4. Equating it to 0.01 we obtain ϵ ≤ 0.211.
8. We present a small program written in Mathematica.
Define m, the number of edges in the network;
Define q, edge failure probability;
Define a one-dimensional array Dsp containing the D-spectrum:
Dsp={f(1),f(2),...,f(m)};
Fsyst=Sum[Dsp[[r]] Sum[q^j (1-q)^(m-j) m!/(j!(m-j)!), {j, r, m}], {r, 1, m}]
Print[Fsyst]
The internal sum computes the CDF of the r-th order statistic according to (8.4.3).
The external sum computes (8.4.4). Dsp[[r]] picks up the r-th term of the
spectrum, f(r). Fsyst is the system failure probability.
For example, for the network shown on Fig. 8.3, b, we have m=10,
Dsp={0, 0, 0, 1/42, 4/42, 12/42, 25/42, 0, 0, 0}. Taking q=0.2, we obtain Fsyst=0.00833.
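A Python equivalent of the program; with q = 0.2 it reproduces the quoted value Fsyst ≈ 0.00833:

```python
from math import comb
from fractions import Fraction

def Fsyst(Dsp, m, q):
    # External sum over the spectrum; the internal sum is the CDF of the
    # r-th order statistic, cf. (8.4.3)-(8.4.4).
    return sum(f * sum(comb(m, j) * q**j * (1 - q)**(m - j)
                       for j in range(r, m + 1))
               for r, f in enumerate(Dsp, start=1))

Dsp = [0, 0, 0, Fraction(1, 42), Fraction(4, 42),
       Fraction(12, 42), Fraction(25, 42), 0, 0, 0]
F = Fsyst(Dsp, 10, 0.2)
print(F)          # approximately 0.0083
```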
9
Chapter 9.
1. The super-state σ23 has edges 2 and 3 up, all other edges are down.
Transition into it can be from σ11 and σ14 .
2. Using the properties of the exponential distribution (Chapter 3) it is
easy to find out the following.
The sitting time in σ0 is τ0 ∼ Exp(∑_{i=1}^5 λi), so E[τ0] = 1/∑_{i=1}^5 λi.
The transition into σ1 has probability λ1/∑_{i=1}^5 λi.
The sitting time in σ11 is τ11 ∼ Exp(λ1 + λ2 + λ4 + λ5), so E[τ11] = 1/∑_{i≠3} λi.
The transition into σ22 has probability λ4/∑_{i≠3} λi.
The sitting time in σ23 is τ23 ∼ Exp(λ1 + λ4 + λ5), so E[τ23] = 1/(λ1 + λ4 + λ5).
The transition into UP has probability 1.
The mean transition time along this trajectory (call it u) is
1/∑_{i=1}^5 λi + 1/∑_{i≠3} λi + 1/(λ1 + λ4 + λ5).
The probability of trajectory u is
P(u) = (λ3/∑_{i=1}^5 λi) · (λ2/∑_{i≠3} λi).
3. If the sitting time in σ0 plus the sitting time in σ11 plus the sitting time
in σ23 is less or equal t, then the trajectory is in UP at time t. Thus
P0 = P (τ0 + τ11 + τ23 ≤ t|u).
Now
P1 = P (τ0 ≤ t|u) − P (τ0 + τ11 ≤ t|u),
because the first term is the probability that the trajectory is not in σ0 at
time t + 0, and the second term is the probability that it is in σ23 or in UP
at time t.
4. There are two trajectories leading to σ26:
u1 = σ0 → σ14 → σ26,
u2 = σ0 → σ13 → σ26,
P(u1) = λ2 λ4 / [∑_{i=1}^5 λi · (λ1 + λ2 + λ3 + λ5)],
P(u2) = λ5 (λ4 + λ3) / [∑_{i=1}^5 λi · (λ1 + λ3 + λ4 + λ5)].
The answer: P(u1) + P(u2).
5. Sitting time in σ0 is τ0 ∼ Exp(λ2 + λ4 + λ5 )
Sitting time in σ3 is τ3 ∼ Exp(λ2 + λ5 )
Sitting time in σ1 is τ1 ∼ Exp(λ2 + λ4 )
The transition σ0 → σ3 has probability λ4 /(λ2 + λ4 + λ5 ).
The transition σ0 → σ1 has probability λ5 /(λ2 + λ4 + λ5 ).
6.
1). The turnip has the root σ0, three states σ1, σ2, σ3 on the first level
and one super-state UP on the second level. There are three trajectories:
ui = (σ0 → σi → UP), i = 1, 2, 3.
2). λi = − log(qi), or qi = e^{−λi}. The sitting time in σ0 is τ0 ∼
Exp(λ1 + λ2 + λ3). The sitting time in σ1 is τ1 ∼ Exp(λ2 + λ3).
3). p(u1 ) = (λ1 /(λ1 + λ2 + λ3 )) · 1.
4). A = P(τ0 ≤ 1) = 1 − e^{−Λ0}, where Λ0 = λ1 + λ2 + λ3.
By formula (5) from Appendix B,
B = P(τ0 + τ1 ≤ 1) = 1 − e^{−Λ0} · Λ1/(Λ1 − Λ0) − e^{−Λ1} · Λ0/(Λ0 − Λ1),
where Λ1 = λ2 + λ3.
After some algebra we find out that P⋆ = A − B = Λ0 (e^{−Λ1} − e^{−Λ0})/λ1.
5) follows from the above formulas for p(σ1), p(u1) and P⋆.
This problem gives an explanation for the formula (9.3.7) for Φ(N).
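The closed form for P⋆ is easy to verify numerically (the rates below are arbitrary illustrative values):

```python
import math

# Numerical check of P* = A - B = L0 (e^{-L1} - e^{-L0}) / l1
l1, l2, l3 = 0.7, 0.4, 0.9
L0, L1 = l1 + l2 + l3, l2 + l3

A = 1 - math.exp(-L0)
B = 1 - math.exp(-L0) * L1 / (L1 - L0) - math.exp(-L1) * L0 / (L0 - L1)
P_star = L0 * (math.exp(-L1) - math.exp(-L0)) / l1
assert abs((A - B) - P_star) < 1e-12
```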
10
Chapter 10.
1. Components 2 and 3 are in series, together they are in parallel to component 1,
and all this block is in series with component 4. It is elementary to check that the
structure function is
ϕ(X) = [1 − (1 − X2 X3)(1 − X1)]X4.
System reliability is
R(p) = E[ϕ(X)] = [1 − (1 − p2 p3)(1 − p1)]p4 = |pi ≡ p| = [1 − (1 − p^2)(1 − p)]p.
After setting all pi = p, we obtain R = p^3 + p^2 − p^4.
2. What is called the normalized C ⋆ -spectrum in this chapter is exactly
the C-spectrum introduced in Chapter 8. Consider any permutation of n
component numbers. Without loss of generality let this permutation be
(1, 2, 3, ..., k, k + 1, ..., n)
Suppose we do the destruction of the network, erasing components as
they appear from left to right. Suppose that the network gets DOWN at the
failure of component k, on the k-th step. Now consider the same permutation
and do the construction process moving from right to left. Obviously, the
network will be UP exactly at adding component k, not before and not later,
i.e. on the (n − k + 1)-th step of the process. This proves that fk = gn−k+1 .
3. Since R(p) = [1 − (1 − p2 p3)(1 − p1)]p4,
R(0, 1, 1, 1) = p2 p3 p4; R(1, 0, 1, 1) = p1 p4;
R(1, 1, 0, 1) = p1 p4; R(1, 1, 1, 0) = 0.
After setting pi ≡ p, we obtain
FVIM1 = 1 − p/(1 + p − p^2); FVIM2 = FVIM3 = 1 − 1/(1 + p − p^2); FVIM4 = 1.
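These FVIM values can be verified by brute-force enumeration, assuming (as the computation above does) FVIMj = 1 − R(p1, ..., 0j, ..., pn)/R(p):

```python
from itertools import product
from math import isclose

def phi(x):
    # Structure function of Problem 1.
    return (1 - (1 - x[1] * x[2]) * (1 - x[0])) * x[3]

def R(ps):
    # Reliability by enumerating all 16 component states.
    total = 0.0
    for x in product([0, 1], repeat=4):
        pr = 1.0
        for xi, pi in zip(x, ps):
            pr *= pi if xi else 1 - pi
        total += pr * phi(x)
    return total

p = 0.7
Rp = R([p] * 4)
for j, closed in [(0, 1 - p / (1 + p - p * p)),
                  (1, 1 - 1 / (1 + p - p * p)),
                  (2, 1 - 1 / (1 + p - p * p)),
                  (3, 1.0)]:
    ps = [p] * 4
    ps[j] = 0.0
    assert isclose(1 - R(ps) / Rp, closed)   # FVIM_j = 1 - R(0_j, p)/R(p)
```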
4.
Rewrite the formula (10.3.2) in the following form:
BIMj = ∑_{i=1}^n Zi,j p^{i−1} q^{n−i} / (i!(n − i)!) − ∑_{i=1}^n (Yi − Zi,j) p^i q^{n−i−1} / (i!(n − i)!),   (10.1)
where the first sum stands for R(p1, ..., pj−1, 1j, pj+1, ..., pn) and the second
for R(p1, ..., pj−1, 0j, pj+1, ..., pn). Now, placing R(p1, ..., pj−1, 0j, pj+1, ..., pn)
into the numerator of the formula (10.6.1) and replacing Yi − Zi,j by Vi,j we
arrive at (10.6..
5.
The following 4 permutations of edges have anchor equal to 2: (1, 2, 3, 4),
(1, 2, 4, 3), (2, 1, 3, 4), (2, 1, 4, 3). Anchor equal to 3 have the following 14
permutations: (1, 3, 2, 4), (1, 4, 2, 3), (2, 3, 1, 4), (2, 3, 4, 1), (2, 4, 1, 3), (2, 4, 3, 1),
(3, 1, 2, 4), (3, 2, 1, 4), (3, 2, 4, 1), (3, 4, 2, 1), (4, 1, 2, 3), (4, 2, 1, 3), (4, 2, 3, 1),
(4, 3, 2, 1). Finally, the following 6 permutations have anchor 4: (1, 3, 4, 2),
(1, 4, 3, 2), (3, 1, 4, 2), (3, 4, 1, 2), (4, 1, 3, 2), (4, 3, 1, 2). From this we get the
expression (10.2.1).
11
Chapter 11.
1. Subtract (11.3.3) from (11.3.2). After simple algebra, the result will be
D = r^2 (1 − p)^2 − p^2 (1 − p)(1 − r).
Cancel out (1 − p); then D > 0 because for r > p, r^2 > p^2 and 1 − p > 1 − r.
2. Consider the Taylor series for R(p1 + δ1 , p2 + δ2 , ..., pn + δn ) and take the
first two terms. For small δi
R(p1 + δ1, ..., pn + δn) ≈ R(p1, ..., pn) + ∑_{i=1}^n (∂R/∂pi)δi.
If only one of the δi may be nonzero, the greatest reliability increment
will be achieved for the component j which maximizes (∂R/∂pj)δj. Recall
that the partial derivative ∂R/∂pj is the component j BIM.
3. Let us use the following notation:
Res - integer resource for reliability improvement.
p1, p2, p3 - component reliabilities.
f i = f i(p1, p2, p3) - the i-th component of the reliability gradient, i = 1, 2, 3.
R = R(p1, p2, p3) - system reliability.
di = 0.2 · exp[−pi] - reliability increase of component i after investing into it
one unit of resource, so that pi := min(1, pi + di) is component i's improved
reliability.
Qi = f i · di - increment in system reliability after component i is reinforced.
Description of the Algorithm
a. For i = 1, 2, 3, compute di, f i and Qi = f i · di.
b. If Q1 = max[Q1, Q2, Q3], set p1 := min(1, p1 + d1); Else If Q2 =
max[Q1, Q2, Q3], set p2 := min(1, p2 + d2); Else set p3 := min(1, p3 + d3).
c. Calculate system reliability R = R(p1, p2, p3).
d. Set Res := Res − 1.
e. If Res > 0, GOTO a; Else STOP and
Print[R]
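A Python sketch of this greedy algorithm. The system R = p1·p2·p3 and the starting reliabilities are illustrative assumptions only (the problem's actual system is not restated here); the gradient f i is approximated numerically:

```python
import math

def R(p):
    # Illustrative assumption: a series system of the three components.
    return p[0] * p[1] * p[2]

def grad(p, i, h=1e-6):
    # Numerical approximation of the i-th component of the gradient.
    q = list(p); q[i] += h
    return (R(q) - R(p)) / h

p = [0.6, 0.8, 0.7]                  # illustrative starting reliabilities
Res = 5                              # integer resource units
while Res > 0:
    d = [0.2 * math.exp(-pi) for pi in p]           # di = 0.2 exp(-pi)
    Q = [grad(p, i) * d[i] for i in range(3)]       # Qi = fi * di
    i = Q.index(max(Q))                             # reinforce the best component
    p[i] = min(1.0, p[i] + d[i])
    Res -= 1

print([round(pi, 4) for pi in p], round(R(p), 4))
```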
4.
a. In a five-node network, locating the terminal in node 2 provides
reliability 1, since there is a reliable path connecting all three terminals.
(Network reliability is defined here as all-terminal connectivity.)
b. In a six-node network, nodes 2,3,5 and 6 by symmetry are equally
good for locating the third terminal.
34