Chapter 4
Networks, Series, and
Cyclic Queues
CONTENTS
4.1 SERIES QUEUES
    4.1.1 QUEUE OUTPUT
    4.1.2 SERIES QUEUES WITH BLOCKING
4.2 OPEN JACKSON NETWORKS
    4.2.1 OPEN JACKSON NETWORKS WITH MULTIPLE CUSTOMER CLASSES
4.3 CLOSED JACKSON NETWORKS
    4.3.1 MEAN-VALUE ANALYSIS
4.4 CYCLIC QUEUES
4.5 EXTENSIONS OF JACKSON NETWORKS
4.6 NON-JACKSON NETWORKS
• The networks we consider have the following characteristics:

[Figure: a k-node network; external arrivals γ_1, …, γ_k enter nodes 1, …, k; node i serves at rate μ_i; a customer moves from node i to node j with probability r_ij and leaves the system from node i with probability r_i0.]

(i) The arrival process from "the outside" to node i is a Poisson process with mean rate γ_i, 1 ≤ i ≤ k.
(ii) The service times at node i are independent and exponentially distributed with parameter μ_i.
(iii) The routing probability that a customer who has completed service at node i goes next to node j is r_ij, where i = 1, 2, …, k and j = 0, 1, 2, …, k, and r_i0 is the probability that a customer departs from the system from node i.
Such a network is called a Jackson network.
• Closed Jackson Network
(i) γ_i = 0; no customer may enter the system from the outside.
(ii) r_i0 = 0; no customer may leave the system.

• Example: M/M/c/c//K — the machine-repair problem

[Figure: a two-node closed network; node 1 holds the machines in operation, node 2 holds the c repair channels.]

Here i = 1, 2; j = 0, 1, 2; μ_1 = λ, μ_2 = μ; γ_i = 0; r_12 = 1, r_21 = 1, and r_ij = 0 elsewhere.
Operating condition: holding (up) time density λe^{−λt}.
Repair condition: service-time density μe^{−μt}.
• Open Jackson network (series-queue special case)

(i) γ_i = λ for i = 1, and γ_i = 0 elsewhere.

(ii) r_ij = 1 for j = i + 1, 1 ≤ i ≤ k − 1;
     r_ij = 1 for i = k, j = 0;
     r_ij = 0 elsewhere.

Such a network is also called a series (tandem) queue.
We restrict ourselves mainly to Markovian systems:
• Poisson input
• Exponential service times
• State-independent routing probabilities r_ij
4.1 SERIES QUEUES

[Figure: a tandem of n stations; Poisson arrivals with rate λ enter Station 1 (M/M/c_1/∞); each Station i has c_i parallel servers and infinite waiting room, and the output of Station i feeds Station i + 1.]

• Examples:
(i) Computer networks
(ii) Time-sharing systems
(iii) Registration processes
(iv) Call control processes
(v) Clinic physical examinations
4.1.1 QUEUE OUTPUT

For the first station, the departure process is Poisson with mean rate λ; equivalently, its inter-departure time distribution is exponential with mean 1/λ.

Proof
Observe the output of the first station (an M/M/c queue in steady state) over an infinitesimal interval Δt. Using the steady-state probabilities p_n = (λ/μ)^n p_0 / n! for n < c and p_n = (λ/μ)^n p_0 / (c! c^{n−c}) for n ≥ c,

Pr{exactly one departure in Δt}
  = p_1 μΔt + p_2 (2μ)Δt + … + p_{c−1} (c − 1)μΔt + Σ_{n=c}^{∞} p_n cμΔt
  = Σ_{n=1}^{c−1} p_n (nμ)Δt + Σ_{n=c}^{∞} p_n cμΔt
  = μΔt [ Σ_{n=1}^{c−1} n (λ/μ)^n / n! + Σ_{n=c}^{∞} c (λ/μ)^n / (c! c^{n−c}) ] p_0
  = μΔt p_0 (λ/μ) [ Σ_{n=1}^{c−1} (λ/μ)^{n−1} / (n − 1)! + Σ_{n=c}^{∞} (λ/μ)^{n−1} / (c! c^{(n−1)−c}) ]
  = μΔt p_0 (λ/μ) [ Σ_{n=0}^{c−1} (λ/μ)^n / n! + Σ_{n=c}^{∞} (λ/μ)^n / (c! c^{n−c}) ]
  = μΔt (λ/μ) p_0 (1/p_0)
  = λΔt

(In the index shift, the n = c − 1 term of the second sum equals (λ/μ)^{c−1}/(c − 1)! and so completes the first sum; the bracket is then exactly 1/p_0.)

Pr{more than one departure in Δt} = o(Δt), since every such term contains (Δt)^2, which is negligible compared with Δt.

Pr{no departure in Δt} = 1 − λΔt + o(Δt).

This implies that the departure process — not at all affected by the exponential service mechanism — is a Poisson process with mean rate λ. (We have proven this before.)
Hence every station is an M/M/c_i/∞ model. For this cascade model, the average system size is the sum of the system sizes of the individual stations, and likewise for the system time.
• An alternative approach to the proof

Consider an M/M/c/∞ queue, and define N(t) as the number of customers in the system at a time t just after the last departure, with

  Pr{N(t) = n} = p_n    (continuous-time Markov chain)

Define T as the random variable "time between two successive departures", and let

  F_n(t) = Pr{N(t) = n, T > t}    (4.2)
  C(t) = CDF of T = Pr{T ≤ t}    (4.3)

Clearly,

  C(t) = 1 − Pr{T > t} = 1 − Σ_{n=0}^{∞} F_n(t)

Find F_n(t):

  F_n(t + Δt) = (1 − λΔt)(1 − cμΔt) F_n(t) + λΔt (1 − cμΔt) F_{n−1}(t) + o(Δt),  n ≥ c
  F_n(t + Δt) = (1 − λΔt)(1 − nμΔt) F_n(t) + λΔt (1 − nμΔt) F_{n−1}(t) + o(Δt),  1 ≤ n < c
  F_0(t + Δt) = (1 − λΔt) F_0(t) + o(Δt)

with boundary condition F_n(0) = Pr{N(0) = n, T > 0} = p_n. Hence

  dF_n(t)/dt = −(λ + cμ) F_n(t) + λ F_{n−1}(t),  n ≥ c
  dF_n(t)/dt = −(λ + nμ) F_n(t) + λ F_{n−1}(t),  1 ≤ n < c    (4.4)
  dF_0(t)/dt = −λ F_0(t)

Solving successively,

  F_0(t) = p_0 e^{−λt}
  F_1′(t) = −(λ + μ) F_1(t) + λ F_0(t) = −(λ + μ) F_1(t) + λ p_0 e^{−λt}
  ⟹ F_1(t) = C e^{−(λ+μ)t} + (λ/μ) p_0 e^{−λt}, and F_1(0) = p_1 = (λ/μ) p_0 forces C = 0
  ⟹ F_1(t) = p_1 e^{−λt}

Continuing inductively, F_n(t) = p_n e^{−λt}, so that

  C(t) = 1 − Σ_{n=0}^{∞} p_n e^{−λt} = 1 − e^{−λt},   c(t) = C′(t) = λ e^{−λt}
• Example

[Figure: customers arrive at rate λ = 40/hr to a shopping area modeled as M/M/∞ with exponential shopping times (rate μ = 4/3 per hour), then proceed to a checkout station with c parallel counters, each with exponential service time of mean 1/μ = 4 min = 1/15 hr (μ = 15/hr per counter).]

(i) What is the minimum number of checkout counters?
The checkout station is stable only if cμ > λ, i.e., c > λ/μ = 40/15 ≈ 2.67, so c_min = 3.

(ii) If it is decided to add one more counter, what is the system performance?
The checkout station is then an M/M/4/∞ model:

  p_0 = [ Σ_{n=0}^{3} (λ/μ)^n / n! + (λ/μ)^4 / (4! (1 − λ/(4μ))) ]^{−1} ≈ 0.06
  W_q ≈ 0.019 hr ≈ 1.14 min
  L_q = λ W_q ≈ 0.76
  L = L_q + λ/μ = 0.76 + 40/15 ≈ 3.44

For the shopping station (M/M/∞), L = λ/μ = 40/(4/3) = 30.
Thus the total expected number of customers in the system is 30 + 3.44 ≈ 33.44.
If the input to station i is the superposition of a stream from outside the system and streams from other stations, and if these streams are independent Poisson processes, then the combined input is still Poisson with mean rate

  λ_i = γ_i + Σ_{j=1}^{N} q_ji λ_j

where γ_i is the mean rate of the input process coming from outside and q_ji is the probability that a departure from station j becomes an input to station i. We can then use the M/M/c/∞ results without modification. When there are limits on the capacity at a station, however, the situation changes; we treat some models of this type next.
4.1.2 SERIES QUEUES WITH BLOCKING

• Type 1: Two-station, single-server-at-each-station sequential model; no queue is allowed. Note that each server has room for only one customer.

[Figure: Poisson arrivals (rate λ) → station 1 (service density μ_1 e^{−μ_1 t}) → station 2 (service density μ_2 e^{−μ_2 t}) → out.]

(i) If a customer completes service at station 1 while station 2 is busy, the customer must wait (blocking station 1) until station 2 is free.
(ii) Arrivals at station 1 are turned away if station 1 is occupied.

Define (n_1, n_2) as the system state:

  (n_1, n_2)  Description
  (0, 0)    System is empty
  (1, 0)    Station 1 is busy only
  (0, 1)    Station 2 is busy only
  (1, 1)    Both stations 1 and 2 are busy
  (b, 1)    Station 2 is busy, and a customer who has finished at station 1 is waiting for station 2 to become available
The transition-rate diagram is:

[Figure: (0,0)→(1,0) at rate λ; (1,0)→(0,1) at rate μ_1; (0,1)→(0,0) at rate μ_2; (0,1)→(1,1) at rate λ; (1,1)→(b,1) at rate μ_1; (1,1)→(1,0) at rate μ_2; (b,1)→(0,1) at rate μ_2.]

The state-transition equations at steady state are

  −λ p_{0,0} + μ_2 p_{0,1} = 0
  −μ_1 p_{1,0} + λ p_{0,0} + μ_2 p_{1,1} = 0
  −(λ + μ_2) p_{0,1} + μ_1 p_{1,0} + μ_2 p_{b,1} = 0
  −(μ_1 + μ_2) p_{1,1} + λ p_{0,1} = 0
  −μ_2 p_{b,1} + μ_1 p_{1,1} = 0
  Σ p_{n_1, n_2} = 1    (4.7)
If we let μ_1 = μ_2 = μ, then

  p_{1,0} = λ(λ + 2μ)/(2μ²) p_{0,0},   p_{0,1} = (λ/μ) p_{0,0}
  p_{1,1} = λ²/(2μ²) p_{0,0},   p_{b,1} = λ²/(2μ²) p_{0,0}
  p_{0,0} = 2μ²/(3λ² + 4μλ + 2μ²)

Blocking probability: P_B = p_{1,0} + p_{1,1} + p_{b,1}

W = E[system time for an accepted customer]
  = [ p_{0,0} (2/μ) + p_{0,1} (5/(2μ)) ] / (p_{0,0} + p_{0,1})

Note: an accepted customer arriving to state (0,1) spends, in expectation,

  (2/μ) P(T_1 > T_2) + (3/μ) P(T_1 < T_2) = (2/μ)(1/2) + (3/μ)(1/2) = 5/(2μ)

where T_1 is his service time at station 1 and T_2 is the remaining service time at station 2: if T_1 > T_2 he is never blocked (expected time 1/μ + 1/μ), while if T_1 < T_2 he is blocked and, by memorylessness, waits one extra exponential phase (expected time 1/μ + 1/μ + 1/μ). A customer accepted in state (0,0) simply spends 2/μ in expectation.
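As a numerical check, the following plain-Python sketch evaluates the closed-form probabilities for μ_1 = μ_2 = μ, verifies they sum to one, and computes P_B and W (the rates passed in are illustrative values).

```python
def blocking_model(lam, mu):
    """Two-station no-queue series model with mu1 = mu2 = mu (Type 1)."""
    p00 = 2 * mu**2 / (3 * lam**2 + 4 * mu * lam + 2 * mu**2)
    p10 = lam * (lam + 2 * mu) / (2 * mu**2) * p00
    p01 = lam / mu * p00
    p11 = lam**2 / (2 * mu**2) * p00
    pb1 = lam**2 / (2 * mu**2) * p00
    total = p00 + p10 + p01 + p11 + pb1       # sanity: must equal 1
    PB = p10 + p11 + pb1                      # blocking probability
    W = (p00 * (2 / mu) + p01 * (5 / (2 * mu))) / (p00 + p01)
    return total, PB, W

total, PB, W = blocking_model(1.0, 2.0)       # lambda = 1, mu = 2 (illustrative)
```

For λ = 1, μ = 2 this gives p_{0,0} = 8/19 and W = 13/12, which is easy to confirm by hand from the formulas above.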

• Type 2: Two stations, a single server at each station, sequential model; one customer is allowed to wait between the stations.
4.2 OPEN JACKSON NETWORKS

[Figure: a k-node network; node i receives external Poisson arrivals at rate γ_i, serves at rate μ_i, routes to node j with probability r_ij, and loses customers to the outside from node i with probability r_i0.]

γ_i: the mean arrival rate of the external arrival process to node i, 1 ≤ i ≤ k; the process is Poisson.
μ_i: the mean service rate of the exponential service-time distribution at node i.
r_ij: the routing probabilities, independent of the system state, 1 ≤ i ≤ k, 1 ≤ j ≤ k (including routing back to the same node).
r_i0: the probability that a customer will leave the network from node i.

• There is no limit on queue capacity at any node.
• Markovian system
• System state
• Steady state
• State-transition equations
• The method of stochastic balance is used to write down the equations.

The full system state n = (n_1, n_2, …, n_i, …, n_j, …, n_k) is somewhat cumbersome; we use the simplified state notation shown in Table 4.2, p. 174.
With the notation

  n; i⁻ = (n_1, …, n_i − 1, …, n_k)
  n; i⁺ = (n_1, …, n_i + 1, …, n_k)
  n; i⁺, j⁻ = (n_1, …, n_i + 1, …, n_j − 1, …, n_k)

the flows for state n balance as follows:

  Flow out of n:  [ Σ_{i=1}^{k} γ_i + Σ_{i=1}^{k} μ_i (1 − r_ii) ] p_n
  Flow into n:   Σ_{i=1}^{k} γ_i p_{n; i⁻} + Σ_{i=1}^{k} Σ_{j≠i} μ_i r_ij p_{n; i⁺, j⁻} + Σ_{i=1}^{k} μ_i r_i0 p_{n; i⁺}

(Terms involving empty nodes, n_i = 0, either do not appear or cancel on both sides.)

Thus we obtain the state equation

  Σ_{i=1}^{k} γ_i p_{n; i⁻} + Σ_{i=1}^{k} Σ_{j≠i} μ_i r_ij p_{n; i⁺, j⁻} + Σ_{i=1}^{k} μ_i r_i0 p_{n; i⁺}
    = Σ_{i=1}^{k} μ_i (1 − r_ii) p_n + Σ_{i=1}^{k} γ_i p_n    (4.9)

Jackson showed that the solution to these equations is, amazingly, of what has come to be generally called product form:

  p_n = C ρ_1^{n_1} ρ_2^{n_2} ⋯ ρ_k^{n_k}

where, for each node,

  λ_i = γ_i + Σ_{j=1}^{k} r_ji λ_j   (in matrix form, λ = γ + λR),   ρ_i = λ_i/μ_i    (4.10)

and the solution is

  p_n = p_{n_1, n_2, …, n_k} = (1 − ρ_1) ρ_1^{n_1} (1 − ρ_2) ρ_2^{n_2} ⋯ (1 − ρ_k) ρ_k^{n_k}    (4.11)

That is, the network behaves as if its nodes were independent M/M/1 queues. As a matter of fact, however, Disney (1981) showed that the actual internal flows in such networks with feedback are not Poisson.
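As a sketch of how (4.10) and (4.11) are used in practice, the plain-Python snippet below solves the traffic equations λ = γ + λR by fixed-point iteration and evaluates the per-node M/M/1 quantities; the 2-node network and its rates are made-up illustration values, not from the text.

```python
def jackson_open(gamma, R, mu, iters=2000):
    """Solve lambda = gamma + lambda R, then rho_i and per-node M/M/1 metrics."""
    k = len(gamma)
    lam = gamma[:]                        # initial guess: external rates only
    for _ in range(iters):                # fixed-point iteration; converges for
        lam = [gamma[i] + sum(lam[j] * R[j][i] for j in range(k))
               for i in range(k)]         # open networks (R is substochastic)
    rho = [lam[i] / mu[i] for i in range(k)]
    L = [r / (1 - r) for r in rho]        # each node behaves like an M/M/1
    W_net = sum(L) / sum(gamma)           # Little's law for the whole network
    return lam, rho, L, W_net

# Hypothetical 2-node example: all external traffic enters node 1,
# node 1 feeds node 2, and node 2 feeds back to node 1 with probability 0.5.
gamma = [1.0, 0.0]
R = [[0.0, 1.0],
     [0.5, 0.0]]
mu = [4.0, 4.0]
lam, rho, L, W_net = jackson_open(gamma, R, mu)
```

Here the traffic equations give λ_1 = λ_2 = 2, so ρ_1 = ρ_2 = 0.5 and, by Little's formula for the entire network, W = (L_1 + L_2)/Σγ_i = 2.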
(4.11) is a solution of (4.9).

Proof
Write p_n = C ρ_1^{n_1} ρ_2^{n_2} ⋯ ρ_k^{n_k}. Notice first that

  Σ_{i=1}^{k} γ_i  (net flow into the network)  =  Σ_{i=1}^{k} λ_i r_i0  (net flow out of the network)

which follows from summing (4.10) over all i. Substituting the product form into (4.9) and dividing by p_n (using p_{n;i⁻} = p_n/ρ_i, p_{n;i⁺,j⁻} = p_n ρ_i/ρ_j, and p_{n;i⁺} = p_n ρ_i), we must verify

  Σ_{i=1}^{k} γ_i/ρ_i + Σ_{j=1}^{k} Σ_{i≠j} μ_i r_ij (ρ_i/ρ_j) + Σ_{i=1}^{k} μ_i r_i0 ρ_i  =?  Σ_{i=1}^{k} μ_i (1 − r_ii) + Σ_{i=1}^{k} γ_i

i.e., with ρ_i = λ_i/μ_i,

  Σ_{i=1}^{k} γ_i μ_i/λ_i + Σ_{j=1}^{k} (μ_j/λ_j) Σ_{i≠j} r_ij λ_i + Σ_{i=1}^{k} λ_i r_i0  =?  Σ_{i=1}^{k} μ_i (1 − r_ii) + Σ_{i=1}^{k} γ_i    (##)

From (4.10),

  λ_j = γ_j + Σ_{i=1}^{k} r_ij λ_i = γ_j + Σ_{i≠j} r_ij λ_i + r_jj λ_j
  ⟹ Σ_{i≠j} r_ij λ_i = λ_j − γ_j − r_jj λ_j

Substituting this into (##), the left-hand side becomes

  Σ_{i} γ_i μ_i/λ_i + Σ_{j} ( μ_j − γ_j μ_j/λ_j − μ_j r_jj ) + Σ_{i} λ_i r_i0
  = Σ_{i} μ_i (1 − r_ii) + Σ_{i} λ_i r_i0

and since Σ_i λ_i r_i0 = Σ_i γ_i, the two sides agree. Thus the proof is accomplished!

The normalizing constant follows by summing over all states:

  C^{−1} = Σ_{n_1=0}^{∞} ⋯ Σ_{n_k=0}^{∞} ρ_1^{n_1} ⋯ ρ_k^{n_k} = [1/(1 − ρ_1)] ⋯ [1/(1 − ρ_k)]

so C = Π_{i=1}^{k} (1 − ρ_i), provided ρ_i < 1 for i = 1, 2, …, k.
Since L_i = ρ_i/(1 − ρ_i) and W_i = L_i/λ_i, the total wait within the network of any customer before its final departure is

  W = Σ_i L_i / Σ_i γ_i

(Little's formula applied to the entire network).
i
The above results for Jackson networks can be generalized easily
to c-channel nodes
ci : the number of servers at node i which is with exponential
service time mi .
pn  pn1 , n2 ,..., nk  pn1  pn2
k
pnk   pni
i 1
For the case of ci-channel nodes, the above results for Jackson
networks can be generalized easily as
ri ni
pn  
p0i
a
(
n
)
i 1 i
i
k

l 
,  ri  i 
mi 

Ref. (2.24) in p.69
- 19 -
(4.12)
ni !,
ai (ni )  
n c
ci i i ci !,
where
ni  ci
ni  ci
(4.13)

p0i
p0i ri ni
1
is such that 
a
(
n
)
ni  0 i
i
Thus, again, what we have is a network that just acts as if each
node is an independent M/M/c .
In fact, it can be shown that (see Disney 1981) as long as
there is any kind of feedback, the internal flows are not
Poisson.
 pn  pn1n2
nk
 pn1  pn2
 pnk
• Waiting-time distribution
For arborescent (tree-like) flow, with no direct or indirect feedback, the input process of each node is Poisson, so the actual waiting-time distribution at each node is the same as for the M/M/c model. For a general Jackson network, however, the input to each node is not necessarily truly Poisson, so we cannot be sure that q_n = p_n (where q_n is the probability of n customers in the system at an arrival point). We can still obtain the virtual waiting-time distribution at each node from (2.39). (The actual waiting-time distribution, however, is unknown unless the network is arborescent!)
• Example

[Figure: Poisson arrivals at rate λ to node 1 (c_1 = 1); with probability p the customer then proceeds to node 2 (c_2 ≥ 2), and with probability 1 − p to node 3 (c_3 = 1).]

Because of the bypass and the multiple servers, the interdeparture times T_1^{(n)} and T_3^{(n)} are dependent. Is the departure process still Poisson?

Melamed (1979) showed that for single-server Jackson networks with an irreducible routing probability matrix R = [r_ij] and ρ_i < 1 (every entering customer eventually leaves the network), the departure processes from the nodes to the outside are Poisson, and the collection over all nodes of these Poisson departure processes is mutually independent.

• For Jackson networks:
As long as there is no feedback, the flows between nodes and to the outside are truly Poisson. If there is feedback, the output process is not Poisson; but, amazingly, Jackson's product-form solution still holds.
4.2.1 OPEN JACKSON NETWORKS WITH MULTIPLE CUSTOMER CLASSES

R^{(t)}: the routing probability matrix for a customer of type t, t = 1, 2, …, n.

  λ = Σ_t λ^{(t)},   λ^{(t)} = (λ_1^{(t)}, λ_2^{(t)}, …, λ_k^{(t)})

(λ^{(t)}: the vector of arrival rates of type-t customers.)

With L_i the average queue length at node i computed from the M/M/c model, the average number of type-t customers at node i is

  L_i^{(t)} = ( λ_i^{(t)} / Σ_{s=1}^{n} λ_i^{(s)} ) L_i
4.3 CLOSED JACKSON NETWORKS

If γ_i and r_i0 are zero for all i (N items travel inside the network), we have a closed Jackson network. Equation (4.9) from the open-network model becomes

  Σ_{j=1}^{k} Σ_{i≠j} μ_i r_ij p_{n; i⁺, j⁻} = Σ_{i=1}^{k} μ_i (1 − r_ii) p_n    (4.14)

where n = (n_1, n_2, …, n_k). It will certainly not be surprising that the solution is still of the form

  p_n = C ρ_1^{n_1} ρ_2^{n_2} ⋯ ρ_k^{n_k} = C ρ^n    (4.15)

Since input rate = output rate for each node,

  λ_i = μ_i ρ_i = Σ_{j=1}^{k} μ_j ρ_j r_ji,   i = 1, 2, …, k

(these equations determine the ρ_i only up to a multiplicative constant). We can prove that (4.15) is the solution of (4.14) (see the text), and the normalization is

  Σ_{n_1 + n_2 + ⋯ + n_k = N} p_n = Σ_{n_1 + n_2 + ⋯ + n_k = N} C ρ_1^{n_1} ⋯ ρ_k^{n_k} = 1    (4.16)
Equivalently,

  C [ Σ_{n_1 + n_2 + ⋯ + n_k = N} ρ_1^{n_1} ⋯ ρ_k^{n_k} ] = 1,  so C = C(N)

(Ref. to (94a).)

There are (N + k − 1 choose N) possible ways that the N customers can be distributed among the k nodes. Denoting the normalizing constant by C(N), define

  G(N) = [C(N)]^{−1}

Thus

  p_n = (1/G(N)) ρ_1^{n_1} ρ_2^{n_2} ⋯ ρ_k^{n_k}

where

  G(N) = Σ_{n_1 + n_2 + ⋯ + n_k = N} ρ_1^{n_1} ρ_2^{n_2} ⋯ ρ_k^{n_k}

The closed network can be extended to C_i servers at node i. The solution then becomes

  p_n = (1/G(N)) Π_{i=1}^{k} ρ_i^{n_i} / a_i(n_i)

where

  a_i(n_i) = n_i!   (n_i < C_i)
  a_i(n_i) = C_i^{n_i − C_i} C_i!   (n_i ≥ C_i)

and

  G(N) = Σ_{n_1 + n_2 + ⋯ + n_k = N} Π_{i=1}^{k} ρ_i^{n_i} / a_i(n_i)
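For small networks, G(N) and the state probabilities can be evaluated directly by enumerating all (N + k − 1 choose N) states. The sketch below (plain Python; the ρ values and server counts are hypothetical illustration inputs) does this and checks that the probabilities sum to 1.

```python
from itertools import product as iproduct
from math import factorial

def a(n, c):
    """a_i(n_i) from (4.13): n! below c servers, c^(n-c) * c! at or above."""
    return factorial(n) if n < c else c ** (n - c) * factorial(c)

def states(k, N):
    """All states (n_1, ..., n_k) with n_1 + ... + n_k = N."""
    return [s for s in iproduct(range(N + 1), repeat=k) if sum(s) == N]

def closed_probs(rho, cs, N):
    """Brute-force product form: p_n = (1/G(N)) prod_i rho_i^{n_i}/a_i(n_i)."""
    k = len(rho)
    term = {}
    for s in states(k, N):
        t = 1.0
        for i in range(k):
            t *= rho[i] ** s[i] / a(s[i], cs[i])
        term[s] = t
    G = sum(term.values())
    return G, {s: t / G for s, t in term.items()}

# Hypothetical 3-node closed network with N = 2 and single servers
G2, p = closed_probs([0.5, 1.0, 0.25], [1, 1, 1], 2)
```

With k = 3 and N = 2 there are (4 choose 2) = 6 states, as the combinatorial count above predicts.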
• Example: N = 2

[Figure: a three-node closed network. Node 1 has C_1 = 2 servers, each with holding-time rate λ; node 2 has C_2 = 1 server with rate μ_2; node 3 has C_3 = 1 server with rate μ_3. From node 1, a customer goes to node 2 with probability r_12 and to node 3 with probability 1 − r_12; from node 2, to node 3 with probability r_23 and back to node 1 with probability 1 − r_23; from node 3, back to node 1.]

  a_1(n_1) = 1 for n_1 = 0, 1;  a_1(n_1) = 2 for n_1 = 2
  a_2(n_2) = 1,  a_3(n_3) = 1

Thus

  p_n = p_{n_1 n_2 n_3} = (1/G(2)) ρ_1^{n_1} ρ_2^{n_2} ρ_3^{n_3} / a_1(n_1)

Find ρ_1, ρ_2, ρ_3 and G(2). Output rate = input rate for each node gives the routing (traffic) equations

  Node 1: λ ρ_1 = μ_2 ρ_2 (1 − r_23) + μ_3 ρ_3    (#)
  Node 2: μ_2 ρ_2 = λ ρ_1 r_12
  Node 3: μ_3 ρ_3 = λ ρ_1 (1 − r_12) + μ_2 ρ_2 r_23

and

  G(2) = ρ_1²/2 + ρ_2² + ρ_3² + ρ_1 ρ_2 + ρ_2 ρ_3 + ρ_1 ρ_3

corresponding to the (N + k − 1 choose N) = (2 + 3 − 1 choose 2) = (4 choose 2) = 6 states

  (2 0 0), (0 2 0), (0 0 2), (1 1 0), (1 0 1), (0 1 1)

(i) One of the traffic equations, e.g. (#), is redundant.
(ii) From the traffic equations we see that p_n is independent of the particular scaling of ρ_1, ρ_2, …, ρ_k. Thus we may arbitrarily set ρ_2 = 1. Then

  ρ_2 = 1,   ρ_1 = μ_2/(r_12 λ),   ρ_3 = μ_2 (1 − r_12 + r_12 r_23)/(r_12 μ_3)

and

  p_{n_1, n_2, n_3} = (1/G(N)) [μ_2/(r_12 λ)]^{n_1} (1/a_1(n_1)) [μ_2 (1 − r_12 + r_12 r_23)/(r_12 μ_3)]^{n_3}
An efficient algorithm to find G(N) — Buzen (1973)

Let f_i(n_i) = ρ_i^{n_i} / a_i(n_i). Then

  G(N) = Σ_{n_1 + ⋯ + n_k = N} Π_{i=1}^{k} ρ_i^{n_i}/a_i(n_i) = Σ_{n_1 + ⋯ + n_k = N} Π_{i=1}^{k} f_i(n_i)

Buzen set up an auxiliary function

  g_m(n) = Σ_{n_1 + ⋯ + n_m = n} Π_{i=1}^{m} f_i(n_i)    (m nodes, n customers)

If m = k and n = N, then g_k(N) = G(N). We now set up a recursive scheme for calculating g_m(n). Fixing the value of n_m = ℓ,

  g_m(n) = Σ_{ℓ=0}^{n} f_m(ℓ) [ Σ_{n_1 + ⋯ + n_{m−1} = n − ℓ} Π_{i=1}^{m−1} f_i(n_i) ]
         = Σ_{ℓ=0}^{n} f_m(ℓ) g_{m−1}(n − ℓ),   n ≥ 1, m ≥ 2

Note that g_1(n) = f_1(n) and g_m(0) = 1; also

  g_0(n) = 0  if n > 0,   g_0(n) = 1  if n = 0

Then we can compute all g_m(n), and thus G(N).
Further, this scheme aids in calculating marginal distributions as well:

  p_i(n) = Pr{N_i = n} = Σ_{S_i(N−n)} p_{n_1, …, n_k}
        = ( f_i(n)/G(N) ) Σ_{S_i(N−n)} Π_{j≠i} f_j(n_j)

where S_i(N − n) = { (n_1, …, n_k) : n_1 + ⋯ + n_{i−1} + n_{i+1} + ⋯ + n_k = N − n }, for n = 0, 1, …, N.
4.3.1 MEAN-VALUE ANALYSIS

• Two basic principles (closed Jackson queueing network):
  • Arrival-point probabilities: q_n(N) = p_n(N − 1), 0 ≤ n ≤ N − 1
  • Little's formula is applicable throughout the network.

• For M/M/1: system time W = (L + 1)/μ
  (= (the system size found on arrival + the customer itself) × mean service time)
• For M/M/c: a corresponding relation holds, but it involves the marginal queue-length probabilities (see the multiple-server algorithm later in this section).

∴ For the closed network (single server per node):

• The first principle:

  W_i(N) = ( L_i(N − 1) + 1 ) / μ_i    (4.23)

where W_i(N): mean system time at node i for a network containing N customers;
L_i(N − 1): mean number of customers at node i in a network with N − 1 customers;
μ_i: mean service rate at node i.

• The second principle:

  L_i(N) = λ_i(N) W_i(N)    (4.24)

Combining (4.23) and (4.24), we can calculate L_i(N) and W_i(N) recursively — starting with L_i(0) = 0, hence W_i(1) = 1/μ_i — provided we can find λ_i(N).

• Computing λ_i(N):

  λ_i(N) = (rate per customer) × N = N / D_i(N)
where D_i(N) is the average delay per customer between successive visits to node i for a network with N customers.

• To get D_i(N), work with the relative visit rates v_i = μ_i ρ_i (instead of using λ_i):

  v_i = Σ_{j=1}^{k} v_j r_ji,   1 ≤ i ≤ k    (4.25)

One of the k equations is redundant, so we set one of the v_i — the reference node — equal to 1 and solve for the remaining v_i, 1 ≤ i ≤ k. v_i is the relative throughput through node i: between two successive visits to the reference node, a customer passes through the reference node once and through node i, on average, v_i times. Hence, for the reference node,

  D(N) = Σ_{i=1}^{k} v_i W_i(N)
The MVA algorithm for finding L_i(N) and W_i(N) in a k-node, single-server-per-node network with routing probability matrix R = [r_ij] is as follows:

(i) Solve the traffic equations (4.25), v_i = Σ_{j=1}^{k} v_j r_ji, 1 ≤ i ≤ k, setting the reference v to 1.
(ii) Initialize L_i(0) = 0, 1 ≤ i ≤ k.
(iii) For n = 1 to N, calculate
  (a) W_i(n) = (1 + L_i(n − 1)) / μ_i,   i = 1, …, k
  (b) λ(n) = n / Σ_{i=1}^{k} v_i W_i(n)   (the throughput of the reference node)
  (c) λ_i(n) = λ(n) v_i,   i = 1, …, k
  (d) L_i(n) = λ_i(n) W_i(n),   i = 1, …, k
• Example 4.5

  (v_1, v_2, v_3) = (v_1, v_2, v_3) R,   R = [ 0  3/4  1/4 ; 2/3  0  1/3 ; 1  0  0 ]

with N = 1 and service rates μ_1 = 2, μ_2 = 1, μ_3 = 3. Also check with the normalizing-constant method.

Solution

  v_1 = (2/3)v_2 + v_3
  v_2 = (3/4)v_1
  v_3 = (1/4)v_1 + (1/3)v_2

Set v_2 = 1; then v_1 = 4/3 and v_3 = 2/3.

Applying step (iii) of the algorithm:

(a) W_1(1) = (1 + L_1(0))/μ_1 = 1/2
    W_2(1) = (1 + L_2(0))/μ_2 = 1
    W_3(1) = (1 + L_3(0))/μ_3 = 1/3

(b) λ_2(1) = 1 / Σ_{i=1}^{3} v_i W_i(1) = 1 / [ (4/3)(1/2) + (1)(1) + (2/3)(1/3) ] = 9/17

(c) λ_1(1) = λ_2(1) v_1 = (9/17)(4/3) = 12/17
    λ_3(1) = λ_2(1) v_3 = (9/17)(2/3) = 6/17

(d) L_1(1) = λ_1(1) W_1(1) = (12/17)(1/2) = 6/17
    L_2(1) = λ_2(1) W_2(1) = (9/17)(1) = 9/17
    L_3(1) = λ_3(1) W_3(1) = (6/17)(1/3) = 2/17
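The recursion of steps (iii)(a)–(d) is mechanical and easy to code; a minimal plain-Python sketch, run on the Example 4.5 data with exact rational arithmetic, is:

```python
from fractions import Fraction as F

def mva_single(mu, v, N):
    """Single-server MVA: returns L_i(N), W_i(N) per (4.23)-(4.24); N >= 1.
    mu: service rates; v: relative visit rates (reference node has v = 1)."""
    k = len(mu)
    L = [F(0)] * k
    for n in range(1, N + 1):
        W = [(1 + L[i]) / mu[i] for i in range(k)]           # (a) arrival theorem
        lam_ref = F(n) / sum(v[i] * W[i] for i in range(k))  # (b) reference throughput
        lam = [lam_ref * v[i] for i in range(k)]             # (c) per-node rates
        L = [lam[i] * W[i] for i in range(k)]                # (d) Little's law
    return L, W

# Example 4.5: mu = (2, 1, 3), visit rates v = (4/3, 1, 2/3), N = 1
mu = [F(2), F(1), F(3)]
v = [F(4, 3), F(1), F(2, 3)]
L, W = mva_single(mu, v, 1)
```

With N = 1 the node sizes must sum to 1, which gives a quick consistency check of the 6/17, 9/17, 2/17 values above.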
• Check with the normalizing-constant method

  μ_i ρ_i = Σ_{j=1}^{k} μ_j ρ_j r_ji    (4.16)

Therefore we have

  2ρ_1 = (2/3)ρ_2 + 3ρ_3
  ρ_2 = (3/4)(2ρ_1)
  3ρ_3 = (1/4)(2ρ_1) + (1/3)ρ_2

Setting ρ_2 = 1 gives ρ_1 = 2/3 and ρ_3 = 2/9, so

  p_{n_1 n_2 n_3} = (1/G(N)) ρ_1^{n_1} ρ_2^{n_2} ρ_3^{n_3},   n_1 + n_2 + n_3 = N = 1

where

  G(1) = Σ_{n_1+n_2+n_3=1} ρ_1^{n_1} ρ_2^{n_2} ρ_3^{n_3} = ρ_1 + ρ_2 + ρ_3 = 2/3 + 1 + 2/9 = 17/9

  ⟹ p_100 = (2/3)/(17/9) = 6/17,  p_010 = 1/(17/9) = 9/17,  p_001 = (2/9)/(17/9) = 2/17

Then we have

  L_1 = 1·p_100 = 6/17,  L_2 = 1·p_010 = 9/17,  L_3 = 1·p_001 = 2/17

in agreement with the MVA results.
The MVA algorithm can also produce the marginal steady-state probabilities, which are needed to extend it to the multiple-server case. The detailed-balance equation for M/M/1,

  λ p_{n−1} = μ p_n  ⟹  p_n = (λ/μ) p_{n−1}

applied to MVA gives

  p_i(n, N) = ( λ_i(N)/μ_i ) p_i(n − 1, N − 1)    (4.26)

(we will prove this later), where p_i(n, N) is the marginal probability of n customers at node i in an N-customer system, with p_i(0, 0) = 1.

For our example,

  p_1(1, 1) = (λ_1(1)/μ_1) p_1(0, 0) = (12/17)/2 = 6/17
  p_2(1, 1) = (λ_2(1)/μ_2) p_2(0, 0) = (9/17)/1 = 9/17
  p_3(1, 1) = (λ_3(1)/μ_3) p_3(0, 0) = (6/17)/3 = 2/17

∴ p_1(0, 1) = 11/17,  p_2(0, 1) = 8/17,  p_3(0, 1) = 15/17

Checking against our normalizing-constant solution:

  p_1(1, 1) = p_100 = 6/17,  p_2(1, 1) = p_010 = 9/17,  p_3(1, 1) = p_001 = 2/17
  p_1(0, 1) = p_010 + p_001 = 11/17,  p_2(0, 1) = p_001 + p_100 = 8/17,  p_3(0, 1) = p_100 + p_010 = 15/17

We can see that the results from the two approaches are the same (p. 193).
• The MVA algorithm for multiple-server cases

For c_i ≥ 2,

  W_i(n) = 1/μ_i + (1/(c_i μ_i)) Σ_{j=c_i}^{n−1} (j − c_i + 1) p_i(j, n − 1)
        = (1/(c_i μ_i)) [ 1 + L_i(n − 1) + Σ_{j=0}^{c_i−2} (c_i − 1 − j) p_i(j, n − 1) ]

Thus we also have to know the marginals p_i(j, n − 1), j = 0, 1, …, obtained from

  p_i(j, n) = ( λ_i(n)/(α_i(j) μ_i) ) p_i(j − 1, n − 1)

where

  α_i(j) = j   for j ≤ c_i
  α_i(j) = c_i   for j > c_i
So the MVA algorithm for multiple servers is:

(i) Solve v_i = Σ_{j=1}^{k} r_ji v_j, setting the reference v to 1.
(ii) Initialize L_i(0) = 0; p_i(0, 0) = 1; p_i(j, 0) = 0, j ≥ 1.
(iii) For n = 1 to N, calculate
  (a) W_i(n) = (1/(c_i μ_i)) [ 1 + L_i(n − 1) + Σ_{j=0}^{c_i−2} (c_i − 1 − j) p_i(j, n − 1) ],   i = 1, …, k
  (b) λ(n) = n / Σ_{i=1}^{k} v_i W_i(n)
  (c) λ_i(n) = λ(n) v_i
  (d) L_i(n) = λ_i(n) W_i(n),   i = 1, …, k
  (e) p_i(j, n) = ( λ_i(n)/(α_i(j) μ_i) ) p_i(j − 1, n − 1),   i = 1, …, k;  j = 1, …, n
    (the remaining term follows from normalization: p_i(0, n) = 1 − Σ_{j=1}^{n} p_i(j, n))
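A sketch of this multiple-server recursion in plain Python (the normalization step closing each iteration is the p_i(0, n) remark noted above); with all c_i = 1 it reduces to the single-server algorithm, which gives a convenient check against Example 4.5.

```python
from fractions import Fraction as F

def mva_multi(mu, c, v, N):
    """Multiple-server MVA; returns L, W and marginals p[i][j] = p_i(j, N)."""
    k = len(mu)
    L = [F(0)] * k
    p = [[F(1)] + [F(0)] * N for _ in range(k)]         # p_i(j, 0)
    for n in range(1, N + 1):
        W = [(1 + L[i] + sum((c[i] - 1 - j) * p[i][j]   # step (a)
                             for j in range(c[i] - 1))) / (c[i] * mu[i])
             for i in range(k)]
        lam_ref = F(n) / sum(v[i] * W[i] for i in range(k))   # (b)
        lam = [lam_ref * v[i] for i in range(k)]        # (c)
        L = [lam[i] * W[i] for i in range(k)]           # (d)
        for i in range(k):                              # (e) update marginals
            new = [F(0)] * (N + 1)
            for j in range(1, n + 1):
                alpha = min(j, c[i])                    # alpha_i(j)
                new[j] = lam[i] / (alpha * mu[i]) * p[i][j - 1]
            new[0] = 1 - sum(new[1:])                   # normalization closes it
            p[i] = new
    return L, W, p

# With c = (1, 1, 1) this reproduces the single-server Example 4.5 numbers.
mu = [F(2), F(1), F(3)]
v = [F(4, 3), F(1), F(2, 3)]
L, W, p = mva_multi(mu, [1, 1, 1], v, 1)
```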
Prove (4.26) for the single-server case: p_i(n, N) = ( λ_i(N)/μ_i ) p_i(n − 1, N − 1).

Proof

  p_i(n_i; N) = Pr{N_i = n_i | N customers in network}
             = Σ_{n_1 + ⋯ + n_k = N, n_i fixed} (1/G(N)) ρ_1^{n_1} ⋯ ρ_i^{n_i} ⋯ ρ_k^{n_k}

Define a complementary marginal cumulative probability

  P_i(n_i; N) = Pr{N_i ≥ n_i | N in network}
             = Σ_{j=n_i}^{N} (ρ_i^j/G(N)) Σ_{Σ_{m≠i} n_m = N−j} Π_{m≠i} ρ_m^{n_m}
             = (1/G(N)) Σ_{j=n_i}^{N} ρ_i^j g_{k−1}(N − j)

where g_{k−1}(·) denotes the normalizing function of the remaining k − 1 nodes (node i removed). Now, since

  G(N − n_i) = Σ_{ℓ=0}^{N−n_i} ρ_i^ℓ g_{k−1}(N − n_i − ℓ)

we get

  P_i(n_i; N) = (ρ_i^{n_i}/G(N)) Σ_{ℓ=0}^{N−n_i} ρ_i^ℓ g_{k−1}(N − n_i − ℓ) = ρ_i^{n_i} G(N − n_i)/G(N)

Hence

  p_i(n_i; N) = P_i(n_i; N) − P_i(n_i + 1; N) = (ρ_i^{n_i}/G(N)) [ G(N − n_i) − ρ_i G(N − n_i − 1) ]

Thus

  p_i(n_i; N) / p_i(n_i − 1; N − 1)
    = [ ρ_i^{n_i} (G(N − n_i) − ρ_i G(N − n_i − 1)) / G(N) ] / [ ρ_i^{n_i−1} (G(N − n_i) − ρ_i G(N − n_i − 1)) / G(N − 1) ]
    = ρ_i G(N − 1)/G(N)

∴ p_i(n_i; N) = ( ρ_i G(N − 1)/G(N) ) p_i(n_i − 1; N − 1)

Since

  λ_i(N) = throughput at node i = Pr{server is busy at node i} × μ_i = P_i(1; N) μ_i = ( ρ_i G(N − 1)/G(N) ) μ_i

we have ρ_i G(N − 1)/G(N) = λ_i(N)/μ_i, and therefore

  p_i(n_i; N) = ( λ_i(N)/μ_i ) p_i(n_i − 1; N − 1)
4.4 CYCLIC QUEUES

[Figure: k single-server nodes in a ring; node i holds n_i customers and has exponential service density μ_i e^{−μ_i t}; node i feeds node i + 1, and node k feeds node 1.]

There are N customers in the system. This is a special case of a closed Jackson network, called a cyclic queue, where

  r_ij = 1  if j = i + 1, 1 ≤ i ≤ k − 1
  r_ij = 1  if i = k, j = 1
  r_ij = 0  elsewhere

Thus we have

  p_{n_1, n_2, …, n_k} = (1/G(N)) ρ_1^{n_1} ρ_2^{n_2} ⋯ ρ_k^{n_k}

where

  G(N) = Σ_{n_1 + n_2 + ⋯ + n_k = N} ρ_1^{n_1} ρ_2^{n_2} ⋯ ρ_k^{n_k}

but the traffic equations reduce to

  μ_i ρ_i = μ_{i−1} ρ_{i−1},  i = 2, 3, …, k,   and   μ_1 ρ_1 = μ_k ρ_k

That is,

  ρ_2 = (μ_1/μ_2) ρ_1,  ρ_3 = (μ_2/μ_3) ρ_2 = (μ_1/μ_3) ρ_1,  …,  ρ_k = (μ_1/μ_k) ρ_1

Due to the redundancy we may select ρ_1 = 1. Then

  p_{n_1, n_2, …, n_k} = (1/G(N)) μ_1^{N − n_1} / ( μ_2^{n_2} μ_3^{n_3} ⋯ μ_k^{n_k} )

where

  G(N) = Σ_{n_1 + ⋯ + n_k = N} ρ_2^{n_2} ⋯ ρ_k^{n_k}   (with ρ_1 = 1)

For the multiple-server case we can treat it similarly and obtain the corresponding result.
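As an illustration (the rates are hypothetical), the cyclic-queue probabilities can be tabulated directly from ρ_1 = 1, ρ_i = μ_1/μ_i:

```python
from itertools import product as iproduct
from fractions import Fraction as F

def cyclic_probs(mu, N):
    """State probabilities of a single-server cyclic queue with N customers."""
    k = len(mu)
    rho = [F(1)] + [F(mu[0], mu[i]) for i in range(1, k)]   # rho_1 = 1
    terms = {}
    for s in iproduct(range(N + 1), repeat=k):
        if sum(s) == N:
            t = F(1)
            for i in range(k):
                t *= rho[i] ** s[i]
            terms[s] = t
    G = sum(terms.values())                                  # normalizing constant
    return {s: t / G for s, t in terms.items()}

# Hypothetical 3-node loop with integer rates and N = 2 customers
p = cyclic_probs([2, 1, 2], 2)
```

Here node 2 is the slow node (ρ_2 = 2), so the state with both customers there, (0, 2, 0), carries the largest probability (4/11 for these rates).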
4.5 EXTENSIONS OF JACKSON NETWORKS

• Jackson (1963)
(i) Open networks with state-dependent exogenous arrival processes and state-dependent internal service. The solution is also of product form, but the normalizing constant is difficult to obtain.
(ii) Jackson networks with travel time (walk time) between nodes of the network.
(iii) Multiclass Jackson networks in which customers have different classes.
※ Baskett et al. (1975) obtained the product-form solutions.
4.6 NON-JACKSON NETWORKS

The only non-Jackson networks we consider here are those where the {r_ij} are allowed to be state dependent.

• Example
Consider a closed network with three nodes and two customers; the holding times at each node, and any exogenous inputs, remain exponential and Poisson, respectively. Suppose

[Figure: three nodes in a triangle with routing probabilities r_12, r_13 = 1 − r_12, r_23, r_21 = 1 − r_23, r_31, r_32 = 1 − r_31.]

(i) The customer leaving each node next visits the node with the fewest customers.
(ii) If there is a tie, the customer chooses among the remaining nodes with equal probability.
[Figure: transition-rate diagram on the states (2,0,0), (0,2,0), (0,0,2), (1,1,0), (1,0,1), (0,1,1); ties split the rate, e.g. from (2,0,0) node 1 completes and the customer goes to node 2 or node 3 with probability 1/2 each, giving rates 0.5μ_1 to (1,1,0) and to (1,0,1).]

The balance equations are

  0 = −μ_1 p_200
  0 = −μ_2 p_020
  0 = −μ_3 p_002
  0 = −(μ_2 + μ_3) p_011 + (1/2)μ_2 p_020 + (1/2)μ_3 p_002 + μ_1 p_110 + μ_1 p_101
  0 = −(μ_1 + μ_2) p_110 + (1/2)μ_1 p_200 + (1/2)μ_2 p_020 + μ_3 p_011 + μ_3 p_101
  0 = −(μ_1 + μ_3) p_101 + (1/2)μ_1 p_200 + (1/2)μ_3 p_002 + μ_2 p_011 + μ_2 p_110

  ⟹ p_200 = p_020 = p_002 = 0,  and  p_101 + p_110 + p_011 = 1
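The three nonzero probabilities follow from two of the balance equations plus normalization; the sketch below does the 3×3 linear solve in plain Python with exact arithmetic (the rates are hypothetical). In the symmetric case μ_1 = μ_2 = μ_3, each of the three states has probability 1/3.

```python
from fractions import Fraction as F

def solve3(A, b):
    """Gaussian elimination for a 3x3 exact (Fraction) linear system."""
    n = 3
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [x - M[r][col] * y for x, y in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

def nonjackson_probs(m1, m2, m3):
    """Solve for (p011, p110, p101), using p200 = p020 = p002 = 0."""
    m1, m2, m3 = F(m1), F(m2), F(m3)
    # Balance at (0,1,1) and (1,1,0), plus normalization; unknowns ordered
    # as (p011, p110, p101).
    A = [[-(m2 + m3), m1, m1],
         [m3, -(m1 + m2), m3],
         [1, 1, 1]]
    return solve3(A, [F(0), F(0), F(1)])

p011, p110, p101 = nonjackson_probs(1, 1, 1)
```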
But for a not unusually large network, say N = 50 and K = 10, there are

  (N + K − 1 choose N) = (59 choose 50) ≈ 1.26 × 10^10

system states, hence ≈ 1.26 × 10^10 equations if all states are possible. This is infeasible even for a modern large-scale computer (a formidable task!). This is why the product-form solutions for Jackson networks are still so valuable.