# Perturbation analysis

## Chapter 10: Sample path sensitivity analysis

Learning objectives:
- Understand the mathematics behind discrete event simulation
- Learn how to derive gradient information during a simulation
- Be able to verify the correctness of sample gradients

Textbook: C. Cassandras and S. Lafortune, *Introduction to Discrete Event Systems*, Springer, 2007
## Plan

- Introduction
- Perturbation analysis by example
- GSMP: Generalised Semi-Markov Process
- Some mathematical basis of discrete event simulation
- Perturbation analysis of GSMP
- Applications to manufacturing systems
- Sensitivity estimation revisited
- Extensions of perturbation analysis
## Introduction
## Sensitivity measures

Two types of sensitivity measures:
- Derivatives: dJ(θ)/dθ
- Finite differences: J(θ+Δ) − J(θ)

where J(θ) is the performance function of a system with parameter θ.

Sensitivity measures are used for:
- determining the direction of change to take
- optimizing a system design
- performing "what-if" analysis
## Sensitivity measures

Difficulties of sensitivity analysis:
- the performance function J(θ) is unknown
- it must be estimated by simulation in most cases

Main idea of perturbation analysis:
- predict the behavior of the perturbed system (parameter θ+Δ) along the simulation of the system with parameter θ

Main steps of perturbation analysis:
- perturbation generation: changes in quantities directly linked to θ
- perturbation propagation: changes in the other quantities induced by perturbation generation
## Stochastic approximation

Stochastic approximation (gradient-based minimization):

x(n+1) = x(n) − a(n) g(n)

where
- x(n): value of x at iteration n
- a(n) > 0: step size
- g(n): gradient estimate of a performance function f(x)

Robbins-Monro theorem: assume that f(x) is continuously differentiable and that there exists a single x* such that df(x*)/dx = 0. If

E[g(n)] = df(x)/dx

and

lim_{n→∞} a(n) = 0, Σ_n a(n) = ∞,

then x(n) converges with probability 1 to x*.
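The iteration above can be sketched in a few lines of code. A minimal illustration with choices of our own, not from the text: objective f(x) = (x − 2)², Gaussian gradient noise, and step sizes a(n) = 1/n, which satisfy the two conditions of the theorem:

```python
import random

def stochastic_approximation(grad, x0, n_iter=20000, seed=0):
    """Robbins-Monro iteration x(n+1) = x(n) - a(n) g(n) with a(n) = 1/n."""
    rng = random.Random(seed)
    x = x0
    for n in range(1, n_iter + 1):
        g = grad(x) + rng.gauss(0.0, 1.0)  # unbiased but noisy gradient estimate
        x -= (1.0 / n) * g                 # a(n) -> 0 and the sum of a(n) diverges
    return x

# Minimize f(x) = (x - 2)^2, whose unique stationary point is x* = 2.
x_star = stochastic_approximation(lambda x: 2.0 * (x - 2.0), x0=10.0)
print(x_star)   # close to 2
```

Perturbation analysis is one way of supplying the unbiased gradient estimates g(n); the recursion then drives the design parameter toward a stationary point.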
## Perturbation analysis by example
## An inspection machine

- Products arrive randomly at an inspection machine
- Each product requires a random number of tests
- The inspection of a product requires a setup time γ
- Each test takes θ time units
- Products are inspected in FIFO order
- The buffers have unlimited capacity
- The system is initially empty
- Performance measure: T(θ, γ), the average time spent at the inspection station

*(Figure: arrivals enter an input buffer, pass through the inspection station, and leave through an output buffer.)*
## A G/G/1 queueing model

- Inter-arrival time: A
- Service time: X = γ + Lθ
## Sample path

*(Figure: buffer level as a function of time, for arrivals A1, ..., A5 of products P1, ..., P6 with service times X1, ..., X4.)*

Definition: a sample path is the sequence of (event, state, time) triples of a simulation or experiment.

Example: (start, 0, 0), (arrival, 1, A1), (arrival, 2, A1+A2), (arrival, 3, A1+A2+A3), (departure, 2, A1+X1), ...

Property: the sample path contains all the information of the simulation.
## An inspection machine

Each simulation run, or sample path, is characterized by:
- L_i: number of tests of product i
- X_i = γ + L_i θ: service time of product i
- A_i: inter-arrival time between product i−1 and product i
- t_i: time of product i at the inspection station (also called system time)

Note that t_i ≠ X_i: the system time includes the waiting time.
## An inspection machine

The natural estimator of the average time at the inspection station is

T̂(θ, γ) = (1/N) Σ_{i=1}^{N} t_i

Under some regularity conditions,

lim_{N→∞} T̂(θ, γ) = T(θ, γ) with probability 1.

Goal of perturbation analysis: evaluate ∂T(θ, γ)/∂γ and ∂T(θ, γ)/∂θ.
## Event-based system dynamics

- {e_1, e_2, ..., e_n, ...}: sequence of events, with e_n ∈ {a, d} (a = arrival, d = departure)
- t_n: occurrence time of e_n, with t_0 = 0
- q_n: number of parts in the system at time t_n⁺, with q_0 = 0
- Δ_n: time spent in state q_n
- E_n: set of active events at t_n⁺
- r_e^n: remaining time until the occurrence of event e, at time t_n⁺
## Event-based system dynamics

Time to next event: Δ_n = min{r_e^n : e ∈ E_n}

Next event: e_{n+1} = argmin{r_e^n : e ∈ E_n}

New state:
- q_{n+1} = q_n + 1 if e_{n+1} = a
- q_{n+1} = q_n − 1 if e_{n+1} = d

Update of the event clocks:
- r_e^{n+1} = r_e^n − Δ_n
- E_{n+1} = E_n − {e_{n+1}}

Creation of new events:
- if e_{n+1} = a and fewer than N arrivals have occurred: E_{n+1} = E_{n+1} + {a}, r_a^{n+1} = A_{n+1}
- if (e_{n+1} = d and q_{n+1} > 0) or (e_{n+1} = a and q_n = 0): E_{n+1} = E_{n+1} + {d}, r_d^{n+1} = γ + L_{n+1}θ
## Event-based system dynamics

Performance evaluation:
- S_n: total time spent by all parts in the system up to event n
- S_{n+1} = S_n + q_n Δ_n
- T(θ, γ) is estimated by S_V / N, where e_V is the departure event of the N-th part
## Event-based system dynamics

Simulation algorithm:

1. Initialization: q = 0, t = 0, E = {a}; generate r.v. A, r_a = A; N_a = N_d = S = 0
2. Next event: e = argmin{r_e : e ∈ E}, Δ = r_e. Statistics: N_a = N_a + 1(e=a), N_d = N_d + 1(e=d), S = S + qΔ
3. State update: q = q + 1(e=a) − 1(e=d)
4. Update event clocks: r_e = r_e − Δ, E = E − {e}
5. Generate new events:
   - if e = a and N_a < N: E = E + {a}, generate r.v. A, r_a = A
   - if (e = a and q = 1) or (e = d and N_d < N): E = E + {d}, generate r.v. L, r_d = γ + Lθ
6. If N_d < N, go to step 2; otherwise T̂ = S/N, END
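The steps above map directly onto code. A minimal sketch of the simulator, with illustrative distribution choices of our own (the text does not fix them): exponential inter-arrival times with mean 2, and a number of tests L uniform on {1, 2, 3}:

```python
import random

def simulate_inspection(theta, gamma, n_products, seed=1):
    """Event-driven simulation of the inspection station (a G/G/1 queue).
    Returns the estimate T_hat = S/N of the mean system time."""
    rng = random.Random(seed)
    q, S, na, nd = 0, 0.0, 0, 0                     # state, total time, counters
    clocks = {'a': rng.expovariate(0.5)}            # remaining times r_e
    while True:
        e = min(clocks, key=clocks.get)             # 2/ next event
        delta = clocks.pop(e)
        for other in clocks:                        # 4/ age the other clocks
            clocks[other] -= delta
        na += (e == 'a'); nd += (e == 'd')          # statistics
        S += q * delta                              # q parts spent delta in system
        q += 1 if e == 'a' else -1                  # 3/ state update
        if nd == n_products:                        # 6/ stopping rule
            return S / n_products
        if e == 'a' and na < n_products:            # 5/ new arrival event
            clocks['a'] = rng.expovariate(0.5)
        if (e == 'a' and q == 1) or (e == 'd' and q > 0):
            L = rng.randint(1, 3)                   # 5/ new departure event
            clocks['d'] = gamma + L * theta

print(simulate_inspection(theta=0.3, gamma=0.2, n_products=10000))
```

Any other inter-arrival or test-count distributions can be substituted without changing the event-handling logic.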
## Perturbation analysis

Typical (what-if) question: what would the average system time be if the experiment were performed with parameter θ+Δθ?

Definitions:
- Nominal system: the system with parameter θ used in the simulation
- Perturbed system: the system with parameter θ+Δθ of the what-if question
- Nominal sample path: the sample path of the nominal system
- Perturbed sample path: the sample path of the perturbed system (preferably under the same random conditions as the nominal sample path)
## Perturbation analysis

- Perturbation generation: the perturbation of parameter θ directly generates a perturbation ΔX_i = L_i Δθ of each product's service time (recall that X_i = γ + L_i θ)
- Perturbation propagation: the perturbation of the service time of one product is propagated to other products
- Fundamental PA condition: the sequence of events does not change for a small enough perturbation Δθ
## Perturbation analysis

*(Figure: nominal and perturbed sample paths; the service-time perturbations ΔX1, ..., ΔX4 accumulate along each busy period and shift the departure times.)*
## Event-based perturbation analysis

Next event: e_{n+1} = e* = argmin{r_e^n : e ∈ E_n}

Time to next event: Δ_n = r_{e*}^n, ∂Δ_n/∂θ = ∂r_{e*}^n/∂θ

New state: q_{n+1}

Update of the event clocks: r_e^{n+1} = r_e^n − Δ_n, ∂r_e^{n+1}/∂θ = ∂r_e^n/∂θ − ∂Δ_n/∂θ; E_{n+1} = E_n − {e_{n+1}}

Creation of new events:
- new arrival: E_{n+1} = E_{n+1} + {a}, r_a^{n+1} = A_{n+1}, ∂r_a^{n+1}/∂θ = 0
- new departure: E_{n+1} = E_{n+1} + {d}, r_d^{n+1} = γ + L_{n+1}θ, ∂r_d^{n+1}/∂θ = L_{n+1}

Statistics: S_{n+1} = S_n + q_n Δ_n, ∂S_{n+1}/∂θ = ∂S_n/∂θ + q_n ∂Δ_n/∂θ
## Event-based system dynamics

Simulation algorithm with gradient estimation:

1. Initialization: q = 0, t = 0, E = {a}; generate r.v. A, r_a = A; N_a = N_d = S = 0, grad_ra = 0, grad_S = 0
2. Next event: e = argmin{r_e : e ∈ E}, Δ = r_e, grad_Δ = grad_re. Statistics: N_a = N_a + 1(e=a), N_d = N_d + 1(e=d), S = S + qΔ, grad_S = grad_S + q·grad_Δ
3. State update: q = q + 1(e=a) − 1(e=d)
4. Update event clocks: E = E − {e}, r_e = r_e − Δ, grad_re = grad_re − grad_Δ
5. Generate new events:
   - if e = a and N_a < N: E = E + {a}, generate A, r_a = A, grad_ra = 0
   - if (e = a and q = 1) or (e = d and N_d < N): E = E + {d}, generate L, r_d = γ + Lθ, grad_rd = L
6. If N_d < N, go to step 2; otherwise T̂ = S/N, grad_T = grad_S/N, END
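A sketch of the same simulator with the gradient clocks carried along, plus a finite-difference check using common random numbers. The distribution choices are ours (as before), and we use separate random streams for arrivals and tests so that the nominal and perturbed runs stay synchronized:

```python
import random

def simulate_ipa(theta, gamma, n_products, seed=1):
    """Inspection-station simulation carrying the IPA derivative w.r.t. theta.
    Returns (T_hat, dT_hat/dtheta)."""
    rng_a = random.Random(seed)        # arrival stream
    rng_l = random.Random(seed + 1)    # number-of-tests stream
    q, S, gS, na, nd = 0, 0.0, 0.0, 0, 0
    clocks = {'a': rng_a.expovariate(0.5)}
    grads = {'a': 0.0}                              # grad_re = d(clock)/d(theta)
    while True:
        e = min(clocks, key=clocks.get)             # next event
        delta, gdelta = clocks.pop(e), grads.pop(e)
        for other in clocks:                        # age clocks and their gradients
            clocks[other] -= delta
            grads[other] -= gdelta
        na += (e == 'a'); nd += (e == 'd')
        S += q * delta
        gS += q * gdelta                            # grad_S accumulation
        q += 1 if e == 'a' else -1
        if nd == n_products:
            return S / n_products, gS / n_products
        if e == 'a' and na < n_products:
            clocks['a'] = rng_a.expovariate(0.5); grads['a'] = 0.0
        if (e == 'a' and q == 1) or (e == 'd' and q > 0):
            L = rng_l.randint(1, 3)
            clocks['d'] = gamma + L * theta
            grads['d'] = float(L)                   # generation: dX/dtheta = L

t0, g0 = simulate_ipa(0.3, 0.2, 5000)
t1, _ = simulate_ipa(0.3 + 1e-6, 0.2, 5000)
print(g0, (t1 - t0) / 1e-6)   # sample-path derivative vs. finite difference
```

Because both runs consume the same random draws, the finite difference is an almost exact check of the sample-path derivative on this realization.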
## Busy period

Definition: a busy period (BP) is a period of time during which the inspection station is never idle.

For product i of the first busy period (BP1):
- arrival time: A_1 + A_2 + ... + A_i
- departure time: A_1 + X_1 + X_2 + ... + X_i
- system time: t_i = X_i + (X_1 + X_2 + ... + X_{i−1}) − (A_2 + ... + A_i)

Total system time of the products of BP1 (with, in the example, 4 products):

Σ_{i=1}^{4} ( Σ_{j=1}^{i} X_j − Σ_{j=2}^{i} A_j )
## Busy period

- Each busy period BP_m is initiated by the arrival of a product that finds an empty system
- Let k_m + 1 be the index of the product that initiates BP_m
- System time of the i-th product of BP_m:

t_i = Σ_{j=1}^{i} X_{k_m+j} − Σ_{j=2}^{i} A_{k_m+j}

- Total system time of the products of BP_m:

Σ_{i=1}^{n_m} ( Σ_{j=1}^{i} X_{k_m+j} − Σ_{j=2}^{i} A_{k_m+j} )

where n_m is the number of products inspected during BP_m.
## Busy period

- Summing over the busy periods, the estimator of the average system time becomes

T̂(θ, γ) = (1/N) Σ_{m=1}^{M} Σ_{i=1}^{n_m} ( Σ_{j=1}^{i} X_{k_m+j} − Σ_{j=2}^{i} A_{k_m+j} )

where M is the total number of busy periods observed during the simulation.
- M is a random variable that depends on the simulation realization.
## Busy-period-based perturbation analysis

- Busy periods remain the same under the fundamental PA condition.
- System time in the perturbed system:

t_i(θ + Δθ) = Σ_{j=1}^{i} X_{k_m+j}(θ + Δθ) − Σ_{j=2}^{i} A_{k_m+j} = t_i(θ) + Σ_{j=1}^{i} ΔX_{k_m+j}

- As a result,

∂t_i/∂θ = Σ_{j=1}^{i} ∂X_{k_m+j}/∂θ = Σ_{j=1}^{i} L_{k_m+j}

∂T̂/∂θ = (1/N) Σ_{i=1}^{N} ∂t_i/∂θ = (1/N) Σ_{m=1}^{M} Σ_{i=1}^{n_m} ( Σ_{j=1}^{i} L_{k_m+j} )
## Busy-period-based perturbation analysis

To summarize, for X_i = γ + L_i θ:

∂T̂/∂θ = (1/N) Σ_{i=1}^{N} ∂t_i/∂θ, with ∂t_i/∂θ = Σ_{j=k_m+1}^{i} L_j

∂T̂/∂γ = (1/N) Σ_{i=1}^{N} ∂t_i/∂γ, with ∂t_i/∂γ = Σ_{j=k_m+1}^{i} 1 = i − k_m

where BP_m is the busy period containing product i.
## Busy-period-based perturbation analysis

Perturbation analysis algorithm during the simulation:

0. Initialization: P = 0 (perturbation accumulated over the current busy period), G = 0 (total gradient)
1. During the simulation, at the end of service of a product with L tests:
   1.1 Perturbation generation: P = P + L
   1.2 Perturbation propagation: G = G + P (the product's system-time derivative is the accumulated P)
   1.3 If the system is empty, then P = 0 (the busy period ends)
2. At the end of the simulation: ∂T̂/∂θ = G/N
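The accumulator form of the estimator can be checked against a finite difference on a fixed realization of the inter-arrival times and test counts (illustrative data of our own). The system times follow the Lindley-type recursion t_i = X_i + max(0, t_{i−1} − A_i), and the running sum P implements ∂t_i/∂θ = Σ_{j=k_m+1}^{i} L_j:

```python
import random

def mean_system_time(theta, gamma, A, L):
    """T_hat via t_i = X_i + max(0, t_prev - A_i), with X_i = gamma + L_i*theta."""
    t_prev, total = 0.0, 0.0
    for a, l in zip(A, L):
        t_prev = gamma + l * theta + max(0.0, t_prev - a)
        total += t_prev
    return total / len(A)

def ipa_gradient(theta, gamma, A, L):
    """Busy-period IPA estimate of dT_hat/dtheta."""
    t_prev, P, G = 0.0, 0.0, 0.0
    for a, l in zip(A, L):
        if t_prev <= a:      # 1.3/ the system empties: a new busy period starts
            P = 0.0
        P += l               # 1.1/ perturbation generation: dX_i/dtheta = L_i
        G += P               # 1.2/ propagation: dt_i/dtheta = sum of L over the BP
        t_prev = gamma + l * theta + max(0.0, t_prev - a)
    return G / len(A)

rng = random.Random(0)
A = [rng.expovariate(0.5) for _ in range(2000)]
L = [rng.randint(1, 3) for _ in range(2000)]
g = ipa_gradient(0.3, 0.2, A, L)
h = 1e-7
fd = (mean_system_time(0.3 + h, 0.2, A, L) - mean_system_time(0.3, 0.2, A, L)) / h
print(g, fd)   # IPA estimate vs. finite difference on the same realization
```

Since T̂ is piecewise linear in θ for a fixed realization, the two values should agree whenever Δθ is small enough not to change the busy-period structure.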
## Validation of the results

Unbiasedness:

∂E[T(θ, γ, N)]/∂θ =? E[∂T(θ, γ, N)/∂θ]

∂E[T(θ, γ, N)]/∂γ =? E[∂T(θ, γ, N)/∂γ]

Consistency:

lim_{N→∞} ∂T(θ, γ, N)/∂θ =? ∂T(θ, γ, ∞)/∂θ

lim_{N→∞} ∂T(θ, γ, N)/∂γ =? ∂T(θ, γ, ∞)/∂γ
## GSMP: Generalised Semi-Markov Process

A general framework for the representation of discrete event systems.
## Basic concepts

- States: a state is a possible system configuration. Ex: the number of products to inspect.
- Events: the system state changes only upon the occurrence of events. Ex: the arrival and the end of inspection of a product.
- Transition probabilities: they rule the change of state when an event occurs. Ex: routing probabilities among multiple alternatives.
- Clocks: they indicate the remaining time until the occurrence of events.
  - A clock is set according to some random variable when it is activated.
  - It then decrements as long as it is active.
  - An event occurs when its clock reaches zero.

Note: the remaining processing times of products are associated with the clocks of the related events; they are not part of the definition of the state.
## GSMP

A GSMP is defined by:
- S: set of states
- E: set of events
- E(s): set of events that are possible in state s
- p(s'; s, e): probability of jumping to state s' when event e occurs in state s
- F_e: probability distribution of a new clock of event e

Example (inspection station):
- S = {0, 1, ...}: number of products waiting for inspection
- E = {a, d}, with a = product arrival and d = product departure
- E(s) = {a, d} if s > 0, and E(0) = {a}
- p(s+1; s, a) = 1, p(s−1; s, d) = 1
- F_a = distribution of the inter-arrival time, F_d = distribution of the service time
## GSMP

Non-interruption (NI) condition:

∀ s, s' ∈ S and e ∈ E(s): p(s'; s, e) > 0 ⇒ E(s') ⊇ E(s) − {e}

The NI condition implies that, once activated, an event remains active until its occurrence.
## Simulation algorithm of a GSMP

1. Initialization:
   - number of events: n = 0
   - set the initial state S_0
   - set the clocks of the activated events: C_0(e) = X(e, 1), ∀ e ∈ E(S_0)
   - set the event counters: N(e, 0) = 0, ∀ e ∈ E
2. Determine the next event: e_{n+1} = argmin{C_n(e) : e ∈ E(S_n)}
3. Sojourn time in S_n: t_{n+1} = C_n(e_{n+1})
4. Update the event counters: N(e, n+1) = N(e, n) + 1 if e = e_{n+1}, and N(e, n+1) = N(e, n) otherwise
## Simulation algorithm of a GSMP

5. Update the system state: S_{n+1} = F(S_n, e_{n+1}, U(e_{n+1}, N(e_{n+1}, n+1))), where
   - F is a function such that P{F(s, e, U) = s'} = p(s'; s, e)
   - U is a random variable with uniform distribution on [0, 1]
6. Update the existing clocks: C_{n+1}(e) = C_n(e) − t_{n+1}, ∀ e ∈ E(S_n) − {e_{n+1}}
7. Generate the new clocks: C_{n+1}(e) = X(e, N(e, n) + 1), ∀ e ∈ E(S_{n+1}) − (E(S_n) − {e_{n+1}})
8. If the simulation ends, stop; otherwise n := n + 1 and go to step 2.
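Steps 1-8 can be written as a generic routine that takes the GSMP primitives (E(s), the clock sampler, and the transition function F) as arguments. A minimal sketch, instantiated on the inspection station; the exponential inter-arrival and the discrete service distribution are our illustrative choices:

```python
import random

def simulate_gsmp(s0, E, clock, F, n_events, seed=0):
    """Generic GSMP simulation (steps 1-8). E(s): active events in state s;
    clock(e, k, rng): sample of the k-th clock of event e; F(s, e, u): next
    state, with P{F(s, e, U) = s'} = p(s'; s, e). Returns states, sojourn times."""
    rng = random.Random(seed)
    N = {}                                           # event counters N(e, n)
    C = {e: clock(e, 1, rng) for e in E(s0)}         # 1/ initial clocks
    s, states, sojourn = s0, [s0], []
    for _ in range(n_events):
        e_next = min(C, key=C.get)                   # 2/ next event
        t = C.pop(e_next)                            # 3/ sojourn time
        N[e_next] = N.get(e_next, 0) + 1             # 4/ update counters
        s = F(s, e_next, rng.random())               # 5/ new state
        C = {e: c - t for e, c in C.items() if e in E(s)}   # 6/ age surviving clocks
        for e in E(s):                               # 7/ new clocks
            if e not in C:
                C[e] = clock(e, N.get(e, 0) + 1, rng)
        states.append(s); sojourn.append(t)
    return states, sojourn

# Inspection station as a GSMP: the state is the queue length.
E = lambda s: {'a', 'd'} if s > 0 else {'a'}
F = lambda s, e, u: s + 1 if e == 'a' else s - 1     # p(s+1;s,a) = p(s-1;s,d) = 1
clock = lambda e, k, rng: rng.expovariate(0.5) if e == 'a' else 0.2 + rng.randint(1, 3) * 0.3
states, sojourn = simulate_gsmp(0, E, clock, F, n_events=10000)
print(sum(q * t for q, t in zip(states, sojourn)) / sum(sojourn))   # time-average queue length
```

The same routine simulates any GSMP satisfying the NI condition once E, F, and the clock distributions are supplied.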
## Simulation algorithm of a GSMP

Routing mechanisms:
- Each active event e is associated with a uniform random variable U(e), called the routing indicator.
- Step 5 of the GSMP simulation algorithm is decomposed into two steps:
  - 5.1 generation of the routing indicator U(e_{n+1}, N(e_{n+1}, n+1))
  - 5.2 determination of the new state: S_{n+1} = F(S_n, e_{n+1}, U(e_{n+1}, N(e_{n+1}, n+1))), where F is such that P{F(s, e, U) = s'} = p(s'; s, e)

Example of F: let S = {s_1, s_2, ..., s_m}. Then S_{n+1} := s_i such that

Σ_{j=1}^{i−1} p(s_j; S_n, e_{n+1}) ≤ U < Σ_{j=1}^{i} p(s_j; S_n, e_{n+1})
## Simulation algorithm of a GSMP

Performance measures. Let Z_t be the state of the system at time t and f(Z_t) the cost associated with state Z_t. The following performance measures are considered:

- L_T = ∫_0^T f(Z_t) dt, where T is a given constant;
- L_N = ∫_0^{t_N} f(Z_t) dt, where N is a given integer and t_N is the occurrence time of the N-th event, i.e. t_N = t_1 + t_2 + ... + t_N;
- L_{a,k} = ∫_0^{T(a,k)} f(Z_t) dt, where T(a,k) is the time of the k-th occurrence of event a.
## Some mathematical basis of discrete event simulation
## Random number generation

The linear congruential technique.

Goal: generate samples of the U[0, 1] distribution.

The linear congruential technique:

X_{k+1} = (a X_k + c) mod m, k = 0, 1, ...

where a = multiplier, c = increment, m = modulus, and X_0 = seed.

- The quality strongly depends on the selection of the parameters a, c, and m.
- The "pseudo"-random samples have a cycle of length at most m.
- m = 2^n, as large as possible (2^31 for a 32-bit computer), is often used to maximize the cycle length and to ease the modular arithmetic.
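A minimal linear congruential generator; the constants below are the ones popularized by the ANSI C rand() example, used here purely for illustration:

```python
from itertools import islice

def lcg(seed, a=1103515245, c=12345, m=2**31):
    """X_{k+1} = (a*X_k + c) mod m, yielding the U(0,1) samples X_k / m."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

u = list(islice(lcg(seed=42), 10000))
print(sum(u) / len(u))   # sample mean, close to 1/2
```
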
## Random variate generation

The inverse transform technique.

Goal: generate samples of a probability distribution F(x).

The inverse transform technique:
- generate a sample U of the distribution U[0, 1]
- transform U into X with X = F⁻¹(U)

Proof: P(X ≤ x) = P(F⁻¹(U) ≤ x) = P(U ≤ F(x)) = F(x).

Examples:
- EXP(λ), with F(x) = 1 − exp(−λx)
- Weibull, with F(x) = 1 − exp(−(λ(x−γ))^β)

Drawback: the computation of the inverse transform can be time-consuming.
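For the exponential example, the inverse transform is available in closed form: F(x) = 1 − exp(−λx) gives X = F⁻¹(U) = −ln(1 − U)/λ. A quick sketch:

```python
import math
import random

def exp_variate(lam, u):
    """Inverse transform for EXP(lambda): X = -ln(1 - U) / lambda."""
    return -math.log(1.0 - u) / lam

rng = random.Random(0)
samples = [exp_variate(2.0, rng.random()) for _ in range(100000)]
print(sum(samples) / len(samples))   # close to the mean 1/lambda = 0.5
```
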
## Random variate generation

The acceptance-rejection technique, for a target density f(x).

Majorizing function: g(x) such that g(x) ≥ f(x) for all x.

Density function related to g(x):

h(x) = g(x) / ∫_{−∞}^{+∞} g(y) dy

Acceptance-rejection technique:
1. Select a majorizing function g(x) and its related density function h(x)
2. Generate a random variate Y with pdf h(x)
3. Generate a random number U ~ U[0, 1], independently of Y
4. If U ≤ f(Y)/g(Y), then set X = Y; otherwise, go back to step 2

Criteria for the selection of the majorizing function:
- ease of generation from the distribution h(x)
- small rejection probability, i.e. a small area between g(x) and f(x)
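A sketch of the technique on a toy target density of our own choosing, f(x) = 2x on [0, 1], majorized by the constant g(x) = 2 (so h is the U(0, 1) density and the acceptance probability is 1/2):

```python
import random

def accept_reject(f, g, h_sampler, rng):
    """Acceptance-rejection: draw Y ~ h and U ~ U(0,1); accept Y when U <= f(Y)/g(Y)."""
    while True:
        y = h_sampler(rng)           # step 2: candidate from h
        u = rng.random()             # step 3: independent uniform
        if u <= f(y) / g(y):         # step 4: accept, otherwise retry
            return y

rng = random.Random(0)
f = lambda x: 2.0 * x                # target density on [0, 1]
g = lambda x: 2.0                    # majorizing function, g(x) >= f(x)
samples = [accept_reject(f, g, lambda r: r.random(), rng) for _ in range(50000)]
print(sum(samples) / len(samples))   # close to E[X] = 2/3
```

A tighter majorizing function would raise the acceptance probability above 1/2, at the cost of a harder-to-sample h.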
## Terminating simulation output analysis

Point estimation.

Observations: X_1, X_2, ..., X_n are simulation observations of a true but unknown performance θ.

Sample mean:

θ̂_n = (1/n) Σ_{i=1}^{n} X_i

Unbiasedness: E[θ̂_n] = θ.

For iid observations X_1, X_2, ..., X_n:

E[θ̂_n] = E[X_i] = θ, Var(θ̂_n) = σ²/n, lim_{n→∞} θ̂_n = θ (strong law of large numbers)

Sample variance:

S_n² = (1/(n−1)) Σ_{i=1}^{n} (X_i − θ̂_n)²
## Terminating simulation output analysis

Interval estimation.

Central limit theorem:

Z_n = (θ̂_n − E[θ̂_n]) / √Var(θ̂_n)

tends to a standard normal distribution as n goes to infinity.

(1−α) confidence interval:

P( θ̂_n − z_{α/2} σ/√n ≤ θ ≤ θ̂_n + z_{α/2} σ/√n ) ≈ 1 − α

where
- z_{α/2} is defined on the standard normal distribution Φ(z) by Φ(z_{α/2}) = 1 − α/2
- σ² is estimated by the sample variance

Student's t distribution is also used, to avoid having to check that n is large enough for the normal approximation.
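The interval is straightforward to compute from the sample mean and sample variance. A sketch for a 95% interval (z_{0.025} ≈ 1.96), on illustrative exponential data with true mean 1:

```python
import math
import random

def confidence_interval(xs, z=1.96):
    """Normal-approximation CI: theta_hat +/- z * sqrt(S2/n)."""
    n = len(xs)
    mean = sum(xs) / n
    s2 = sum((x - mean) ** 2 for x in xs) / (n - 1)   # sample variance
    half = z * math.sqrt(s2 / n)
    return mean - half, mean + half

rng = random.Random(3)
xs = [rng.expovariate(1.0) for _ in range(2000)]
lo, hi = confidence_interval(xs)
print(lo, hi)   # an interval built to cover the true mean 1 about 95% of the time
```
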
## Nonterminating simulation output analysis

Problem setting.

Observations: X_1, X_2, ... are simulation observations of a nonterminating simulation, with

θ = lim_{k→∞} E[X_k]

Time average:

lim_{n→∞} (1/n) Σ_{i=1}^{n} X_i

Under an ergodicity condition,

lim_{n→∞} (1/n) Σ_{i=1}^{n} X_i = θ
## Nonterminating simulation output analysis

Replications with deletions.

I.
- Simulate over a time horizon of r + m steps
- Ignore the data of the first r steps (warmup)
- Estimate with the data of the remaining m steps:

θ̂_{m,r} = (1/m) Σ_{i=r+1}^{r+m} X_i

II.
- Perform K replications of I
- Use point estimation methods on the replication results to evaluate the performance

Key question: how long should the warmup interval r be?
## Nonterminating simulation output analysis

Batch means.
- Based on a single but long simulation run.
- The observed data X_1, X_2, ... are grouped into batches; assume batches of equal length m, after a warmup period of length r.
- The batch mean B_j is estimated by

B_j = (1/m) Σ_{i=r+(j−1)m+1}^{r+jm} X_i

- The batch means are then used in point estimation:

θ̂_n = (1/n) Σ_{j=1}^{n} B_j
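A sketch of the method on illustrative correlated output of our own (an AR(1) sequence with long-run mean 0, started away from steady state so that the warmup matters):

```python
import random

def batch_means(xs, r, m):
    """Drop a warmup of length r, then return the means B_j of batches of length m."""
    data = xs[r:]
    k = len(data) // m
    return [sum(data[j * m:(j + 1) * m]) / m for j in range(k)]

rng = random.Random(0)
x, xs = 5.0, []
for _ in range(101000):
    x = 0.9 * x + rng.gauss(0.0, 1.0)   # correlated simulation output
    xs.append(x)
B = batch_means(xs, r=1000, m=1000)     # warmup of 1000, then 100 batches of 1000
theta_hat = sum(B) / len(B)
print(theta_hat)   # close to the steady-state mean 0
```

The point of batching is that the B_j are far less correlated than the raw X_i, so the interval-estimation formulas of the previous slides can be applied to them.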
## Nonterminating simulation output analysis

Regenerative simulation.
- Assumption: there exists some regenerative state (regeneration point) after which the evolution of the system is independent of its past history.
- Regeneration cycles: R_1, R_2, ... are the regeneration times, and [R_j, R_{j+1}) is the j-th regenerative cycle or period.
- Cycle total L_j and cycle length N_j of cycle j:

L_j = Σ_{i=R_j}^{R_{j+1}−1} X_i, N_j = R_{j+1} − R_j

- Ratio estimator:

θ = E[L]/E[N] = lim_{n→∞} [ (1/n) Σ_{j=1}^{n} L_j ] / [ (1/n) Σ_{j=1}^{n} N_j ] = lim_{n→∞} Σ_{j=1}^{n} L_j / Σ_{j=1}^{n} N_j
## Perturbation analysis of GSMP
## Functional GSMP

System parameters:
- assume that the probability distributions of the new clocks depend on a continuous parameter θ.

Let:
- F_e(x; θ): distribution of the new clocks of event e
- X_θ(e, n): n-th clock of event e

Hypothesis (H1): ∀ e and θ, F_e(x; θ) is continuous in x and F_e(0; θ) = 0.

Hypothesis (H2): ∀ e and θ, X_θ(e, n) is continuously differentiable in θ.

Remark: under H1, the probability that two clocks reach zero at the same time is null.
## Perturbation generation

Remark: U = F_e(X_θ(e, n); θ) is a uniform random variable on [0, 1].

The inverse transform technique for the generation of event clocks:
1. generate a random variable U uniformly distributed on [0, 1]
2. X_θ(e, n) = F_e⁻¹(U; θ)

Property: when the inverse transform technique is used for the generation of the event clocks, the following infinitesimal perturbations are generated:

dX_θ(e, n) = − [ ∂F_e(X_θ(e, n); θ)/∂θ ] / [ ∂F_e(X_θ(e, n); θ)/∂x ] dθ
## Perturbation generation

Special cases:
- scale parameters: F_e(x; θ) = G_e(x/θ), so dX_θ(e, n)/dθ = X_θ(e, n)/θ. Ex: an exponentially distributed random variable with mean θ.
- location parameters: F_e(x; θ) = G_e(x − θ), so dX_θ(e, n)/dθ = 1. Ex: X = γ + Lθ, where γ is a location parameter.
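The scale-parameter formula is easy to verify numerically. For EXP(mean θ), the inverse transform gives X = −θ ln(1 − U), so dX/dθ = −ln(1 − U) = X/θ; a finite difference at a common U reproduces it:

```python
import math

def scale_check(u, theta=2.0, h=1e-6):
    """Compare dX/dtheta = X/theta with a finite difference at the same U."""
    x = -theta * math.log(1.0 - u)            # X = F^{-1}(U; theta)
    x_h = -(theta + h) * math.log(1.0 - u)    # same U, perturbed theta
    return (x_h - x) / h, x / theta

fd, ipa = scale_check(0.7)
print(fd, ipa)   # the two values agree
```
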
## Perturbation propagation

Theorem: under hypotheses NI, H1 and H2, with probability 1:

(i) the events e_n and the states S_n remain unchanged within a small enough neighborhood of θ;

(ii) the sojourn times t_n and the event clocks C_n(e) are continuously differentiable in θ. Furthermore,

dt_{n+1}/dθ = dC_n(e_{n+1})/dθ

dC_{n+1}(e)/dθ = dC_n(e)/dθ − dC_n(e_{n+1})/dθ, ∀ e ∈ E(S_n) − {e_{n+1}}

dC_{n+1}(e)/dθ = dX(e, N(e, n) + 1)/dθ, ∀ e ∈ E(S_{n+1}) − (E(S_n) − {e_{n+1}})
## Sensitivity of performance measures

Theorem: the performance measures L_T, L_N and L_{a,k} are all differentiable in θ, with

dL_T/dθ = Σ_{n=1}^{N(T)} (dt_n/dθ) f(S_{n−1}) − f(S_{N(T)}) Σ_{n=1}^{N(T)} dt_n/dθ

dL_N/dθ = Σ_{n=1}^{N} (dt_n/dθ) f(S_{n−1})

dL_{a,k}/dθ = Σ_{n=1}^{N(T(a,k))} (dt_n/dθ) f(S_{n−1})

where N(t) is the number of events that occurred in (0, t).
## Simulation algorithm and perturbation analysis of a GSMP

1. Initialization:
   - number of events: n = 0
   - set the initial state S_0
   - set the clocks of the activated events: C_0(e) = X(e, 1), ∀ e ∈ E(S_0)
   - gradients of C_0(e) (perturbation generation): dC_0(e)/dθ = dX(e, 1)/dθ
   - set the event counters: N(e, 0) = 0, ∀ e ∈ E
2. Determine the next event: e_{n+1} = argmin{C_n(e) : e ∈ E(S_n)}
3. Sojourn time in S_n: t_{n+1} = C_n(e_{n+1}), with dt_{n+1}/dθ = dC_n(e_{n+1})/dθ
4. Update the event counters.
## Simulation algorithm and perturbation analysis of a GSMP

5. Update the system state: S_{n+1} = F(S_n, e_{n+1}, U(e_{n+1}, N(e_{n+1}, n+1)))
6. Update the existing clocks: C_{n+1}(e) = C_n(e) − t_{n+1}, ∀ e ∈ E(S_n) − {e_{n+1}}, with

dC_{n+1}(e)/dθ = dC_n(e)/dθ − dC_n(e_{n+1})/dθ

7. Generate the new clocks: C_{n+1}(e) = X(e, N(e, n) + 1), ∀ e ∈ E(S_{n+1}) − (E(S_n) − {e_{n+1}}), with

dC_{n+1}(e)/dθ = dX(e, N(e, n) + 1)/dθ

8. If the simulation ends, stop; otherwise n := n + 1 and go to step 2.
## Unbiasedness of the estimators

We only check the unbiasedness, i.e.

dE[L]/dθ = E[dL/dθ]   (1)

A counter-example:

L = 1 if ω ≤ θ, and L = 0 if ω > θ,

where ω is a uniform random variable on [0, 1]. Here E[L] = θ, so dE[L]/dθ = 1, while dL/dθ = 0 with probability 1, so E[dL/dθ] = 0 ≠ 1.

A necessary and sufficient condition for (1) is the uniform differentiability of L. Uniform differentiability is in general impossible to check directly.
## Commuting Condition (CU)

CU condition: for all s_1, s_2, s_3, a and b such that a, b ∈ E(s_1) and p(s_2; s_1, a) p(s_3; s_2, b) > 0, there exists a state s_4 such that

p(s_4; s_1, b) = p(s_3; s_2, b) and p(s_3; s_4, a) = p(s_2; s_1, a).

Furthermore, p(z_1; s, a) = p(z_2; s, a) > 0 ⇒ z_1 = z_2.

*(Diagram: from s_1, event a leads to s_2 and event b leads to s_4; event b from s_2 and event a from s_4 both lead to s_3.)*

Example: the G/G/1 queue satisfies CU; the G/G/1/K queue does not.
## Commuting Condition (CU)

Remarks:
- The CU condition ensures the continuity of the state trajectory.
- The exchange of the order of two events only generates temporal perturbations of the state trajectory.
- The last part of the CU condition is not restrictive, and one usually limits oneself to the first part.

Example: the CU condition holds for the inspection system example.
## Commuting Condition (CU)

Hypothesis (H3): there exists a constant B > 0 such that

|dX_θ(e, n)/dθ| ≤ B (X_θ(e, n) + 1)

Theorem (CU): under (NI), (H1)-(H3) and (CU):
- if E[sup_θ N(T)²] < ∞, then the GSMP gradient estimator of L_T is unbiased;
- if E[sup_θ t_N] < ∞, then the GSMP gradient estimator of L_N is unbiased;
- if E[sup_θ N(T(a,k))²] < ∞ and E[sup_θ T(a,k)²] < ∞, then the GSMP gradient estimator of L_{a,k} is unbiased;

where N(t) is the number of events that occurred in [0, t].

Remarks:
- The conditions E[sup_θ N(T)²] < ∞, E[sup_θ t_N] < ∞, E[sup_θ N(T(a,k))²] < ∞ and E[sup_θ T(a,k)²] < ∞ hold for most models of production systems.
- In general, one limits oneself to the verification of the other conditions.
## Correctness conditions of perturbation analysis

Conditions on the event clocks:
- Hypothesis (H1): ∀ e and θ, F_e(x; θ) is continuous in x and F_e(0; θ) = 0.
- Hypothesis (H2): ∀ e and θ, X_θ(e, n) is continuously differentiable in θ.
- Hypothesis (H3): there exists a constant B > 0 such that |dX_θ(e, n)/dθ| ≤ B (X_θ(e, n) + 1).

Conditions on the GSMP structure:
- NI condition: ∀ s, s' ∈ S and e ∈ E(s), p(s'; s, e) > 0 ⇒ E(s') ⊇ E(s) − {e}.
- CU condition: for all s_1, s_2, s_3, a and b such that a, b ∈ E(s_1) and p(s_2; s_1, a) p(s_3; s_2, b) > 0, there exists a state s_4 such that p(s_4; s_1, b) = p(s_3; s_2, b) and p(s_3; s_4, a) = p(s_2; s_1, a). Furthermore, p(z_1; s, a) = p(z_2; s, a) > 0 ⇒ z_1 = z_2.
## Correctness conditions of perturbation analysis

A GSMP can be represented by its event diagram, in which:
- nodes = states
- arcs = possible transitions

Event diagrams are particularly useful for checking the NI and CU conditions.

*(Diagram: from state s, event a leads to s_1 with probability P1 and to s_2 with probability P2, while event b leads to s_3 with probability 1.)*
## Applications to manufacturing systems
A. G/G/1 queues (for the departure times, throughput rate, average waiting time, ...), and the G/M/1, M/G/1, M/M/1 special cases

B. Optimization of the mean WIP of production lines

*(Figure: a production line of machines M1, M2, M3 separated by buffers B1, B2.)*

C. Failure-prone continuous-flow production lines (with respect to the failure/repair/production rates)
D. KANBAN systems
- Each kanban stage is associated with a given number of kanbans
- Each product in a kanban stage is associated with a kanban
- Performance measure: average product waiting time

*(Figure: two kanban stages in series, each with an input store I_i, a machine M_i, and an output store O_i.)*
E. A flexible manufacturing system
- The system is composed of a set of machines (servers) and a given number of pallets (clients) for material handling
- The path of a client through the system is a Markov chain, with probability q_ij of joining machine j after leaving machine i
- The queues have unlimited capacity
- Performance measure: the time for server 1 to serve N clients
F. A flexible cell
- A flexible machine produces successively for three dedicated machines (round-robin rule)
- Performance measure: mean WIP

*(Figure: machine M0 with buffer B0 feeding dedicated machines M1, M2, M3 with buffers B1, B2, B3.)*
## Sensitivity estimation revisited
## Problem setting

- Performance measure: J(θ) = E[L(X_1(θ), X_2(θ), ...)]
- Focus on the scalar case for simplicity, i.e. J(θ) = E[L(X(θ))].
- Assume that X(θ) has a continuous probability distribution F(x; θ).
- Then:

J(θ) = ∫_{−∞}^{+∞} L(x(θ)) dF(x; θ)
## The IPA approach

- Represent X as a continuous function of θ by the inverse transformation X(θ) = F⁻¹(U; θ), where U = UNIF(0, 1).
- Then:

J(θ) = ∫_0^1 L(X(u, θ)) du

- By differentiation (provided that the interchange of expectation and differentiation is allowed):

dJ(θ)/dθ = ∫_0^1 [ ∂L(X(u, θ))/∂θ ] du

- Furthermore:

∂L(X(U, θ))/∂θ = [ ∂L(X(U, θ))/∂X ] × [ ∂X(U, θ)/∂θ ]

where the first factor corresponds to perturbation propagation and the second to perturbation generation.
## Likelihood ratio approach (LR approach)

- J(θ) = ∫_{−∞}^{+∞} L(x) f(x; θ) dx
- L(x) is used instead of L(x(θ)): a common realization of X is used while θ varies.
- By differentiation (provided that the interchange of expectation and differentiation is allowed):

dJ(θ)/dθ = ∫_{−∞}^{+∞} L(x) [ ∂f(x; θ)/∂θ ] dx
## Likelihood ratio approach (LR approach)

- Since ∂ln f(x; θ)/∂θ = [ ∂f(x; θ)/∂θ ] / f(x; θ) whenever f(x; θ) ≠ 0,

dJ(θ)/dθ = ∫_{−∞}^{+∞} L(x) [ ∂ln f(x; θ)/∂θ ] f(x; θ) dx = E[ L(X) ∂ln f(X; θ)/∂θ ]

- L(X) ∂ln f(X; θ)/∂θ is the likelihood ratio estimator, also called the score function estimator.
## IPA or LR?

- IPA ≠ LR.
- IPA needs a more detailed analysis of the system dynamics; an LR estimator is easier to obtain.
- IPA is more accurate: its variance goes to zero as the time horizon increases. LR has a large variance, which grows to infinity as the time horizon increases.
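Both estimators can be compared on a toy problem where the answer is known: J(θ) = E[X] with X ~ EXP(mean θ), so dJ/dθ = 1. The example (our construction, not from the text) uses the inverse transform for IPA and the score function for LR:

```python
import math
import random

def estimate_grads(theta, n, seed=0):
    """IPA and LR estimates of dJ/dtheta for J(theta) = E[X], X ~ EXP(mean theta).
    Exact value: dJ/dtheta = 1."""
    rng = random.Random(seed)
    ipa = lr = 0.0
    for _ in range(n):
        x = -theta * math.log(1.0 - rng.random())   # inverse transform
        ipa += x / theta                            # IPA: dX/dtheta = X/theta
        lr += x * (x / theta**2 - 1.0 / theta)      # LR: L(X) d ln f(X;theta)/d theta
    return ipa / n, lr / n

print(estimate_grads(2.0, 100000))   # both estimates close to 1
```

Even on this one-dimensional example, the LR estimate fluctuates noticeably more than the IPA one (per-sample variance 13 vs. 1 here), in line with the variance comparison above.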
## Extensions of perturbation analysis
## When IPA does not work

- Ex 1: a production line where the performance measure is the average number of parts inspected per time unit
- Ex 2: an inspection machine with an input buffer of limited capacity (state-dependent routing)
- Ex 3: a Jackson network where θ is a routing probability (structural parameter)

The failure of IPA comes from:
- the discontinuity of the performance measure (Ex 1)
- the discontinuity of the sample path (Ex 2 and 3)
Solution 1: change the representation of the stochastic process. Ex: birth-death process.

Solution 2: change of measures (case 1). For a production line, by Little's law, L = λT, where L is the average WIP, λ the throughput rate, and T the mean sojourn time. Then

dλ/dθ = ( dL/dθ − λ dT/dθ ) / T

and dL/dθ and dT/dθ can often be estimated by IPA.
Solution 3: conditional perturbation analysis (SPA).
- The bias of an IPA estimator, i.e. E[dL/dθ] − dE[L]/dθ, is essentially due to the discontinuities of the random performance function L.
- SPA consists in evaluating the derivative of a conditional performance function, i.e. d E[L | z] / dθ.
- The SPA estimator is unbiased if

E_z[ dE[L | z]/dθ ] = d E_z[ E[L | z] ] / dθ

- In general, E[L | z] is smoother than L, and the SPA estimator is therefore more likely to be unbiased.