Equation 10 -- Variances and Covariances of Jobs at All Stations
(Note: The derivation of Equation 10 assumes that the above matrix equation for S_t calculates the covariance between pairs of stations correctly. Otherwise, we would not be able to recurse on the equation for S_t to find the steady-state variances and covariances. Section 5.10 proves that the equation for S_t calculates the covariance terms correctly.)
But now, recall that we want to find the variances and covariances in terms of instructions at each station. We note that $P_{j,t} = \sum_{i=1}^{N_{j,t}} X_i$. Then we apply the formula for the variance of a random sum to find:

(93)
$$\mathrm{Var}[P_j] = E[N_j]\,\mathrm{Var}[X_j] + (E[X_j])^2\, S_{jj}.$$

Further, we found in Section 5.3 that Cov[N_{k,t-1}, N_{l,t-1}] = Cov[P_{k,t-1}, P_{l,t-1}] / (E[X_k] E[X_l]). Then we can write:

(94)
$$\mathrm{Cov}[P_{k,t-1}, P_{l,t-1}] = E[X_k]\,E[X_l]\, S_{kl}.$$

These are formulas for the steady-state variances and covariances of all of the station workloads, as desired. ∎
5.7 Mixed Network Mechanics
In a mixed network, some stations operate according to an instruction-control rule, and other stations operate according to the job-shop control rule.
In general, we can apply the recursion equations we found in the previous sections. We use the instruction-control recursion equation at stations that operate under the instruction-control rule, and the job-control recursion equation at stations that operate under the job-control rule. Then, we write all of the recursion equations simultaneously in matrix form (whether they are instruction-rule or job-rule equations), and find a recursive relationship that tells us the expectation and variance of demand at all stations.
However, to do this we must make some adjustments to the recursion equations. In a mixed model, we can have job-rule stations feed into instruction-rule stations, and vice versa. But the analytic model equations for job-rule stations maintain all statistics in terms of jobs, while the equations for instruction-rule stations maintain all statistics in terms of instructions. We will need to convert statistics in terms of jobs (E[N_{k,t-1}], Var[N_{k,t-1}]) to statistics in terms of instructions (E[P_{k,t-1}], Var[P_{k,t-1}]), and vice versa. Fortunately, this is simple to do. We make the following substitutions:
* When a model equation calls for E[N_{k,t-1}]: We use E[N_{k,t-1}] directly if station k uses the job-control rule, since the model equations track this value under the job-control rule. If station k uses the instruction-control rule, the model equations track E[P_{k,t-1}]. In the latter case, we use the formula for E[N_{k,t-1}] in terms of E[P_{k,t-1}], which is:

$$E[N_{k,t-1}] = \frac{E[P_{k,t-1}]}{E[X_k]}.$$

(This equation was derived in Section 5.2.)
* When a model equation calls for Var[N_{k,t-1}]: We use Var[N_{k,t-1}] directly if station k uses the job-control rule. Otherwise, we use either the lower-bound or the upper-bound formula for Var[N_{k,t-1}] in terms of Var[P_{k,t-1}]. The formulas are:

Lower bound:
$$\mathrm{Var}[N_{k,t-1}] \ge \frac{\mathrm{Var}[P_{k,t-1}]}{(E[X_k])^2} - \frac{E[P_{k,t-1}]\,\mathrm{Var}[X_k]}{L_k\,(E[X_k])^3}$$

Upper bound:
$$\mathrm{Var}[N_{k,t-1}] \le \frac{\mathrm{Var}[P_{k,t-1}]}{(E[X_k])^2} + \left(1-\frac{1}{L_k}\right)\frac{E[P_{k,t-1}]}{E[X_k]} - \frac{E[P_{k,t-1}]\,\mathrm{Var}[X_k]}{L_k\,(E[X_k])^3}$$
(These equations were derived in Section 5.3.)
* When a model equation calls for Cov[N_{k,t-1}, N_{l,t-1}], we make the following substitutions (a small code sketch of these conversions follows this list):
- If stations k and l both use job control, we use Cov[N_{k,t-1}, N_{l,t-1}] directly.
- If station k uses job control, and station l uses instruction control, we make the substitution Cov[N_{k,t-1}, N_{l,t-1}] = Cov[N_{k,t-1}, P_{l,t-1}] / E[X_l].
- If stations k and l both use instruction control, we make the substitution Cov[N_{k,t-1}, N_{l,t-1}] = Cov[P_{k,t-1}, P_{l,t-1}] / (E[X_k] E[X_l]).
(These equations were derived in Section 5.3.)
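The following minimal Python sketch (ours, not part of the original model; it assumes the reconstructed Section 5.3 bounds above) shows how these job-to-instruction conversions might be coded:

    def mean_jobs(EP, EX):
        """E[N_{k,t-1}] = E[P_{k,t-1}] / E[X_k]  (Section 5.2)."""
        return EP / EX

    def var_jobs_bounds(VarP, EP, EX, VarX, L):
        """Lower and upper bounds on Var[N_{k,t-1}] in terms of
        Var[P_{k,t-1}], per the Section 5.3 formulas as reconstructed."""
        base = VarP / EX**2
        tail = EP * VarX / (L * EX**3)
        return base - tail, base + (1 - 1 / L) * EP / EX - tail

    def cov_jobs(covPP, EXk, EXl):
        """Cov[N_k, N_l] = Cov[P_k, P_l] / (E[X_k] E[X_l]) when both
        stations use instruction control."""
        return covPP / (EXk * EXl)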
We then recalculate the recursion equations for each station, making these substitutions. Section
5.8 gives the results of these calculations for the expectation equations, and Section 5.9 gives the
results of these calculations for the variance equations.
5.8 Mixed Network Expectations
5.8.1 Instruction-Rule Stations
Making the substitutions given in the previous section, we can derive the following recursion equation for instruction-rule stations:

(95)
$$E[P_{j,t}] = \left(1-\frac{1}{L_j}\right)E[P_{j,t-1}] + \frac{1}{L_j}\left(\sum_{k\in K}\phi_{jk}\,\rho_{k,t-1} + \mu_j\right),$$

where
* ρ_{k,t-1} = E[N_{k,t-1}], if k uses job control;
* ρ_{k,t-1} = E[P_{k,t-1}], if k uses instruction control;
* φ_jk = E[J_jk] E[X_j], if k uses job control;
* φ_jk = E[J_jk] E[X_j] / E[X_k], if k uses instruction control; and
* μ_j = E[R_j] E[J_{j,R}] E[X_j].

Equation 11 -- Recursion Equation for the Expectation of an Instruction-Rule Station
5.8.2 Job-Rule Stations
Making the substitutions given in the previous section, we eventually derive the following recursion equation for job-rule stations:

(96)
$$E[N_{j,t}] = \left(1-\frac{1}{L_j}\right)E[N_{j,t-1}] + \frac{1}{L_j}\left(\sum_{k\in K}\phi_{jk}\,\rho_{k,t-1} + \mu_j\right),$$

where
* ρ_{k,t-1} = E[N_{k,t-1}], if k uses job control;
* ρ_{k,t-1} = E[P_{k,t-1}], if k uses instruction control;
* φ_jk = E[J_jk], if k uses job control;
* φ_jk = E[J_jk] / E[X_k], if k uses instruction control; and
* μ_j = E[R_j] E[J_{j,R}].

Equation 12 -- Recursion Equation for the Expectation of a Job-Rule Station
5.8.3 The Matrix Equation for Expectations
Define the following vectors and matrices:
* I is the identity matrix.
* D is a diagonal matrix with the inverse lead times 1/L_j on the diagonal.
* Φ is a matrix whose (j,k) entry φ_jk is given by the formulas in the recursion equations above.
* ρ_t is a vector whose jth entry ρ_{j,t} is given by the formulas in the recursion equations above.
* μ is a vector whose jth entry μ_j is given by the formulas in the recursion equations above.
Then we can rewrite all of the recursion equations simultaneously in matrix form as follows:

(97)
$$\begin{aligned}\rho_t &= (I-D)\rho_{t-1} + D\Phi\,\rho_{t-1} + D\mu\\ &= \underbrace{(I-D+D\Phi)}_{\text{call this }B}\,\rho_{t-1} + D\mu\\ &= B\,\rho_{t-1} + D\mu.\end{aligned}$$

The proof of this matrix equation is term-by-term multiplication. Infinitely iterating the above equation, and applying linear algebra theory, we find:

(98)
$$\bar{\rho} = (I-B)^{-1}D\mu.$$

Equation 13 -- Results Vector Used to Calculate Expected Demands
But then, we have a results vector ρ̄ whose jth entry is E[N_j] if station j uses the job-control rule, and E[P_j] if station j uses the instruction-control rule. In the former case, we can calculate E[P_j] by using the relationship E[P_j] = E[N_j]E[X_j]. This gives us a method to calculate the expectation of demand at all stations in the mixed-rule model, as desired. A small numerical sketch follows.
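In this illustrative Python sketch, the two-station network and all parameter values are invented for the example; only the construction of Φ, μ, and B follows Equations 11 and 12:

    import numpy as np

    L  = np.array([2.0, 4.0])           # lead times L_j
    EX = np.array([5.0, 8.0])           # E[X_j], instructions per job
    EJ = np.array([[0.0, 0.3],          # E[J_jk]: jobs for j per job done at k
                   [0.6, 0.0]])
    ER, EJR = np.array([1.5, 2.0]), np.array([1.0, 1.0])  # outside demand
    instr = np.array([False, True])     # station 1 uses instruction control

    D = np.diag(1.0 / L)
    Phi = np.zeros((2, 2))
    mu = np.zeros(2)
    for j in range(2):
        out_scale = EX[j] if instr[j] else 1.0     # instruction-rule rows carry E[X_j]
        for k in range(2):
            in_scale = EX[k] if instr[k] else 1.0  # divide by E[X_k] if k tracks instructions
            Phi[j, k] = EJ[j, k] * out_scale / in_scale
        mu[j] = ER[j] * EJR[j] * out_scale

    B = np.eye(2) - D + D @ Phi
    rho = np.linalg.solve(np.eye(2) - B, D @ mu)   # Equation 13: rho = (I - B)^{-1} D mu
    print(rho)   # rho[0] = E[N_0] in jobs; rho[1] = E[P_1] in instructions

Solving the linear system (I − B)ρ̄ = Dμ directly avoids the infinite iteration; the iteration converges, and (I − B) is invertible, only when the spectral radius of B is below one.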
5.9 Mixed Network Variances
5.9.1 Instruction-Rule Stations
Making the substitutions discussed in Section 5.7, we can derive the following recursion equation
for an estimate of the variance of an instruction-rule station:
(99)
$$\begin{aligned}\mathrm{Var}[P_{j,t}] ={}& \left(1-\frac{1}{L_j}\right)^2 \mathrm{Var}[P_{j,t-1}] + \frac{2}{L_j}\left(1-\frac{1}{L_j}\right)\left[\phi_{jj}\left(S_{jj,t-1}+u_j\right) + \sum_{\substack{k\in K\\ k\neq j}}\phi_{jk}\,S_{jk,t-1}\right] \\ & + \frac{1}{L_j^2}\left[\sum_{k\in K}\phi_{jk}^2\left(S_{kk,t-1}+u_k\right) + \sum_{k\in K}\sum_{\substack{l\in K\\ l\neq k}}\phi_{jk}\phi_{jl}\,S_{kl,t-1} + \sum_{k\in K}\sigma^2_{j,k,t} + \sigma^2_{j,R}\right]\end{aligned}$$

Equation 14 -- Recursion Equation for the Variance of an Instruction-Rule Station
In this equation:
* φ_jk = E[J_jk] E[X_j] / E[X_k] if station k uses instruction control.
* φ_jk = E[J_jk] E[X_j] if station k uses job control.
* σ²_{j,k,t} = (E[P_k] / E[X_k]) (E[J_jk] Var[X_j] + Var[J_jk] (E[X_j])²).
* σ²_{j,R} = E[R_j] (E[J_{j,R}] Var[X_j] + Var[J_{j,R}] (E[X_j])²) + Var[R_j] (E[J_{j,R}] E[X_j])².
* S_kk,t-1 = Var[P_{k,t-1}] if station k uses instruction control.
* S_kk,t-1 = Var[N_{k,t-1}] if station k uses job control.
* S_kl,t-1 = Cov(N_{k,t-1}, N_{l,t-1}) if stations k and l both use job control.
* S_kl,t-1 = Cov(P_{k,t-1}, N_{l,t-1}) if station k uses instruction control and station l uses job control.
* S_kl,t-1 = Cov(P_{k,t-1}, P_{l,t-1}) if stations k and l both use instruction control.
* u_k = −E[P_k] Var[X_k] / (L_k E[X_k]) if station k uses instruction control with the lower-bound approximation.
* u_k = (1 − 1/L_k) E[P_k] E[X_k] − E[P_k] Var[X_k] / (L_k E[X_k]) if station k uses instruction control with the upper-bound approximation.
* u_k = 0 if station k uses job control.
5.9.2 Job-Rule Stations
Making the substitutions discussed in Section 5.7, we can derive the following recursion equation for an estimate of the variance of the number of jobs produced at a job-rule station:
(100)
$$\begin{aligned}\mathrm{Var}[N_{j,t}] ={}& \left(1-\frac{1}{L_j}\right)^2 \mathrm{Var}[N_{j,t-1}] + \frac{2}{L_j}\left(1-\frac{1}{L_j}\right)\left[\phi_{jj}\left(S_{jj,t-1}+u_j\right) + \sum_{\substack{k\in K\\ k\neq j}}\phi_{jk}\,S_{jk,t-1}\right] \\ & + \frac{1}{L_j^2}\left[\sum_{k\in K}\phi_{jk}^2\left(S_{kk,t-1}+u_k\right) + \sum_{k\in K}\sum_{\substack{l\in K\\ l\neq k}}\phi_{jk}\phi_{jl}\,S_{kl,t-1} + \sum_{k\in K}\sigma^2_{j,k,t} + \sigma^2_{j,R}\right]\end{aligned}$$

Equation 15 -- Recursion Equation for the Variance of a Job-Rule Station
In this equation:
* φ_jk = E[J_jk] / E[X_k] if station k uses instruction control.
* φ_jk = E[J_jk] if station k uses job control.
* σ²_{j,k,t} = E[P_k] Var[J_jk] / E[X_k].
* σ²_{j,R} = E[R_j] Var[J_{j,R}] + Var[R_j] (E[J_{j,R}])².
* S_kk,t-1 = Var[P_{k,t-1}] if station k uses instruction control.
* S_kk,t-1 = Var[N_{k,t-1}] if station k uses job control.
* S_kl,t-1 = Cov(N_{k,t-1}, N_{l,t-1}) if stations k and l both use job control.
* S_kl,t-1 = Cov(P_{k,t-1}, N_{l,t-1}) if station k uses instruction control and station l uses job control.
* S_kl,t-1 = Cov(P_{k,t-1}, P_{l,t-1}) if stations k and l both use instruction control.
* u_k = −E[P_k] Var[X_k] / (L_k E[X_k]) if station k uses instruction control with the lower-bound approximation.
* u_k = (1 − 1/L_k) E[P_k] E[X_k] − E[P_k] Var[X_k] / (L_k E[X_k]) if station k uses instruction control with the upper-bound approximation.
* u_k = 0 if station k uses job control.
5.9.3 The Matrix Equation for Variances
Define the following vectors and matrices:
* S_t is a square matrix whose S_jj,t and S_jk,t entries are given by the recursion equations above.
* I is the identity matrix.
* D is a diagonal matrix with the inverse lead times 1/L_j on the diagonal.
* Φ is a matrix whose (j,k) entries are given by the formulas in the recursion equations above.
* U is a diagonal matrix with the u_k's defined above on the diagonal.
* Σ is a diagonal matrix with diagonal elements Σ_jj = σ²_{j,R} + Σ_{k∈K} σ²_{j,k,t}.
* B is a square matrix that equals (I − D + DΦ).
Then we can rewrite all of the recursion equations in matrix form simultaneously in the following
form:
(101)
$$S_t = B\,S_{t-1}\,B' + (D\Phi)\,U\,(D\Phi)' + D\Sigma D.$$

This equation may be checked by term-by-term multiplication. Infinitely iterating this recursion, we find:

(102)
$$\bar{S} = \sum_{s=0}^{\infty} B^s\left((D\Phi)\,U\,(D\Phi)' + D\Sigma D\right)(B')^s.$$

Equation 16 -- Results Matrix Used to Estimate Network Covariances
But then, S̄ is a results matrix whose entries are the following estimates:
* S_jj = Var[P_j] if station j uses instruction control.
* S_jj = Var[N_j] if station j uses job control. To find Var[P_j] in this case, we use the following relation: Var[P_j] = E[N_j]Var[X_j] + Var[N_j](E[X_j])².
* S_jk = Cov(P_j, P_k) if stations j and k both use instruction control.
* S_jk = Cov(N_j, P_k) if station j uses job control and station k uses instruction control. To find Cov(P_j, P_k) in this case, we use the following relation: Cov(P_j, P_k) = E[X_j] Cov(N_j, P_k).
* S_jk = Cov(N_j, N_k) if stations j and k both use job control. To find Cov(P_j, P_k) in this case, we use the following relation: Cov(P_j, P_k) = E[X_j]E[X_k] Cov(N_j, N_k).
These are estimates of the steady-state variances and covariances of demand for all of the stations, as desired; a computational sketch follows. These estimates do assume that the above matrix equation for S_t calculates the covariance between pairs of stations correctly. Otherwise, we would not be able to recurse on the equation for S_t to find the steady-state variances and covariances. The following section proves that the equation for S_t calculates the covariance terms correctly.
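A sketch of this computation in Python follows. It assumes B, Φ, U, Σ, and D have already been assembled as above (hypothetical inputs), and iterates the recursion rather than summing the series explicitly:

    import numpy as np

    def steady_state_cov(B, D, Phi, u_diag, sigma2_diag, tol=1e-12, max_iter=100_000):
        """Iterate S <- B S B' + C (Equation 101) to its fixed point, which
        equals the series in Equation 16. Requires spectral radius of B < 1."""
        DPhi = D @ Phi
        C = DPhi @ np.diag(u_diag) @ DPhi.T + D @ np.diag(sigma2_diag) @ D
        S = np.zeros_like(C)
        for _ in range(max_iter):
            S_next = B @ S @ B.T + C
            if np.max(np.abs(S_next - S)) < tol:
                return S_next
            S = S_next
        raise RuntimeError("recursion did not converge; is B stable?")

Equivalently, scipy.linalg.solve_discrete_lyapunov(B, C) solves the same fixed-point equation S = BSB' + C in a single call.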
5.10 Mixed Network Covariances
This section derives the covariances between pairs of stations, and shows that the formula for the
station variances and covariances derived in the previous section correctly calculates the
covariances.
5.10.1 Initial Development
Define the following variable: W_{i,t} is a measure of the work produced by station i at time t, and is:
* W_{i,t} = P_{i,t} if station i uses the instruction-control rule; and
* W_{i,t} = N_{i,t} if station i uses the job-control rule.
We will derive a formula for Cov[W_{i,t}, W_{j,t}] in terms of the workstation expectations, variances, and covariances at time t−1.
From Sections 5.1 and 5.4, we know that:

(103)
$$W_{j,t} = \left(1-\frac{1}{L_j}\right)W_{j,t-1} + \frac{1}{L_j}A_{j,t},$$

whether W_{j,t} is measured in jobs or instructions. Here, A_{j,t} measures the arrivals to station j at the start of period t, and is measured in instructions if station j uses the instruction-control rule, and in jobs if station j uses the job-control rule.
Then, we have that:

(104)
$$W_{i,t} + W_{j,t} = \left(1-\frac{1}{L_i}\right)W_{i,t-1} + \frac{1}{L_i}A_{i,t} + \left(1-\frac{1}{L_j}\right)W_{j,t-1} + \frac{1}{L_j}A_{j,t}.$$
Taking the variance of (W_{i,t} + W_{j,t}), we find:

(105)
$$\begin{aligned}\mathrm{Var}[W_{i,t} + W_{j,t}] ={}& \mathrm{Var}[W_{i,t}] + \mathrm{Var}[W_{j,t}] + 2\,\mathrm{Cov}[W_{i,t}, W_{j,t}] \\ ={}& \underbrace{\left(1-\tfrac{1}{L_i}\right)^2\mathrm{Var}[W_{i,t-1}] + \tfrac{1}{L_i^2}\mathrm{Var}[A_{i,t}] + \tfrac{2}{L_i}\left(1-\tfrac{1}{L_i}\right)\mathrm{Cov}[W_{i,t-1}, A_{i,t}]}_{\text{we recognize this as }\mathrm{Var}[W_{i,t}]} \\ & + \underbrace{\left(1-\tfrac{1}{L_j}\right)^2\mathrm{Var}[W_{j,t-1}] + \tfrac{1}{L_j^2}\mathrm{Var}[A_{j,t}] + \tfrac{2}{L_j}\left(1-\tfrac{1}{L_j}\right)\mathrm{Cov}[W_{j,t-1}, A_{j,t}]}_{\text{we recognize this as }\mathrm{Var}[W_{j,t}]} \\ & + 2\left(1-\tfrac{1}{L_i}\right)\left(1-\tfrac{1}{L_j}\right)\mathrm{Cov}[W_{i,t-1}, W_{j,t-1}] + \tfrac{2}{L_j}\left(1-\tfrac{1}{L_i}\right)\mathrm{Cov}[W_{i,t-1}, A_{j,t}] \\ & + \tfrac{2}{L_i}\left(1-\tfrac{1}{L_j}\right)\mathrm{Cov}[A_{i,t}, W_{j,t-1}] + \tfrac{2}{L_i L_j}\mathrm{Cov}[A_{i,t}, A_{j,t}].\end{aligned}$$
Equating terms, and solving for Cov[W_{i,t}, W_{j,t}], we find:

(106)
$$\begin{aligned}\mathrm{Cov}[W_{i,t}, W_{j,t}] ={}& \left(1-\tfrac{1}{L_i}\right)\left(1-\tfrac{1}{L_j}\right)\mathrm{Cov}[W_{i,t-1}, W_{j,t-1}] + \tfrac{1}{L_j}\left(1-\tfrac{1}{L_i}\right)\mathrm{Cov}[W_{i,t-1}, A_{j,t}] \\ & + \tfrac{1}{L_i}\left(1-\tfrac{1}{L_j}\right)\mathrm{Cov}[A_{i,t}, W_{j,t-1}] + \tfrac{1}{L_i L_j}\mathrm{Cov}[A_{i,t}, A_{j,t}].\end{aligned}$$

To calculate the terms in this expression, we will need to calculate Cov[A_{i,t}, A_{j,t}] and Cov[W_{i,t-1}, A_{j,t}].
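As a consistency check (this display is ours, not one of the numbered equations), setting i = j in the expansion (105) collapses the result back to the single-station variance recursion used in the preceding sections:

$$\mathrm{Var}[W_{j,t}] = \left(1-\frac{1}{L_j}\right)^2 \mathrm{Var}[W_{j,t-1}] + \frac{2}{L_j}\left(1-\frac{1}{L_j}\right)\mathrm{Cov}[W_{j,t-1}, A_{j,t}] + \frac{1}{L_j^2}\mathrm{Var}[A_{j,t}].$$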
5.10.2 The Covariance of Arrivals
A_{j,t} has the following form:

(107)
$$A_{j,t} = \underbrace{\sum_{k\in K}\sum_{l=1}^{N_{k,t-1}} Y_{jkt}}_{\text{arrivals from other stations}} + \underbrace{\sum_{l=1}^{R_{j,t}} Y_{jRt}}_{\text{arrivals from outside the system}}$$

Here, the N's are the incoming requests from the other stations, R_{j,t} is the number of incoming requests from outside the network, and the Y's are independent, identically distributed random variables representing the work per request from each source. (The Y's are the numbers of jobs per request if the station uses the job-control rule. They are summations of instructions if the station uses the instruction-control rule.)
By assumption, there is no dependence between any arrivals from outside the network and any other arrivals. Then, to find Cov[A_{i,t}, A_{j,t}], it is sufficient to find:

(108)
$$\mathrm{Cov}\!\left[\sum_{k\in K}\sum_{l=1}^{N_{k,t-1}} Y_{ikt},\; \sum_{k\in K}\sum_{l=1}^{N_{k,t-1}} Y_{jkt}\right].$$

To do so, we will extend Lemma 5 to calculate Var(A_{i,t} + A_{j,t}), and use the resulting formula for Var(A_{i,t} + A_{j,t}) to calculate Cov[A_{i,t}, A_{j,t}].
First, define T to be:

(109)
$$T = \sum_{k\in K}\sum_{l=1}^{N_{k,t-1}} Y_{ikl} + \sum_{k\in K}\sum_{l=1}^{N_{k,t-1}} Y_{jkl}.$$
The "Law of Total Variance" (see Lemma 2), tells us that
Var[T] = Var[E(T N)]+E[Var(7]N)],
(110)
where Nis some event. Here, let Nbe the event {Nk
t
,Vk} .
= nk
Now, using the linearity of expectation,

(111)
$$E(T\mid N) = \sum_{k\in K} N_k E[Y_{ik}] + \sum_{k\in K} N_k E[Y_{jk}].$$
Then, noting that E[Y_ik] and E[Y_jk] are constants, and applying the bilinearity principle of covariance, we find that:

(112)
$$\begin{aligned}\mathrm{Var}[E(T\mid N)] ={}& \sum_{k\in K}\sum_{l\in K} E[Y_{ik}]E[Y_{il}]\,\mathrm{Cov}[N_k, N_l] + \sum_{k\in K}\sum_{l\in K} E[Y_{jk}]E[Y_{jl}]\,\mathrm{Cov}[N_k, N_l] \\ & + 2\sum_{k\in K}\sum_{l\in K} E[Y_{ik}]E[Y_{jl}]\,\mathrm{Cov}[N_k, N_l].\end{aligned}$$
Next, recalling that the variance of a fixed sum of independent random variables is the sum of their variances, we find:

(113)
$$\mathrm{Var}(T\mid N) = \sum_{k\in K}\sum_{l=1}^{N_k}\mathrm{Var}[Y_{ik}] + \sum_{k\in K}\sum_{l=1}^{N_k}\mathrm{Var}[Y_{jk}] = \sum_{k\in K} N_k\,\mathrm{Var}[Y_{ik}] + \sum_{k\in K} N_k\,\mathrm{Var}[Y_{jk}],$$
and, taking the expectation over N, we find:

(114)
$$E[\mathrm{Var}(T\mid N)] = \sum_{k\in K} E[N_k]\,\mathrm{Var}[Y_{ik}] + \sum_{k\in K} E[N_k]\,\mathrm{Var}[Y_{jk}].$$

Adding these terms together, we find:
(115)
$$\begin{aligned}\mathrm{Var}[T] ={}& \mathrm{Var}[E(T\mid N)] + E[\mathrm{Var}(T\mid N)] \\ ={}& \sum_{k\in K}\sum_{l\in K} E[Y_{ik}]E[Y_{il}]\,\mathrm{Cov}[N_k,N_l] + \sum_{k\in K}\sum_{l\in K} E[Y_{jk}]E[Y_{jl}]\,\mathrm{Cov}[N_k,N_l] \\ & + 2\sum_{k\in K}\sum_{l\in K} E[Y_{ik}]E[Y_{jl}]\,\mathrm{Cov}[N_k,N_l] + \sum_{k\in K} E[N_k]\,\mathrm{Var}[Y_{ik}] + \sum_{k\in K} E[N_k]\,\mathrm{Var}[Y_{jk}].\end{aligned}$$
Now, let us group together the terms of Var[T] as follows:

(116)
$$\begin{aligned}\mathrm{Var}[T] ={}& \left(\sum_{k\in K}\sum_{l\in K} E[Y_{ik}]E[Y_{il}]\,\mathrm{Cov}[N_k,N_l] + \sum_{k\in K} E[N_k]\,\mathrm{Var}[Y_{ik}]\right) \\ & + \left(\sum_{k\in K}\sum_{l\in K} E[Y_{jk}]E[Y_{jl}]\,\mathrm{Cov}[N_k,N_l] + \sum_{k\in K} E[N_k]\,\mathrm{Var}[Y_{jk}]\right) \\ & + 2\sum_{k\in K}\sum_{l\in K} E[Y_{ik}]E[Y_{jl}]\,\mathrm{Cov}[N_k,N_l],\end{aligned}$$
which we recognize to be:

(117)
$$\mathrm{Var}[T] = \mathrm{Var}\!\left[\sum_{k\in K}\sum_{l=1}^{N_{k,t-1}} Y_{ikt}\right] + \mathrm{Var}\!\left[\sum_{k\in K}\sum_{l=1}^{N_{k,t-1}} Y_{jkt}\right] + 2\sum_{k\in K}\sum_{l\in K} E[Y_{ik}]E[Y_{jl}]\,\mathrm{Cov}[N_k,N_l].$$
Now, by the definition of covariance, we know:

(118)
$$\mathrm{Var}[T] = \mathrm{Var}\!\left[\sum_{k\in K}\sum_{l=1}^{N_{k,t-1}} Y_{ikt}\right] + \mathrm{Var}\!\left[\sum_{k\in K}\sum_{l=1}^{N_{k,t-1}} Y_{jkt}\right] + 2\,\mathrm{Cov}\!\left[\sum_{k\in K}\sum_{l=1}^{N_{k,t-1}} Y_{ikt},\; \sum_{k\in K}\sum_{l=1}^{N_{k,t-1}} Y_{jkt}\right].$$
So, equating terms, we solve for Cov[A_{i,t}, A_{j,t}]:

(119)
$$\mathrm{Cov}[A_{i,t}, A_{j,t}] = \mathrm{Cov}\!\left[\sum_{k\in K}\sum_{l=1}^{N_{k,t-1}} Y_{ikt},\; \sum_{k\in K}\sum_{l=1}^{N_{k,t-1}} Y_{jkt}\right] = \sum_{k\in K}\sum_{l\in K} E[Y_{ik}]E[Y_{jl}]\,\mathrm{Cov}[N_{k,t-1}, N_{l,t-1}].$$
Now, this expression for Cov[A_{i,t}, A_{j,t}] is in terms of E[Y_ik] and Cov[N_{k,t-1}, N_{l,t-1}]. In Section 5.8, we found that we replace the E[Y_ik] terms with:
* E[J_ik]E[X_i] if station i uses the instruction-control rule.
* E[J_ik] if station i uses the job-control rule.
Next, in Section 5.9, we found that, for k ≠ l, we can replace the Cov[N_{k,t-1}, N_{l,t-1}] term with:
* Cov[N_{k,t-1}, N_{l,t-1}] if stations k and l both use the job-control rule;
* Cov[P_{k,t-1}, N_{l,t-1}] / E[X_k] if station k uses the instruction-control rule and station l uses the job-control rule; and
* Cov[P_{k,t-1}, P_{l,t-1}] / (E[X_k]E[X_l]) if stations k and l both use the instruction-control rule.
Finally, for k = l, the Cov[N_{k,t-1}, N_{l,t-1}] term becomes Var[N_{k,t-1}], and we replace this term with:
* Var[N_{k,t-1}] if station k uses the job-control rule; and
* (Var[P_{k,t-1}] + u_k) / (E[X_k])² if station k uses one of the instruction-control rules. (Recall that u_k is one of the two variance correction terms discussed in Section 5.3 of the analytic model paper.)
But then, we can rewrite the expression for Cov[A_{i,t}, A_{j,t}] as follows:

(120)
$$\mathrm{Cov}[A_{i,t}, A_{j,t}] = \sum_{k\in K}\phi_{ik}\phi_{jk}\,u_{kk} + \sum_{k\in K}\sum_{l\in K}\phi_{ik}\phi_{jl}\,S_{kl,t-1},$$

where:
* φ_ik = E[J_ik] if stations i and k both use the job-control rule.
* φ_ik = E[J_ik]E[X_i] if station i uses the instruction-control rule and station k uses the job-control rule.
* φ_ik = E[J_ik] / E[X_k] if station i uses the job-control rule and station k uses the instruction-control rule.
* φ_ik = E[J_ik]E[X_i] / E[X_k] if stations i and k both use the instruction-control rule.
* S_kl,t-1 = Cov[N_{k,t-1}, N_{l,t-1}] if stations k and l both use the job-control rule (recall that Cov[N_{k,t-1}, N_{l,t-1}] terms become Var[N_{k,t-1}] terms when k = l);
* S_kl,t-1 = Cov[P_{k,t-1}, N_{l,t-1}] if station k uses the instruction-control rule and station l uses the job-control rule;
* S_kl,t-1 = Cov[P_{k,t-1}, P_{l,t-1}] if stations k and l both use the instruction-control rule;
* u_kk = 0 if station k uses the job-control rule; and
* u_kk = u_k if station k uses one of the instruction-control rules, where u_k is the appropriate variance correction term, as discussed in Section 5.3.
(Here the φ_ik factors carry the job-to-instruction conversions, so the S_kl,t-1 terms are the raw covariances tracked by the model.)
5.10.3 The Covariance of Arrivals and Workload
Now, we need to calculate the covariance of work arrivals with workload in the previous period, or Cov[A_{i,t}, W_{j,t-1}]. To do so, let us consider Cov[A_{i,t}, A_{jj,t}], which is the covariance of the arrivals at station i with the arrivals station j sends to itself. We can write this as:

(121)
$$\mathrm{Cov}[A_{i,t}, A_{jj,t}] = \mathrm{Cov}\!\left[\sum_{k\in K}\sum_{l=1}^{N_{k,t-1}} Y_{ikt},\; \sum_{l=1}^{N_{j,t-1}} Y_{jjt}\right].$$
In the previous subsection, we found that the covariance of all of the arrivals at station i with all of the arrivals to station j is:

(122)
$$\mathrm{Cov}[A_{i,t}, A_{j,t}] = \mathrm{Cov}\!\left[\sum_{k\in K}\sum_{l=1}^{N_{k,t-1}} Y_{ikt},\; \sum_{k\in K}\sum_{l=1}^{N_{k,t-1}} Y_{jkt}\right] = \sum_{k\in K}\phi_{ik}\phi_{jk}\,u_{kk} + \sum_{k\in K}\sum_{l\in K}\phi_{ik}\phi_{jl}\,S_{kl,t-1}.$$
But then, by the bilinearity principle of covariance, the covariance of all of the arrivals at station i with the single arrival stream that station j sends to itself is:

(123)
$$\mathrm{Cov}[A_{i,t}, A_{jj,t}] = \sum_{k\in K}\phi_{ik}\phi_{jj}\,S_{kj,t-1} + \phi_{ij}\phi_{jj}\,u_{jj}.$$

Now, S_{kj,t-1} is a linear function of Cov[W_{k,t-1}, W_{j,t-1}]. Then, the bilinearity principle implies that to change Cov[A_{i,t}, A_{jj,t}] into Cov[A_{i,t}, W_{j,t-1}], we simply drop the φ_jj terms from the equation for Cov[A_{i,t}, A_{jj,t}]. This gives us:

(124)
$$\mathrm{Cov}[A_{i,t}, W_{j,t-1}] = \sum_{k\in K}\phi_{ik}\,S_{kj,t-1} + \phi_{ij}\,u_{jj}.$$
5.10.4 A Formula for the Covariance Terms
Recall that a formula for Cov[W_{i,t}, W_{j,t}] is:

(125)
$$\begin{aligned}\mathrm{Cov}[W_{i,t}, W_{j,t}] ={}& \left(1-\tfrac{1}{L_i}\right)\left(1-\tfrac{1}{L_j}\right)\mathrm{Cov}[W_{i,t-1}, W_{j,t-1}] + \tfrac{1}{L_j}\left(1-\tfrac{1}{L_i}\right)\mathrm{Cov}[W_{i,t-1}, A_{j,t}] \\ & + \tfrac{1}{L_i}\left(1-\tfrac{1}{L_j}\right)\mathrm{Cov}[A_{i,t}, W_{j,t-1}] + \tfrac{1}{L_i L_j}\mathrm{Cov}[A_{i,t}, A_{j,t}].\end{aligned}$$

Using the formulas of the previous two subsections for Cov[W_{i,t-1}, A_{j,t}] and Cov[A_{i,t}, A_{j,t}], we find:

(126)
$$\begin{aligned}\mathrm{Cov}[W_{i,t}, W_{j,t}] ={}& \left(1-\tfrac{1}{L_i}\right)\left(1-\tfrac{1}{L_j}\right)S_{ij,t-1} + \tfrac{1}{L_j}\left(1-\tfrac{1}{L_i}\right)\left(\sum_{k\in K}\phi_{jk}\,S_{ki,t-1} + \phi_{ji}\,u_{ii}\right) \\ & + \tfrac{1}{L_i}\left(1-\tfrac{1}{L_j}\right)\left(\sum_{k\in K}\phi_{ik}\,S_{kj,t-1} + \phi_{ij}\,u_{jj}\right) \\ & + \tfrac{1}{L_i L_j}\left(\sum_{k\in K}\sum_{l\in K}\phi_{ik}\phi_{jl}\,S_{kl,t-1} + \sum_{k\in K}\phi_{ik}\phi_{jk}\,u_{kk}\right).\end{aligned}$$

Finally, we can write all of the non-variance Cov[W_{i,t}, W_{j,t}] equations in matrix form as follows:

(127)
$$S_t = B\,S_{t-1}\,B' + (D\Phi)\,U\,(D\Phi)' + D\Sigma D,$$

where all of the matrix variables are the same as they were for the matrix variance equation (101). By term-by-term multiplication, we can check that the (i,j) entry of S_t is the recursion equation above. ∎
This completes the derivation of the Analytic Model. ∎
6. Appendix: Lemmas Used in Model Derivations
6.1 Expectation of a Random Sum of Random Variables
Lemma 1. Let N be a random variable with finite expectation, and X_i be a set of independent, identically distributed random variables, independent of N, that have a common mean E[X].

Define $Q = \sum_{i=1}^{N} X_i$. Then E[Q] = E[N]E[X].

Proof. (Taken from Rice's Mathematical Statistics and Data Analysis.¹) We first prove the following result: E[Y] = E[E(Y|X)]. (This result is sometimes called "the law of total expectation.") To prove this result, we will show that:
(128)
$$E(Y) = \sum_x E(Y\mid X=x)\,p_X(x),$$

where $E(Y\mid X=x) = \sum_y y\,p_{Y\mid X}(y\mid x)$.

The proposed formula for E(Y) is a double summation, and we can interchange the order of this summation. Doing so yields:

(129)
$$\sum_x E(Y\mid X=x)\,p_X(x) = \sum_y \sum_x y\,p_{Y\mid X}(y\mid x)\,p_X(x).$$

Now, by the definition of conditional probability, we have:

(130)
$$p_Y(y) = \sum_x p_{Y\mid X}(y\mid x)\,p_X(x).$$

Substituting, we find that:

(131)
$$\sum_y \sum_x y\,p_{Y\mid X}(y\mid x)\,p_X(x) = \sum_y y\,p_Y(y) = E(Y),$$

which is the desired result.

We now consider E[Q]. Using the result, E[Q] = E[E(Q|N)]. Using the linearity of expectation, E(Q|N = n) = nE[X], and E(Q|N) = NE[X]. Then we have:

(132)
$$E[Q] = E[E(Q\mid N)] = E[N\,E(X)] = E[N]E[X],$$

which is the desired result. ∎
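A quick Monte Carlo check of Lemma 1 (the distributions here are arbitrary choices for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    trials = 200_000
    N = rng.poisson(4.0, size=trials)                         # E[N] = 4
    Q = np.array([rng.exponential(3.0, n).sum() for n in N])  # E[X] = 3

    print(Q.mean())   # ~ E[N]E[X] = 12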
6.2 Variance of a Random Sum of Random Variables
Lemma 2. Let N be a random variable with a finite expectation and a finite variance. Let X_i be a set of independent, identically distributed random variables, independent of N, that have a common mean E[X] and a common variance Var[X].

Define $Q = \sum_{i=1}^{N} X_i$. Then Var[Q] = E[N]Var[X] + (E[X])² Var[N].
Proof. (Taken from Rice's Mathematical Statistics and Data Analysis.²) We first prove the following result: Var[Y] = Var[E(Y|X)] + E[Var(Y|X)]. (This result can be thought of as the "law of total variance.") By the definition of variance, we have:

(133)
$$\mathrm{Var}(Y\mid X=x) = E(Y^2\mid X=x) - [E(Y\mid X=x)]^2.$$

Then the expectation of Var(Y|X) is:

(134)
$$E[\mathrm{Var}(Y\mid X)] = E[E(Y^2\mid X)] - E\{[E(Y\mid X)]^2\}.$$

Similarly, the variance of a conditional expectation is:

(135)
$$\mathrm{Var}[E(Y\mid X)] = E\{[E(Y\mid X)]^2\} - \{E[E(Y\mid X)]\}^2.$$

Next, we can use the law of total expectation to rewrite Var(Y) as:

(136)
$$\mathrm{Var}(Y) = E(Y^2) - [E(Y)]^2 = E[E(Y^2\mid X)] - \{E[E(Y\mid X)]\}^2.$$

Substituting, we find that:

(137)
$$\begin{aligned}\mathrm{Var}(Y) &= E[E(Y^2\mid X)] - \{E[E(Y\mid X)]\}^2 \\ &= E[E(Y^2\mid X)] - E\{[E(Y\mid X)]^2\} + E\{[E(Y\mid X)]^2\} - \{E[E(Y\mid X)]\}^2 \\ &= E[\mathrm{Var}(Y\mid X)] + \mathrm{Var}[E(Y\mid X)],\end{aligned}$$

which is the desired result.

Now consider Var[Q]. Using the result, we have that

(138)
$$\mathrm{Var}[Q] = \mathrm{Var}[E(Q\mid N)] + E[\mathrm{Var}(Q\mid N)].$$

Because E(Q|N) = NE(X), we have that

(139)
$$\mathrm{Var}[E(Q\mid N)] = [E(X)]^2\,\mathrm{Var}(N).$$

Further, the fact that the X_i's are independent allows us to write:

(140)
$$\mathrm{Var}(Q\mid N) = \mathrm{Var}\!\left(\sum_{i=1}^{N} X_i\right) = N\,\mathrm{Var}[X],$$

and, taking expectations, we find that:

(141)
$$E[\mathrm{Var}(Q\mid N)] = E(N)\,\mathrm{Var}(X).$$

Substituting into the expression for Var[Q], we find:

(142)
$$\mathrm{Var}[Q] = E[N]\mathrm{Var}[X] + (E[X])^2\,\mathrm{Var}[N],$$

which is the desired result. ∎
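The same simulation setup also checks Lemma 2 (again with arbitrarily chosen distributions):

    import numpy as np

    rng = np.random.default_rng(0)

    # N ~ Poisson(4) gives Var[N] = E[N] = 4; X ~ Exponential(mean 3)
    # gives Var[X] = 9. Lemma 2 predicts Var[Q] = 4*9 + 9*4 = 72.
    trials = 200_000
    N = rng.poisson(4.0, size=trials)
    Q = np.array([rng.exponential(3.0, n).sum() for n in N])

    print(Q.var())    # ~ 72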
6.3 Uniform Distribution of Arrivals in a Poisson Process
Lemma 3. Let X_i be a set of independent, identically distributed random variables that come from an exponential distribution with parameter λ. Consider a queue containing N jobs, where N is some positive integer, and where each job has length X_i. Then the breakpoints of the first N − 1 jobs in the queue will be uniformly distributed.

Proof: (Taken from Gallager's Discrete Stochastic Processes.³) First, note that the queue has a total length of $Q = \sum_{i=1}^{N} X_i$. Next, define S_i to be the location in the queue of the breakpoint between the ith and (i+1)th jobs. We know S_N = Q, since the end of the last job marks the end of the queue.
We will calculate the joint distribution of S_1, S_2, …, S_{N−1}, which is f(S | N−1) = f(S_1 = s_1, …, S_{N−1} = s_{N−1} | N−1 breakpoints in Q). Now, for a small δ, f(S | N−1)δ^{N−1} approximately equals the probability of no breakpoints in the intervals (0, s_1], (s_1 + δ, s_2], …, (s_{N−1} + δ, Q], and precisely one breakpoint in each of the intervals (s_i, s_i + δ], i = 1 to N − 1, conditional on the event that exactly N − 1 breakpoints occurred.

We first consider the unconditional probability f(s_1, …, s_{N−1})δ^{N−1}. Since the X_i's are exponential with parameter λ, the probability of no arrivals in one of the (s_i + δ, s_{i+1}] intervals equals exp[−λ(s_{i+1} − s_i − δ)]. Similarly, the probability of one of the arrivals falling in one of the (s_i, s_i + δ] intervals is λδ exp[−λδ]. Then, the unconditional probability is simply the product of all of the exp[−λ(s_{i+1} − s_i − δ)] and λδ exp[−λδ] terms. Now, there is one exponential term for each subinterval of (0, Q], so multiplying them together yields exp[−λQ]. Further, there are N − 1 (λδ) terms, so we have:

(143)
$$f(s_1, \ldots, s_{N-1})\,\delta^{N-1} = (\lambda\delta)^{N-1}\exp[-\lambda Q].$$
Now, using conditional probability, we know that:

(144)
$$f(s_1,\ldots,s_{N-1}\mid N-1 \text{ breakpoints in } Q)\,\delta^{N-1} = \frac{f(s_1,\ldots,s_{N-1})\,\delta^{N-1}}{P(N-1 \text{ breakpoints in } Q)}.$$

Since the X_i's are exponentially distributed, P(N − 1 breakpoints) is given by a Poisson distribution. (Effectively, P(N − 1 breakpoints) is the probability of N − 1 arrivals of a Poisson process with parameter λ in an interval of length Q.) Then we have:

(145)
$$f(s_1,\ldots,s_{N-1}\mid N-1 \text{ breakpoints in } Q)\,\delta^{N-1} = \frac{(\lambda\delta)^{N-1}\exp[-\lambda Q]}{\dfrac{(\lambda Q)^{N-1}}{(N-1)!}\exp[-\lambda Q]} = \frac{\delta^{N-1}(N-1)!}{Q^{N-1}}.$$
Dividing by δ^{N−1} and taking the limit as δ → 0, we find:

(146)
$$f(s_1,\ldots,s_{N-1}\mid N-1 \text{ breakpoints in } Q) = \frac{(N-1)!}{Q^{N-1}}, \qquad 0 < s_1 < \cdots < s_{N-1} < Q.$$

Then f(S | N − 1) has a uniform distribution, as desired. ∎
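A short simulation (parameters arbitrary) illustrates the lemma: scaled by the queue length Q, the N − 1 interior breakpoints behave like ordered uniform draws on (0, 1), whose kth order statistic has mean k/N:

    import numpy as np

    rng = np.random.default_rng(0)

    N, rate, trials = 5, 2.0, 100_000
    X = rng.exponential(1.0 / rate, size=(trials, N))  # job lengths
    Q = X.sum(axis=1)                                  # total queue length
    S = np.cumsum(X, axis=1)[:, :-1]                   # breakpoints S_1..S_{N-1}

    print((S / Q[:, None]).mean(axis=0))  # ~ [1/5, 2/5, 3/5, 4/5]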
6.4 An Upper Bound on the Variance of the Number of Jobs Processed Per Period, for "Nice Distributions" of the Number of Instructions Per Job
Lemma 4. Suppose that X_k, a nonnegative random variable for the distribution of the number of instructions per job at station k, satisfies the following condition. For all values of a constant t_0 > 0,

$$E[X_k - t_0 \mid X_k > t_0] \le E[X_k].$$

Then the following result holds:

$$\mathrm{Var}[N_k] \le E[q_k]\,\frac{L_k - 1}{L_k^2} + \frac{1}{L_k^2}\,\mathrm{Var}[q_k],$$

where N_k is the number of jobs contained in the first 1/L_k fraction of instructions in the queue at station k, and q_k is the total number of jobs in the queue.
Discussion. In words, the condition E[X_k − t_0 | X_k > t_0] ≤ E[X_k] means the following: Suppose we arrive at station k at a random time, and wait for the current job in the queue to finish. Further, suppose we know that station k has been working on the current job for at least t_0 time units. Then the condition says that the expectation of the time remaining until the job is finished is less than the unconditional expected time to process a job at station k.

Distributions that satisfy this property can be thought of as "nice" distributions. Most common distributions satisfy this property, including the deterministic, uniform, triangular, normal, and beta distributions. The exponential distribution satisfies this property with equality: by definition, the fact that we have been waiting t_0 for the completion of a job tells us nothing about when the job will be done.

An example distribution in which the property is not satisfied is the following: suppose that the time to complete a job is either two seconds or five hours. If we wait for more than two seconds, we expect to wait a long time before the current job is completed.
Proof. We consider two different nonnegative distributions: X_k, which satisfies the condition of the lemma, and X_k', which is an exponential distribution. Both distributions have the same mean, E[X_k]. The variance of X_k is not known; the variance of X_k' is (E[X_k])². However, an established result from queueing theory is that:

(147)
$$\frac{\mathrm{Var}[X_k]}{E[X_k]} \le \frac{\mathrm{Var}[X_k']}{E[X_k']},$$

since X_k satisfies the property that E[X_k − t_0 | X_k > t_0] ≤ E[X_k]. Since X_k and X_k' have the same mean, we have that Var[X_k] ≤ Var[X_k'].

Now, define N_k to be the number of jobs whose endpoints are in the first Q_k/L_k instructions of the work queue, given that the distribution of the number of instructions per job is X_k. Similarly, define N_k' to be the number of jobs whose endpoints are in the first Q_k/L_k instructions of the work queue, given that the distribution of the number of instructions per job is X_k'. Mathematically, we define N_k to be:

(148)
$$N_k = \max\left\{n : \sum_{i=1}^{n} X_{ik} \le \frac{Q_k}{L_k}\right\},$$

and the definition of N_k' is similar. We want to show that Var[N_k] ≤ Var[N_k']. We will use the Law of Total Variance to prove this result.

Recall this law states that Var[Y] = Var[E(Y|X)] + E[Var(Y|X)]. Here, let Y = N_k or N_k' as appropriate, and let X be the event that Q_k and q_k equal certain fixed values.

Var[E(Y|X)]: Given q_k, E[N_k] = E[N_k'] = q_k/L_k. Then,

(149)
$$\mathrm{Var}[E(N_k\mid X)] = \mathrm{Var}[E(N_k'\mid X)] = \mathrm{Var}(q_k)/L_k^2.$$
E[Var(Y|X)]: Using the definition of variance for discrete distributions, we have that:

(150)
$$\mathrm{Var}(N_k\mid X) = \sum_{N_k}\left(N_k - \frac{q_k}{L_k}\right)^2 p\big(N_k \mid q_k, L_k\big).$$

The expression for Var(N_k'|X) is similar.

Recall that the probability that N_k equals a particular value, n_k, is the probability that the sum of the instructions of the first n_k jobs in the queue is less than or equal to Q_k/L_k, and that the sum of the instructions of the first n_k + 1 jobs is greater than Q_k/L_k. Then we can rewrite the above equation as:

(151)
$$\mathrm{Var}(N_k\mid X) = \sum_{N_k}\left(N_k - \frac{q_k}{L_k}\right)^2 P\!\left(\sum_{i=1}^{N_k} X_{ik} \le \frac{Q_k}{L_k} < \sum_{i=1}^{N_k+1} X_{ik}\right).$$
If we take the expectation of this expression, we find that:

(152)
$$E[\mathrm{Var}(N_k\mid X)] = E\left[\sum_{N_k=0}^{q_k}\left(N_k - \frac{q_k}{L_k}\right)^2 P\!\left(\sum_{j=1}^{N_k} X_{jk} \le \frac{Q_k}{L_k} < \sum_{j=1}^{N_k+1} X_{jk}\right)\right].$$

The expression for E[Var(N_k'|X)] is similar. Then, the fact that Var[X_{jk}] ≤ Var[X'_{jk}] implies that:

(153)
$$E[\mathrm{Var}(N_k\mid X)] \le E\left[\sum_{N_k'=0}^{q_k}\left(N_k' - \frac{q_k}{L_k}\right)^2 P\!\left(\sum_{j=1}^{N_k'} X'_{jk} \le \frac{Q_k}{L_k} < \sum_{j=1}^{N_k'+1} X'_{jk}\right)\right] = E[\mathrm{Var}(N_k'\mid X)].$$

The inequality follows from the following argument: since Var[X_{jk}] ≤ Var[X'_{jk}], the overall probability that $\sum_j X_{jk}$ takes on values comparatively far from E[Q_k]/L_k is less than the probability that $\sum_j X'_{jk}$ takes on values comparatively far from E[Q_k]/L_k. The inequality follows.
Var(Y) = Var[E(Y|X)] + E[Var(Y|X)]: We have shown that Var[E(N_k|X)] = Var[E(N_k'|X)], and that E[Var(N_k|X)] ≤ E[Var(N_k'|X)]. Adding these two terms together, we find that:

$$\mathrm{Var}[N_k] = \mathrm{Var}[E(N_k\mid X)] + E[\mathrm{Var}(N_k\mid X)] \le \mathrm{Var}[E(N_k'\mid X)] + E[\mathrm{Var}(N_k'\mid X)] = \mathrm{Var}[N_k'].$$

Now, we found in Section 5.3 that

(155)
$$\mathrm{Var}[N_k'] = E[q_k]\,\frac{1}{L_k}\left(1 - \frac{1}{L_k}\right) + \frac{1}{L_k^2}\,\mathrm{Var}[q_k].$$

(Recall that the result follows from the fact that X_k' is an exponential distribution.) The result of the lemma immediately follows. ∎
6.5 Covariance of a Sum of Random Sums of Random Variables
Lemma 5. Let N_k be a set K of random variables, each with a finite expectation and a finite variance. Let X_{ik} be a set of independent, identically distributed random variables, independent from every N_k, that have a common mean E[X_k] and a common variance Var[X_k].

Define $T = \sum_{k\in K}\sum_{i=1}^{N_k} X_{ik}$. Then:

$$\mathrm{Var}[T] = \sum_{k\in K}\left(E[N_k]\,\mathrm{Var}[X_k] + (E[X_k])^2\,\mathrm{Var}[N_k]\right) + \sum_{k\in K}\sum_{\substack{l\in K\\ l\neq k}} E[X_k]E[X_l]\,\mathrm{Cov}[N_k, N_l].$$
Proof. We assume the following result: Var[Y] = Var[E(Y|X)] + E[Var(Y|X)]. (This result was proved in the development of Lemma 2.) Let N be the event {N_k = n_k, ∀k}. Using this result, we have that:

(156)
$$\mathrm{Var}[T] = \mathrm{Var}[E(T\mid N)] + E[\mathrm{Var}(T\mid N)].$$

Now, using the linearity of expectation,

(157)
$$E(T\mid N) = \sum_{k\in K} N_k\,E[X_k].$$

But then, noting that E[X_k] is a constant, and applying the bilinearity principle of covariance, we find that:

(158)
$$\mathrm{Var}[E(T\mid N)] = \mathrm{Var}\!\left[\sum_{k\in K} N_k\,E[X_k]\right] = \sum_{k\in K}(E[X_k])^2\,\mathrm{Var}[N_k] + \sum_{k\in K}\sum_{\substack{l\in K\\ l\neq k}} E[X_k]E[X_l]\,\mathrm{Cov}[N_k, N_l].$$
Next, recalling that the variance of a fixed sum of independent random variables is the sum of their variances, we have:

(159)
$$\mathrm{Var}(T\mid N) = \mathrm{Var}\!\left[\sum_{k\in K}\sum_{i=1}^{N_k} X_{ik}\right] = \sum_{k\in K}\sum_{i=1}^{N_k}\mathrm{Var}[X_k] = \sum_{k\in K} N_k\,\mathrm{Var}[X_k],$$

and, taking the expectation over N, we find:

(160)
$$E[\mathrm{Var}(T\mid N)] = E\left[\sum_{k\in K} N_k\,\mathrm{Var}[X_k]\right] = \sum_{k\in K} E[N_k]\,\mathrm{Var}[X_k].$$

We therefore have:
(161)
$$\begin{aligned}\mathrm{Var}[T] &= \mathrm{Var}[E(T\mid N)] + E[\mathrm{Var}(T\mid N)] \\ &= \sum_{k\in K}\left(E[N_k]\,\mathrm{Var}[X_k] + (E[X_k])^2\,\mathrm{Var}[N_k]\right) + \sum_{k\in K}\sum_{\substack{l\in K\\ l\neq k}} E[X_k]E[X_l]\,\mathrm{Cov}[N_k, N_l],\end{aligned}$$

which is the desired result. ∎
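A Monte Carlo check of Lemma 5, with a deliberately correlated pair of counts (all parameters invented for the example):

    import numpy as np

    rng = np.random.default_rng(0)

    # N_1 and N_2 share a Poisson component, so Cov[N_1, N_2] = 2.
    trials = 200_000
    shared = rng.poisson(2.0, size=trials)
    N1 = shared + rng.poisson(3.0, size=trials)   # E = 5, Var = 5
    N2 = shared + rng.poisson(1.0, size=trials)   # E = 3, Var = 3
    T = np.array([rng.exponential(2.0, a).sum() + rng.exponential(5.0, b).sum()
                  for a, b in zip(N1, N2)])       # E[X_1] = 2, E[X_2] = 5

    # Lemma 5: Var[T] = (5*4 + 4*5) + (3*25 + 25*3) + 2*2*5*2 = 230.
    print(T.var())   # ~ 230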
References
Bertsimas, D., and D. Gamarnik. "Asymptotically Optimal Algorithms for Job Shop Scheduling and Packet Routing." MIT working paper, 1998.
Conway, Adrian E., and Nicolas D. Georganas. Queuing Networks -- Exact Computational Algorithms: A Unified Theory Based on Decomposition and Aggregation. Cambridge: MIT Press, 1989.
Conway, R.W., W.L. Maxwell, and W.W. Miller. Theory of Scheduling. Reading: Addison-Wesley, 1967.
Gallager, Robert G. Discrete Stochastic Processes. Boston: Kluwer Academic Publishers, 1996.
Graves, Stephen C. "A Tactical Planning Model for a Job Shop." Operations Research, Vol. 34, No. 4, 522-533. 1986.
Hall, L. Approximation Algorithms for NP-Hard Problems, chapter 1. (D. Hochbaum, Ed.) PWS Publishing, 1997.
Jackson, J.R. "Networks of Waiting Lines." Operations Research, Vol. 5, 518-521. 1957.
Jackson, J.R. "Jobshop-Like Queuing Systems." Management Science, Vol. 10, 131-142. 1963.
Karger, D., C. Stein, and J. Wein. "Scheduling Algorithms." MIT Working Paper. 1997.
Rice, John. Mathematical Statistics and Data Analysis, Second Edition. Belmont: Duxbury Press, 1995.
Notes
¹ Rice, John. Mathematical Statistics and Data Analysis, Second Edition. (Belmont: Duxbury Press, 1995.) pp. 137-138.
² Rice, pp. 138-139.
³ Gallager, Robert G. Discrete Stochastic Processes. (Boston: Kluwer Academic Publishers, 1996.) p. 45.