working paper
department of economics

LINEAR DECISION AND CONTROL

by

C. Duncan MacRae

Number 40                                May 1969

massachusetts institute of technology
50 memorial drive
Cambridge, mass. 02139
LINEAR DECISION AND CONTROL

by

C. Duncan MacRae

Number 40                                May 1969

This research was supported in part by funds from the Office of Economic Research, Economic Development Administration, U. S. Department of Commerce. The views expressed in this paper are the author's sole responsibility and do not reflect those of the Department of Economics, the Massachusetts Institute of Technology, or the U. S. Department of Commerce.
LINEAR DECISION AND CONTROL*

By C. Duncan MacRae

The linear decision problem[3] is to choose at the end of period k-1 a vector of instrument, or control, variables, u_{k-1}, given observations on u_{k-2}, u_{k-3}, ..., and on y_{k-1}, y_{k-2}, ..., for k=1,2,...,N, where y_k is a vector of partly controlled, or endogenous, variables, so as to minimize the expected value of J, where
(1)   J = t'u + s'y + ½[u'Ru + y'Qy + u'Py + y'P'u]

        = Σ_{k=1}^N t_k'u_{k-1} + Σ_{k=1}^N s_k'y_k

          + ½ Σ_{k=1}^N Σ_{k'=1}^N u_{k-1}'R_{k,k'}u_{k'-1}

          + ½ Σ_{k=1}^N Σ_{k'=1}^N y_k'Q_{k,k'}y_{k'}

          + ½ Σ_{k=1}^N Σ_{k'=1}^N u_{k-1}'P_{k,k'}y_{k'}

          + ½ Σ_{k=1}^N Σ_{k'=1}^N y_k'P_{k',k}'u_{k'-1}
*This research was supported in part by funds from the Office of Economic Research, Economic Development Administration, U. S. Department of Commerce.
subject to

(2)   y = Hu + h,   or   y_k = Σ_{j=0}^{k-1} H_{k,j+1}u_j + h_k,   k=1,2,...,N,

where s, t, R, Q, P, and H are given, R and Q are symmetric, and h is random with given mean and variance. The purpose of this note is to transform the linear decision problem into a stochastic linear control problem and, then, to derive the optimal decision rules with dynamic programming. The advantages of the transformation are in problem formulation, solution, analysis, and generalization.
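Because certainty equivalence applies, the stacked problem (1)-(2) can also be solved directly as a deterministic quadratic program in u alone, replacing h by its mean. A minimal numpy sketch of this direct solution, with all problem data invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
m, p = 4, 3                       # stacked control and endogenous dimensions

# Invented problem data: R symmetric positive definite, Q symmetric PSD.
R = 2.0 * np.eye(m)
Q = np.diag([1.0, 2.0, 0.0])
P = 0.05 * rng.standard_normal((m, p))
H = rng.standard_normal((p, m))
t = rng.standard_normal(m)
s = rng.standard_normal(p)
h_bar = rng.standard_normal(p)    # mean of the random vector h

# Substituting y = H u + h_bar into (1) and setting the u-gradient to zero:
#   [R + H'QH + PH + H'P'] u = -[t + H's + (H'Q + P) h_bar]
M = R + H.T @ Q @ H + P @ H + H.T @ P.T
u_star = -np.linalg.solve(M, t + H.T @ s + (H.T @ Q + P) @ h_bar)

def expected_J(u):
    """E[J] with y eliminated through (2), up to a constant from var(h)."""
    y = H @ u + h_bar
    return t @ u + s @ y + 0.5 * (u @ R @ u + y @ Q @ y + 2 * u @ P @ y)

# The first-order condition holds at u_star.
grad = M @ u_star + t + H.T @ s + (H.T @ Q + P) @ h_bar
assert np.linalg.norm(grad) < 1e-8
```

Note that the matrix inverted here has the full stacked order of u; a point of the transformation developed below is that the recursive solution inverts only matrices of control order, once per period.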
Consider the reduced form representation of a stochastic dynamic linear system:

      y_k = Σ_{j=1}^n [A_{j,k-1}y_{k-j} + B_{j,k-1}u_{k-j} + C_{j,k-1}v_{k-j}] + D_{k-1}e_k,   k=1,2,...,

where v_k is a vector of exogenous variables, e_k is a vector of random variables with mean zero and variance Ω_{k-1}, A_{j,k-1}, B_{j,k-1}, C_{j,k-1}, D_{k-1}, and Ω_{k-1} are given for j=1,2,...,n and k=1,2,..., and the predetermined variables as of period k are associated with periods k-j for j=1,2,...,n.
A state space representation of the system is

(3)   x_{k+1} = A_k x_k + B_k u_k + C_k v_{k+1} + D_k e_{k+1},   k=0,1,...,

where

      x_k = [y_k', y_{k-1}', ..., y_{k-λ+2}', y_{k-λ+1}', u_{k-1}', u_{k-2}', ..., u_{k-λ+2}', u_{k-λ+1}', v_k', v_{k-1}', ..., v_{k-λ+2}', v_{k-λ+1}']'

is the state of the system, λ = max{n,N}, and A_k, B_k, and C_k are block companion matrices built from the blocks A_{1,k} A_{2,k} ... A_{λ-1,k} A_{λ,k}, B_{1,k} B_{2,k} B_{3,k} ... B_{λ-1,k} B_{λ,k}, and C_{1,k} C_{2,k} ... C_{λ-1,k} C_{λ,k}, together with identity and zero blocks that shift the lagged variables forward one period. A_{j,k}, B_{j,k}, and C_{j,k} are zero for j > n, and x_0 is given.
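As a concrete check on the companion construction, take a hypothetical scalar system with n = 2 (so λ = n here; taking λ = max{n,N} merely pads with zero blocks). One step of (3) must reproduce the reduced form; all coefficients are invented:

```python
import numpy as np

# Invented scalar coefficients (n = 2 lags), constant over time.
a1, a2 = 0.6, -0.2     # A_1, A_2
b1, b2 = 1.0, 0.3      # B_1, B_2
c1, c2 = 0.5, 0.1      # C_1, C_2
d = 1.0                # D

# State x_k = [y_k, y_{k-1}, u_{k-1}, v_k, v_{k-1}]'.
A = np.array([[a1, a2, b2, c1, c2],
              [1,  0,  0,  0,  0],
              [0,  0,  0,  0,  0],
              [0,  0,  0,  0,  0],
              [0,  0,  0,  1,  0]], dtype=float)
B = np.array([b1, 0.0, 1.0, 0.0, 0.0])   # u_k enters y_{k+1} and is stored
C = np.array([0.0, 0.0, 0.0, 1.0, 0.0])  # v_{k+1} is appended to the state
D = np.array([d, 0.0, 0.0, 0.0, 0.0])

# One step of (3) versus the reduced form directly.
y_k, y_km1, u_km1, v_k, v_km1 = 0.8, -0.4, 0.2, 1.0, -0.5
u_k, v_kp1, e_kp1 = 0.7, 0.3, 0.05

x_k = np.array([y_k, y_km1, u_km1, v_k, v_km1])
x_kp1 = A @ x_k + B * u_k + C * v_kp1 + D * e_kp1

y_kp1_reduced = (a1*y_k + a2*y_km1 + b1*u_k + b2*u_km1
                 + c1*v_k + c2*v_km1 + d*e_kp1)
assert np.isclose(x_kp1[0], y_kp1_reduced)
assert np.isclose(x_kp1[2], u_k) and np.isclose(x_kp1[3], v_kp1)
```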
The final form representation of the system[3,4] then is given by (2), where y_k is the first block of x_k,

      H_{k,k} = B_{k-1},   H_{k,j} = [Π_{i=j}^{k-1} A_i]B_{j-1}   for j=1,2,...,k-1,

with the first block row of each product taken, and h_k is the corresponding function of x_0, v_1,...,v_k, and e_1,...,e_k.
The linear decision problem transformed into a linear control problem, therefore, is to choose u_{k-1}, given x_{k-1}, for k=1,2,...,N, so as to minimize the expected value of J subject to (3), where

      J = Σ_{k=1}^N [t_k'u_{k-1} + s_k'x_k + ½u_{k-1}'R_k u_{k-1} + ½x_k'Q_k x_k + ½u_{k-1}'P_k x_k + ½x_k'P_k'u_{k-1}],

s_k = [s_k' 0 ... 0]' conforms with the state, and Q_k, R_k, and P_k are block matrices whose blocks are the Q_{k,k'}, R_{k,k'}, and P_{k,k'} of (1) (for example, the first block row of Q_k is [Q_{k,k} Q_{k,k-1} ... Q_{k,k-λ+2} Q_{k,k-λ+1}]), arranged so that the cross-period terms of (1) become within-period terms in the state. Q_k is symmetric positive semi-definite, R_k is symmetric positive definite, and Q_{k,k'}, R_{k,k'}, and P_{k,k'} are zero for k and k' < 1.
Applying dynamic programming[1], let

(4)   γ_{N-1} = min E{t_N'u_{N-1} + s_N'x_N + ½u_{N-1}'R_N u_{N-1} + ½x_N'Q_N x_N | x_{N-1}},

the minimum being taken with respect to u_{N-1}; since the state x_k contains u_{k-1}, the cross terms in P_k may be absorbed into Q_k, and they are omitted in what follows. Substituting (3) for x_N,

(5)   γ_{N-1} = min { ½u_{N-1}'[B_{N-1}'Q_N B_{N-1} + R_N]u_{N-1}
                      + u_{N-1}'[B_{N-1}'Q_N(A_{N-1}x_{N-1} + C_{N-1}v_N) + B_{N-1}'s_N + t_N]
                      + ½(A_{N-1}x_{N-1} + C_{N-1}v_N)'Q_N(A_{N-1}x_{N-1} + C_{N-1}v_N)
                      + s_N'(A_{N-1}x_{N-1} + C_{N-1}v_N)
                      + ½tr{Ω_{N-1}D_{N-1}'Q_N D_{N-1}} }.

Minimizing γ_{N-1} with respect to u_{N-1}, given x_{N-1}, we obtain

(6)   u*_{N-1} = -[B_{N-1}'Q_N B_{N-1} + R_N]^{-1}[B_{N-1}'Q_N(A_{N-1}x_{N-1} + C_{N-1}v_N) + B_{N-1}'s_N + t_N],

where u*_{N-1} is the optimal u_{N-1}. Then substituting (6) into (5) and rearranging terms we get

(7)   K_{k-1} = Q_{k-1} + A_{k-1}'K_k A_{k-1}
                - A_{k-1}'K_k B_{k-1}[B_{k-1}'K_k B_{k-1} + R_k]^{-1}B_{k-1}'K_k A_{k-1}

and

(8)   g_{k-1} = s_{k-1} + A_{k-1}'[K_k C_{k-1}v_k + g_k]
                - A_{k-1}'K_k B_{k-1}[B_{k-1}'K_k B_{k-1} + R_k]^{-1}[B_{k-1}'K_k C_{k-1}v_k + B_{k-1}'g_k + t_k]

for k=N, with

(9)   K_N = Q_N   and   g_N = s_N,

and γ*_{N-1} = ½x_{N-1}'K_{N-1}x_{N-1} + g_{N-1}'x_{N-1} + α_{N-1} is the optimal value of γ_{N-1}. Then

      γ_{k-2} = min E{t_{k-1}'u_{k-2} + s_{k-1}'x_{k-1} + ½u_{k-2}'R_{k-1}u_{k-2} + ½x_{k-1}'Q_{k-1}x_{k-1} + γ*_{k-1} | x_{k-2}}

for k=N, where

(10)  α_{k-1} = Σ_{j=k}^N { ½[C_{j-1}v_j]'[K_j - K_j B_{j-1}[B_{j-1}'K_j B_{j-1} + R_j]^{-1}B_{j-1}'K_j][C_{j-1}v_j]
                            + [C_{j-1}v_j]'[g_j - K_j B_{j-1}[B_{j-1}'K_j B_{j-1} + R_j]^{-1}[B_{j-1}'g_j + t_j]]
                            - ½[B_{j-1}'g_j + t_j]'[B_{j-1}'K_j B_{j-1} + R_j]^{-1}[B_{j-1}'g_j + t_j]
                            + ½tr{Ω_{j-1}D_{j-1}'K_j D_{j-1}} },

and, with the exception of α_{k-1}, γ_{k-2} is of the same form as (4).
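The backward recursions (7)-(9) are straightforward to implement. The sketch below uses invented, time-invariant A_k, B_k, Q_k, R_k and period-varying s_k, t_k, sets the exogenous v_k to zero (so the v terms of (8) drop out), and checks that each K_{k-1} stays symmetric positive semi-definite:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_x, n_u = 5, 3, 2

# Invented time-invariant data: Q PSD, R PD; s, t vary by period.
A = 0.9 * np.eye(n_x) + 0.05 * rng.standard_normal((n_x, n_x))
B = rng.standard_normal((n_x, n_u))
Q = np.eye(n_x)
R = np.eye(n_u)
s = rng.standard_normal((N + 1, n_x))
t = rng.standard_normal((N + 1, n_u))

K = {N: Q}        # (9): K_N = Q_N
g = {N: s[N]}     # (9): g_N = s_N
for k in range(N, 1, -1):
    M = B.T @ K[k] @ B + R              # only an n_u x n_u system is solved
    K[k-1] = (Q + A.T @ K[k] @ A
              - A.T @ K[k] @ B @ np.linalg.solve(M, B.T @ K[k] @ A))      # (7)
    g[k-1] = (s[k-1] + A.T @ g[k]
              - A.T @ K[k] @ B @ np.linalg.solve(M, B.T @ g[k] + t[k]))   # (8), v = 0

for k in range(1, N + 1):
    assert np.allclose(K[k], K[k].T)
    assert np.all(np.linalg.eigvalsh(K[k]) >= -1e-9)
```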
Hence,

(11)  u*_{k-1} = -[B_{k-1}'K_k B_{k-1} + R_k]^{-1}[B_{k-1}'K_k(A_{k-1}x_{k-1} + C_{k-1}v_k) + B_{k-1}'g_k + t_k]

and

(12)  γ*_{k-1} = ½x_{k-1}'K_{k-1}x_{k-1} + g_{k-1}'x_{k-1} + α_{k-1}

for k=N,N-1,...,1, where K_{k-1}, g_{k-1}, K_N and g_N, and α_{k-1} are determined by (7), (8), (9), and (10), respectively, the Riccati matrix K_k is symmetric positive semi-definite, γ*_{k-1} is the optimal value of γ_{k-1}, and the optimal evolution of the state of the system is described by

      x_{k+1} = [A_k - B_k[B_k'K_{k+1}B_k + R_{k+1}]^{-1}B_k'K_{k+1}A_k]x_k
                - B_k[B_k'K_{k+1}B_k + R_{k+1}]^{-1}[B_k'K_{k+1}C_k v_{k+1} + B_k'g_{k+1} + t_{k+1}]
                + C_k v_{k+1} + D_k e_{k+1},   k=0,1,...,N-1.
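Continuing the same invented setup (v_k = 0, time-invariant data), the rule (11) can be applied along a trajectory. With the disturbances e_k also set to zero, no perturbation of the rule can lower the realized value of J, which gives a cheap numerical check of optimality:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_x, n_u = 5, 3, 2
A = 0.9 * np.eye(n_x) + 0.05 * rng.standard_normal((n_x, n_x))
B = rng.standard_normal((n_x, n_u))
Q = np.eye(n_x)
R = np.eye(n_u)
s = rng.standard_normal((N + 1, n_x))
t = rng.standard_normal((N + 1, n_u))

# Backward pass, (7)-(9) with v_k = 0.
K, g = {N: Q}, {N: s[N]}
for k in range(N, 1, -1):
    M = B.T @ K[k] @ B + R
    K[k-1] = Q + A.T @ K[k] @ A - A.T @ K[k] @ B @ np.linalg.solve(M, B.T @ K[k] @ A)
    g[k-1] = s[k-1] + A.T @ g[k] - A.T @ K[k] @ B @ np.linalg.solve(M, B.T @ g[k] + t[k])

def rollout(x0, perturb=None):
    """Apply (11) for k = 1..N with e_k = 0; return the realized value of J."""
    x, J = x0.copy(), 0.0
    for k in range(1, N + 1):
        M = B.T @ K[k] @ B + R
        u = -np.linalg.solve(M, B.T @ K[k] @ A @ x + B.T @ g[k] + t[k])   # (11)
        if perturb is not None:
            u = u + perturb[k-1]
        x = A @ x + B @ u                      # optimal evolution of the state
        J += t[k] @ u + s[k] @ x + 0.5 * (u @ R @ u + x @ Q @ x)
    return J

x0 = np.array([1.0, -0.5, 0.25])
J_opt = rollout(x0)
for _ in range(10):
    J_pert = rollout(x0, perturb=0.1 * rng.standard_normal((N, n_u)))
    assert J_opt <= J_pert + 1e-12
```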
The structure of the optimal strategy may be seen immediately from (11). Certainty equivalence holds; hence, D_k and Ω_k play no rôle in the strategy, although the expected gain from minimizing expected utility[3], or the expected value of perfect information on e_k[2], is ½tr{Ω_{k-1}D_{k-1}'K_k D_{k-1}}. There is feedback on x_{k-1} and v_k, which are directly observable, rather than on past errors[3,5]. There is feedforward on future s_k, t_k, C_k, and v_k through K_k and g_k. Computation of the strategy is done recursively through (7) and (8) and involves inversion of matrices only of order equal to the number of control variables. From (12) we see that the optimal expected value of J is γ*_0 and that the imputed price of x_{k-1}, for k=N,N-1,...,1, is K_{k-1}x_{k-1} + g_{k-1}.
The problem may be generalized to include objective functions of the form

(13)  J = t'[u-ū] + s'[y-ȳ] + ½[[u-ū]'R[u-ū] + [y-ȳ]'Q[y-ȳ] + [u-ū]'P[y-ȳ] + [y-ȳ]'P'[u-ū]],

where ū and ȳ are the desired levels of u and y, respectively, since (13) is of the same form as (1) with

      t̄ = t - Rū - Pȳ   and   s̄ = s - Qȳ - P'ū

and the addition of a constant term. Moreover, since the state space representation, in contrast to the final form representation, corresponds directly to the reduced form and the state of the system is observable, the problem may also be generalized to include unknown parameters or errors in variables[1].
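The reduction of (13) to the form of (1) is easy to verify numerically: with t̄ and s̄ as above, the two objectives differ by a constant independent of u and y. All data below are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
m, p = 3, 2
R, Q = np.eye(m), np.eye(p)
P = 0.1 * rng.standard_normal((m, p))
t, s = rng.standard_normal(m), rng.standard_normal(p)
u_bar, y_bar = rng.standard_normal(m), rng.standard_normal(p)  # desired levels

def J_desired(u, y):
    """Objective (13), measured around the desired levels."""
    du, dy = u - u_bar, y - y_bar
    return t @ du + s @ dy + 0.5 * (du @ R @ du + dy @ Q @ dy + 2 * du @ P @ dy)

t_new = t - R @ u_bar - P @ y_bar    # transformed linear weights
s_new = s - Q @ y_bar - P.T @ u_bar

def J_standard(u, y):
    """Objective (1) with the transformed weights."""
    return t_new @ u + s_new @ y + 0.5 * (u @ R @ u + y @ Q @ y + 2 * u @ P @ y)

# The difference is the same constant at any (u, y).
diffs = [J_desired(u, y) - J_standard(u, y)
         for u, y in [(rng.standard_normal(m), rng.standard_normal(p))
                      for _ in range(5)]]
assert np.allclose(diffs, diffs[0])
```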
Massachusetts Institute of Technology
REFERENCES

[1] Aoki, M.: Optimization of Stochastic Systems. New York, 1967.

[2] Raiffa, H.: Decision Analysis. Boston, 1968.

[3] Theil, H.: Optimal Decision Rules for Government and Industry. Chicago, 1964.

[4] Theil, H. and J.C.G. Boot: "The Final Form of Econometric Equation Systems," Review of the International Statistical Institute, 30 (1962), 136-52.

[5] Van de Panne, C.: "Optimal Strategy Decisions for Dynamic Linear Decision Rules in Feedback Form," Econometrica, 33 (1965), 307-20.