Systems & Control Letters 4 (1984) 317-324
North-Holland, September 1984

The Lanczos-Arnoldi algorithm and controllability

D.L. BOLEY *
Computer Science Department, University of Minnesota, 207 Church St. SE, Minneapolis, MN 55455, USA

G.H. GOLUB **
Computer Science Department, Stanford University, Stanford, CA 94305, USA

Received 31 January 1984
Revised 14 May 1984
We present methods for computing the controllable space of a linear dynamic system. The methods are based on orthogonalizing Krylov sequences and can use much less storage than other methods with little or no loss of numerical stability.

Keywords: Controllability, Numerical methods, Lanczos algorithm, Large scale systems, Linear systems.
1. Introduction
A classical problem in linear control theory is to compute the controllable space of a linear system

    ẋ = Ax + Bu,    (1)

where A (n × n), B (n × p) are matrices and x, u are vectors of states and inputs, respectively [5]. The most popular method at present to compute this space is the so-called Staircase algorithm [1-4]. This method essentially consists of applying a series of orthogonal similarity transformations to (1) to move to a new basis in which the controllable states may be read off by inspection. By using orthogonal transformations, it is clear that this scheme is backward stable in the sense that the system that is computed is similar to one very close to the original system. Unfortunately, this does not imply that the method is forward stable: even if we perturb the system (and therefore also the controllable space) only slightly, the result may be a large change in the dimension of the computed controllable space [3]. In addition, the bound on the size of the perturbation needed to reduce the size of the controllable space, which one may obtain from the Staircase algorithm, may be very rough [3].
For large systems, most present methods need enough storage to store at least the matrix A, which could be potentially very large. In this paper, we propose two schemes to compute the controllable space, based on the Lanczos algorithm [6,7,14,15], which can obtain mathematically equivalent results with much less storage.
2. Mathematical background
Given a linear dynamical system (1), the problem we are addressing is to compute the controllable subspace S_c of (1). It is well known that S_c may be characterized in various equivalent ways, which we summarize in:

Theorem 1 [8,1,9]. The spaces defined by the following four characterizations are the same (where the quantities are defined with respect to equation (1)):

    S_c = the controllable subspace of (1).    (2)

    S_c = smallest invariant subspace of A containing the vectors (columns) of B.    (3)

    S_c = colsp(M_c) with M_c = [B, AB, A^2 B, ..., A^{n-1} B] (controllability matrix).    (4)

* The work of this author was partially supported by the Centre for Mathematical Analysis at the Australian National University during the author's visit there. It was also supported by the U.S. National Science Foundation under grant ECS-8204468.
** The work of this author was in part supported by NSF grant MCS-78-11985.

0167-6911/84/$3.00 © 1984, Elsevier Science Publishers B.V. (North-Holland)
    S_c = {x_1: there exists an input u(t), 0 ≤ t ≤ t_1, such that, starting from x = 0 at t = 0, one can reach the state x_1 at time t = t_1}.    (5)

Proof. Omitted; see the above references. □

In the sequel we will use formulations (3), (4), thereby reducing this problem to one in linear algebra. Our schemes will be expressed in this context.
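As a concrete, hedged illustration of characterization (4) (our own numpy sketch, not code from the paper), one can form M_c explicitly and take its rank; the next paragraph explains why this direct approach is numerically unreliable in general:

```python
import numpy as np

def controllability_matrix(A, B):
    """Form M_c = [B, AB, A^2 B, ..., A^(n-1) B] of characterization (4)."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Toy single-input system: state 3 is decoupled from the input, so dim S_c = 2.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
B = np.array([[0.0], [1.0], [0.0]])

Mc = controllability_matrix(A, B)
print(np.linalg.matrix_rank(Mc))   # dim S_c = 2
```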
We make several remarks regarding methods for Theorem 1. It is well known among researchers in the numerical aspects of this problem that computing S_c using (4) directly can lead to instability, in that the computed matrix M_c may have a rank that is numerically indeterminate, depending more on the choice of zero tolerance than on the parameters in the problem [3]! A better scheme is to apply a series of orthogonal changes of basis to convert the system to one where the space S_c may be read off by inspection. This scheme is the Staircase algorithm, described in detail in [10,2,4]. Though this scheme has been used very successfully, it has been found to suffer from certain difficulties. In order to carry out the algorithm, it is necessary to store the matrix A as a full matrix, even if it is large and sparse, so that for large problems a large amount of storage may be required. In addition, it has been found in practice that the estimates one may obtain from this algorithm of the sensitivity of the results to perturbations in the coefficients can be very pessimistic and of little use [3]. We will describe several schemes which attempt to avoid one or both of these difficulties.
3. Method I - Arnoldi
We first describe the general procedure, then show how it applies to computing S_c. The method was originally proposed [14] to solve the matrix eigenvalue problem, but has been found to be of use in many different areas (cf. [16]). We first describe the simple case in which we start with a single vector, corresponding to a single-input system:

    ẋ = Ax + bu.

Simple description: We start with the matrix A and a vector q_1 with ||q_1||_2 = 1. Aim: fill in the upper Hessenberg matrix H and orthonormal columns Q = [q_1, q_2, ..., q_k] satisfying the following equations:

    AQ = QH,    (6a)
    Q^T A Q = H.    (6b)

In brief we accomplish this by using alternately equations (6b) and (6a) to fill in Q column by column, H element by element. The resulting procedure is given in Appendix A.
Remark. Mathematically, the effect of the algorithm ARNOLDI-1 is the same as computing the Krylov sequence q_1, Aq_1, A^2 q_1, ... and orthogonalizing each vector against the previous ones (using e.g. Gram-Schmidt). We express this in the following theorem.

Theorem 2. If the z_1, ..., z_{s-1} in ARNOLDI-1 are all nonzero, then the spaces

    colsp[q_1, q_2, ..., q_s] = colsp[q_1, Aq_1, A^2 q_1, ..., A^{s-1} q_1]    (7)

have dimension s, where q_1, q_2, ... are the vectors from ARNOLDI-1.

Proof. By induction on s and equation (6). See [6], chapter 6, sec. 26-31. □

Based on the remark above we may write the procedure as given in Appendix B.
When using this procedure ARNOLDI-2, it has been found that numerically one loses orthogonality in the q_1, ..., q_k. Simon [13] has done a detailed study of the problem and proposed several solutions. We have adopted the simplest one, namely complete reorthogonalization:

4.1. Orthogonalize z_k against q_1, ..., q_k;

It is clear from (4) and Theorem 2 that to use the Arnoldi procedure to compute S_c, we should initialize q_1 = b/||b||_2. The only question remaining is when to stop. In this regard we note that if we were to use (4) to obtain S_c, we would compute A^k q_1, k = 1, 2, ..., and continue until at some step k the vector A^k q_1 contributes no new dimension to colsp[q_1, ..., A^{k-1} q_1].
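The single-input loop just described (start from q_1 = b/||b||_2, orthogonalize each new A q_k against all previous vectors with complete reorthogonalization, stop when z_k vanishes) can be sketched in numpy. This is our own illustration of the idea, not the authors' code; the tolerance `tol` stands in for the zero test:

```python
import numpy as np

def arnoldi_controllable(A, b, tol=1e-10):
    """Orthonormal basis for S_c: Arnoldi started from q1 = b/||b||_2,
    stopping when z_k contributes no new direction (||z_k|| ~ 0)."""
    n = len(b)
    Q = [b / np.linalg.norm(b)]
    for _ in range(n):
        w = A @ Q[-1]                     # the only use of A (line 3)
        for q in Q:                       # Gram-Schmidt vs. all previous q_i
            w = w - q * (q @ w)
        for q in Q:                       # complete reorthogonalization (step 4.1)
            w = w - q * (q @ w)
        if np.linalg.norm(w) <= tol:      # stopping criterion: z_k = 0
            break
        Q.append(w / np.linalg.norm(w))
    return np.column_stack(Q)

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
b = np.array([0.0, 1.0, 0.0])
Qc = arnoldi_controllable(A, b)
print(Qc.shape[1])                        # dim S_c = 2
```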
In exactly the same way we continue algorithm ARNOLDI-1,2 until q_{k+1} contributes no extra dimension, i.e. q_{k+1} = z_k = 0. By the Cayley-Hamilton Theorem this is guaranteed to occur for some k ≤ n.

To be precise, we examine the situation when we first encounter z_k = 0. Since ||z_k|| = 0 when dim S_c = k, this implies h_{k+1,k} = 0 and hence

    AQ = QH,    (8a)
    Q^T A Q = H,    (8b)

where H is upper Hessenberg. So we have found an orthonormal basis {q_1, q_2, ..., q_k} for S_c.

We have the following close relationship with the Staircase algorithm: Let

    Q = [Q_1 | Q_2] = [q_1, ..., q_k | Q_2]

be a square orthogonal matrix. Then we have AQ_1 = Q_1 H_1 for some H_1, and

    AQ = A[Q_1 | Q_2] = [Q_1 | Q_2] [ H_1  Z ]
                                    [  0   H_2 ],

i.e.

    Q^T A Q = [ H_1  Z ]
              [  0   H_2 ],

where H_1 is upper Hessenberg. This is exactly the result obtained using the Staircase algorithm, so all the estimates obtained from the Staircase algorithm also apply here.

However, we note from the appendix for ARNOLDI-1,2 that the matrix A is used only to form the matrix-vector product in line 3, so in the case A is large sparse or has special structure, we may save considerable storage by storing A in some collapsed form. We will discuss this in more detail for the block generalizations of this algorithm below.

4. Multiple inputs

For the multiple-input case, we may generalize the procedure using

    AQ = QH,    (9a)
    Q^T A Q = H,    (9b)

where H is block upper Hessenberg and

    Q = [Q_1, Q_2, ..., Q_k]

is a set of orthonormal columns with

    (no. of cols. in Q_i) ≤ (no. of cols. in Q_{i-1})  for i = 2, ..., k.

The columns Q_1 are obtained by orthonormalizing the columns of B.

The algorithm that results will be a close extension of the algorithm ARNOLDI-1,2, where we use blocks Q_1, ..., Q_k, W_k, Z_k in place of the single vectors q_1, ..., q_k, w_k, z_k; except that lines 7, 8 (ARNOLDI-1) and line 5 (ARNOLDI-2) need some attention, especially if Z_k fails to have full column rank. The Gram-Schmidt reduction in line 4 (ARNOLDI-2) is carried out separately for each column of W_k. For lines 7, 8 (ARNOLDI-1), line 5 (ARNOLDI-2) we discuss two cases:

Case I: Z_k has full column rank. We can write

    Z_k = Q_{k+1} H_{k+1,k},

where H_{k+1,k} is upper triangular and Q_{k+1} is a matrix of orthonormal columns with the same shape as Z_k. The matrices Q_{k+1}, H_{k+1,k} can be computed using the QR decomposition (see e.g. [12]), or using modified Gram-Schmidt reduction (as we have done).

Case II: Z_k does not have full column rank. We write

    Z_k E = [Q_{k+1}, Q̃] [ H_{k+1,k} ]
                          [    0      ],    (10)

where H_{k+1,k} is square and [Q_{k+1}, Q̃] has the same shape as Z_k and has orthonormal columns. The matrix E is a permutation matrix which serves to order the columns of Z_k so that the first rank(Z_k) columns are linearly independent.

A decomposition of form (10) is always possible, using, for example, a QR decomposition with pivoting (cf. [17]). If we relax the condition and
allow E to be any orthogonal matrix, we can even compute it with the Singular Value Decomposition (S.V.D.) [17]. However, to conserve storage, we apply modified Gram-Schmidt reduction to each column of Z_k in turn, and delete those columns that are reduced to zero. In addition we discard the coefficients H, accumulating only the vectors forming the basis for S_c.

By using this simple scheme to reorder the columns, we do sacrifice the robustness present in using QR with pivoting or the S.V.D. [17]. However, since we obtain Z_k by using a Gram-Schmidt orthogonalization of W_k against Q_1, ..., Q_k without any reordering, there is little loss of stability in continuing to use Gram-Schmidt to orthogonalize Z_k. This is what we actually observed in our numerical experiments.
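The modified Gram-Schmidt reduction with column deletion just described can be sketched in numpy (our own illustration, not the authors' code; `tol` stands in for the zero test, and the coefficients are discarded as in the text):

```python
import numpy as np

def mgs_reduce(Z, tol=1e-10):
    """Modified Gram-Schmidt over the columns of Z_k in turn, deleting
    columns that reduce to (numerically) zero; only the orthonormal
    basis vectors are kept, the coefficients H are discarded."""
    kept = []
    for j in range(Z.shape[1]):
        v = Z[:, j].copy()
        for q in kept:
            v -= q * (q @ v)
        nv = np.linalg.norm(v)
        if nv > tol:                    # new direction found
            kept.append(v / nv)
        # else: column is dependent on earlier ones and is dropped
    return np.column_stack(kept) if kept else np.zeros((Z.shape[0], 0))

Z = np.array([[1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0]])        # column 2 = 2 * column 1
print(mgs_reduce(Z).shape[1])          # rank(Z_k) = 2 columns survive
```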
If at some point all the columns are deleted, we have Z_k = 0 and the procedure terminates. This condition is analogous to the stopping criterion z_k = 0 used in ARNOLDI-1,2.

We note that, just as in the single-input case,

    colsp[Q_1, ..., Q_k] = colsp[Q_1, AQ_1, ..., A^{n-1} Q_1].

Hence if Z_k = 0 for the value of k such that Z_1 ≠ 0, ..., Z_{k-1} ≠ 0 then, in a manner similar to the single-input case,

    colsp[Q_1, ..., Q_k] = S_c.

The detailed procedure for the multiple-input case (BLOCK-ARNOLDI) can be found in Appendix C. The effect of lines 9-17 of BLOCK-ARNOLDI is: if no rank deficiency is found, then just do normal Gram-Schmidt; if a rank deficiency is found, manifested by a zero column, then delete that column, replacing it with a later column (line 14), and reduce the number of columns by 1 (line 15).

In our implementation we again used complete reorthogonalization, i.e. we repeated steps 4-8. If n_c ≪ n this will not be a great expense. We also discarded the coefficients H, keeping only the information in Q needed to form the basis for S_c. If H is needed, it can easily be computed using (9b).

We remark that if we use BLOCK-ARNOLDI on a single-input system, that is p = 1, then the procedure reduces to ARNOLDI-2, with the stopping criterion included and the coefficients H discarded.

The main difference between BLOCK-ARNOLDI and the Staircase algorithm is the use of Gram-Schmidt instead of Householder reflections. Since the cost is comparable, the main saving is in the storage requirements. We present in Table 1 a summary of the storage requirements for the two methods.

Table 1
Storage requirements for two methods (m = no. of nonzeros in A; n = order of A = no. of rows in B; p = no. of columns in B; n_c = dimension of S_c)

    Staircase:
      A (a)                         n^2
      B                             np
      Accumulated Householders (b)  n^2
      Total                         2n^2 + np

    BLOCK-ARNOLDI:
      A                             m
      Vectors Q                     n n_c
      W_k and Z_k (c)               np
      Total                         m + n n_c + np

    (a) Full matrix needed to apply transformation.
    (b) Full matrix needed to recover basis.
    (c) Same space can be used for W_k and Z_k.

Remarks. If A is very sparse, then m is small, and we can save almost O(n^2) space. If the system model is much larger than its minimal realization, as can happen when there are many variables, n_c will be small and we can save another O(n^2) space.
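As a rough executable sketch of the block scheme of this section (our own numpy illustration, not the authors' code; `tol` is an assumed zero tolerance), one can orthonormalize B, then repeatedly orthogonalize A Q_k against all previous blocks, dropping columns of Z_k that lose rank:

```python
import numpy as np

def block_arnoldi_controllable(A, B, tol=1e-10):
    """Sketch of BLOCK-ARNOLDI: orthonormalize B, then repeatedly form
    W_k = A Q_k, orthogonalize against all previous blocks, and drop
    columns of Z_k that lose rank; stop when Z_k = 0."""
    def mgs(V, basis, tol):
        kept = []
        for j in range(V.shape[1]):
            v = V[:, j].copy()
            for q in basis + kept:
                v -= q * (q @ v)
            for q in basis + kept:            # complete reorthogonalization
                v -= q * (q @ v)
            if np.linalg.norm(v) > tol:
                kept.append(v / np.linalg.norm(v))
        return kept

    n = A.shape[0]
    Qk = mgs(B, [], tol)                      # Q_1 from the columns of B
    basis = list(Qk)
    while Qk and len(basis) < n:
        Qk = mgs(A @ np.column_stack(Qk), basis, tol)   # W_k = A Q_k
        basis.extend(Qk)
    return np.column_stack(basis)

A = np.diag([1.0, 2.0, 3.0, 4.0]); A[0, 1] = 1.0
B = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
print(block_arnoldi_controllable(A, B).shape[1])        # dim S_c = 3
```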
5. Nonsymmetric Lanczos

This is another variation on the Lanczos algorithm proposed by Lanczos [7]. An outline of the basic procedure is in [6], p. 391. We sketch the procedure and then show how it may be applied to computing S_c for a single-input case as in Section 3 above.

Starting with two vectors q_1, q̃_1, chosen so that q_1^T q̃_1 = 1, the procedure can be thought of as biorthogonalizing the two Krylov sequences

    {q_1, Aq_1, A^2 q_1, ...},    {q̃_1, A^T q̃_1, (A^T)^2 q̃_1, ...}

to obtain

    Q = [q_1, q_2, ...]  and  Q̃ = [q̃_1, q̃_2, ...].

The biorthogonalization condition implies that

    q_i^T q̃_j = 0  if i ≠ j.

In the sequel, we will assume q_i, q̃_i are scaled so
that q_i^T q̃_i = 1, i = 1, 2, ..., and thus

    Q̃^T Q = I.    (11)

The matrices Q, Q̃ are generated according to the relations

    AQ = QH,    (12a)
    A^T Q̃ = Q̃ H^T,    (12b)

where the coefficients H are computed to satisfy the biorthogonalization conditions. From (12a) we find an H which is upper Hessenberg, and from (12b) a lower Hessenberg; hence H is tridiagonal (nonsymmetric):

    H = tridiag(γ_k; α_k; β_k),    (13)

with diagonal entries α_k and off-diagonal entries β_k, γ_k. The procedure is therefore ([6], p. 391) the algorithm NON-SYM-LANCZOS given in Appendix D.

Remarks on NON-SYM-LANCZOS: To achieve numerical stability we must re-biorthogonalize z_k, z̃_k, analogously to the re-orthogonalization in BLOCK-ARNOLDI. This is accomplished with a step of the form:

15.1. z_k = z_k - q_k(q̃_k^T z_k) - ... - q_1(q̃_1^T z_k);
15.2. z̃_k = z̃_k - q̃_k(q_k^T z̃_k) - ... - q̃_1(q_1^T z̃_k);

Failure conditions: since this procedure does not generate sequences of orthogonal vectors, it can be subject to failure conditions unrelated to the original problem to which the method is applied. When any of these conditions arises we must restart the procedure with different vectors q_1, q̃_1 ([6], p. 389). The conditions are

    z̃_k is small in line 12,    (14a)

or

    z̃_k^T z_k is small in lines 16-17.    (14b)

It is not well understood when these conditions occur or how to avoid them. If these conditions do not occur, then we can show:

Theorem 3. If z_1 ≠ 0, ..., z_{k-1} ≠ 0 and (14) does not occur for z̃_1, ..., z̃_{k-1}, then the vectors generated by NON-SYM-LANCZOS satisfy

    colsp[q_1, ..., q_k] = colsp[q_1, Aq_1, ..., A^{k-1} q_1],
    colsp[q̃_1, ..., q̃_k] = colsp[q̃_1, A^T q̃_1, ..., (A^T)^{k-1} q̃_1].

Both spaces have dimension k.

Proof. Similar to the proof of Theorem 2. □

So we have here a stopping criterion analogous to that for ARNOLDI-1,2, namely:

11.1. IF z_k small, THEN STOP.

In order to apply this procedure to the problem of computing S_c, we set q_1 = b and q̃_1 a random vector, and then scale them so that q_1^T q̃_1 = 1.

6. Numerical experiments

We ran two series of problems, one with increasing sizes with single inputs, and one with multiple inputs. The results shown for these sets were typical of all the problems we ran. For each problem we started in canonical form and applied progressively worse conditioned transformations to it.
Remarks on Table 2: It was found, as illustrated by the table, that the Arnoldi method is as stable as the Staircase method. Since it has comparable cost and much smaller storage requirements, the Arnoldi method is seen to be of particular use for large problems. For example, for a large circuit, the matrix will be large and sparse, but the number of controllable modes will be relatively small.

The NON-SYM-LANCZOS method failed much more often, especially on large problems. It is of course expected that this method be less stable, since it does not use orthogonal transformations. However, motivated by a theorem of Parlett et al. ([11], Main Theorem), we computed the minimum residual obtained before termination:

    residual = min_{1 ≤ i ≤ k} ||z_i||_2,

where k has the final value at termination. It was found that, unlike the corresponding quantity for BLOCK-ARNOLDI and Staircase [11], this quantity provided a consistent flag of trouble.
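The failure conditions (14a)/(14b) and the biorthogonality the method maintains can be illustrated with a small two-sided Lanczos sketch (our own numpy illustration, not the paper's implementation; the scaling convention, splitting z̃^T z symmetrically, and the tolerance `eps` are our choices):

```python
import numpy as np

def nonsym_lanczos(A, q1, qt1, steps, eps=1e-12):
    """Sketch of two-sided Lanczos: biorthogonalize {q1, A q1, ...} and
    {qt1, A^T qt1, ...}.  The breakdown tests mirror conditions (14a)/(14b)."""
    q1 = q1 / (qt1 @ q1)                             # enforce q1^T qt1 = 1
    Q, Qt = [q1], [qt1]
    for k in range(steps):
        a = Qt[-1] @ (A @ Q[-1])                     # alpha_k
        z = A @ Q[-1] - a * Q[-1]
        zt = A.T @ Qt[-1] - a * Qt[-1]
        if k > 0:
            z -= (Qt[-2] @ (A @ Q[-1])) * Q[-2]      # beta_k  * q_{k-1}
            zt -= (Q[-2] @ (A.T @ Qt[-1])) * Qt[-2]  # beta~_k * qt_{k-1}
        if np.linalg.norm(zt) < eps:                 # condition (14a)
            break
        d = zt @ z
        if abs(d) < eps:                             # condition (14b)
            raise RuntimeError("breakdown: restart with new q1, qt1")
        s = np.sqrt(abs(d))
        Q.append(z / s)                              # scale so that
        Qt.append(zt / (np.sign(d) * s))             # q_{k+1}^T qt_{k+1} = 1
    return np.column_stack(Q), np.column_stack(Qt)

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
Q, Qt = nonsym_lanczos(A, rng.standard_normal(5), rng.standard_normal(5), 4)
print(np.allclose(Qt.T @ Q, np.eye(Q.shape[1]), atol=1e-6))  # biorthogonality
```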
Table 2
Experimental results. In Case A, TOL = 10^{-8}, conditioning tried 10^0, 10^1, 10^2, 10^3, 10^4, and no. of inputs = 1; in Case B, TOL = 10^{-5}, conditioning tried 10^0, 10^3, 10^6, and no. of inputs = 4. (TOL denotes the zero tolerance used; n denotes the size of the system (order of A); n_c denotes the dimension of S_c; NA means not applicable.)

Case A (single input); entries are the maximum conditioning (a) for which each method worked, and the smallest residual for NON-SYM-LANCZOS:

    n    n_c   Staircase   BLOCK-ARNOLDI   NON-SYM-LANCZOS   Smallest residual
    4    2     1E4 (b)     1E4 (b)         1E3               8E+2
    8    4     1E4 (b)     1E4 (b)         1E3               6E+1
    16   8     1E4 (b)     1E4 (b)         1E3               6E+1
    24   12    1E4 (b)     1E4 (b)         1E2               2E-4
    32   16    1E4 (b)     1E4 (b)         1E2               1E-4
    40   20    1E4 (b)     1E4 (b)         1E2               4E-4
    48   24    1E4 (b)     1E4 (b)         (d)               1E-4
    56   28    1E3 (c)     1E3 (c)         (d)               2E-3
    64   32    1E4 (b)     1E4 (b)         (d)               3E-4

Case B (four inputs): Staircase 1E3, 1E6 (b); BLOCK-ARNOLDI 1E3, 1E6 (b); NON-SYM-LANCZOS NA.

    (a) By 'conditioning' we mean the condition number of the random similarity transformation applied.
    (b) Highest conditioning tried for this case.
    (c) Both Staircase and BLOCK-ARNOLDI failed in the same way at the same place, illustrating the close connection between the two.
    (d) Never worked. NON-SYM-LANCZOS failed most often due to conditions (14).

When it is small, it indicates that a small perturbation to the original problem is likely to change n_c; hence the computed value of n_c should not be trusted. In the last column we give an example of these quantities, computed with TOL = 10^{-4}, conditioning = 10^3. In the cases where the residual was large, the correct n_c had been obtained, and in cases where it was small, an incorrect n_c had been obtained. This example serves to show a typical outcome.

Appendix A: ARNOLDI-1

0.  BEGIN
1.    k = 1;
2.    REPEATEDLY DO BEGIN
3.      w_k = A q_k;  ((save this matrix-vector product for future use))
4.      FOR i = 1, ..., k DO
5.        h_ik = q_i^T w_k;  ((use equation (6b)))
6.      z_k = w_k - q_k h_kk - ... - q_1 h_1k;  ((use equation (6a)))
6a.     Reorthogonalization: repeat steps 4-6.
7.      h_{k+1,k} = ||z_k||_2;
8.      q_{k+1} = z_k / h_{k+1,k};  ((normalize to obtain next vector q_{k+1}))
9.      k = k + 1
10.   END
11. END

Appendix B: ARNOLDI-2

0.  BEGIN
1.    k = 1;
2.    REPEATEDLY DO BEGIN
3.      w_k = A q_k;
4.      ORTHOGONALIZE w_k against q_1, ..., q_k, to obtain z_k; coefficients go into h_1k, ..., h_kk;
5.      NORMALIZE z_k to obtain q_{k+1}; scaling factor goes into h_{k+1,k};
6.      k = k + 1
7.    END
8.  END
Appendix C: BLOCK-ARNOLDI

((start with matrix A and Q_1 = orthonormalized columns of B))

0.  BEGIN
1.    k = 1;
2.    REPEATEDLY DO BEGIN
3.      W_k = A Q_k;
4.      FOR j = 1, ..., s DO  ((denote W_k = [v_1, ..., v_s]))
5.        ORTHOGONALIZE v_j against all columns [Q_1, ..., Q_k], result replaces v_j;
4.1.      Repeat this step for reorthogonalization.
6.      SET Z_k = [v_1, ..., v_s];
7.      j = 1; l = s;  ((l holds approximation to rank(Z_k)))
8.      WHILE j ≤ l DO
9.      BEGIN
10.       ORTHOGONALIZE v_j against v_1, ..., v_{j-1}, replacing v_j;
11.       IF v_j ≠ 0 THEN BEGIN  ((no loss of rank))
12.         NORMALIZE v_j;
13.         j = j + 1
          END
14.       ELSE BEGIN  ((rank deficient))
            v_j = v_l;  ((replace deleted column with last column))
15.         l = l - 1  ((reduce estimate of rank by 1))
16.       END ENDIF
17.     END;
18.     IF l = 0 THEN STOP;  ((done))
19.     SET Q_{k+1} = [v_1, ..., v_l];  ((now v's are orthonormal))
20.     k = k + 1
21.   END
22. END

Appendix D: NON-SYM-LANCZOS

((start with matrix A and two vectors q_1, q̃_1 with q_1^T q̃_1 = 1 and q_1 = b/||b||_2))

0.  BEGIN
1.    k = 1;
2.    REPEATEDLY DO
3.    BEGIN
4.      α_k = q̃_k^T A q_k;
5.      IF k = 1 THEN BEGIN
6.        z_k = A q_k - α_k q_k;
7.        z̃_k = A^T q̃_k - α_k q̃_k
8.      END
9.      ELSE BEGIN
10.       β_k = q̃_{k-1}^T A q_k;  β̃_k = q_{k-1}^T A^T q̃_k;
11.       z_k = A q_k - α_k q_k - β_k q_{k-1};
12.       z̃_k = A^T q̃_k - α_k q̃_k - β̃_k q̃_{k-1}
13.     END
14.     ENDIF;
15.     RE-BIORTHOGONALIZE z_k, z̃_k;  ((see remarks))
16.     q_{k+1} = z_k / σ_k;
17.     q̃_{k+1} = z̃_k / τ_k;  ((scale vectors so that inner product = 1, i.e. σ_k τ_k = z̃_k^T z_k))
18.     k = k + 1
19.   END
20. END
References

[1] P. Van Dooren, A. Emami-Naeini and L. Silverman, Stable extraction of the Kronecker structure of pencils, Proc. 17th IEEE Conf. Decision and Control (Jan. 1979) pp. 521-524.
[2] D. Boley, Computing the controllability-observability decomposition of a linear time-invariant dynamic system, a numerical approach, PhD Thesis, Stanford Computer Science Report STAN-CS-81-860 (June 1981).
[3] D.L. Boley, A perturbation result for linear control problems, Univ. of Minn. Computer Science Report No. 83-10 (May 1983).
[4] C.C. Paige, Properties of numerical algorithms related to computing controllability, IEEE Trans. Automat. Control 26 (1981) 130-138.
[5] R.E. Kalman, Mathematical description of linear systems, SIAM J. Control 1 (1963) 152-192.
[6] J.H. Wilkinson, The Algebraic Eigenvalue Problem (Clarendon Press, Oxford, 1965).
[7] C. Lanczos, An iteration method for the solution of the eigenvalue problem of linear differential and integral operators, J. Res. Nat. Bur. Standards 45 (1950) 255-282.
[8] C.A. Desoer, A Second Course in Linear Systems (Van Nostrand Reinhold, New York, 1970).
[9] H. Kwakernaak and R. Sivan, Introduction to Optimal Control Theory (Wiley, New York, 1972).
[10] P. Van Dooren, The generalized eigenstructure problem in linear systems theory, IEEE Trans. Automat. Control 26 (1981) 111-130.
[11] W. Kahan, B.N. Parlett and E. Jiang, Residual bounds on approximate eigensystems of nonnormal matrices, SIAM J. Numer. Anal. 19 (1982) 470-484.
[12] J.J. Dongarra, J.R. Bunch, C.B. Moler and G.W. Stewart, LINPACK User's Guide (SIAM, Philadelphia, 1979).
[13] H. Simon, The Lanczos algorithm for solving symmetric linear systems, Ph.D. Thesis, Univ. Calif. Berkeley, Ctr. for Pure & Appl. Math. Report PAM-74 (April 1982).
[14] W.E. Arnoldi, The principle of minimized iterations in the solution of the matrix eigenvalue problem, Quart. Appl. Math. 9 (1951) 17-29.
[15] G.H. Golub and R. Underwood, The block Lanczos method for computing eigenvalues, Proceedings of the Symposium on Mathematical Software, Madison, 1977 (Academic Press, New York).
[16] D.L. Boley and G.H. Golub, A modified method for reconstructing periodic Jacobi matrices, Math. Comput. (1984) to appear.
[17] G.H. Golub and C.F. Van Loan, Matrix Computations (Johns Hopkins University Press, Baltimore, MD, 1983).