7 Minimal realizations and coprime fractions

• 7.1 Introduction
• If a transfer function is realizable, what is
the smallest possible dimension?
• Realizations with the smallest possible
dimension are called minimal-dimensional
or minimal realizations.
7.2 Implications of Coprimeness
• Consider
$$\hat g(s) = \frac{N(s)}{D(s)} = \frac{\beta_1 s^3 + \beta_2 s^2 + \beta_3 s + \beta_4}{s^4 + \alpha_1 s^3 + \alpha_2 s^2 + \alpha_3 s + \alpha_4}$$
• Consider
$$\hat y(s) = N(s)\,D^{-1}(s)\,\hat u(s)$$
• Introduce a pseudo state
$$\hat v(s) = D^{-1}(s)\hat u(s) \quad (\text{or } D(s)\hat v(s) = \hat u(s))$$
• The realization is
$$\dot x = Ax + bu = \begin{bmatrix} -\alpha_1 & -\alpha_2 & -\alpha_3 & -\alpha_4 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} x + \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} u$$
$$y = cx = [\beta_1 \;\; \beta_2 \;\; \beta_3 \;\; \beta_4]\,x$$
• Its controllability matrix can be computed as
$$C = \begin{bmatrix} 1 & -\alpha_1 & \alpha_1^2 - \alpha_2 & -\alpha_1^3 + 2\alpha_1\alpha_2 - \alpha_3 \\ 0 & 1 & -\alpha_1 & \alpha_1^2 - \alpha_2 \\ 0 & 0 & 1 & -\alpha_1 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
• Its determinant is 1 for any αi, so the realization is always controllable; hence it is called a controllable canonical form.
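As a quick numerical check (a minimal sketch, not from the text; the αi values are chosen arbitrarily for illustration, assuming numpy is available), the controllable canonical form can be built and its controllability matrix shown to have determinant 1:

```python
import numpy as np

# Controllable canonical form for n = 4; alpha values chosen for illustration
a1, a2, a3, a4 = 2.0, -1.0, 0.5, 3.0
A = np.array([[-a1, -a2, -a3, -a4],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
b = np.array([[1.0], [0.0], [0.0], [0.0]])

# Controllability matrix C = [b, Ab, A^2 b, A^3 b]
C = np.hstack([np.linalg.matrix_power(A, k) @ b for k in range(4)])

print(np.linalg.det(C))  # determinant is 1 regardless of the alpha values
```

Because C is unit upper triangular here, the determinant is 1 for any choice of αi, which is exactly why this form is controllable by construction.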
• Theorem 7.1 The controllable canonical form
is observable if and only if D(s) and N(s) are
coprime.
• If the controllable canonical form is a
realization of ĝ(s), then we have, by definition,
$$\hat g(s) = c(sI - A)^{-1} b$$
• Taking its transpose yields the state equation
(a different realization)
$$\dot x = A'x + c'u = \begin{bmatrix} -\alpha_1 & 1 & 0 & 0 \\ -\alpha_2 & 0 & 1 & 0 \\ -\alpha_3 & 0 & 0 & 1 \\ -\alpha_4 & 0 & 0 & 0 \end{bmatrix} x + \begin{bmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \\ \beta_4 \end{bmatrix} u$$
$$y = b'x = [1 \;\; 0 \;\; 0 \;\; 0]\,x$$
• It is called an observable canonical form.
• The equivalence transformation x̄ = Px with
$$P = \begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix}$$
• yields different controllable and observable canonical forms.
• 7.2.1 Minimal realizations
• Let R(s) be a greatest common divisor (gcd) of N(s) and D(s), and write N(s) = N̄(s)R(s) and D(s) = D̄(s)R(s). Then the transfer function reduces to the coprime fraction
$$\hat g(s) = \bar N(s)/\bar D(s)$$
• We call D̄(s) a characteristic polynomial of ĝ(s). Its degree is defined as the degree of ĝ(s).
• Theorem 7.2 A state equation (A, b, c, d) is a minimal realization of a proper rational function ĝ(s) if and only if (A, b) is controllable and (A, c) is observable, or if and only if
$$\dim(A) = \deg(\hat g(s))$$
• The theorem provides an alternative way of checking controllability and observability.
• Theorem 7.3 All minimal realizations of ĝ(s)
are equivalent.
• If a state equation is controllable and
observable, then every eigenvalue of A is a
pole of ĝ(s) and every pole of ĝ(s) is an
eigenvalue of A.
• Thus we conclude that if (A, b, c, d) is
controllable and observable, then we have
Asymptotic stability ⇔ BIBO stability.
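Theorem 7.2 can be illustrated numerically (a sketch of my own, not from the text, assuming numpy): realize ĝ(s) = (s+1)/(s²+3s+2) in controllable canonical form. Since N and D share the factor (s+1), deg ĝ = 1 < dim A = 2, so the realization cannot be minimal, and indeed observability fails:

```python
import numpy as np

# g(s) = (s+1)/(s^2+3s+2): N and D share (s+1), so deg g = 1 but dim A = 2
A = np.array([[-3.0, -2.0],
              [ 1.0,  0.0]])      # controllable canonical form
b = np.array([[1.0], [0.0]])
c = np.array([[1.0, 1.0]])        # beta_1 = 1, beta_2 = 1

ctrb = np.hstack([b, A @ b])      # controllability matrix [b, Ab]
obsv = np.vstack([c, c @ A])      # observability matrix [c; cA]

print(np.linalg.matrix_rank(ctrb))  # 2: controllable (always, for this form)
print(np.linalg.matrix_rank(obsv))  # 1: not observable, hence not minimal
```

This matches Theorem 7.1: the controllable canonical form is observable exactly when D(s) and N(s) are coprime.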
7.3 Computing coprime fractions
• Write
$$\frac{N(s)}{D(s)} = \frac{\bar N(s)}{\bar D(s)}$$
which implies $D(s)(-\bar N(s)) + N(s)\bar D(s) = 0$
• Let
$$D(s) = D_0 + D_1 s + D_2 s^2 + D_3 s^3 + D_4 s^4, \quad N(s) = N_0 + N_1 s + N_2 s^2 + N_3 s^3 + N_4 s^4$$
$$\bar D(s) = \bar D_0 + \bar D_1 s + \bar D_2 s^2 + \bar D_3 s^3, \quad \bar N(s) = \bar N_0 + \bar N_1 s + \bar N_2 s^2 + \bar N_3 s^3$$
• Sylvester resultant (homogeneous linear algebraic equation):
$$S := \begin{bmatrix}
D_0 & N_0 & 0 & 0 & 0 & 0 & 0 & 0 \\
D_1 & N_1 & D_0 & N_0 & 0 & 0 & 0 & 0 \\
D_2 & N_2 & D_1 & N_1 & D_0 & N_0 & 0 & 0 \\
D_3 & N_3 & D_2 & N_2 & D_1 & N_1 & D_0 & N_0 \\
D_4 & N_4 & D_3 & N_3 & D_2 & N_2 & D_1 & N_1 \\
0 & 0 & D_4 & N_4 & D_3 & N_3 & D_2 & N_2 \\
0 & 0 & 0 & 0 & D_4 & N_4 & D_3 & N_3 \\
0 & 0 & 0 & 0 & 0 & 0 & D_4 & N_4
\end{bmatrix}
\begin{bmatrix} -\bar N_0 \\ \bar D_0 \\ -\bar N_1 \\ \bar D_1 \\ -\bar N_2 \\ \bar D_2 \\ -\bar N_3 \\ \bar D_3 \end{bmatrix} = 0$$
• D(s) and N(s) are coprime if and only if the Sylvester resultant is nonsingular.
• Theorem 7.4 deg ĝ(s) = number of linearly independent N-columns =: μ, and the coefficients of a coprime fraction
$$[-\bar N_0 \;\; \bar D_0 \;\; -\bar N_1 \;\; \bar D_1 \;\; \cdots \;\; -\bar N_\mu \;\; \bar D_\mu]'$$
equal the monic null vector of the submatrix of S formed from the primary dependent N-column and all linearly independent columns to its left.
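The null-vector computation can be sketched as follows (my own example, not from the text, assuming numpy and scipy): for D(s) = s²+3s+2 and N(s) = s+1, which share the factor (s+1), the Sylvester resultant is singular and its monic null vector recovers the coprime fraction 1/(s+2):

```python
import numpy as np
from scipy.linalg import null_space

# D(s) = s^2 + 3s + 2 = (s+1)(s+2),  N(s) = s + 1  (not coprime)
D = [2.0, 3.0, 1.0]   # D0, D1, D2
N = [1.0, 1.0, 0.0]   # N0, N1, N2

# Sylvester resultant: column pairs are shifted copies of the D and N
# coefficients; the unknown vector is [-Nb0, Db0, -Nb1, Db1]'
S = np.array([[D[0], N[0], 0.0,  0.0 ],
              [D[1], N[1], D[0], N[0]],
              [D[2], N[2], D[1], N[1]],
              [0.0,  0.0,  D[2], N[2]]])

v = null_space(S)[:, 0]
v = v / v[-1]         # normalize so Dbar is monic (last entry Dbar_1 = 1)
print(v)              # [-1, 2, 0, 1]: Nbar(s) = 1, Dbar(s) = s + 2
```

The recovered coefficients give ĝ(s) = 1/(s+2), the reduced coprime fraction, consistent with cancelling the common factor (s+1).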
• 7.3.1 QR Decomposition
• Consider an n×m matrix M. Then there
exists an n×n orthogonal matrix Q such that
QM = R
where R is an upper triangular matrix.
• Because Q is orthogonal, we have
$$Q^{-1} = Q' =: \bar Q \quad \text{and} \quad M = \bar Q R$$
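In practice this factorization is computed directly as M = QR (a minimal sketch assuming numpy; `numpy.linalg.qr` returns the Q̄ of the text rather than its transpose):

```python
import numpy as np

M = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# Full QR: Q is n x n orthogonal, R is n x m upper triangular, M = QR
Q, R = np.linalg.qr(M, mode='complete')

print(np.allclose(Q.T @ Q, np.eye(3)))  # True: Q is orthogonal
print(np.allclose(Q @ R, M))            # True: M = QR
print(np.allclose(np.tril(R, -1), 0))   # True: R is upper triangular
```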
7.4 Balanced realization
• The diagonal and modal forms, which are
least sensitive to parameter variations, are
good candidates for practical implementation.
• A different minimal realization, called a balanced realization, is introduced next.
• Consider a stable system
$$\dot x = Ax + bu, \qquad y = cx$$
• Then the controllability Gramian Wc and the observability Gramian Wo are positive definite if the system is controllable and observable:
$$AW_c + W_cA' = -bb', \qquad A'W_o + W_oA = -c'c$$
• Different minimal realizations of the same transfer function have different controllability and observability Gramians.
• Theorem 7.5 Let (A, b, c) and (Ā, b̄, c̄) be minimal and equivalent. Then $W_cW_o$ and $\bar W_c\bar W_o$ are similar and their eigenvalues are all real and positive.
• Theorem 7.6 (balanced realization) For any minimal state equation (A, b, c), there exists an equivalence transformation x̄ = Px such that the equivalent controllability and observability Gramians satisfy $\bar W_c = \bar W_o = \Sigma$.
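One standard way to compute such a balancing transformation (a sketch under my own choice of system, assuming numpy and scipy; the construction via a Cholesky factor of Wc and an eigendecomposition is one common variant, not necessarily the one in the text):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# A stable, controllable, observable example (values chosen for illustration)
A = np.array([[-1.0, 0.5],
              [ 0.0, -2.0]])
b = np.array([[1.0], [1.0]])
c = np.array([[1.0, 0.0]])

# Gramians from the Lyapunov equations AWc + WcA' = -bb', A'Wo + WoA = -c'c
Wc = solve_continuous_lyapunov(A, -b @ b.T)
Wo = solve_continuous_lyapunov(A.T, -c.T @ c)

# Balancing: Wc = LL', eigendecompose L'WoL = U diag(sigma^2) U'
L = np.linalg.cholesky(Wc)
lam, U = np.linalg.eigh(L.T @ Wo @ L)
lam, U = lam[::-1], U[:, ::-1]           # Hankel singular values, descending
sigma = np.sqrt(lam)
T = L @ U @ np.diag(sigma ** -0.5)       # balancing transformation

Ab, bb, cb = np.linalg.inv(T) @ A @ T, np.linalg.inv(T) @ b, c @ T
Wc_bal = solve_continuous_lyapunov(Ab, -bb @ bb.T)
Wo_bal = solve_continuous_lyapunov(Ab.T, -cb.T @ cb)
print(np.allclose(Wc_bal, np.diag(sigma)))  # True: balanced Gramians equal Sigma
print(np.allclose(Wo_bal, np.diag(sigma)))  # True
```

The diagonal entries of Σ are the Hankel singular values, the square roots of the eigenvalues of WcWo from Theorem 7.5.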
7.5 Realizations from Markov parameters
• Consider the strictly proper rational function
$$\hat g(s) = \frac{\beta_1 s^{n-1} + \beta_2 s^{n-2} + \cdots + \beta_{n-1}s + \beta_n}{s^n + \alpha_1 s^{n-1} + \alpha_2 s^{n-2} + \cdots + \alpha_{n-1}s + \alpha_n}$$
• Expand it into an infinite power series as
$$\hat g(s) = h(0) + h(1)s^{-1} + h(2)s^{-2} + \cdots$$
(h(0) = 0 for a strictly proper function)
• The coefficients h(m) are called Markov parameters.
• Let g(t) be the inverse Laplace transform of ĝ(s). Then we have
$$h(m) = \left.\frac{d^{m-1}}{dt^{m-1}}\, g(t)\right|_{t=0}$$
• Hankel matrix (formed from the Markov parameters):
$$T(\alpha, \beta) = \begin{bmatrix}
h(1) & h(2) & h(3) & \cdots & h(\beta) \\
h(2) & h(3) & h(4) & \cdots & h(\beta+1) \\
h(3) & h(4) & h(5) & \cdots & h(\beta+2) \\
\vdots & \vdots & \vdots & & \vdots \\
h(\alpha) & h(\alpha+1) & h(\alpha+2) & \cdots & h(\alpha+\beta-1)
\end{bmatrix}$$
$$h(1) = \beta_1; \quad h(2) = -\alpha_1 h(1) + \beta_2; \quad h(3) = -\alpha_1 h(2) - \alpha_2 h(1) + \beta_3; \;\ldots$$
$$h(n) = -\alpha_1 h(n-1) - \alpha_2 h(n-2) - \cdots - \alpha_{n-1} h(1) + \beta_n$$
• Theorem 7.7 A strictly proper rational function ĝ(s) has degree n if and only if
$$\rho\, T(n, n) = \rho\, T(n+k, n+l) = n \quad \text{for every } k, l \geq 1$$
where ρ denotes the rank.
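The recursion and the rank condition can be checked together (a sketch with my own coprime example ĝ(s) = (2s+3)/(s²+3s+2), assuming numpy; for m > n the βm term is zero):

```python
import numpy as np

# g(s) = (2s+3)/(s^2+3s+2), coprime, so deg g = n = 2
alpha = [3.0, 2.0]   # alpha_1, alpha_2
beta = [2.0, 3.0]    # beta_1, beta_2
n = 2

# Markov parameters from h(m) = -sum_i alpha_i h(m-i) + beta_m (beta_m = 0, m > n)
h = []
for m in range(1, 2 * n + 2):
    val = beta[m - 1] if m <= n else 0.0
    val -= sum(alpha[i] * h[m - 2 - i] for i in range(min(m - 1, n)))
    h.append(val)

def hankel(a, b):
    """Hankel matrix T(a, b) built from the Markov parameters h(1), h(2), ..."""
    return np.array([[h[i + j] for j in range(b)] for i in range(a)])

print(h[:4])                                        # [2.0, -3.0, 5.0, -9.0]
print(np.linalg.matrix_rank(hankel(n, n)))          # 2 = n
print(np.linalg.matrix_rank(hankel(n + 1, n + 1)))  # still 2: rank saturates at deg g
```

Enlarging the Hankel matrix does not increase its rank beyond n, which is exactly the statement of Theorem 7.7.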
7.6 Degree of transfer matrices
• Given a proper rational matrix Ĝ(s), assume that every entry of Ĝ(s) is a coprime fraction.
• Definition 7.1 The characteristic polynomial of Ĝ(s) is defined as the least common denominator of all minors of Ĝ(s). Its degree is defined as the degree of Ĝ(s).
7.7 Minimal realizations: matrix case
• Theorem 7.M2 A state equation (A, B, C, D) is a minimal realization of a proper rational matrix Ĝ(s) if and only if (A, B) is controllable and (A, C) is observable, or if and only if
$$\dim A = \deg \hat G(s)$$
• Theorem 7.M3 All minimal realizations of Ĝ(s) are equivalent.
7.8 Matrix polynomial fractions
• The degree of the scalar transfer function
$$\hat g(s) = \frac{N(s)}{D(s)} = N(s)D^{-1}(s) = D^{-1}(s)N(s)$$
is defined as the degree of D(s) if N(s) and D(s) are coprime.
• Every q×p proper rational matrix can be expressed as a right polynomial fraction
$$\hat G(s) = N(s)D^{-1}(s)$$
or as a left polynomial fraction
$$\hat G(s) = \bar D^{-1}(s)\bar N(s)$$
• The right fraction is not unique (the same holds for the left fraction): for any nonsingular polynomial matrix R(s),
$$\hat G(s) = [N(s)R(s)][D(s)R(s)]^{-1}$$
• Definition 7.2 A square polynomial matrix
M(s) is called a unimodular matrix if its
determinant is nonzero and independent of s.
• Definition 7.3 A square polynomial matrix
R(s) is a greatest common right divisor
(gcrd) of D(s) and N(s) if
(i) R(s) is a common right divisor of D(s) and N(s)
(ii) R(s) is a left multiple of every common
right divisor of D(s) and N(s).
If a gcrd is a unimodular matrix, then D(s)
and N(s) are said to be right coprime.
• Definition 7.4 Consider
$$\hat G(s) = N(s)D^{-1}(s) \;(\text{right coprime}) = \bar D^{-1}(s)\bar N(s) \;(\text{left coprime})$$
Then its characteristic polynomial is defined as
$$\det D(s) \quad \text{or} \quad \det \bar D(s)$$
and its degree is defined as
$$\deg \hat G(s) = \deg \det D(s) = \deg \det \bar D(s)$$
• 7.8.1 Column and row reducedness
• Define
δciM(s) = degree of ith column of M(s)
δriM(s) = degree of ith row of M(s)
• For example,
$$M(s) = \begin{bmatrix} s+1 & s^3 - 2s + 5 & -1 \\ s-1 & s^2 & 0 \end{bmatrix}$$
has δc1 = 1, δc2 = 3, δc3 = 0, δr1 = 3, and δr2 = 2.
• Definition 7.5 A nonsingular matrix M(s) is
column reduced if
deg detM(s) = sum of all column degrees
It is row reduced if
deg det M(s) = sum of all row degrees
• Let δciM(s) = kci and define $H_c(s) = \mathrm{diag}(s^{k_{c1}}, s^{k_{c2}}, \ldots)$. Then the polynomial matrix M(s) can be expressed as
$$M(s) = M_{hc}H_c(s) + M_{lc}(s)$$
where Mhc is the column-degree coefficient matrix and Mlc(s) collects the remaining terms, whose ith column has degree less than kci.
• M(s) is column reduced⇔Mhc is nonsingular.
• Row form of M(s)
M(s) = Hr(s)Mhr + Mlr(s)
$H_r(s) = \mathrm{diag}(s^{k_{r1}}, s^{k_{r2}}, \ldots)$.
Mhr: the row-degree coefficient matrix.
• M(s) is row reduced⇔Mhr is nonsingular.
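These definitions can be verified symbolically (a sketch assuming sympy; the square matrix below is adapted from the slide's example by taking its first two columns, since reducedness is defined for nonsingular square matrices):

```python
import sympy as sp

s = sp.symbols('s')
# Square submatrix adapted from the slide's example (first two columns of M(s))
M = sp.Matrix([[s + 1, s**3 - 2*s + 5],
               [s - 1, s**2]])

col_deg = [max(sp.degree(M[i, j], s) for i in range(2)) for j in range(2)]
row_deg = [max(sp.degree(M[i, j], s) for j in range(2)) for i in range(2)]
d = sp.degree(sp.expand(M.det()), s)

print(col_deg, sum(col_deg), d)  # [1, 3] 4 4  -> column reduced
print(row_deg, sum(row_deg), d)  # [3, 2] 5 4  -> not row reduced

# Column-degree coefficient matrix Mhc: coefficient of s^kcj in column j
Mhc = sp.Matrix(2, 2, lambda i, j: M[i, j].coeff(s, col_deg[j]))
print(Mhc.det())                 # nonzero, consistent with column reducedness
```

Here deg det M = 4 equals the sum of column degrees but not the sum of row degrees, so M(s) is column reduced but not row reduced, matching the nonsingularity test on Mhc.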
• Theorem 7.8 Let D(s) be column reduced. Then N(s)D⁻¹(s) is proper (strictly proper) if and only if δciN(s) ≤ δciD(s) [δciN(s) < δciD(s)] for every i.
• 7.8.2 Computing matrix coprime fractions
• Consider Ĝ(s) expressed as
$$\hat G(s) = \bar D^{-1}(s)\bar N(s) = N(s)D^{-1}(s)$$
which implies
$$\bar N(s)D(s) = \bar D(s)N(s)$$
• Assume
$$\bar D(s) = \bar D_0 + \bar D_1 s + \bar D_2 s^2 + \bar D_3 s^3 + \bar D_4 s^4, \quad \bar N(s) = \bar N_0 + \bar N_1 s + \bar N_2 s^2 + \bar N_3 s^3 + \bar N_4 s^4$$
$$D(s) = D_0 + D_1 s + D_2 s^2 + D_3 s^3, \quad N(s) = N_0 + N_1 s + N_2 s^2 + N_3 s^3$$
• A generalized resultant (the matrix version), in which each entry is itself a coefficient matrix:
$$S_M := \begin{bmatrix}
\bar D_0 & \bar N_0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\bar D_1 & \bar N_1 & \bar D_0 & \bar N_0 & 0 & 0 & 0 & 0 \\
\bar D_2 & \bar N_2 & \bar D_1 & \bar N_1 & \bar D_0 & \bar N_0 & 0 & 0 \\
\bar D_3 & \bar N_3 & \bar D_2 & \bar N_2 & \bar D_1 & \bar N_1 & \bar D_0 & \bar N_0 \\
\bar D_4 & \bar N_4 & \bar D_3 & \bar N_3 & \bar D_2 & \bar N_2 & \bar D_1 & \bar N_1 \\
0 & 0 & \bar D_4 & \bar N_4 & \bar D_3 & \bar N_3 & \bar D_2 & \bar N_2 \\
0 & 0 & 0 & 0 & \bar D_4 & \bar N_4 & \bar D_3 & \bar N_3 \\
0 & 0 & 0 & 0 & 0 & 0 & \bar D_4 & \bar N_4
\end{bmatrix}
\begin{bmatrix} -N_0 \\ D_0 \\ -N_1 \\ D_1 \\ -N_2 \\ D_2 \\ -N_3 \\ D_3 \end{bmatrix} = 0$$
• Theorem 7.M4 Let μi be the number of linearly independent ith N-columns of SM. Then
$$\deg \hat G(s) = \mu_1 + \mu_2 + \cdots + \mu_p$$
and a right coprime fraction is obtained by computing monic null vectors.
7.9 Realization from matrix coprime fraction
• Define (for μ1 = 4 and μ2 = 2)
$$H(s) := \begin{bmatrix} s^{\mu_1} & 0 \\ 0 & s^{\mu_2} \end{bmatrix} = \begin{bmatrix} s^4 & 0 \\ 0 & s^2 \end{bmatrix}$$
and
$$L(s) := \begin{bmatrix} s^{\mu_1-1} & 0 \\ \vdots & \vdots \\ 1 & 0 \\ 0 & s^{\mu_2-1} \\ \vdots & \vdots \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} s^3 & 0 \\ s^2 & 0 \\ s & 0 \\ 1 & 0 \\ 0 & s \\ 0 & 1 \end{bmatrix}$$
• Let
$$\hat y(s) = \hat G(s)\hat u(s) = N(s)D^{-1}(s)\hat u(s)$$
and define $\hat v(s) = D^{-1}(s)\hat u(s)$.
• Then we have
$$D(s)\hat v(s) = \hat u(s) \quad \text{and} \quad \hat y(s) = N(s)\hat v(s)$$
• Define
$$\hat x(s) := L(s)\hat v(s) = \begin{bmatrix} s^3 & 0 \\ s^2 & 0 \\ s & 0 \\ 1 & 0 \\ 0 & s \\ 0 & 1 \end{bmatrix}\begin{bmatrix} \hat v_1(s) \\ \hat v_2(s) \end{bmatrix} = \begin{bmatrix} s^3\hat v_1(s) \\ s^2\hat v_1(s) \\ s\hat v_1(s) \\ \hat v_1(s) \\ s\hat v_2(s) \\ \hat v_2(s) \end{bmatrix} =: \begin{bmatrix} x_1(s) \\ x_2(s) \\ x_3(s) \\ x_4(s) \\ x_5(s) \\ x_6(s) \end{bmatrix}$$
• Express D(s) as
$$D(s) = D_{hc}H(s) + D_{lc}L(s)$$
• Then we have
$$H(s)\hat v(s) = -D_{hc}^{-1}D_{lc}\,\hat x(s) + D_{hc}^{-1}\hat u(s)$$
and
$$\hat y(s) = N(s)\hat v(s) = \begin{bmatrix} \beta_{111} & \beta_{112} & \beta_{113} & \beta_{114} & \beta_{121} & \beta_{122} \\ \beta_{211} & \beta_{212} & \beta_{213} & \beta_{214} & \beta_{221} & \beta_{222} \end{bmatrix} L(s)\hat v(s) = \begin{bmatrix} \beta_{111} & \beta_{112} & \beta_{113} & \beta_{114} & \beta_{121} & \beta_{122} \\ \beta_{211} & \beta_{212} & \beta_{213} & \beta_{214} & \beta_{221} & \beta_{222} \end{bmatrix} \hat x(s)$$