Dynamic Systems - State Space Control - Lecture 12 ∗ October 16, 2012

THEOREM: Suppose G(s) is a q × m matrix of rational functions such that in each entry the degree of the denominator exceeds the degree of the numerator. Then there exist constant matrices A, B and C such that

G(s) = C(sI − A)^{-1} B

PROOF: Let p(s) = s^n + a_{n-1} s^{n-1} + \cdots + a_1 s + a_0 be the monic LCM of the denominators. Write

G(s) = \frac{E_{n-1} s^{n-1} + \cdots + E_1 s + E_0}{s^n + a_{n-1} s^{n-1} + \cdots + a_1 s + a_0}

where each E_i is a constant q × m matrix.
Then take

A = \begin{bmatrix} 0_m & I_m & 0_m & \cdots & 0_m \\ 0_m & 0_m & I_m & \cdots & 0_m \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0_m & 0_m & 0_m & \cdots & I_m \\ -a_0 I_m & -a_1 I_m & -a_2 I_m & \cdots & -a_{n-1} I_m \end{bmatrix}, \quad
B = \begin{bmatrix} 0_m \\ \vdots \\ 0_m \\ I_m \end{bmatrix}, \quad
C = \begin{bmatrix} E_0 & E_1 & \cdots & E_{n-1} \end{bmatrix}

One checks directly that C(sI − A)^{-1}B = G(s).

∗ This work is being done by various members of the class of 2012
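The construction in the proof can be checked numerically. Below is a sketch in Python with numpy; the helper name `controllable_realization` and the sample evaluation point are our own, and the test transfer function g(s) = (s + 1)/(s² + 1) is the one worked out later in these notes.

```python
import numpy as np

def controllable_realization(a, E):
    """Build (A, B, C) with G(s) = C (sI - A)^{-1} B from the construction
    in the proof: a = [a0, ..., a_{n-1}] are the coefficients of the monic
    LCM p(s); E = [E0, ..., E_{n-1}] are the q x m numerator blocks."""
    n = len(a)
    q, m = E[0].shape
    A = np.zeros((n * m, n * m))
    # Superdiagonal identity blocks.
    for i in range(n - 1):
        A[i * m:(i + 1) * m, (i + 1) * m:(i + 2) * m] = np.eye(m)
    # Last block row: -a_i * I_m.
    for i in range(n):
        A[(n - 1) * m:, i * m:(i + 1) * m] = -a[i] * np.eye(m)
    B = np.zeros((n * m, m))
    B[(n - 1) * m:, :] = np.eye(m)
    C = np.hstack(E)
    return A, B, C

# Example from later in these notes: g(s) = (s + 1)/(s^2 + 1),
# so p(s) = s^2 + 1 gives a = [1, 0], and s + 1 gives E0 = 1, E1 = 1.
A, B, C = controllable_realization([1.0, 0.0],
                                   [np.array([[1.0]]), np.array([[1.0]])])
s = 2.0 + 1.0j   # arbitrary sample point, our choice
G = C @ np.linalg.inv(s * np.eye(2) - A) @ B
assert np.allclose(G, (s + 1) / (s**2 + 1))
```

The assert at the end confirms that C(sI − A)^{-1}B reproduces g(s) at the sample point.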
Control System Theory

This is the standard controllable realization.

Standard Observable Realization

Let p(s) be the same monic LCM. This time expand G(s) about s = ∞ to get

G(s) = L_0 \frac{1}{s} + L_1 \frac{1}{s^2} + \cdots
Let

A = \begin{bmatrix} 0_q & I_q & 0_q & \cdots & 0_q \\ 0_q & 0_q & I_q & \cdots & 0_q \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0_q & 0_q & 0_q & \cdots & I_q \\ -a_0 I_q & -a_1 I_q & -a_2 I_q & \cdots & -a_{n-1} I_q \end{bmatrix}, \quad
B = \begin{bmatrix} L_0 \\ L_1 \\ \vdots \\ L_{n-1} \end{bmatrix}, \quad
C = \begin{bmatrix} I_q & 0_q & \cdots & 0_q \end{bmatrix}
We have already seen that

C(sI − A)^{-1}B = CB \frac{1}{s} + CAB \frac{1}{s^2} + \cdots

Since

CB = L_0, \quad
CAB = \begin{bmatrix} 0_q & I_q & 0_q & \cdots & 0_q \end{bmatrix} \begin{bmatrix} L_0 \\ L_1 \\ \vdots \\ L_{n-1} \end{bmatrix} = L_1, \quad \ldots, \quad
CA^{n-1}B = L_{n-1},

C(sI − A)^{-1}B and G(s) agree up to the term L_{n-1} \frac{1}{s^n}.
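The Markov-parameter matching CA^k B = L_k can also be checked numerically. The sketch below builds the observable construction above for the scalar example used later in these notes; the helper name `observable_realization` is our own.

```python
import numpy as np

def observable_realization(a, L):
    """Build (A, B, C) from the observable construction: a = [a0, ..., a_{n-1}]
    are the coefficients of p(s); L = [L0, ..., L_{n-1}] are the q x m Markov
    parameters in the expansion G(s) = L0/s + L1/s^2 + ..."""
    n = len(a)
    q = L[0].shape[0]
    A = np.zeros((n * q, n * q))
    for i in range(n - 1):                       # superdiagonal I_q blocks
        A[i * q:(i + 1) * q, (i + 1) * q:(i + 2) * q] = np.eye(q)
    for i in range(n):                           # last block row: -a_i I_q
        A[(n - 1) * q:, i * q:(i + 1) * q] = -a[i] * np.eye(q)
    B = np.vstack(L)                             # stack the Markov parameters
    C = np.hstack([np.eye(q)] + [np.zeros((q, q))] * (n - 1))
    return A, B, C

# g(s) = (s+1)/(s^2+1): the expansion is 1/s + 1/s^2 - 1/s^3 - ...,
# so L0 = 1, L1 = 1 (and L2 = -1).
A, B, C = observable_realization([1.0, 0.0],
                                 [np.array([[1.0]]), np.array([[1.0]])])
assert np.allclose(C @ B, [[1.0]])              # CB     = L0
assert np.allclose(C @ A @ B, [[1.0]])          # CAB    = L1
assert np.allclose(C @ A @ A @ B, [[-1.0]])     # CA^2 B = L2
```

Note that the third assert already illustrates the point proved next: the matching continues past L_{n-1}.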
Now, after a permutation of rows and columns, sI − A becomes block diagonal with q copies of I_n s − A_0 on the diagonal, so

det(sI − A) = \det \begin{bmatrix} I_n s - A_0 & 0_n & \cdots & 0_n \\ 0_n & I_n s - A_0 & \cdots & 0_n \\ \vdots & \vdots & \ddots & \vdots \\ 0_n & 0_n & \cdots & I_n s - A_0 \end{bmatrix} = \det(I_n s - A_0)^q

where

A_0 = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_0 & -a_1 & -a_2 & \cdots & -a_{n-1} \end{bmatrix}

(Verify this!)
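The determinant identity is easy to verify numerically. A minimal sketch, with q = 2 and p(s) = s² + 1 chosen by us:

```python
import numpy as np

# Check that det(sI - A) = det(s I_n - A0)^q = p(s)^q for the block
# companion matrix A, here with p(s) = s^2 + 1 and q = 2 (our choices).
q, a = 2, [1.0, 0.0]                  # p(s) = s^2 + a1*s + a0
n = len(a)
A = np.zeros((n * q, n * q))
for i in range(n - 1):                # superdiagonal I_q blocks
    A[i * q:(i + 1) * q, (i + 1) * q:(i + 2) * q] = np.eye(q)
for i in range(n):                    # last block row: -a_i I_q
    A[(n - 1) * q:, i * q:(i + 1) * q] = -a[i] * np.eye(q)
s = 1.3                               # arbitrary sample point
lhs = np.linalg.det(s * np.eye(n * q) - A)
p = s**n + sum(a[i] * s**i for i in range(n))
assert np.isclose(lhs, p**q)
```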

Since p(A_0) = 0 (Cayley–Hamilton), we also have p(A) = 0.
p(s)G(s) = (s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0)\left(L_0 \frac{1}{s} + L_1 \frac{1}{s^2} + \cdots\right) \qquad (∗)

Since p(s)G(s) is a polynomial matrix, the coefficient of \frac{1}{s} in (∗) must vanish:

a_0 L_0 + a_1 L_1 + \cdots + a_{n-1} L_{n-1} + L_n = 0
At the same time, since p(A) = 0,

a_0 CB + a_1 CAB + \cdots + a_{n-1} CA^{n-1}B + CA^n B = 0

Comparing the two relations gives L_n = CA^n B. This argument can be repeated, since the coefficient of \frac{1}{s^2} in p(s)G(s) must also vanish, and inductively it follows that CA^k B = L_k for all values of k.
This is the standard observable realization.
EXAMPLE:

g(s) = \frac{s + 1}{s^2 + 1}
The standard controllable realization is

A = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 1 \end{bmatrix}

since p(s) = s^2 + 1 gives a_0 = 1, a_1 = 0, and the numerator s + 1 gives E_0 = 1, E_1 = 1.
Check:

sI − A = \begin{bmatrix} s & -1 \\ 1 & s \end{bmatrix}, \quad
(sI − A)^{-1} = \frac{1}{s^2 + 1}\begin{bmatrix} s & 1 \\ -1 & s \end{bmatrix}

C(sI − A)^{-1}B = \frac{1}{s^2 + 1}\begin{bmatrix} 1 & 1 \end{bmatrix}\begin{bmatrix} s & 1 \\ -1 & s \end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix} = \frac{1}{s^2 + 1}\begin{bmatrix} 1 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ s \end{bmatrix} = \frac{s + 1}{s^2 + 1}
The standard observable realization is

A = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 \end{bmatrix}, \quad b = \begin{bmatrix} L_0 \\ L_1 \end{bmatrix}
Find the L_k's by long division of s + 1 by s^2 + 1. On dividing we obtain the quotient

\frac{1}{s} + \frac{1}{s^2} + \cdots

so

b = \begin{bmatrix} 1 \\ 1 \end{bmatrix}

Check:

C(sI − A)^{-1}b = \frac{1}{s^2 + 1}\begin{bmatrix} 1 & 0 \end{bmatrix}\begin{bmatrix} s & 1 \\ -1 & s \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix} = \frac{1}{s^2 + 1}\begin{bmatrix} 1 & 0 \end{bmatrix}\begin{bmatrix} s + 1 \\ s - 1 \end{bmatrix} = \frac{s + 1}{s^2 + 1}
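The long division can be mechanized. Below is a sketch of the division step as a recursion on the numerator remainder; the helper name `markov_parameters` is our own.

```python
def markov_parameters(num, den, count):
    """First `count` coefficients L0, L1, ... of num(s)/den(s) expanded in
    powers of 1/s, by long division.  num, den are coefficient lists with
    the highest power first; den is monic with deg(den) > deg(num)."""
    n = len(den) - 1
    # Pad the numerator out to degree n - 1.
    r = [0.0] * (n - len(num)) + [float(c) for c in num]
    L = []
    for _ in range(count):
        L.append(r[0])   # leading remainder coefficient is the next L_k
        # New remainder: s*r(s) - L_k*den(s) (the s^n terms cancel).
        r = ([r[i + 1] - L[-1] * den[i + 1] for i in range(n - 1)]
             + [-L[-1] * den[n]])
    return L

# g(s) = (s+1)/(s^2+1) expands as 1/s + 1/s^2 - 1/s^3 - 1/s^4 + ...
print(markov_parameters([1, 1], [1, 0, 1], 4))  # [1.0, 1.0, -1.0, -1.0]
```

The first two coefficients are the b = [1; 1] used in the check above.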
General Remarks on Stability:

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} \lambda & 1 \\ 0 & \lambda \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \qquad (∗)

e^{\begin{bmatrix} \lambda & 1 \\ 0 & \lambda \end{bmatrix} t} = \begin{bmatrix} e^{\lambda t} & t e^{\lambda t} \\ 0 & e^{\lambda t} \end{bmatrix}
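The matrix exponential formula for this Jordan block can be verified numerically. A minimal sketch using a truncated Taylor series (the series helper and the sample values λ = −0.5, t = 2 are our own choices):

```python
import numpy as np

def expm_series(M, terms=60):
    """Matrix exponential by truncated Taylor series (adequate for small M)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k       # M^k / k!
        out = out + term
    return out

lam, t = -0.5, 2.0                # sample values, our choice
J = np.array([[lam, 1.0], [0.0, lam]])
# Closed form from the notes: e^{Jt} = e^{lam t} [[1, t], [0, 1]]
expected = np.exp(lam * t) * np.array([[1.0, t], [0.0, 1.0]])
assert np.allclose(expm_series(J * t), expected)
```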
When λ is real, there are three cases of interest.

Case 1: λ < 0

\lim_{t \to \infty} e^{\lambda t} = 0

\lim_{t \to \infty} t e^{\lambda t} = \lim_{t \to \infty} \frac{t}{e^{-\lambda t}} = \lim_{t \to \infty} \frac{1}{-\lambda e^{-\lambda t}} \quad \text{(l'Hôpital)} = 0

\lambda < 0 \implies e^{\begin{bmatrix} \lambda & 1 \\ 0 & \lambda \end{bmatrix} t} \to \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} \text{ as } t \to \infty

\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} is an equilibrium for the equation (∗). It is asymptotically stable in the sense that for any initial condition,

\begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} \to \begin{bmatrix} 0 \\ 0 \end{bmatrix}
In general, if the eigenvalues of A are real and < 0, then e^{At} → 0 as t → ∞. More generally, if the real parts of the eigenvalues of A are < 0, then e^{At} → 0 as t → ∞, so the origin is asymptotically stable.

We call a linear system ẋ = Ax for which the real parts of the eigenvalues of A are < 0 an asymptotically stable system.

Left half plane eigenvalues of the matrix A correspond to left half plane poles of (sI − A)^{-1}.
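This eigenvalue criterion is straightforward to apply numerically. A minimal sketch (the helper name `is_asymptotically_stable` is our own):

```python
import numpy as np

def is_asymptotically_stable(A):
    """Hurwitz test: every eigenvalue of A has negative real part."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

# Upper triangular with eigenvalues -1, -2: asymptotically stable.
assert is_asymptotically_stable(np.array([[-1.0, 1.0], [0.0, -2.0]]))
# The running example A from the realization section has eigenvalues +-j,
# which are not in the open left half plane.
assert not is_asymptotically_stable(np.array([[0.0, 1.0], [-1.0, 0.0]]))
```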
Case 2: λ > 0

e^{\begin{bmatrix} \lambda & 1 \\ 0 & \lambda \end{bmatrix} t} = \begin{bmatrix} e^{\lambda t} & t e^{\lambda t} \\ 0 & e^{\lambda t} \end{bmatrix} \to \begin{bmatrix} \infty & \infty \\ 0 & \infty \end{bmatrix} \text{ as } t \to \infty

In general, if A has any eigenvalue with positive real part, there are trajectories x(t) such that ||x(t)|| → ∞ as t → ∞.
Case 3: λ = 0

e^{\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} t} = \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix} \to \begin{bmatrix} 1 & \infty \\ 0 & 1 \end{bmatrix} \text{ as } t \to \infty

In general, if A has an eigenvalue with zero real part appearing in a Jordan block of size larger than one, there are again trajectories x(t) such that ||x(t)|| → ∞ as t → ∞.
Stability depends on the poles of the transfer function C(sI − A)^{-1}B.
The simplest feedback control design is to use output feedback to modify the system dynamics:

ẋ = Ax + Bu
y = Cx

Let u = Ky = KCx for an appropriate gain K. Then

ẋ = (A + BKC)x

Is the closed loop system stable?
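The stability question is answered by the eigenvalues of A + BKC. A minimal sketch on the running example (s+1)/(s²+1); the gain value K = −1 is our own, arbitrary choice:

```python
import numpy as np

# (A, B, C) is the controllable realization of (s+1)/(s^2+1) from above;
# the open loop poles are at +-j (not asymptotically stable).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 1.0]])
K = np.array([[-1.0]])        # hypothetical scalar output gain
Acl = A + B @ K @ C           # closed loop dynamics: x' = (A + BKC) x
eigs = np.linalg.eigvals(Acl)
# For this gain the closed loop poles move into the open left half plane.
assert np.all(eigs.real < 0)
```

Here Acl = [[0, 1], [−2, −1]], whose characteristic polynomial s² + s + 2 has roots with real part −1/2, so this particular output gain stabilizes the loop.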
Consider a feedback loop in which the plant G_1 is driven by u plus the fed-back output G_2 y. The control input is u + G_2 y, so

y = G_1(u + G_2 y)

Solving for y,

y = (I − G_1 G_2)^{-1} G_1 u

Supposing G_2(s) ≡ K,

y = (I − G_1 K)^{-1} G_1 u
Suppose G_1(s) comes from a linear, time-invariant system,

G_1 = C(sI − A)^{-1}B

Then consider replacing the control with a feedback law,

u → u + KCx

The modified dynamics are

ẋ = (A + BKC)x + Bu
y = Cx

The transfer function relationship is

Y(s) = C(sI − A − BKC)^{-1}B U(s)

How does this relate to (I − G_1 K)^{-1} G_1?
From what we saw above, the closed loop transfer function is

(I − C(sI − A)^{-1}BK)^{-1} C(sI − A)^{-1}B

When C = I (i.e. when all states are observed),

(I − (sI − A)^{-1}BK)^{-1}(sI − A)^{-1}B = [(sI − A)\{I − (sI − A)^{-1}BK\}]^{-1}B
= [(sI − A) − BK]^{-1}B
= (sI − A − BK)^{-1}B
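The chain of identities can be spot-checked numerically at a sample frequency. A minimal sketch; the random matrices and the sample point s are our own, arbitrary choices:

```python
import numpy as np

# Verify (I - (sI-A)^{-1} B K)^{-1} (sI-A)^{-1} B == (sI - A - BK)^{-1} B
# at one sample point s, for randomly chosen A, B, K (seeded for repeatability).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 2))
K = rng.standard_normal((2, 3))
s = 1.5 + 0.7j
R = np.linalg.inv(s * np.eye(3) - A)              # (sI - A)^{-1}
lhs = np.linalg.inv(np.eye(3) - R @ B @ K) @ R @ B
rhs = np.linalg.inv(s * np.eye(3) - A - B @ K) @ B
assert np.allclose(lhs, rhs)
```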
which is what we obtained from the time domain representation. In this case, the poles of the system tell the stability story, and these are accessible from the system outputs. The closed loop poles are the zeros of det(I − G(s)K); as K varies these constitute the root locus.