V. Laplace Transform and System Analysis

Summer 2008
Signals & Systems
S.F. Hsieh
1 Introduction
Why Laplace transform?
• Generalization of the Fourier transform: e^{st} = e^{(σ+jω)t}.
  e^{2t} u(t) is a causal but unstable signal. Its Fourier transform is NOT defined (does not exist); however, its Laplace transform is 1/(s−2), with ROC: {s | Re(s) > 2}.
• System stability analysis: poles & zeros
• Transient responses: Solving differential equations with boundary conditions.
2 The Laplace Transform

2.1 Bilateral (two-sided) Laplace Transform: LT±
X(s) ≡ ∫_{t:−∞}^{∞} x(t) e^{−st} dt,   where s ≡ σ + jω
1. X(s) is the Fourier transform of x(t) e^{−σt}; in other words, the Fourier transform X(ω) is a special case of the (bilateral) Laplace transform: X_{F.T.}(ω) = X_{L.T.}(s)|_{s=jω}.
2. Region of Convergence (ROC) = {s | X(s) converges}.
[Ex] The Laplace transform of x(t) = e^{−αt} u(t) is

X(s) ≡ ∫_{t:−∞}^{∞} x(t) e^{−st} dt = ∫_{0}^{∞} e^{−(s+α)t} dt = e^{−(s+α)t} / (−(s+α)) |_{0}^{∞}
     = 1/(s+α) + lim_{t→∞} e^{−(σ+α)t} e^{−jωt} / (−(s+α))
     = ∞,         if σ < −α
     = 1/(s+α),   if σ > −α

The Laplace transform of x(t) converges to X(s) = 1/(s+α) only when s falls into the ROC: {Re(s) = σ > −α}. Therefore, a complete expression of the Laplace transform X(s) should specify its ROC:

x(t) = e^{−αt} u(t) ←→ X(s) = 1/(s+α),   ROC = {s | Re(s) > −α}

(A sympy check of this pair appears at the end of this subsection.)
3. Inverse Laplace transform:

x(t) = (1/(j2π)) ∫_{σ−j∞}^{σ+j∞} X(s) e^{st} ds

[Pf] Because x(t) ←(Laplace)→ X(s) can be considered as x(t) e^{−σt} ←(Fourier)→ X(σ + jω), the inverse Fourier transform formula gives

x(t) e^{−σt} = (1/2π) ∫_{ω=−∞}^{∞} X(σ + jω) e^{jωt} dω
x(t) = (1/2π) ∫_{ω=−∞}^{∞} X(σ + jω) e^{(σ+jω)t} dω

Because s = σ + jω, dω = (1/j) ds, so

x(t) = (1/(2πj)) ∫_{s=c−j∞}^{c+j∞} X(s) e^{st} ds,   where (c − j∞, c + j∞) ∈ ROC.
4. Two different time-domain signals can have identical (two-sided) Laplace transforms, but with different ROC's. For example, [Lathi, Ex 4.1, p 341]

x1(t) = e^{−αt} u(t)   ←→ X1(s) = 1/(s+α),   ROC1: Re(s) > −α
x2(t) = −e^{−αt} u(−t) ←→ X2(s) = 1/(s+α),   ROC2: Re(s) < −α

In other words, if we do not specify the ROC, X(s) can have two different inverse Laplace transforms. For the one-sided Laplace transform, the inverse Laplace transform is unique (only causal signals are considered).
[Lathi Ex 4.28] The inverse Laplace transform of X(s) = 1/((s+1)(s+2)) = 1/(s+1) − 1/(s+2) can be

(a) x1(t) = (e^{−t} − e^{−2t}) u(t), a causal signal, if ROC1: Re(s) > −1.
(b) x2(t) = (−e^{−t} + e^{−2t}) u(−t), an anti-causal signal, if ROC2: Re(s) < −2.
(c) x3(t) = −e^{−t} u(−t) − e^{−2t} u(t), if ROC3: −2 < Re(s) < −1.

In general, a causal signal is assumed (1-sided Laplace transform).
5. At this moment you may be skeptical about the uniqueness of Fourier transform pairs. We can prove the uniqueness of the Fourier transform as follows:

Plug X(ω) = ∫_τ x(τ) e^{−jωτ} dτ into the inverse Fourier transform (1/2π) ∫_ω X(ω) e^{+jωt} dω:

(1/2π) ∫_ω X(ω) e^{jωt} dω = (1/2π) ∫_ω ∫_τ x(τ) e^{−jωτ} dτ · e^{jωt} dω
 = ∫_τ x(τ) [ (1/2π) ∫_ω e^{jω(t−τ)} dω ] dτ
 = ∫_τ x(τ) δ(t − τ) dτ = x(t)

where we have used the Fourier transform pair δ(t − τ) ←→ e^{−jωτ}, namely, δ(t − τ) = (1/2π) ∫_ω e^{−jωτ} e^{jωt} dω.
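As a quick cross-check of the example above (x(t) = e^{−αt} u(t)), the following sketch, assuming Python with sympy is available (any CAS would do), computes the Laplace transform and reports the abscissa of convergence.

    import sympy as sp

    t, s = sp.symbols('t s')
    alpha = sp.symbols('alpha', positive=True)

    # One-sided Laplace transform of x(t) = exp(-alpha*t) u(t); sympy integrates
    # from 0 to oo, which matches the causal signals used in these notes.
    X, abscissa, cond = sp.laplace_transform(sp.exp(-alpha*t), t, s)
    print(X)         # expect 1/(alpha + s)
    print(abscissa)  # expect -alpha, i.e. ROC: Re(s) > -alpha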
2.2 Unilateral (single-sided) Laplace Transform: LT+
X(s) ≡ ∫_{t:0−}^{∞} x(t) e^{−st} dt,   where s ≡ σ + jω
1. The single-sided Laplace transform of x(t) is equivalent to the two-sided Laplace transform of
x(t)u(t).
2. The ROC of the single-sided Laplace transform must be to the right-hand side of some vertical line in the s-domain. To justify this, suppose X(s) converges for some s = σ + jω; then X(s) must also converge for any other s = σ + β + jω (β > 0). This is obvious because e^{−(σ+β+jω)t} introduces even more severe exponential decay upon x(t) in the range t ≥ 0. After integration, its Laplace transform ∫_{0−}^{∞} x(t) e^{−(σ+β+jω)t} dt must converge.
3. Given some single-sided Laplace transform X(s)(without the need to specify its ROC), there
exists a unique causal time-domain signal x(t) which is its inverse Laplace transform.
4. Single-sided Laplace transform is very useful in solving differential equations with nonzero initial
conditions.
3 Laplace Transform Properties
1. Linear: ax(t) + by(t) ←→ aX(s) + bY (s).
2. Time-scale: x(at) ←→ (1/|a|) X(s/a). Note that a > 0 in the case of the single-sided Laplace transform.
3. Time-delay: x(t − τ) ←→ X(s) e^{−sτ}, τ > 0.
[Pf] L[x(t − τ)] = ∫_{t:τ}^{∞} x(t − τ) e^{−st} dt = ∫_u x(u) e^{−s(u+τ)} du = e^{−sτ} X(s).
(Q) Find and plot the inverse Laplace transform of (1/s^2)(1 − e^{−s} − e^{−2s} + e^{−3s}).
(Ans) The convolution of the two rectangular pulses u(t) − u(t − 1) and u(t) − u(t − 2).
4. s-shift: e^{−at} x(t) ←→ X(s + a). It is equivalent to the frequency-shift property of the Fourier transform.
[Pf] L[e^{−at} x(t)] = ∫_{0}^{∞} x(t) e^{−(a+s)t} dt = X(s + a).
[Ex] Because u(t) ←→ 1/s and cos θ = (1/2)[e^{jθ} + e^{−jθ}], we have

cos(ω0 t) · u(t) = (1/2)[e^{jω0 t} + e^{−jω0 t}] u(t) ←→ (1/2)[1/(s − jω0) + 1/(s + jω0)] = s/(s^2 + ω0^2)

e^{−αt} · [cos(ω0 t) · u(t)] ←→ s/(s^2 + ω0^2)|_{s←s+α} = (s+α)/((s+α)^2 + ω0^2)
5. t-multiplication: t^n x(t) ←→ (−1)^n d^n X(s)/ds^n. Differentiation in the s-domain is equivalent to t-multiplication in the time domain. This is similar to the Fourier-transform property d/dt x(t) ←→ (jω) · X(ω).
[Pf] d/ds X(s) = d/ds ∫_t x(t) e^{−st} dt = ∫_t (−t) x(t) e^{−st} dt = L[−t x(t)].
[Ex] LT{t e^{−at} u(t)} = −d/ds [1/(s+a)] = 1/(s+a)^2
6. Differentiation
(a) Two-sided Laplace transform:

d/dt x(t) ←(LT±)→ s X(s)

(Pf) From the inverse Laplace transform: d/dt x(t) = (1/(2πj)) ∫ [s X(s)] e^{st} ds = LT±^{−1}{s X(s)}.
(b) One-sided Laplace transform: important in solving differential equations with boundary conditions.

d/dt x(t) ←(LT+)→ s X(s) − x(0−)
d^2/dt^2 x(t) ←(LT+)→ s^2 X(s) − s x(0−) − x′(0−)
d^3/dt^3 x(t) ←(LT+)→ s^3 X(s) − s^2 x(0−) − s x′(0−) − x′′(0−)
...
[Pf]

LT+[d/dt x(t)] = ∫_{0−}^{∞} (dx(t)/dt) e^{−st} dt   · · · integration by parts
 = x(t) e^{−st} |_{0−}^{∞} − ∫_{0−}^{∞} x(t) [−s e^{−st}] dt
 = 0 − x(0−) + s ∫_{0−}^{∞} x(t) e^{−st} dt = s X(s) − x(0−)

LT+[d^2/dt^2 x(t)] = s L[d/dt x(t)] − x′(0−)   · · · d^2/dt^2 = d/dt {d/dt}
 = s [s X(s) − x(0−)] − x′(0−)
 = s^2 X(s) − s x(0−) − x′(0−)

etc . . .
[Lathi Ex 4.7, p 365] Same problem as Ex E4.3, p 363, but with a different approach.

(Sketches of x(t), dx(t)/dt, and d^2 x(t)/dt^2 omitted: x(t) is a piecewise-linear pulse on [0, 3], so its second derivative consists only of impulses.)

d^2 x(t)/dt^2 = δ(t) − 3δ(t − 2) + 2δ(t − 3)
L[d^2 x(t)/dt^2] = 1 − 3e^{−2s} + 2e^{−3s} = s^2 X(s) − s x(0−) − x′(0−) = s^2 X(s)

X(s) = (1 − 3e^{−2s} + 2e^{−3s}) / s^2
7. Integration:

y(t) = ∫_{−∞}^{t} x(τ) dτ ←(LT±)→ X(s)/s

y(t) = ∫_{0−}^{t} x(τ) dτ + y(0−) ←(LT+)→ X(s)/s + y(0−)/s
8. Convolution theorem: x(t) ∗ y(t) ←(LT±, LT+)→ X(s) · Y(s).

LT±[x(t) ∗ y(t)] = ∫_{t=−∞}^{∞} [ ∫_{τ=−∞}^{∞} x(τ) y(t − τ) dτ ] e^{−st} dt
 = ∫_{τ=−∞}^{∞} x(τ) [ ∫_{t=−∞}^{∞} y(t − τ) e^{−st} dt ] dτ,   [·] is the Laplace transform of the delayed signal y(t − τ)
 = ∫_{τ=−∞}^{∞} x(τ) Y(s) e^{−sτ} dτ
 = Y(s) ∫_{τ=−∞}^{∞} x(τ) e^{−sτ} dτ
 = X(s) Y(s)

Note that the multiplication property, which is the dual of convolution, is purposely omitted: x(t) y(t) ⇐⇒ (1/(2πj)) [X1(s) ∗ X2(s)].
[Lathi Ex 4.8] Use the convolution property to find the convolution of e^{at} u(t) and e^{bt} u(t).

L^{−1}[ 1/((s−a)(s−b)) ] = L^{−1}[ (1/(a−b)) (1/(s−a) − 1/(s−b)) ] = (1/(a−b)) (e^{at} − e^{bt}) u(t).

What if a = b?
9. Initial value: If x(t) and d/dt x(t) both have Laplace transforms and lim_{s→∞} s X(s) also exists, then

lim_{s→∞} s X(s) = x(0+)   · · · initial value

[Ex] Let y(t) = (1 + e^{−2t}) u(t) ←→ 2(s+1)/(s(s+2)) = Y(s). Check that lim_{s→∞} s Y(s) = 2 equals y(0+) = 1 + 1 = 2. (A numerical check of this and the final-value example appears at the end of this section.)
[Pf]

d/dt x(t) ←(LT+)→ s X(s) − x(0−)

s X(s) − x(0−) = ∫_{0−}^{∞} (d/dt x(t)) e^{−st} dt
 = ∫_{0−}^{0+} (d/dt x(t)) e^{−st} dt + ∫_{0+}^{∞} (d/dt x(t)) e^{−st} dt
 = x(t)|_{0−}^{0+} + ∫_{0+}^{∞} (d/dt x(t)) e^{−st} dt
 = x(0+) − x(0−) + ∫_{0+}^{∞} (d/dt x(t)) e^{−st} dt

s X(s) = x(0+) + ∫_{0+}^{∞} (d/dt x(t)) e^{−st} dt

lim_{s→∞} s X(s) = x(0+) + ∫_{0+}^{∞} (d/dt x(t)) · [lim_{s→∞} e^{−st}] dt = x(0+)
10. Final value: If x(t) and d/dt x(t) both have Laplace transforms and s X(s) (for Re(s) > 0) also exists, then

lim_{s→0} s X(s) = x(∞)   · · · final value

[Ex] Let y(t) = (1 + e^{−2t}) u(t) ←→ 2(s+1)/(s(s+2)) = Y(s). Check that lim_{s→0} s Y(s) = 1 equals y(∞) = 1.
[Pf]

d/dt x(t) ←→ s X(s) − x(0−)

lim_{s→0} [s X(s) − x(0−)] = lim_{s→0} ∫_{0−}^{∞} (d/dt x(t)) e^{−st} dt
 = ∫_{0−}^{∞} (d/dt x(t)) dt

lim_{s→0} [s X(s)] − x(0−) = x(t)|_{0−}^{∞} = lim_{t→∞} x(t) − x(0−)

lim_{s→0} s X(s) = lim_{t→∞} x(t)
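The initial-value and final-value examples in properties 9 and 10 can be checked symbolically. A minimal sketch, assuming Python with sympy:

    import sympy as sp

    s = sp.symbols('s')
    Y = 2*(s + 1) / (s*(s + 2))      # Laplace transform of y(t) = (1 + e^{-2t}) u(t)

    # Initial value theorem: lim_{s->oo} s*Y(s) = y(0+)
    print(sp.limit(s*Y, s, sp.oo))   # expect 2, matching y(0+) = 1 + 1 = 2

    # Final value theorem: lim_{s->0} s*Y(s) = y(oo)
    print(sp.limit(s*Y, s, 0))       # expect 1, matching y(oo) = 1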
4 Some Important Laplace Transform Pairs
1. δ(t) ←→ 1, ∵ ∫_{t:0−}^{∞} δ(t) e^{−st} dt = e^{−st}|_{t=0} = 1.

2. u(t) ←→ 1/s, by the direct Laplace transform formula, or by taking advantage of δ(t) ←→ 1 and the integration property, or from e^{−αt} u(t) ←→ 1/(s+α) (let α = 0).
3. e^{−αt} u(t) ←→ 1/(s+α).
4. cos(ω0 t) u(t) ←→ s/(s^2 + ω0^2). ∵ cos θ = (1/2)[e^{jθ} + e^{−jθ}] and e^{−αt} u(t) ←→ 1/(s+α).

5. sin(ω0 t) u(t) ←→ ω0/(s^2 + ω0^2). ∵ sin θ = (1/(2j))[e^{jθ} − e^{−jθ}] and e^{−αt} u(t) ←→ 1/(s+α).

6. e^{−αt} cos(ω0 t) u(t) ←→ (s+α)/((s+α)^2 + ω0^2). Using pair 4 and the s-shift property: e^{−at} x(t) ←→ X(s + a).

7. e^{−αt} sin(ω0 t) u(t) ←→ ω0/((s+α)^2 + ω0^2). Using pair 5 and the s-shift property: e^{−at} x(t) ←→ X(s + a).
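These pairs are easy to reproduce symbolically. A short sketch, assuming Python with sympy, checks pairs 4 and 6; the printed forms may differ cosmetically from those above:

    import sympy as sp

    t, s = sp.symbols('t s')
    w0, a = sp.symbols('omega0 alpha', positive=True)

    # Pair 4: cos(w0*t) u(t)  <-->  s / (s^2 + w0^2)
    print(sp.laplace_transform(sp.cos(w0*t), t, s, noconds=True))

    # Pair 6: exp(-a*t) cos(w0*t) u(t)  <-->  (s + a) / ((s + a)^2 + w0^2)
    print(sp.simplify(sp.laplace_transform(sp.exp(-a*t)*sp.cos(w0*t), t, s, noconds=True)))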
5 Inverse Laplace Transform
• If H(s) is a rational polynomial,
– long division
– partial fraction expansion
– table lookup (Table 4.1, p. 344, is a list of Laplace transforms)
• If H(s) is not a rational polynomial, perform the Inverse Laplace using the Residue-theorem
techniques in Complex Variables.
1. Assume the Laplace transform X(s) is a rational polynomial as follows:

X(s) = B(s)/A(s) = (bm s^m + b_{m−1} s^{m−1} + · · · + b1 s + b0) / (an s^n + a_{n−1} s^{n−1} + · · · + a1 s + a0)

By expanding A(s) and B(s) into factors, we have

X(s) = bm (s − q1)(s − q2) · · · (s − qm) / [an (s − p1)(s − p2) · · · (s − pn)]

where

• q1, q2, . . . , qm are the roots of the numerator polynomial (repeated roots are possible; there are m roots in all for an m-th order polynomial), which are called the zeros of X(s). (Note that when s = q1, q2, . . . , or qm, X(s) is zero.)
• Similarly, p1, p2, . . . , pn are the roots of the denominator polynomial, which are called the poles of X(s). (Note that when s = p1, p2, . . . , or pn, X(s) diverges.)

2. Long division: if m ≥ n, we have X(s) = D(s) + R(s)/A(s).
3. Partial-fraction expansion [see Lathi, Background B.5, pp 26–37] (if m < n):

(a) If there are no repeated roots in the denominator A(s):

X(s) = B(s)/A(s) = Σ_{i=1}^{n} ci/(s − pi),   where   ci = (s − pi) B(s)/A(s) |_{s=pi}

[Ex] Y(s) = (s^3 + 7s^2 + 18s + 20)/(s^2 + 5s + 6)

• Long division: Y(s) = (s + 2) + (2s + 8)/(s^2 + 5s + 6)
• Partial fraction expansion: Y(s) = s + 2 + A/(s+2) + B/(s+3), where

A = (s + 2) (2s + 8)/((s + 2)(s + 3)) |_{s=−2} = 4
B = (s + 3) (2s + 8)/((s + 2)(s + 3)) |_{s=−3} = −2

Thus, Y(s) = s + 2 + 4/(s+2) − 2/(s+3).
• Table lookup: y(t) = δ′(t) + 2δ(t) + 4e^{−2t} u(t) − 2e^{−3t} u(t).
(b) If there are repeated roots:

[Ex] X(s) = (2s^3 + 8s^2 + 11s + 3)/((s+2)(s+1)^3)

X(s) = A/(s + 2) + B1/(s + 1) + B2/(s + 1)^2 + B3/(s + 1)^3

A  = (s + 2) X(s) |_{s=−2} = 3
B3 = (s + 1)^3 X(s) |_{s=−1} = −2
B2 = (1/1!) d/ds [(s + 1)^3 X(s)] |_{s=−1} = 3
B1 = (1/2!) d^2/ds^2 [(s + 1)^3 X(s)] |_{s=−1} = −1

X(s) = 3/(s + 2) − 1/(s + 1) + 3/(s + 1)^2 − 2/(s + 1)^3
x(t) = [3e^{−2t} − e^{−t} + 3t e^{−t} − t^2 e^{−t}] u(t)
4. Complex conjugate roots:

X(s) = (2s^2 + 6s + 6)/((s + 2)(s^2 + 2s + 2)) = A/(s + 2) + (Bs + C)/(s^2 + 2s + 2)

A = (s + 2) X(s) |_{s=−2} = 1
X(s)|_{s=0} = 3/2 = A/2 + C/2 ⇒ C = 2
lim_{s→∞} s X(s) = A + B = 2 ⇒ B = 1

X(s) = 1/(s + 2) + (s + 2)/(s^2 + 2s + 2)
     = 1/(s + 2) + (s + 1)/((s + 1)^2 + 1) + 1/((s + 1)^2 + 1)

x(t) = [e^{−2t} + e^{−t}(cos t + sin t)] u(t)
5. If the rational polynomial includes factors e^{−t0 s}, rewrite it as

X(s) = (B1(s)/A(s)) e^{−s t1} + (B2(s)/A(s)) e^{−s t2} + (B3(s)/A(s)) e^{−s t3} + · · · ,

followed by partial fraction expansion. Finally,

x(t) = x1(t)|_{t←t−t1} + x2(t)|_{t←t−t2} + x3(t)|_{t←t−t3} + · · · .

[Ex]

X(s) = (s^2 e^{−2s} + e^{−3s}) / (s(s^2 + 3s + 2))
     = [s/(s^2 + 3s + 2)] e^{−2s} + [1/(s(s^2 + 3s + 2))] e^{−3s}
     = [2/(s+2) − 1/(s+1)] e^{−2s} + [0.5/s + 0.5/(s+2) − 1/(s+1)] e^{−3s}

x(t) = 2e^{−2(t−2)} u(t − 2) − e^{−(t−2)} u(t − 2)
     + 0.5 u(t − 3) + 0.5 e^{−2(t−3)} u(t − 3) − e^{−(t−3)} u(t − 3)
6. Use the MATLAB residue command for partial fraction expansion [Lathi, p 355].
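A Python counterpart (a sketch, assuming scipy and sympy are available) applied to the long-division/partial-fraction example Y(s) = (s^3 + 7s^2 + 18s + 20)/(s^2 + 5s + 6) worked above:

    import sympy as sp
    from scipy.signal import residue

    # Coefficients of numerator and denominator in descending powers of s
    r, p, k = residue([1, 7, 18, 20], [1, 5, 6])
    # expect residues 4 at pole -2 and -2 at pole -3 (order may vary),
    # and direct polynomial term k = [1, 2], i.e. s + 2
    print(r, p, k)

    # The same expansion symbolically:
    s = sp.symbols('s')
    print(sp.apart((s**3 + 7*s**2 + 18*s + 20) / (s**2 + 5*s + 6), s))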
6 Applications of Laplace Transform
1. Computation of convolution:
To compute the convolution c(t) = e^{at} u(t) ∗ e^{bt} u(t) in the time domain, take the Laplace transform of both sides: C(s) = 1/((s−a)(s−b)). After partial fraction expansion and the inverse Laplace transform L^{−1}, we have c(t) = (1/(a−b)) (e^{at} − e^{bt}) u(t).
2. Solving differential equations with boundary conditions (did you remember that we skipped this in Sections 2.1–2.4?):

(Flow chart: the input x(t) and the differential equation, together with the initial conditions, are Laplace-transformed; the resulting algebraic equation is solved for Y(s); partial fraction expansion, L^{−1}, and table lookup then give the output y(t).)
[Ex] y′′(t) + 5y′(t) + 6y(t) = x(t), with initial conditions y(0−) = 2 and y′(0−) = −12. Assume the input signal x(t) = u(t). Find the output y(t), t ≥ 0. (A sympy sketch of this example appears at the end of this section.)

(a) Take the Laplace transform of both sides:

s^2 Y(s) − s y(0−) − y′(0−) + 5[s Y(s) − y(0−)] + 6 Y(s) = X(s),

so we have

Y(s) = X(s)/(s^2 + 5s + 6) + [(s + 5) y(0−) + y′(0−)]/(s^2 + 5s + 6) = Yzsr(s) + Yzir(s)

(b) Partial fraction expansion:

Y(s) = Yzsr(s) + Yzir(s) = · · · = (1/6)/s − (1/2)/(s+2) + (1/3)/(s+3) + [3y(0−) + y′(0−)]/(s+2) − [2y(0−) + y′(0−)]/(s+3).

(c) Inverse Laplace transform from table lookup:

y(t) = yzsr(t) + yzir(t)
     = [1/6 − (1/2) e^{−2t} + (1/3) e^{−3t}] u(t) + [(3y(0−) + y′(0−)) e^{−2t} − (2y(0−) + y′(0−)) e^{−3t}] u(t)
     = [1/6 − (13/2) e^{−2t} + (25/3) e^{−3t}] u(t).
3. Electric circuits:

(a) Capacitor:

v(t) = (1/C) ∫_{0−}^{t} i(τ) dτ + v(0−)
KVL: V(s) = (1/(Cs)) I(s) + (1/s) v(0−), by the integration property
KCL: I(s) = Cs V(s) − C v(0−)

(b) Inductor:

v(t) = L d/dt i(t), with initial condition i(0−)
KVL: V(s) = L[s I(s) − i(0−)], by the differentiation property
KCL: I(s) = (1/(Ls)) V(s) + (1/s) i(0−)
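Returning to the differential-equation example above (y′′ + 5y′ + 6y = u(t), y(0−) = 2, y′(0−) = −12), the Laplace-domain steps can be reproduced with a small sketch, assuming Python with sympy (the names Yzsr/Yzir simply mirror the zero-state/zero-input split used above):

    import sympy as sp

    t = sp.symbols('t', positive=True)
    s = sp.symbols('s')

    y0, yp0 = 2, -12                 # initial conditions y(0-), y'(0-)
    X = 1/s                          # input x(t) = u(t)
    A = s**2 + 5*s + 6

    Yzsr = X / A                     # zero-state response
    Yzir = ((s + 5)*y0 + yp0) / A    # zero-input response
    Y = sp.apart(Yzsr + Yzir, s)

    print(Y)   # expect 1/(6*s) - 13/(2*(s + 2)) + 25/(3*(s + 3))
    print(sp.inverse_laplace_transform(Y, s, t))   # expect (1/6 - 13/2 e^{-2t} + 25/3 e^{-3t}) u(t)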
7 System Transfer Functions of LTI Systems

7.1 Transfer function H(s) as an eigenvalue
Just like (e^{jωt}, H(ω)) is an eigenpair for an LTI system, e^{st} can be considered as an eigenfunction of an LTI system, because its output is H(s) · e^{st}, where h(t) ←(LT±)→ H(s).

[Pf]

e^{st} → LTI: h(t) → e^{st} ∗ h(t) = ∫ h(τ) e^{s(t−τ)} dτ = e^{st} · ∫ h(τ) e^{−sτ} dτ = e^{st} · H(s).

Now, by expressing the input x(t) as a weighted integral of e^{st} (namely, the inverse Laplace transform), its corresponding output will be (1/(j2π)) ∫_s H(s) X(s) e^{st} ds = L^{−1}{Y(s)}, thus Y(s) = H(s) · X(s).
[Pf] The input x(t) into an LTI system can be written as (inverse Laplace transform):

x(t) = (1/(2πj)) ∫_{c−j∞}^{c+j∞} X(s) e^{st} ds
     = lim_{∆s→0} Σ_{n=−∞}^{∞} [X(n∆s) · ∆s / (2πj)] · e^{(n∆s)t}

which is a sum of exponentially growing/decaying sinusoids e^{(n∆s)t}. With the eigen-property

e^{(n∆s)t} → LTI: h(t) → H(n∆s) · e^{(n∆s)t}

and the LTI (superposition) property, the output is

y(t) = lim_{∆s→0} Σ_{n=−∞}^{∞} [X(n∆s) · H(n∆s) · ∆s / (2πj)] · e^{(n∆s)t}
     = (1/(2πj)) ∫_{c′−j∞}^{c′+j∞} X(s) H(s) e^{st} ds
     = (1/(2πj)) ∫_{c′−j∞}^{c′+j∞} Y(s) e^{st} ds

Y(s) = X(s) · H(s)

The zero-state response (ZSR) of an LTI system can be written as yzsr(t) = x(t) ∗ h(t). From the convolution property of the Laplace transform, Y(s) = H(s) · X(s), where H(s) = L[h(t)] is called the system transfer function.
7.2 Connection of LTI systems
1. Cascade: H1(s) H2(s).
   (Block diagram: H1(s) followed by H2(s).)

2. Parallel: H1(s) + H2(s).
   (Block diagram: H1(s) and H2(s) share the same input; their outputs are summed.)

3. Feedback: G(s)/(1 + G(s)H(s)).
   (Block diagram: forward path G(s), feedback path H(s), with negative feedback at the input summing junction.)
[Ex] See Lathi, pp 415–420. Suppose G(s) = 10,000.

(a) No feedback: H(s) = 0; the equivalent system gain is T(s) = 10,000.

(b) Negative feedback control: H(s) = 0.01.

T−(s) = G(s)/(1 + G(s)H(s)) =
  10,000/(1 + 100) = 99.01,   if G(s) = 10,000
  20,000/(1 + 200) = 99.50,   if G(s) = 20,000
  ≈ 1/H(s) = 100,             if G(s)H(s) ≫ 1

The equivalent system gain T−(s) is insensitive to variation of the forward gain G(s), at the cost of reduced gain, which can be compensated by cascading stages.
(c) Positive feedback leads to divergence. Suppose H(s) = 9 · 10^{−5}:

T+(s) = G/(1 − GH) =
  10,000/(1 − 0.9) = 100,000,      if G(s) = 10,000
  11,000/(1 − 0.99) = 1,100,000,   if G(s) = 11,000

The equivalent system gain T+(s) is highly sensitive to parameter variations.
(d) An automatic control example:

(Block diagram: input x(t), summing junction with negative feedback of y(t), gain K, plant G(s), output y(t).)

Suppose G(s) = 1/(s(s+8)); find the step response y(t).

T(s) = K G(s)/(1 + K G(s))

Y(s) = X(s) T(s) = (1/s) · [K/(s(s+8))] / [1 + K/(s(s+8))] = K / (s(s^2 + 8s + K))

y(t) =
  [1 − (4t + 1) e^{−4t}] u(t),                 if K = 16, critically damped (repeated poles)
  [1 − (7/6) e^{−t} + (1/6) e^{−7t}] u(t),     if K = 7 < 16, overdamped (real poles)
  [1 + (√5/2) e^{−4t} cos(8t + 153.4°)] u(t),  if K = 80 > 16, underdamped (complex poles)

(Plot: step responses y(t) versus t for K = 7, 16, and 80.)
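The three step responses can also be generated numerically. A sketch, assuming numpy and scipy, that builds the closed-loop transfer function T(s) = K/(s^2 + 8s + K) for each K and computes its unit-step response:

    import numpy as np
    from scipy import signal

    t = np.linspace(0, 3, 600)
    for K in (7, 16, 80):
        # Closed-loop T(s) = K*G/(1 + K*G) with G(s) = 1/(s(s+8))
        sys_cl = signal.TransferFunction([K], [1, 8, K])
        _, y = signal.step(sys_cl, T=t)
        print(f"K={K}: y(3) ~ {y[-1]:.3f}")   # each response settles toward 1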
7.3 How to find the system transfer function H(s)?
Interrelationship among characteristics of an LTI system:

(Diagram: the frequency response H(ω) is obtained from the transfer function H(s) by setting s = jω; H(s) and the impulse response h(t) are related by L{·} and L^{−1}{·}; H(s) also corresponds to a block diagram built from 1/s, gains, and adders ⊕, and to a differential equation in y(t) and x(t).)
1. Given h(t), take the Laplace transform to get H(s).

2. Given a differential equation (LTI):

y(t) + Σ_{i=1}^{n} ai d^i/dt^i y(t) = Σ_{j=0}^{m} bj d^j/dt^j x(t)

After applying the Laplace transform, its transfer function can be expressed as a rational polynomial:

H(s) ≡ Y(s)/X(s) = [Σ_{j=0}^{m} bj s^j] / [1 + Σ_{i=1}^{n} ai s^i]
[Ex] Consider a system equation: d^2/dt^2 y(t) + 5 d/dt y(t) + 6 y(t) = d/dt x(t) + x(t). Apply the Laplace transform:

s^2 Y(s) + 5s Y(s) + 6 Y(s) = s X(s) + X(s),

H(s) ≡ Y(s)/X(s) = (s + 1)/(s^2 + 5s + 6).
3. Block diagram: get the time-domain equation first, and then transform.

[Ex] Consider the following system. (An integrator is preferred over a differentiator, although both are mathematically equivalent, because of its stability; differentiation may result in spikes at jump/discontinuity points.)

(Block diagram: x(t) enters a summing junction with negative feedback; the junction output w(t) drives an integrator; the integrator output is scaled by 5 to produce y(t) and by 2 to form the feedback signal.)

w(t) = x(t) − 2 ∫ w(τ) dτ
y(t) = 5 ∫ w(τ) dτ

Take the Laplace transform (replace ∫ by 1/s):

W(s) = X(s) − (2/s) W(s)
Y(s) = (5/s) W(s)

Cancelling W(s), we have

H(s) ≡ Y(s)/X(s) = 5/(s + 2)   and   h(t) = 5e^{−2t} u(t).
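The W(s)-elimination step in the block-diagram example can be done symbolically. A minimal sketch, assuming Python with sympy:

    import sympy as sp

    s = sp.symbols('s')
    X, W, Y = sp.symbols('X W Y')

    # Laplace-domain equations of the block diagram (integrator replaced by 1/s)
    eqs = [sp.Eq(W, X - (2/s)*W),
           sp.Eq(Y, (5/s)*W)]

    sol = sp.solve(eqs, [W, Y], dict=True)[0]
    print(sp.simplify(sol[Y] / X))   # expect H(s) = 5/(s + 2)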
7.4 Poles and time responses
H(s) = B(s)/A(s) = (bm s^m + b_{m−1} s^{m−1} + · · · + b1 s + b0) / (an s^n + a_{n−1} s^{n−1} + · · · + a1 s + a0)
     = bm (s − q1)(s − q2) · · · (s − qm) / [an (s − p1)(s − p2) · · · (s − pn)]
1. If all poles p1, p2, . . . , pn are simple and real-valued:

H(s) = A1/(s − p1) + A2/(s − p2) + · · · + An/(s − pn)
h(t) = [A1 e^{p1 t} + A2 e^{p2 t} + · · · + An e^{pn t}] u(t)

We find that the locations of the poles determine the decay rate of h(t); as long as the real parts of the poles of H(s) are less than zero, Re(pi) < 0, we can be sure that h(t) will not diverge.
2. If there exist complex-conjugate poles pi = α0 + jω0 = pj*, then

H(s) = A(s − α0)/((s − α0 − jω0)(s − α0 + jω0)) + Bω0/((s − α0 − jω0)(s − α0 + jω0)) + · · ·
     = A(s − α0)/((s − α0)^2 + ω0^2) + Bω0/((s − α0)^2 + ω0^2) + · · ·

h(t) = A e^{α0 t} cos(ω0 t) u(t) + B e^{α0 t} sin(ω0 t) u(t) + · · ·

Similarly, the decay rate of h(t) depends on the locations of the poles: if Re(pi) < 0, then h(t) will not diverge. The oscillation behavior depends on the imaginary parts ω0 of the poles.
3. The situation is similar for repeated roots. Note that e^{α0 t} cos(ω0 t) will be multiplied by a t-polynomial. As t → ∞, the growth due to these t-polynomials is much less significant than the exponential factor e^{α0 t} (with α0 < 0). As a result, the convergence condition is the same as before: Re(poles) < 0.
4. Plots of time responses due to pole locations (figure omitted).
5. We conclude that:

A causal (h(t) = 0, ∀t < 0) LTI system with rational system function H(s) is stable if and only if all of the poles of H(s) lie in the left half of the s-plane, i.e., all of the poles have negative real parts.

Note that

(a) For an LTI system with a rational system function H(s), causality of the system is equivalent to the ROC being the half-plane to the right of the rightmost pole.
(b) An LTI system is stable ⇐⇒ its impulse response is absolutely integrable, ∫ |h(t)| dt < ∞ ⇐⇒ H(ω) converges ⇐⇒ the ROC of H(s) includes the jω-axis.
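The left-half-plane stability test is easy to automate. A minimal sketch, assuming numpy, for the H(s) = (s + 1)/(s^2 + 5s + 6) example used earlier:

    import numpy as np

    den = [1, 5, 6]                  # denominator of H(s) = (s+1)/(s^2 + 5s + 6)
    poles = np.roots(den)            # expect poles at -2 and -3
    stable = np.all(poles.real < 0)  # causal LTI system is stable iff all poles are in the LHP
    print(poles, stable)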
7.5 Transfer function H(s) and frequency response H(ω)

H_{freq response}(ω) = H_{transfer function}(s)|_{s=jω}
1. Pole/zero plot and geometric evaluation of H(ω):
The locations of poles and zeros are important to the frequency response. As s = jω moves along the imaginary axis (ω goes from 0 to ∞), at some ω0 close to a pole the amplitude response |H(ω0)| will increase; similarly, the closer jω0 is to a zero, the more the amplitude response will be reduced.

H(ω) = H(s)|_{s=jω} = c (s − z1)(s − z2) · · · (s − zm) / [(s − p1)(s − p2) · · · (s − pn)] |_{s=jω}
     = c (jω − z1)(jω − z2) · · · (jω − zm) / [(jω − p1)(jω − p2) · · · (jω − pn)]

In the s-domain, this is a ratio of the zero vectors jω − zi to the pole vectors jω − pj:

|H(ω)| = |c| · |jω − z1| · · · |jω − zm| / (|jω − p1| · · · |jω − pn|)   · · · lengths of vectors
∠H(ω) = ∠(jω − z1) + · · · + ∠(jω − zm) − ∠(jω − p1) − · · · − ∠(jω − pn)   · · · sum of angles of zero vectors minus that of pole vectors
2. Examples

(a) H(s) = 1/(s+a), a > 0, is an LPF (low-pass filter).
(b) H(s) = s/(s+a), a > 0, is an HPF (high-pass filter).
(c) H(s) = (s+b)/(s+a), a, b > 0, is also an HPF (high-pass filter).
(d) H(s) = (s−a)/(s+a), a > 0, is an allpass filter:

|H(ω)| = |jω − a| / |jω + a| = 1
∠H(ω) = ∠(jω − a) − ∠(jω + a) = [π − tan^{−1}(ω/a)] − tan^{−1}(ω/a) = π − 2 tan^{−1}(ω/a)
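The low-pass and allpass behavior claimed above can be verified numerically. A sketch, assuming numpy and scipy, that evaluates |H(jω)| for H(s) = 1/(s + a) and for the allpass (s − a)/(s + a) with a = 1:

    import numpy as np
    from scipy import signal

    a = 1.0
    w = np.logspace(-2, 2, 200)

    # Low-pass H(s) = 1/(s + a): magnitude falls off as omega grows
    _, H1 = signal.freqs([1.0], [1.0, a], worN=w)
    print(abs(H1[0]), abs(H1[-1]))        # ~1 at low omega, ~0.01 at omega = 100

    # Allpass H(s) = (s - a)/(s + a): magnitude is 1 at every frequency
    _, H2 = signal.freqs([1.0, -a], [1.0, a], worN=w)
    print(abs(H2).min(), abs(H2).max())   # both ~1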
7.6 Causal and everlasting sinusoidal inputs to LTI systems
1. Everlasting exponential or sinusoidal input (eigen-waveform):

e^{st} → H(s) → e^{st} H(s)

For example, e^{(1+jπ)t} → H(s) = 1/(s+1), ROC: {Re(s) > −1} → e^{(1+jπ)t} · 1/(1+jπ+1).

Similarly, e^{jω0 t} → H(s) = 1/(s+1) → e^{jω0 t} · 1/(1+jω0), and

cos(ω0 t) = (e^{jω0 t} + e^{−jω0 t})/2 → H(s) = 1/(s+1) →
  [e^{jω0 t} · 1/(1+jω0) + e^{−jω0 t} · 1/(1−jω0)]/2 = Re{e^{jω0 t} · 1/(1+jω0)} = |1/(1+jω0)| cos(ω0 t + ∠ 1/(1+jω0)).
2. Causal (truncated) sinusoidal inputs:

x(t) = e^{jω0 t} u(t)
X(s) = 1/(s − jω0)
Y(s) = H(s) · X(s) = B(s)/(A(s) · (s − jω0))
     = k1/(s − p1) + k2/(s − p2) + · · · + kn/(s − pn) + H(ω0)/(s − jω0)   · · · partial fraction expansion

y(t) = [k1 e^{p1 t} + k2 e^{p2 t} + · · · + kn e^{pn t}] u(t)   (transient)
       + H(ω0) e^{jω0 t} u(t)   (steady-state)

As t → ∞ (assuming that all poles lie in the left half of the s-plane), y_steady-state(t) = H(ω0) e^{jω0 t} u(t), which is identical to the output for an everlasting exponential e^{jω0 t}, −∞ < t < ∞.

Similarly, for a truncated real sinusoid cos(ω0 t + θ) u(t), the steady-state output is |H(ω0)| cos[ω0 t + θ + ∠H(ω0)] u(t), which is identical to the output for an everlasting sinusoid cos(ω0 t + θ), −∞ < t < ∞.
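The transient/steady-state split can be observed numerically. A sketch, assuming numpy and scipy, that drives H(s) = 1/(s + 1) with the causal input cos(ω0 t) u(t) and compares the late-time output against |H(ω0)| cos(ω0 t + ∠H(ω0)):

    import numpy as np
    from scipy import signal

    w0 = 2.0
    t = np.linspace(0, 20, 4000)
    x = np.cos(w0*t)                       # causal input cos(w0*t) u(t), simulated for t >= 0

    sys = signal.TransferFunction([1.0], [1.0, 1.0])   # H(s) = 1/(s + 1), pole at s = -1
    _, y, _ = signal.lsim(sys, U=x, T=t)

    H0 = 1.0/(1j*w0 + 1.0)                 # H(w0)
    y_ss = np.abs(H0)*np.cos(w0*t + np.angle(H0))

    # After the transient e^{-t} dies out, the output matches the steady state
    print(np.max(np.abs(y[t > 10] - y_ss[t > 10])))    # expect a very small number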