CONTINUOUS AND DISCRETE
SIGNALS AND SYSTEMS
SECOND EDITION
SAMIR S. SOLIMAN
QUALCOMM Incorporated
San Diego, California
MANDYAM D. SRINATH
Southern Methodist University
Dallas, Texas
Prentice-Hall of India Private Limited
New Delhi - 110 001

This Indian Reprint of CONTINUOUS AND DISCRETE SIGNALS AND SYSTEMS, Second Edition, by Samir S. Soliman and Mandyam D. Srinath. Original U.S. edition published by Prentice-Hall, Inc. (now known as Pearson Education, Inc.), One Lake Street, Upper Saddle River, New Jersey 07458, U.S.A. All rights reserved. No part of this book may be reproduced in any form, by mimeograph or any other means, without permission in writing from the publisher.

The authors and publisher of this book have used their best efforts in preparing this book. These efforts include the development, research, and testing of the theories and programs to determine their effectiveness. The authors and publisher make no warranty of any kind, expressed or implied, with regard to these programs or the documentation contained in this book. The authors and publisher shall not be liable in any event for incidental or consequential damages in connection with, or arising out of, the furnishing, performance, or use of these programs.

Published by Asoke K. Ghosh, Prentice-Hall of India Private Limited, M-97, Connaught Circus, New Delhi-110001, and printed by V.K. Balra at Pearl Offset Press Private Limited, New Delhi-110015.
Contents
PREFACE xiii

1 REPRESENTING SIGNALS 1

1.1 Introduction 1
1.2 Continuous-Time vs. Discrete-Time Signals 2
1.3 Periodic vs. Aperiodic Signals 4
1.4 Energy and Power Signals 7
1.5 Transformations of the Independent Variable 10
    1.5.1 The Shifting Operation, 10
    1.5.2 The Reflection Operation, 13
    1.5.3 The Time-Scaling Operation, 17
1.6 Elementary Signals 19
    1.6.1 The Unit Step Function, 19
    1.6.2 The Ramp Function, 21
    1.6.3 The Sampling Function, 22
    1.6.4 The Unit Impulse Function, 22
    1.6.5 Derivatives of the Impulse Function, 30
1.7 Other Types of Signals 32
1.8 Summary 33
1.9 Checklist of Important Terms 35
1.10 Problems 35
2 CONTINUOUS-TIME SYSTEMS 41

2.1 Introduction 41
2.2 Classification of Continuous-Time Systems 42
    2.2.1 Linear and Nonlinear Systems, 42
    2.2.2 Time-Varying and Time-Invariant Systems, 46
    2.2.3 Systems with and without Memory, 47
    2.2.4 Causal Systems, 48
    2.2.5 Invertibility and Inverse Systems, 50
    2.2.6 Stable Systems, 51
2.3 Linear Time-Invariant Systems 52
    2.3.1 The Convolution Integral, 52
    2.3.2 Graphical Interpretation of Convolution, 58
2.4 Properties of Linear, Time-Invariant Systems 64
    2.4.1 Memoryless LTI Systems, 64
    2.4.2 Causal LTI Systems, 64
    2.4.3 Invertible LTI Systems, 65
    2.4.4 Stable LTI Systems, 65
2.5 Systems Described by Differential Equations 67
    2.5.1 Linear, Constant-Coefficient Differential Equations, 67
    2.5.2 Basic System Components, 68
    2.5.3 Simulation Diagrams for Continuous-Time Systems, 70
    2.5.4 Finding the Impulse Response, 73
2.6 State-Variable Representation 76
    2.6.1 State Equations, 77
    2.6.2 Time-Domain Solution of the State Equations, 78
    2.6.3 State Equations in First Canonical Form, 84
    2.6.4 State Equations in Second Canonical Form, 87
    2.6.5 Stability Considerations, 91
2.7 Summary 94
2.8 Checklist of Important Terms 96
2.9 Problems 96
3 FOURIER SERIES 106

3.1 Introduction 106
3.2 Orthogonal Representations of Signals 107
3.3 The Exponential Fourier Series 112
3.4 Dirichlet Conditions 122
3.5 Properties of Fourier Series 125
    3.5.1 Least Squares Approximation Property, 125
    3.5.2 Effects of Symmetry, 127
    3.5.3 Linearity, 129
    3.5.4 Product of Two Signals, 130
    3.5.5 Convolution of Two Signals, 131
    3.5.6 Parseval's Theorem, 132
    3.5.7 Shift in Time, 133
    3.5.8 Integration of Periodic Signals, 134
3.6 Systems with Periodic Inputs 135
3.7 The Gibbs Phenomenon 142
3.8 Summary 145
3.9 Checklist of Important Terms 148
3.10 Problems 148
3.11 Computer Problems 160
4 THE FOURIER TRANSFORM 162

4.1 Introduction 162
4.2 The Continuous-Time Fourier Transform 163
    4.2.1 Development of the Fourier Transform, 163
    4.2.2 Existence of the Fourier Transform, 165
    4.2.3 Examples of the Continuous-Time Fourier Transform, 166
4.3 Properties of the Fourier Transform 171
    4.3.1 Linearity, 171
    4.3.2 Symmetry, 173
    4.3.3 Time Shifting, 175
    4.3.4 Time Scaling, 175
    4.3.5 Differentiation, 177
    4.3.6 Energy of Aperiodic Signals, 179
    4.3.7 Convolution, 181
    4.3.8 Duality, 184
    4.3.9 Modulation, 185
4.4 Applications of the Fourier Transform 190
    4.4.1 Amplitude Modulation, 190
    4.4.2 Multiplexing, 192
    4.4.3 The Sampling Theorem, 194
    4.4.4 Signal Filtering, 200
4.5 Duration-Bandwidth Relationships 204
    4.5.1 Definitions of Duration and Bandwidth, 204
    4.5.2 The Uncertainty Principle, 208
4.6 Summary 211
4.7 Checklist of Important Terms 212
4.8 Problems 212
5 THE LAPLACE TRANSFORM 224

5.1 Introduction 224
5.2 The Bilateral Laplace Transform 225
5.3 The Unilateral Laplace Transform 228
5.4 Bilateral Transforms Using Unilateral Transforms 229
5.5 Properties of the Unilateral Laplace Transform 231
    5.5.1 Linearity, 232
    5.5.2 Time Shifting, 232
    5.5.3 Shifting in the s Domain, 233
    5.5.4 Time Scaling, 234
    5.5.5 Differentiation in the Time Domain, 234
    5.5.6 Integration in the Time Domain, 237
    5.5.7 Differentiation in the s Domain, 238
    5.5.8 Modulation, 239
    5.5.9 Convolution, 240
    5.5.10 Initial-Value Theorem, 243
    5.5.11 Final-Value Theorem, 244
5.6 The Inverse Laplace Transform 246
5.7 Simulation Diagrams for Continuous-Time Systems 250
5.8 Applications of the Laplace Transform 257
    5.8.1 Solution of Differential Equations, 257
    5.8.2 Application to RLC Circuit Analysis, 258
    5.8.3 Application to Control, 260
5.9 State Equations and the Laplace Transform 263
5.10 Stability in the s Domain 266
5.11 Summary 268
5.12 Checklist of Important Terms 270
5.13 Problems 270
6 DISCRETE-TIME SYSTEMS 278

6.1 Introduction 278
    6.1.1 Classification of Discrete-Time Signals, 279
    6.1.2 Transformations of the Independent Variable, 281
6.2 Elementary Discrete-Time Signals 282
    6.2.1 Discrete Impulse and Step Functions, 283
    6.2.2 Exponential Sequences, 284
6.3 Discrete-Time Systems 287
6.4 Periodic Convolution 294
6.5 Difference-Equation Representation of Discrete-Time Systems 298
    6.5.1 Homogeneous Solution of the Difference Equation, 299
    6.5.2 The Particular Solution, 302
    6.5.3 Determination of the Impulse Response, 305
6.6 Simulation Diagrams for Discrete-Time Systems 306
6.7 State-Variable Representation of Discrete-Time Systems 310
    6.7.1 Solution of State-Space Equations, 313
    6.7.2 Impulse Response of Systems Described by State Equations, 316
6.8 Stability of Discrete-Time Systems 316
6.9 Summary 318
6.10 Checklist of Important Terms 320
6.11 Problems 320
7 FOURIER ANALYSIS OF DISCRETE-TIME SYSTEMS 329

7.1 Introduction 329
7.2 Fourier-Series Representation of Discrete-Time Periodic Signals 331
7.3 The Discrete-Time Fourier Transform 340
7.4 Properties of the Discrete-Time Fourier Transform 345
    7.4.1 Periodicity, 345
    7.4.2 Linearity, 345
    7.4.3 Time and Frequency Shifting, 345
    7.4.4 Differentiation in Frequency, 346
    7.4.5 Convolution, 346
    7.4.6 Modulation, 350
    7.4.7 Fourier Transform of Discrete-Time Periodic Sequences, 350
7.5 Fourier Transform of Sampled Continuous-Time Signals 351
    7.5.1 Reconstruction of Sampled Signals, 356
    7.5.2 Sampling-Rate Conversion, 359
    7.5.3 A/D and D/A Conversion, 364
7.6 Summary 367
7.7 Checklist of Important Terms 369
7.8 Problems 369
8 THE Z-TRANSFORM 375

8.1 Introduction 375
8.2 The Z-Transform 376
8.3 Convergence of the Z-Transform 378
8.4 Properties of the Z-Transform 383
    8.4.1 Linearity, 385
    8.4.2 Time Shifting, 386
    8.4.3 Frequency Scaling, 387
    8.4.4 Differentiation with Respect to z, 388
    8.4.5 Initial Value, 389
    8.4.6 Final Value, 389
    8.4.7 Convolution, 390
8.5 The Inverse Z-Transform 392
    8.5.1 Inversion by a Power-Series Expansion, 394
    8.5.2 Inversion by Partial-Fraction Expansion, 395
8.6 Z-Transfer Functions of Causal Discrete-Time Systems 399
8.7 Z-Transform Analysis of State-Variable Systems 402
8.8 Relation Between the Z-Transform and the Laplace Transform 410
8.9 Summary 411
8.10 Checklist of Important Terms 414
8.11 Problems 414
9 THE DISCRETE FOURIER TRANSFORM 419

9.1 Introduction 419
9.2 The Discrete Fourier Transform and Its Inverse 421
9.3 Properties of the DFT 422
    9.3.1 Linearity, 422
    9.3.2 Time Shifting, 422
    9.3.3 Alternative Inversion Formula, 423
    9.3.4 Time Convolution, 423
    9.3.5 Relation to the Discrete-Time Fourier and Z-Transforms, 424
    9.3.6 Matrix Interpretation of the DFT, 425
9.4 Linear Convolution Using the DFT 426
9.5 Fast Fourier Transforms 428
    9.5.1 The Decimation-in-Time Algorithm, 429
    9.5.2 The Decimation-in-Frequency Algorithm, 433
9.6 Spectral Estimation of Analog Signals Using the DFT 436
9.7 Summary 445
9.8 Checklist of Important Terms 448
9.9 Problems 448
10 DESIGN OF ANALOG AND DIGITAL FILTERS 452

10.1 Introduction 452
10.2 Frequency Transformations 455
10.3 Design of Analog Filters 457
    10.3.1 The Butterworth Filter, 458
    10.3.2 The Chebyshev Filter, 462
10.4 Digital Filters 468
    10.4.1 Design of IIR Digital Filters Using Impulse Invariance, 469
    10.4.2 IIR Design Using the Bilinear Transformation, 473
    10.4.3 FIR Filter Design, 475
    10.4.4 Computer-Aided Design of Digital Filters, 481
10.5 Summary 482
10.6 Checklist of Important Terms 483
10.7 Problems 483

APPENDIX A COMPLEX NUMBERS 485

A.1 Definition 485
A.2 Arithmetic Operations 487
    A.2.1 Addition and Subtraction, 487
    A.2.2 Multiplication, 487
    A.2.3 Division, 488
A.3 Powers and Roots of Complex Numbers 489
A.4 Inequalities 490
APPENDIX B MATHEMATICAL RELATIONS 491

B.1 Trigonometric Identities 491
B.2 Exponential and Logarithmic Functions 492
B.3 Special Functions 493
    B.3.1 Gamma Functions, 493
    B.3.2 Incomplete Gamma Functions, 494
    B.3.3 Beta Functions, 494
B.4 Power-Series Expansion 494
B.5 Sums of Powers of Natural Numbers 495
    B.5.1 Sums of Binomial Coefficients, 496
    B.5.2 Series of Exponentials, 496
B.6 Definite Integrals 496
B.7 Indefinite Integrals 498

APPENDIX C ELEMENTARY MATRIX THEORY 502

C.1 Basic Definition 502
C.2 Basic Operations 503
    C.2.1 Matrix Addition, 503
    C.2.2 Differentiation and Integration, 503
    C.2.3 Matrix Multiplication, 503
C.3 Special Matrices 504
C.4 The Inverse of a Matrix 506
C.5 Eigenvalues and Eigenvectors 507
C.6 Functions of a Matrix 508

APPENDIX D PARTIAL FRACTIONS 512

D.1 Case I: Nonrepeated Linear Factors 513
D.2 Case II: Repeated Linear Factors 514
D.3 Case III: Nonrepeated Irreducible Second-Degree Factors 515
D.4 Case IV: Repeated Irreducible Second-Degree Factors 517

BIBLIOGRAPHY 519

INDEX 521
Preface
The second edition of Continuous and Disqete Signak and Systems is a modified ver.
sion of the fint edition based on our experience in using it as a textbook in the intro'
ductory course on signals and systems at Southern Methodist Universily, as well as the
coEments of numerous colleagues who have used the book at other universities. The
result, we hope, is a book that provides an introductory, but comprehensive treatment
of the subjeci of continuous and disqrete.time signals and systems, Some changes that
we have made to enhance the quality of the book is to move the section on orthoSo'
nal representations of signals from Chapter I to the beginning of Chapter 3 on Fourier
serles, which permlts us to treat Fourier series as a epecial case of more general repre'
sentations. Oiher features are the addition of sections on practical reconstruction fil'
tera, rampling-rate conversion, and A/D and D/A converters to Chapter 7, We have
aleo added reveral problems in various chapters, emphasizing comPuter usage. How'
ever, we have not suggested or requlred the use of any specific mathematiQal software
be left to the preference of the lnstructor,
packages
-Overall, as we feel that this choice should
about a third of the problems and about a fifth of the examples in the book
have been changed,
As noted in the first edition, the aim of building complex systems that perform sophisticated tasks imposes on engineering students a need to enhance their knowledge of signals and systems, so that they are able to use effectively the rich variety of analysis and synthesis techniques that are available. Thus signals and systems is a core course in the electrical engineering curriculum in most schools. In writing this book, we have tried to present the most widely used techniques of signal and system analysis in an appropriate fashion for instruction at the junior or senior level in electrical engineering. The concepts and techniques that form the core of the book are of fundamental importance and should prove useful also to engineers wishing to update or extend their understanding of signals and systems through self-study.
The book is divided into two major parts. In the first part, a comprehensive treatment of continuous-time signals and systems is presented. In the second part, the results are extended to discrete-time signals and systems. In our experience, we have found that covering both continuous-time and discrete-time systems together frequently confuses students: they often are not clear as to whether a particular concept or technique applies to continuous-time or discrete-time systems, or both. The result is that they often use solution techniques that simply do not apply to particular problems. Since most students are familiar with continuous-time signals and systems from the basic courses leading up to this course, they are able to follow the development of the theory and analysis of continuous-time systems without difficulty. Once they have become familiar with this material, which is covered in the first five chapters, students should be ready to handle discrete-time signals and systems.

The book is organized such that all the chapters are distinct but closely related, with smooth transitions between chapters, thereby providing considerable flexibility in course design. By appropriate choice of material, the book can be used as a text in several courses, such as transform theory (Chapters 1, 3, 4, 5, 7, and 8), continuous-time signals and systems (Chapters 1, 2, 3, 4, and 5), discrete-time signals and systems (Chapters 6, 7, 8, and 9), and signals and systems: continuous and discrete (Chapters 1, 2, 3, 4, 6, 7, and 8). We have been using the book at Southern Methodist University for a one-semester course covering both continuous-time and discrete-time systems, and it has proved successful.

Normally, a signals and systems course is taught in the third year of a four-year undergraduate curriculum. Although the book is designed to be self-contained, a knowledge of calculus through integration of trigonometric functions, as well as some knowledge of differential equations, is presumed. A prior exposure to matrix algebra as well as a course in circuit analysis is preferable but not necessary. These prerequisite skills should be mastered by all electrical engineering students by their junior year. No prior experience with system analysis is required. While we use mathematics extensively, we have done so, not rigorously, but in an engineering context. We use examples extensively to illustrate the theoretical material in an intuitive manner.
As with all subjects involving problem solving, we feel that it is imperative that a student sees many solved problems related to the material covered. We have included a large number of examples that are worked out in detail to illustrate concepts and to show the student the application of the theory developed in the text. In order to make the student aware of the wide range of applications of the principles that are covered, applications with practical significance are mentioned. These applications are selected to illustrate key concepts, stimulate interest, and bring out connections with other branches of electrical engineering.

It is well recognized that the student does not fully understand a subject of this nature unless he or she is given the opportunity to work out problems in using and applying the basic tools that are developed in each chapter. This not only reinforces the understanding of the subject matter, but, in some cases, allows for the extension of various concepts discussed in the text. In certain cases, even new material is introduced via the problem sets. Consequently, over 260 end-of-chapter problems have been included. These problems are of various types, some being straightforward applications of the basic ideas presented in the chapters, and are included to ensure that the student understands the material fully. Some are moderately difficult, and other problems require that the student apply the theory he or she learned in the chapter to problems of practical importance.

The relative amount of "Design" work in various courses is always a concern for the engineering faculty. The inclusion in this text of analog- and digital-filter design as well as other design-related material is in direct response to that concern.

At the end of each chapter, we have included an item-by-item summary of all the important concepts and formulas covered in that chapter, as well as the checklist of all terms discussed. This list serves as a reminder to the student of material that deserves special attention.
Throughout the book, the emphasis is on linear time-invariant systems. The focus in Chapter 1 is on signals. This material, which is basic to the remainder of the book, considers the mathematical representation of signals. In this chapter, we cover a variety of subjects such as periodic signals, energy and power signals, transformations of the independent variable, and elementary signals.

Chapter 2 is devoted to the time-domain characterization of continuous-time (CT) linear time-invariant (LTI) systems. The chapter starts with the classification of continuous-time systems and then introduces the impulse-response characterization and the convolution integral. This is followed by a discussion of systems characterized by linear constant-coefficient differential equations. Simulation diagrams for such systems are presented and used as a stepping stone to introduce the state-variable concept. The chapter concludes with a discussion of stability.

Up to this point the focus is on the time-domain description of signals and systems. Starting with Chapter 3, we consider frequency-domain descriptions. We begin the chapter with a consideration of the orthogonal representation of arbitrary signals. The Fourier series are then introduced as a special case of the orthogonal representation for periodic signals. Properties of the Fourier series are presented. The concept of line spectra for describing the frequency content of such signals is given. The response of linear systems to periodic inputs is illustrated. The chapter concludes with a discussion of the Gibbs phenomenon.

Chapter 4 begins with the development of the Fourier transform. Conditions under which the Fourier transform exists are presented and its properties discussed. Applications of the Fourier transform in areas such as amplitude modulation, multiplexing, sampling, and signal filtering are considered. The use of the transfer function in determining the response of LTI systems is discussed. The Nyquist sampling theorem is derived from the impulse-modulation model for sampling. The several definitions of bandwidth are introduced and duration-bandwidth relationships discussed.

Chapter 5 deals with the Laplace transform. Both unilateral and bilateral Laplace transforms are defined. Properties of the Laplace transform are derived, and examples are given to demonstrate how these properties are used to evaluate new Laplace transform pairs or to find the inverse Laplace transform. The concept of the transfer function is introduced, and several applications of the Laplace transform, such as the solution of differential equations, circuit analysis, and control systems, are presented. The state-variable representation of systems in the frequency domain and the solution of the state equations using Laplace transforms are discussed.
The treatment of continuous-time signals and systems ends with Chapter 5, and a course emphasizing only CT material can be ended at this point. By the end of this chapter, the reader should have acquired a good understanding of continuous-time signals and systems and should be ready for the second half of the book, in which discrete-time signals and systems analysis are covered.

We start our consideration of discrete-time systems in Chapter 6 with a discussion of elementary discrete-time signals. The impulse-response characterization of discrete-time systems is presented and the convolution sum for determining the response to arbitrary inputs is derived. The difference-equation representation of discrete-time systems and their solution is given. As in CT systems, simulation diagrams are discussed as a means of obtaining the state-variable representation of discrete-time systems.

Chapter 7 considers the Fourier analysis of discrete-time signals. The Fourier series for periodic sequences and the Fourier transform for arbitrary signals are derived. The similarities and differences between these and their continuous-time counterparts are brought out and their properties and applications discussed. The relation between the continuous-time and discrete-time Fourier transforms of sampled analog signals is derived and used to obtain the impulse-modulation model for sampling that is considered in Chapter 4. Reconstruction of sampled analog signals using practical reconstruction devices such as the zero-order hold is considered. Sampling-rate conversion by decimation and interpolation of sampled signals is discussed. The chapter concludes with a brief description of A/D and D/A conversion.

Chapter 8 discusses the Z-transform of discrete-time signals. The development follows closely that of Chapter 5 for the Laplace transform. Properties of the Z-transform are derived and their application in the analysis of discrete-time systems developed. The solution of difference equations and the analysis of state-variable systems using the Z-transform are also discussed. Finally, the relation between the Laplace and the Z-transforms of sampled signals is derived and the mapping of the s-plane into the z-plane is discussed.

Chapter 9 introduces the discrete Fourier transform (DFT) for analyzing finite-length sequences. The properties of the DFT are derived and the differences with the other transforms discussed in the book are noted. The interpretation of the DFT as a matrix operation on a data vector is used to briefly note its relation to other orthogonal transforms. The application of the DFT to linear system analysis and to spectral estimation of analog signals is discussed. Two popular fast Fourier transform (FFT) algorithms for the efficient computation of the DFT are presented.

The final chapter, Chapter 10, considers some techniques for the design of analog and digital filters. Techniques for the design of two low-pass analog filters, namely, the Butterworth and the Chebyshev filters, are given. The impulse-invariance and bilinear techniques for designing digital IIR filters are derived. Design of FIR digital filters using window functions is also discussed. An example to illustrate the application of FIR filters to approximate nonconventional filters is presented. The chapter concludes with a very brief overview of computer-aided techniques.
In addition, four appendices are included. They should prove useful as a readily available source for some of the background material in complex variables and matrix algebra necessary for the course. A somewhat extensive list of frequently used formulas is also included.

We wish to acknowledge the many people who have helped us in writing this book, especially the students on whom much of this material was classroom tested, and the reviewers whose comments were very useful. We have tried to incorporate most of their comments in preparing this second edition of the book. We wish to thank Dyan Muratalla, who typed a substantial part of the manuscript. Finally, we would like to thank our wives and families for their patience during the completion of this book.

S. Soliman
M. D. Srinath
Chapter 1

Representing Signals

1.1 INTRODUCTION
Signals are detectable physical quantities or variables by means of which messages or information can be transmitted. A wide variety of signals are of practical importance in describing physical phenomena. Examples include the human voice, television pictures, teletype data, and atmospheric temperature. Electrical signals are the most easily measured and the most simply represented type of signals. Therefore, many engineers prefer to transform physical variables to electrical signals. For example, many physical quantities, such as temperature, humidity, speech, wind speed, and light intensity, can be transformed, using transducers, to time-varying current or voltage signals. Electrical engineers deal with signals that have a broad range of shapes, amplitudes, durations, and perhaps other physical properties. For example, a radar-system designer analyzes high-energy microwave pulses, a communication-system engineer who is concerned with signal detection and signal design analyzes information-carrying signals, a power engineer deals with high-voltage signals, and a computer engineer deals with millions of pulses per second.

Mathematically, signals are represented as functions of one or more independent variables. For example, time-varying current or voltage signals are functions of one variable (time), the vibration of a rectangular membrane can be represented as a function of two spatial variables (x and y coordinates), the electrical field intensity can be looked upon as a function of two variables (time and space), and, finally, an image signal can be regarded as a function of two variables (x and y coordinates). In this introductory course on signals and systems, we focus attention on signals involving one independent variable, which we take to be time, although it can be different in some specific applications.
We begin this chapter with an introduction to two classes of signals that we are concerned with throughout the text, namely, continuous-time and discrete-time signals. Then, in Section 1.3, we define periodic signals. Section 1.4 deals with the issue of power and energy signals. A number of transformations of the independent variable are discussed in Section 1.5. In Section 1.6, we introduce several important elementary signals that not only occur frequently in applications, but also serve as a basis for representing other signals. Other types of signals that are of importance to engineers are mentioned in Section 1.7.
1.2 CONTINUOUS-TIME VS. DISCRETE-TIME SIGNALS
One way to classify signals is according to the nature of the independent variable. If the independent variable is continuous, the corresponding signal is called a continuous-time signal and is defined for a continuum of values of the independent variable. A telephone or radio signal as a function of time and an atmospheric pressure as a function of altitude are examples of continuous-time signals. (See Figure 1.2.1.)

Corresponding to any instant t1 and an infinitesimally small positive real number ε, let us denote the instants t1 - ε and t1 + ε by t1- and t1+, respectively. If x(t1-) = x(t1+) = x(t1), we say that x(t) is continuous at t = t1. Otherwise, it is discontinuous at t1, and the amplitude of signal x(t) has a jump at that point. Signal x(t) is said to be continuous if it is continuous for all t. A signal that has only a finite or a countably infinite number of discontinuities is said to be piecewise continuous if the jump in amplitude at each discontinuity is finite.

There are many continuous-time signals of interest that are not continuous. An example is the rectangular pulse function rect(t/τ) (see Figure 1.2.2), which is defined as

rect(t/τ) = 1,   |t| < τ/2
          = 0,   |t| > τ/2                                    (1.2.1)
Figure 1.2.1 Examples of continuous-time signals.

Figure 1.2.2 A rectangular pulse signal.

Figure 1.2.3 A pulse train.
This signal is piecewise continuous, since it is continuous everywhere except at t = ±τ/2, and the magnitude of the jump at these points is 1. Another example is the pulse train shown in Figure 1.2.3. This signal is continuous at all t except t = 0, ±1, ±2, ... .

At a point of discontinuity t1, the value of the signal x(t) is usually considered to be undefined. However, in order to be able to consider both continuous and piecewise continuous signals in a similar manner, we will assign the value

x(t1) = (1/2)[x(t1+) + x(t1-)]                                 (1.2.2)

to x(t) at the point of discontinuity t = t1.

If the independent variable takes on only discrete values t = kT, where T is a fixed positive real number and k ranges over the set of integers (i.e., k = 0, ±1, ±2, etc.), the corresponding signal x(kT) is called a discrete-time signal. Discrete-time signals arise naturally in many areas of business, economics, science, and engineering. Examples are the amount of a loan payment in the kth month, the weekly Dow Jones stock index, and the output of an information source that produces one of the digits 1, 2, ..., M every T seconds. We consider discrete-time signals in more detail in Chapter 6.
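As a small illustration of this idea, the sketch below (Python; the sampling interval T, the number of samples, and the cosine signal are chosen here purely for demonstration and are not taken from the text) forms a discrete-time signal by evaluating a continuous-time expression at the instants t = kT.

```python
import numpy as np

# Hypothetical illustration: sample x(t) = cos(2*pi*t) at t = kT.
T = 0.1                            # sampling interval (assumed value)
N = 8                              # number of samples (assumed value)
k = np.arange(N)                   # k = 0, 1, ..., N-1
x_kT = np.cos(2 * np.pi * k * T)   # the discrete-time signal x(kT)

for kk, val in zip(k, x_kT):
    print(f"x({kk}T) = {val: .4f}")
```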
1.3 PERIODIC VS. APERIODIC SIGNALS
Any continuous-time signal that satisfies the condition

x(t) = x(t + nT),   n = 1, 2, 3, ...                           (1.3.1)

where T > 0 is a constant known as the fundamental period, is classified as a periodic signal. A signal x(t) that is not periodic is referred to as an aperiodic signal. Familiar examples of periodic signals are the sinusoidal functions. A real-valued sinusoidal signal can be expressed mathematically by a time-varying function of the form

x(t) = A sin(ω0t + φ)                                          (1.3.2)

where

A = amplitude
ω0 = radian frequency in rad/s
φ = initial phase angle with respect to the time origin in rad

This sinusoidal signal is periodic with fundamental period T = 2π/ω0 for all values of ω0.
The sinusoidal time function described in Equation (1.3.2) is usually referred to as a sine wave. Examples of physical phenomena that approximately produce sinusoidal signals are the voltage output of an electrical alternator and the vertical displacement of a mass attached to a spring, under the assumption that the spring has negligible mass and no damping. The pulse train shown in Figure 1.2.3 is another example of a periodic signal, with fundamental period T = 2. Notice that if x(t) is periodic with fundamental period T, then x(t) is also periodic with period 2T, 3T, 4T, ... . The fundamental frequency (radian frequency), in radians per second, of the periodic signal x(t) is related to the fundamental period by the relationship

ω0 = 2π/T                                                      (1.3.3)

Engineers and most mathematicians refer to the sinusoidal signal with radian frequency ωk = kω0 as the kth harmonic. For example, the signal shown in Figure 1.2.3 has a fundamental radian frequency ω0 = π, a second harmonic radian frequency ω2 = 2π, and a third harmonic radian frequency ω3 = 3π. Figure 1.3.1 shows the first, second, and third harmonics of signal x(t) in Eq. (1.3.2) for specific values of A, ω0, and φ. Note that the waveforms corresponding to each harmonic are distinct.
In theory, we can associate an infinite number of distinct harmonic signals with a given sinusoidal waveform.

Figure 1.3.1 Harmonically related sinusoids.

Periodic signals occur frequently in physical problems. In this section, we discuss the mathematical representation of such signals. In Chapter 3, we show how to represent any periodic signal in terms of simple ones, such as sine and cosine.
Example 1.3.1
Harmonically related continuous-time exponentials are sets of complex exponentials with fundamental frequencies that are all multiples of a single positive frequency ω0. Mathematically,

φk(t) = exp[jkω0t],   k = 0, ±1, ±2, ...                       (1.3.4)

We show that for k ≠ 0, φk(t) is periodic with fundamental period 2π/|kω0| or fundamental frequency |kω0|.

In order for signal φk(t) to be periodic with period T > 0, we must have

exp[jkω0(t + T)] = exp[jkω0t]

or, equivalently,

T = 2π/|kω0|                                                   (1.3.5)

Note that since a signal that is periodic with period T is also periodic with period lT for any positive integer l, all signals φk(t) have a common period of 2π/ω0.
The sum of two periodic signals may or may not be periodic. Consider the two periodic signals x(t) and y(t) with fundamental periods T1 and T2, respectively. We investigate under what conditions the sum

z(t) = ax(t) + by(t)

is periodic and what the fundamental period of this signal is if the signal is periodic. Since x(t) is periodic with period T1, it follows that

x(t) = x(t + kT1)

Similarly,

y(t) = y(t + lT2)

where k and l are integers such that

z(t) = ax(t + kT1) + by(t + lT2)

In order for z(t) to be periodic with period T, one needs

ax(t + T) + by(t + T) = ax(t + kT1) + by(t + lT2)

We therefore must have

T = kT1 = lT2

or, equivalently,

T1/T2 = l/k

In other words, the sum of two periodic signals is periodic only if the ratio of their respective periods can be expressed as a rational number.
Example 1.3.2
We wish to determine which of the following signals are periodic:

(a) x1(t) = sin (2πt/3)
(b) x2(t), a signal formed as the sum of two sinusoids with periods T21 and T22
(c) x3(t) = sin 3t
(d) x4(t) = x1(t) - 2x3(t)

For these signals, x1(t) is periodic with period T1 = 3. For x2(t), since integers k and l can be found such that kT21 = lT22 = 15, it follows that x2(t) is periodic with period T2 = 15. x3(t) is periodic with period T3 = 2π/3. Since we cannot find integers k and l such that kT1 = lT3, it follows that x4(t) is not periodic.
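The rational-ratio test above is easy to mechanize. The following sketch (the tolerance, the denominator bound, and the use of Python's fractions module are choices made here, not part of the text) checks whether T1/T2 is close to a ratio of small integers l/k and, if so, returns the common period T = kT1 = lT2.

```python
from fractions import Fraction
import math

def common_period(T1, T2, max_den=1000, tol=1e-9):
    """Return the fundamental period of a sum of two periodic signals with
    periods T1 and T2 if T1/T2 is (approximately) rational, else None."""
    ratio = T1 / T2
    approx = Fraction(ratio).limit_denominator(max_den)   # l/k close to T1/T2
    if abs(ratio - float(approx)) > tol:
        return None                    # ratio irrational: the sum is not periodic
    l, k = approx.numerator, approx.denominator
    return k * T1                      # = l * T2

print(common_period(3.0, 5.0))              # rational ratio: common period 15.0
print(common_period(3.0, 2 * math.pi / 3))  # as for x4(t) above: None (aperiodic)
```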
Note that if x(t) and y(t) have the same period T, then z(t) = x(t) + y(t) is periodic with period T; i.e., linear operations (addition in this case) do not affect the periodicity of the resulting signal. Nonlinear operations on periodic signals (such as multiplication) produce periodic signals with different fundamental periods. The following example demonstrates this fact.

Example 1.3.3
Let x(t) = cos ω1t and y(t) = cos ω2t. Consider the signal z(t) = x(t)y(t). Signal x(t) is periodic with period 2π/ω1, and signal y(t) is periodic with period 2π/ω2. The fact that z(t) = x(t)y(t) has two components, one with radian frequency ω2 - ω1 and the other with radian frequency ω2 + ω1, can be seen by rewriting the product x(t)y(t) as

cos ω1t cos ω2t = (1/2)[cos(ω2 - ω1)t + cos(ω2 + ω1)t]

If ω2 = ω1, then z(t) will have a constant term (1/2) and a second-harmonic term ((1/2) cos 2ω1t). In general, nonlinear operations on periodic signals can produce higher-order harmonics.

Since a periodic signal is a signal of infinite duration that should start at t = -∞ and go on to t = ∞, it follows that all practical signals are aperiodic. Nevertheless, the study of the system response to periodic inputs is essential (as we shall see in Chapter 4) in the process of developing the system response to all practical inputs.
1.4 ENERGY AND POWER SIGNALS
Let x(t) be a real-valued signal. If x(t) represents the voltage across a resistance R, it produces a current i(t) = x(t)/R. The instantaneous power of the signal is p(t) = x(t)i(t) = x²(t)/R, and the energy expended during the incremental interval dt is x²(t)/R dt. In general, we do not know whether x(t) is a voltage or a current signal, and in order to normalize power, we assume that R = 1 ohm. Hence, the instantaneous power associated with signal x(t) is x²(t). The signal energy over a time interval of length 2L is defined as

E2L = ∫_{-L}^{L} |x(t)|² dt                                    (1.4.1)

and the total energy in the signal over the range t ∈ (-∞, ∞) can be defined as

E = lim_{L→∞} ∫_{-L}^{L} |x(t)|² dt                            (1.4.2)

The average power can then be defined as

P = lim_{L→∞} [ (1/2L) ∫_{-L}^{L} |x(t)|² dt ]                 (1.4.3)
Although we have used electrical signals to develop Equations (1.4.2) and (1.4.3), these equations define the energy and power, respectively, of any arbitrary signal x(t). When the limit in Equation (1.4.2) exists and yields 0 ≤ E < ∞, signal x(t) is said to be an energy signal. Inspection of Equation (1.4.3) reveals that energy signals have zero power. On the other hand, if the limit in Equation (1.4.3) exists and yields 0 < P < ∞, then x(t) is a power signal. Power signals have infinite energy.

As stated earlier, periodic signals are assumed to exist for all time from -∞ to +∞ and, therefore, have infinite energy. If it happens that these periodic signals have finite average power (which they do in most cases), then they are power signals. In contrast, bounded finite-duration signals are energy signals.
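A rough numerical check of this classification can be made by truncating the limits in Equations (1.4.2) and (1.4.3). The sketch below is only an approximation (the truncation length L, the grid spacing, and the two test signals are assumptions made for this illustration): for an energy signal, E settles to a finite value while P goes to zero, whereas for a power signal, E keeps growing with L while P settles.

```python
import numpy as np

def energy_and_power(x_func, L=500.0, dt=1e-3):
    """Approximate E over (-L, L) and P = E/(2L) for a sampled signal."""
    t = np.arange(-L, L, dt)
    x = x_func(t)
    E = np.sum(np.abs(x) ** 2) * dt      # Riemann sum for the energy integral
    return E, E / (2 * L)

# Decaying pulse exp(-t)u(t): an energy signal (E finite, P -> 0).
# The exponent is clipped at 0 to avoid overflow for large negative t.
E1, P1 = energy_and_power(lambda t: np.exp(np.minimum(-t, 0.0)) * (t >= 0))
# Sinusoid sin(2*pi*t): a power signal (E grows with L, P -> A^2/2 = 0.5).
E2, P2 = energy_and_power(lambda t: np.sin(2 * np.pi * t))

print(f"pulse:    E ~ {E1:.3f}, P ~ {P1:.6f}")
print(f"sinusoid: E ~ {E2:.1f}, P ~ {P2:.3f}")
```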
Example 1.4.1
In this example, we show that for a periodic signal with period T, the average power is

P = (1/T) ∫_0^T |x(t)|² dt                                     (1.4.4)

If x(t) is periodic with period T, then the integral in Equation (1.4.3) is the same over any interval of length T. Allowing the limit to be taken in a manner such that 2L is an integral multiple of the period (i.e., 2L = mT), we find that the total energy of x(t) over an interval of length 2L is m times the energy over one period. The average power is then

P = lim_{m→∞} (1/mT) [ m ∫_0^T |x(t)|² dt ] = (1/T) ∫_0^T |x(t)|² dt
Figure 1.4.1 Signals for Example 1.4.2.

Example 1.4.2
Consider the signals in Figure 1.4.1. We wish to determine whether these signals are energy or power signals. The signal in Figure 1.4.1(a) is aperiodic with total energy

E = ∫_0^∞ A² exp[-2t] dt = A²/2

which is finite. Therefore, this signal is an energy signal with energy A²/2. The average power is

P = lim_{L→∞} (1/2L) ∫_0^L A² exp[-2t] dt = lim_{L→∞} A²(1 - exp[-2L])/4L = 0

and is zero as expected.

The energy in the signal in Figure 1.4.1(b) is found as

E = lim_{L→∞} [ ∫_{-L}^0 A² dt + ∫_0^L A² exp[-2t] dt ] = lim_{L→∞} [ A²L + (A²/2)(1 - exp[-2L]) ]

which is clearly unbounded. Thus this signal is not an energy signal. Its power can be found as

P = lim_{L→∞} (1/2L) [ ∫_{-L}^0 A² dt + ∫_0^L A² exp[-2t] dt ] = A²/2

so that this is a power signal with average power A²/2.
Example 1.4.3
Consider the sinusoidal signal

x(t) = A sin(ω0t + φ)

This signal is periodic with period

T = 2π/ω0

The average power of the signal is

P = (1/T) ∫_0^T A² sin²(ω0t + φ) dt = (1/T) ∫_0^T [ A²/2 - (A²/2) cos(2ω0t + 2φ) ] dt = A²/2

The last step follows because the signal cos(2ω0t + 2φ) is periodic with period T/2 and the area under a cosine signal over any interval of length lT, where l is a positive integer, is always zero. (You should have no trouble confirming this result if you draw two complete periods of cos(2ω0t + 2φ).)
Example 1.4.4
Consider the two aperiodic signals shown in Figure 1.4.2. These two signals are examples of energy signals. The rectangular pulse shown in Figure 1.4.2(a) is strictly time limited, since x1(t) is identically zero outside the duration of the pulse. The other signal is asymptotically time limited in the sense that x2(t) → 0 as t → ±∞. Such signals may also be described loosely as "pulses." In either case, the average power equals zero. The energy for signal x1(t) is

E1 = lim_{L→∞} ∫_{-L}^{L} x1²(t) dt = ∫_{-τ/2}^{τ/2} A² dt = A²τ

For x2(t) = A exp[-a|t|],

E2 = lim_{L→∞} ∫_{-L}^{L} A² exp[-2a|t|] dt = lim_{L→∞} (A²/a)(1 - exp[-2aL]) = A²/a

Since E1 and E2 are finite, x1(t) and x2(t) are energy signals. Almost all time-limited signals of practical interest are energy signals.

Figure 1.4.2 Signals for Example 1.4.4.
1.5 TRANSFORMATIONS OF THE INDEPENDENT VARIABLE

A number of important operations are often performed on signals. Most of these operations involve transformations of the independent variable. It is important that the reader know how to perform such operations and understand the physical meaning of each one. The three operations we discuss in this section are shifting, reflecting, and time scaling.
1.5.1 The Shifting Operation

Signal x(t - t0) represents a time-shifted version of x(t); see Figure 1.5.1. The shift in time is t0. If t0 > 0, then the signal is delayed by t0 seconds. Physically, t0 cannot take on negative values, but from the analytical viewpoint, x(t - t0), with t0 < 0, represents an advanced replica of x(t). Signals that are related in this fashion arise in applications such as radar, sonar, communication systems, and seismic signal processing.
Example 1.5.1
Consider the signal x(t) shown in Figure 1.5.2. We want to plot x(t - 2) and x(t + 3). It can easily be seen that

x(t) = t + 1,    -1 ≤ t ≤ 0
     = 1,         0 < t ≤ 2
     = -t + 3,    2 < t ≤ 3
     = 0,         otherwise

To perform the time-shifting operation, replace t by t - 2 in the expression for x(t):

x(t - 2) = (t - 2) + 1,    -1 ≤ t - 2 ≤ 0
         = 1,               0 < t - 2 ≤ 2
         = -(t - 2) + 3,    2 < t - 2 ≤ 3
         = 0,               otherwise

Figure 1.5.1 The shifting operation.
or, equivalently,

x(t - 2) = t - 1,    1 ≤ t ≤ 2
         = 1,         2 < t ≤ 4
         = -t + 5,    4 < t ≤ 5
         = 0,         otherwise

The signal x(t - 2) is plotted in Figure 1.5.3(a) and can be described as x(t) shifted two units to the right on the time axis. Similarly, it can be shown that

x(t + 3) = t + 4,    -4 ≤ t ≤ -3
         = 1,         -3 < t ≤ -1
         = -t,        -1 < t ≤ 0
         = 0,         otherwise

The signal x(t + 3) is plotted in Figure 1.5.3(b) and represents a shifted version of x(t), shifted three units to the left.
Figure 1.5.2 Plot of x(t) for Example 1.5.1.
Figure 1.5.3 The shifting of x(t) of Example 1.5.1.

Example 1.5.2
Vibration sensors are mounted on the front and rear axles of a moving vehicle to pick up vibrations due to the roughness of the road surface. The signal from the front sensor is x(t) and is shown in Figure 1.5.4. The signal from the rear-axle sensor is modeled as x(t - 120). If the sensors are placed 5 ft apart, it is possible to determine the speed of the vehicle by comparing the signal from the rear-axle sensor with the signal from the front-axle sensor. Figure 1.5.5 illustrates the time-delayed version of x(t), where the delay is 120 ms, or 0.12 s. The delay τ between the sensor signals from the front and rear axles is related to the distance d between the two axles and the speed v of the vehicle by

d = vτ

so that

v = d/τ = 5/0.12 ≈ 41.7 ft/s

Figure 1.5.4 Front-axle sensor signal for Example 1.5.2.
Figure 1.5.5 Rear-axle sensor signal for Example 1.5.2.
Example 1.5.3
A radar placed to detect aircraft at a range R of 45 nautical miles (nmi) (1 nautical mile = 6076.115 ft) transmits the pulse-train signal shown in Figure 1.5.6. If there is a target, the transmitted signal is reflected back to the radar's receiver. The radar operates by measuring the time delay between each transmitted pulse and the corresponding return, or echo. The velocity of propagation of the radar signal, c, is equal to 161,875 nmi/s. The round-trip delay is

τ = 2R/c = 2(45)/161,875 = 0.556 ms

Therefore, the received pulse train is the same as the transmitted pulse train, but shifted to the right by 0.556 ms; see Figure 1.5.7.

Figure 1.5.6 Radar-transmitted signal for Example 1.5.3.
Figure 1.5.7 Transmitted and received pulse trains of Example 1.5.3.
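Both of the preceding examples reduce to the same elementary computation: a delay, a propagation speed, and a distance related by d = vτ. The sketch below simply reproduces the two numerical results using the values quoted in Examples 1.5.2 and 1.5.3 (the variable names are, of course, only for this illustration).

```python
# Example 1.5.2: vehicle speed from the delay between the axle sensors.
d_axles = 5.0          # ft, sensor separation given in the example
tau_axles = 0.12       # s, delay between front- and rear-axle signals
print("vehicle speed v = d/tau =", d_axles / tau_axles, "ft/s")

# Example 1.5.3: round-trip delay of a radar echo from a target at range R.
R = 45.0               # nmi
c = 161_875.0          # nmi/s, propagation velocity used in the example
tau_radar = 2 * R / c  # the echo is a delayed replica x(t - tau_radar)
print("radar round-trip delay =", tau_radar * 1e3, "ms")   # about 0.556 ms
```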
1.5.2 The Reflection Operation

The signal x(-t) is obtained from the signal x(t) by a reflection about t = 0 (i.e., by reversing x(t)), as shown in Figure 1.5.8. Thus, if x(t) represents a signal out of a video recorder, then x(-t) is the signal out of a video player when the rewind switch is pushed on (assuming that the rewind and play speeds are the same).

Figure 1.5.8 The reflection operation.

Example 1.5.4
We want to draw x(-t) and x(3 - t) if x(t) is as shown in Figure 1.5.9(a). The signal x(t) can be written as

x(t) = t + 1,   -1 ≤ t ≤ 0
     = 1,        0 < t ≤ 2
     = 0,        otherwise

The signal x(-t) is obtained by replacing t by -t in the last equation, so that

x(-t) = -t + 1,   -1 ≤ -t ≤ 0
      = 1,         0 < -t ≤ 2
      = 0,         otherwise

or, equivalently,

x(-t) = -t + 1,   0 ≤ t ≤ 1
      = 1,        -2 ≤ t < 0
      = 0,        otherwise

The signal x(-t) is illustrated in Figure 1.5.9(b) and can be described as x(t) reflected about the vertical axis. Similarly, it can be shown that

x(3 - t) = -t + 4,   3 ≤ t ≤ 4
         = 1,         1 ≤ t < 3
         = 0,         otherwise

The signal x(3 - t) is shown in Figure 1.5.9(c) and can be viewed as x(t) reflected and then shifted three units to the right. This result is obtained as follows:

x(3 - t) = x(-(t - 3))

Figure 1.5.9 Plots of x(-t) and x(3 - t) for Example 1.5.4.

Note that if we first shift x(t) by three units and then reflect the shifted signal, the result is x(-t - 3), which is shown in Figure 1.5.9(d). Therefore, the operations of shifting and reflecting are not commutative.
In addition to its use in representing physical phenomena such as that in the video recorder example, reflection is extremely useful in examining the symmetry properties that a signal may possess. A signal x(t) is referred to as an even signal, or is said to be even symmetric, if it is identical to its reflection about the origin, that is, if

x(-t) = x(t)                                                   (1.5.1)

A signal is referred to as odd symmetric if

x(-t) = -x(t)                                                  (1.5.2)

An arbitrary signal x(t) can always be expressed as a sum of even and odd signals as

x(t) = x_e(t) + x_o(t)                                         (1.5.3)

where x_e(t) is called the even part of x(t) and is given by (see Problem 1.14)

x_e(t) = (1/2)[x(t) + x(-t)]                                   (1.5.4)

and x_o(t) is called the odd part of x(t) and is expressed as

x_o(t) = (1/2)[x(t) - x(-t)]                                   (1.5.5)
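Equations (1.5.4) and (1.5.5) translate directly into a short numerical routine. In the sketch below (the sample grid and the test signal are assumptions made for the illustration), the even and odd parts are computed on a grid and their sum is checked against the original signal.

```python
import numpy as np

def even_odd_parts(x_func, t):
    """x_e(t) = [x(t) + x(-t)]/2 and x_o(t) = [x(t) - x(-t)]/2
       (Equations (1.5.4) and (1.5.5))."""
    xp, xm = x_func(t), x_func(-t)
    return 0.5 * (xp + xm), 0.5 * (xp - xm)

# Test signal: x(t) = exp(-t) for t > 0 and 0 for t < 0 (as in Example 1.5.6, a = 1).
x = lambda t: np.exp(-t) * (t > 0)
t = np.linspace(-3.0, 3.0, 13)
xe, xo = even_odd_parts(x, t)

print(np.allclose(xe + xo, x(t)))                             # True: x = x_e + x_o
print(np.allclose(xe, 0.5 * np.exp(-np.abs(t)) * (t != 0)))   # x_e = (1/2)e^{-|t|}, t != 0
```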
Example 1.5.5
Consider the signal x(t) defined by

x(t) = 1,   t > 0
     = 0,   t < 0

The even and odd parts of this signal are, respectively,

x_e(t) = 1/2,   all t except t = 0

x_o(t) = -1/2,   t < 0
       = 1/2,    t > 0

The only problem here is the value of these functions at t = 0. If we define x(0) = 1/2 (the definition here is consistent with our definition of the signal at a point of discontinuity), then

x_e(0) = 1/2   and   x_o(0) = 0

x_e(t) and x_o(t) are plotted in Figure 1.5.10.

Figure 1.5.10 Plots of x_e(t) and x_o(t) for x(t) in Example 1.5.5.
Example 1.5.6
Consider the signal

x(t) = exp[-at],   t > 0
     = 0,          t < 0

The even part of the signal is

x_e(t) = (1/2) exp[-at],   t > 0
       = (1/2) exp[at],    t < 0
       = (1/2) exp[-a|t|]

The odd part of x(t) is

x_o(t) = (1/2) exp[-at],    t > 0
       = -(1/2) exp[at],    t < 0

Signals x_e(t) and x_o(t) are as shown in Figure 1.5.11.

Figure 1.5.11 Plots of x_e(t) and x_o(t) for Example 1.5.6.
Figure 1.5.12 The time-scaling operation.
1.5.3 The Time-Scaling Operation

Consider the signals x(t), x(3t), and x(t/2), as shown in Figure 1.5.12. As is seen in the figure, x(3t) can be described as x(t) contracted by a factor of 3. Similarly, x(t/2) can be described as x(t) expanded by a factor of 2. Both x(3t) and x(t/2) are said to be time-scaled versions of x(t). In general, if the independent variable is scaled by a parameter α, then x(αt) is a compressed version of x(t) if |α| > 1 (the signal exists in a smaller time interval) and is an expanded version of x(t) if |α| < 1 (the signal exists in a larger time interval). If we think of x(t) as the output of a videotape recorder, then x(3t) is the signal obtained when the recording is played back at three times the speed at which it was recorded, and x(t/2) is the signal obtained when the recording is played back at half speed.
Example 1.5.7
Suppose we want to plot the signal x(3t - 6), where x(t) is the signal shown in Figure 1.5.2. Using the definition of x(t) in Example 1.5.1, we obtain

x(3t - 6) = 3t - 5,    5/3 ≤ t ≤ 2
          = 1,          2 < t ≤ 8/3
          = -3t + 9,    8/3 < t ≤ 3
          = 0,          otherwise

A plot of x(3t - 6) versus t is illustrated in Figure 1.5.13 and can be viewed as x(t) compressed by a factor of 3 (or time scaled by a factor of 1/3) and then shifted two units of time to the right. Note that if x(t) is shifted first and then time scaled by a factor of 1/3, we will obtain a different signal; therefore, shifting and time scaling are not commutative. The result we did get can be justified as follows:

x(3t - 6) = x(3(t - 2))

This equation indicates that we perform the scaling operation first and then the shifting operation.

Figure 1.5.13 Plot of x(3t - 6) = x(3(t - 2)) of Example 1.5.7.
Example 1.5.8
We often encounter signals of the type

x(t) = 1 - A exp[-αt] cos(ω0t + φ)

Figure 1.5.14 shows x(t) for typical values of A, α, and ω0. As can be seen, this signal eventually goes to a steady-state value of 1 as t becomes infinite. In practice, it is assumed that the signal has settled down to a final value when it stays within a specified percentage of its final theoretical value. This percentage is usually chosen to be 5%, and the time t_s after which the signal stays within this range is defined as the settling time. As can be seen from Figure 1.5.14, t_s can be determined by solving

1 + A exp[-αt_s] = 1.05

so that

t_s = -(1/α) ln[0.05/A]

Figure 1.5.14 Signal x(t) for Example 1.5.8.

Let

x(t) = 1 - 2.3 exp[-10.356t] cos[5t]

We will find t_s for x(t), x(t/2), and x(2t).

For x(t), since A = 2.3 and α = 10.356, we get t_s = 0.3697 s.

Since

x(t/2) = 1 - 2.3 exp[-5.178t] cos[2.5t]

and

x(2t) = 1 - 2.3 exp[-20.712t] cos[10t]

we get t_s = 0.7394 s and t_s = 0.1849 s for x(t/2) and x(2t), respectively. These results are expected, since x(t) is expanded by a factor of 2 in the first case and is compressed by the same factor in the second case.
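The settling-time formula above is a one-liner to evaluate. The sketch below reproduces the three values quoted in the example (the 5% tolerance is the one adopted in the text).

```python
import math

def settling_time(A, alpha, tol=0.05):
    """Solve 1 + A*exp(-alpha*t_s) = 1 + tol for t_s, i.e. t_s = ln(A/tol)/alpha."""
    return math.log(A / tol) / alpha

A, alpha = 2.3, 10.356
print(settling_time(A, alpha))        # x(t):   ~0.3697 s
print(settling_time(A, alpha / 2))    # x(t/2): ~0.7394 s (expanded by 2)
print(settling_time(A, alpha * 2))    # x(2t):  ~0.1849 s (compressed by 2)
```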
In conclusion, for any general signal x(t), the transformation αt + β of the independent variable can be performed as follows:

x(αt + β) = x(α(t + β/α))                                      (1.5.6)

where α and β are assumed to be real numbers. The operations should be performed in the following order:

1. Scale by α. If α is negative, reflect about the vertical axis.
2. Shift to the right by β/α if β and α have different signs, and to the left by β/α if β and α have the same sign.

Note that the operation of reflecting and time scaling is commutative, whereas the operation of shifting and reflecting or shifting and time scaling is not.
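The prescription of Equation (1.5.6), scale by α first and then shift by β/α, can be wrapped in a small helper. In the sketch below, the rectangular test signal is an assumption made only so that the result is easy to check by hand.

```python
import numpy as np

def transform(x_func, alpha, beta):
    """Return the function t -> x(alpha*t + beta) = x(alpha*(t + beta/alpha))."""
    return lambda t: x_func(alpha * np.asarray(t, dtype=float) + beta)

# Assumed test signal: x(t) = 1 on [0, 3], 0 elsewhere.
x = lambda t: ((t >= 0) & (t <= 3)).astype(float)

y = transform(x, 3.0, -6.0)           # y(t) = x(3t - 6): compress by 3, shift right by 2
t = np.array([1.0, 2.0, 2.5, 3.0, 3.5])
print(y(t))                           # nonzero only for 2 <= t <= 3
```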
1.6 ELEMENTARY SIGNALS

Several important elementary signals that occur frequently in applications also serve as a basis for representing other signals. Throughout the book, we will find that representing signals in terms of these elementary signals allows us to better understand the properties of both signals and systems. Furthermore, many of these signals have features that make them particularly useful in the solution of engineering problems and, therefore, of importance in our subsequent studies.

1.6.1 The Unit Step Function

The continuous-time unit step function is defined as

u(t) = 1,   t > 0
     = 0,   t < 0                                              (1.6.1)

and is shown in Figure 1.6.1.
Figure 1.6.1 Continuous-time unit step function.

This signal is an important signal for analytic studies, and it also has many practical applications. Note that the unit step function is continuous for all t except at t = 0, where there is a discontinuity. According to our earlier discussion, we define u(0) = 1/2. An example of a unit step function is the output of a 1-V dc voltage source in series with a switch that is turned on at time t = 0.
Example 1.6.1
The rectangular pulse signal shown in Figure 1.6.2 is the result of an on-off switching operation of a constant voltage source in an electric circuit.

In general, a rectangular pulse that extends from -a to +a and has an amplitude A can be written as a difference between appropriately shifted step functions, i.e.,

A rect(t/2a) = A[u(t + a) - u(t - a)]                          (1.6.2)

In our specific example,

2 rect(t/2) = 2[u(t + 1) - u(t - 1)]
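Equation (1.6.2) is easy to verify numerically once the step function is coded with the midpoint convention u(0) = 1/2 adopted earlier. The sketch below builds the pulse of Example 1.6.1 from two shifted steps (the sample points are arbitrary choices for the illustration).

```python
import numpy as np

def u(t):
    """Unit step with the midpoint convention u(0) = 1/2 used in the text."""
    t = np.asarray(t, dtype=float)
    return np.where(t > 0, 1.0, np.where(t < 0, 0.0, 0.5))

def rect_pulse(t, a, A=1.0):
    """A * rect(t/2a) = A * [u(t + a) - u(t - a)]   (Equation (1.6.2))."""
    return A * (u(t + a) - u(t - a))

t = np.array([-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0])
print(rect_pulse(t, a=1.0, A=2.0))   # Example 1.6.1: 2 rect(t/2) = 2[u(t+1) - u(t-1)]
```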
Figure 1.6.2 Rectangular pulse signal of Example 1.6.1.

Example 1.6.2
Consider the signum function (written sgn) shown in Figure 1.6.3. The unit sgn function is defined by

sgn t = 1,    t > 0
      = 0,    t = 0
      = -1,   t < 0                                            (1.6.3)

Figure 1.6.3 The signum function.

The signum function can be expressed in terms of the unit step function as

sgn t = -1 + 2u(t)

The signum function is one of the most often used signals in communication and in control theory.
1.6.2 The Ramp Function

The ramp function shown in Figure 1.6.4 is defined by

r(t) = t,   t ≥ 0
     = 0,   t < 0                                              (1.6.4)

The ramp function is obtained by integrating the unit step function:

∫_{-∞}^{t} u(τ) dτ = r(t)

The device that accomplishes this operation is called an integrator. In contrast to both the unit step and the signum functions, the ramp function is continuous at t = 0. Time scaling a unit ramp by a factor α corresponds to a ramp function with slope α. (A unit ramp function has a slope of unity.) An example of a ramp function is the linear-sweep waveform of a cathode-ray tube.

Figure 1.6.4 The ramp function.

Example 1.6.3
Let x(t) = u(t + 2) - 2u(t + 1) + 2u(t) - u(t - 2) - 2u(t - 3) + 2u(t - 4). Let y(t) denote its integral. Then

y(t) = r(t + 2) - 2r(t + 1) + 2r(t) - r(t - 2) - 2r(t - 3) + 2r(t - 4)

Signal y(t) is sketched in Figure 1.6.5.

Figure 1.6.5 The signal used in Example 1.6.3.
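A quick numerical check of this example: integrating the step combination x(t) with a running (cumulative) sum should reproduce the ramp combination y(t) up to discretization error. The grid and step size below are arbitrary choices made for the illustration.

```python
import numpy as np

u = lambda t: (t > 0).astype(float)          # unit step (value at 0 is immaterial here)
r = lambda t: t * (t > 0)                    # unit ramp r(t) = t*u(t)

dt = 1e-3
t = np.arange(-4.0, 6.0, dt)

x = u(t + 2) - 2*u(t + 1) + 2*u(t) - u(t - 2) - 2*u(t - 3) + 2*u(t - 4)
y_ramps = r(t + 2) - 2*r(t + 1) + 2*r(t) - r(t - 2) - 2*r(t - 3) + 2*r(t - 4)

y_integrated = np.cumsum(x) * dt             # running integral of x(t) (the integrator)
print(np.max(np.abs(y_integrated - y_ramps)))   # small: only discretization error
```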
1.6.3 The Sampling Function

A function frequently encountered in spectral analysis is the sampling function Sa(x), defined by

Sa(x) = (sin x)/x                                              (1.6.5)

Since the denominator is an increasing function of x and the numerator is bounded (|sin x| ≤ 1), Sa(x) is simply a damped sine wave. Figure 1.6.6(a) shows that Sa(x) is an even function of x having its peak at x = 0 and zero-crossings at x = ±nπ. The value of the function at x = 0 is established by using l'Hôpital's rule. A closely related function is sinc x, which is defined by

sinc x = (sin πx)/(πx) = Sa(πx)                                (1.6.6)

and is shown in Figure 1.6.6(b). Note that sinc x is a compressed version of Sa(x); the compression factor is π.
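The two functions differ only by the scale factor π, which is easy to confirm numerically. Note that numpy's built-in sinc already uses the normalized definition sin(πx)/(πx) adopted in Equation (1.6.6); the Sa helper below is written out with an explicit special case at x = 0.

```python
import numpy as np

def Sa(x):
    """Sampling function Sa(x) = sin(x)/x, with Sa(0) = 1 (by l'Hopital's rule)."""
    x = np.asarray(x, dtype=float)
    safe = np.where(x == 0.0, 1.0, x)              # avoid 0/0 at the origin
    return np.where(x == 0.0, 1.0, np.sin(safe) / safe)

x = np.array([0.0, 1.0, 2.0, np.pi, 2 * np.pi])
print(Sa(x))                                       # zero-crossings at x = +-n*pi
print(np.sinc(x))                                  # sin(pi x)/(pi x)
print(np.allclose(np.sinc(x), Sa(np.pi * x)))      # True: sinc x = Sa(pi x)
```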
1.6.4 The Unit Impulse Function

The unit impulse signal δ(t), often called the Dirac delta function or, simply, the delta function, occupies a central place in signal analysis. Many physical phenomena such as point sources, point charges, concentrated loads on structures, and voltage or current sources acting for very short times can be modeled as delta functions. Mathematically, the Dirac delta function is defined by

∫_{t1}^{t2} x(t) δ(t) dt = x(0),   t1 < 0 < t2                 (1.6.7)

provided that x(t) is continuous at t = 0. The function δ(t) is depicted graphically by a spike at the origin, as shown in Figure 1.6.7, and possesses the following properties:
1. δ(0) → ∞
2. δ(t) = 0,   t ≠ 0
3. ∫_{-∞}^{∞} δ(t) dt = 1
4. δ(t) is an even function; i.e., δ(t) = δ(-t)

Figure 1.6.6 The sampling function.
Figure 1.6.7 Representation of the unit impulse function δ(t).

As just defined, the δ function does not conform to the usual definition of a function. However, it is sometimes convenient to consider it as the limit of a conventional function as some parameter ε approaches zero. Several examples are shown in Figure 1.6.8; all such functions have the following properties for "small" ε:
1. The value at t = 0 is very large and becomes infinity as ε approaches zero.
2. The duration is relatively very short and becomes zero as ε becomes zero.
3. The total area under the function is constant and equal to 1.
4. The functions are all even.

Figure 1.6.8 Engineering models for δ(t).
Example 1.6.4
Consider the function defined as

p(t) = (ε/π) [sin(t/ε)/t]²

This function satisfies all the properties of a delta function, as can be shown by rewriting it as

p(t) = (1/επ) [sin(t/ε)/(t/ε)]²

so that

1. p(0) = lim_{t→0} p(t) = 1/(επ), which becomes infinite as ε approaches zero. Here we used the well-known limit lim_{x→0} (sin x)/x = 1.

2. For values of t ≠ 0,

p(t) = (ε/(πt²)) sin²(t/ε)

The factor sin²(t/ε) is bounded by 1, while the factor ε/(πt²) vanishes as ε → 0+; therefore,

p(t) = 0,   t ≠ 0

3. To show that the area under p(t) is unity, we note that

∫_{-∞}^{∞} p(t) dt = (ε/π) ∫_{-∞}^{∞} [sin(t/ε)/t]² dt = (1/π) ∫_{-∞}^{∞} (sin²x/x²) dx

where the last step follows from the change of variables x = t/ε. Since (see Appendix B)

∫_{-∞}^{∞} (sin²x/x²) dx = π

it follows that

∫_{-∞}^{∞} p(t) dt = 1

4. It is clear that p(t) = p(-t); therefore, p(t) is an even function.
Three important properties repeatedly used when operating with delta functions are the sifting property, the sampling property, and the scaling property.

Sifting Property. The sifting property is expressed in the equation

∫_{t1}^{t2} x(t) δ(t - t0) dt = x(t0),  t1 < t0 < t2
                             = 0,  otherwise    (1.6.8)

This can be seen by using the change of variables τ = t - t0 to obtain

∫_{t1-t0}^{t2-t0} x(τ + t0) δ(τ) dτ = x(t0),  t1 < t0 < t2

by Equation (1.6.7). Notice that the right-hand side of Equation (1.6.8) can be looked at as a function of t0. This function is discontinuous at t0 = t1 and t0 = t2. Following our notation, the value of the function at t1 or t2 should be given by

∫_{t1}^{t2} x(t) δ(t - t0) dt = (1/2) x(t0),  t0 = t1 or t0 = t2    (1.6.9)

The sifting property is usually used in lieu of Equation (1.6.7) as the definition of a delta function located at t0. In general, the property can be written as

x(t) = ∫_{-∞}^{∞} x(τ) δ(t - τ) dτ    (1.6.10)
which implies that the signal x(t) can be expressed as a continuous sum of weighted impulses. This result can be interpreted graphically if we approximate x(t) by a sum of rectangular pulses, each of width Δ seconds and of varying heights, as shown in Figure 1.6.9. That is,

x̂(t) = Σ_{k=-∞}^{∞} x(kΔ) rect((t - kΔ)/Δ)

Figure 1.6.9 Approximation of signal x(t).

which can be written as

x̂(t) = Σ_{k=-∞}^{∞} x(kΔ) [(1/Δ) rect((t - kΔ)/Δ)] [kΔ - (k - 1)Δ]

Now, each term in the sum represents the area under the kth pulse in the approximation x̂(t). We thus let Δ → 0 and replace kΔ by τ, so that kΔ - (k - 1)Δ = dτ, and the summation becomes an integral. Also, as Δ → 0, (1/Δ) rect((t - kΔ)/Δ) approaches δ(t - τ), and Equation (1.6.10) follows. The representation of Equation (1.6.10), along with the superposition principle, is used in Chapter 2 to study the behavior of a special and important class of systems known as linear time-invariant systems.
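The staircase construction is easy to check numerically. The following Python/NumPy sketch rebuilds x̂(t) for a few values of Δ and shows that it approaches x(t) as Δ shrinks, in agreement with Equation (1.6.10); the test signal exp(-t²) and the grid are arbitrary illustrative choices.

import numpy as np

# Approximate x(t) by a sum of rectangular pulses of width delta and
# observe that the approximation error shrinks as delta -> 0.
t = np.linspace(-5.0, 5.0, 2001)
x = np.exp(-t**2)

def staircase(t, x_func, delta):
    # x_hat(t) = sum_k x(k*delta) * rect((t - k*delta)/delta)
    k = np.round(t / delta)          # index of the pulse containing each t
    return x_func(k * delta)         # height of that pulse

for delta in (1.0, 0.1, 0.01):
    x_hat = staircase(t, lambda tt: np.exp(-tt**2), delta)
    print(delta, np.max(np.abs(x - x_hat)))   # error shrinks with delta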
Sampling Property. If x(t) is continuous at t0, then

x(t) δ(t - t0) = x(t0) δ(t - t0)    (1.6.11)

Graphically, this property can be illustrated by approximating the impulse signal by a rectangular pulse of width Δ and height 1/Δ, as shown in Figure 1.6.10, and then allowing Δ to approach zero, to obtain

lim_{Δ→0} x(t) (1/Δ) rect((t - t0)/Δ) = x(t0) δ(t - t0)

Figure 1.6.10 The sampling property of the impulse function.
Mathematically, two functions f1(δ(t)) and f2(δ(t)) are equivalent over the interval (t1, t2) if, for any continuous function y(t),

∫_{t1}^{t2} y(t) f1(δ(t)) dt = ∫_{t1}^{t2} y(t) f2(δ(t)) dt

Therefore, x(t)δ(t - t0) and x(t0)δ(t - t0) are equivalent, since

∫_{t1}^{t2} y(t) x(t) δ(t - t0) dt = y(t0) x(t0) = ∫_{t1}^{t2} y(t) x(t0) δ(t - t0) dt

Note the difference between the sifting property and the sampling property: the right-hand side of Equation (1.6.8) is the value of the function evaluated at some point, whereas the right-hand side of Equation (1.6.11) is still a delta function with strength equal to the value of x(t) evaluated at t = t0.
Scaling Property. The scaling property is given by

δ(at + b) = (1/|a|) δ(t + b/a)    (1.6.12)

This result is interpreted by considering δ(t) as the limit of a unit-area pulse p(t) as some parameter ε tends to zero. The pulse p(at) is a compressed (expanded) version of p(t) if a > 1 (a < 1), and its area is 1/|a|. (Note that the area is always positive.) Taking the limit as ε → 0, the result is a delta function with strength 1/|a|. We show this by considering the two cases a > 0 and a < 0 separately. For a > 0, we have to show that

∫_{t1}^{t2} x(t) δ(at + b) dt = ∫_{t1}^{t2} x(t) (1/|a|) δ(t + b/a) dt,  t1 < -b/a < t2

Applying the sifting property to the right-hand side yields

(1/a) x(-b/a)

To evaluate the left-hand side, we use the transformation of variables

τ = at + b

Then dτ = a dt, and the range t1 ≤ t ≤ t2 becomes at1 + b ≤ τ ≤ at2 + b. The left-hand side now becomes

∫_{at1+b}^{at2+b} x((τ - b)/a) δ(τ) (1/a) dτ = (1/a) x(-b/a)

which is the same as the right-hand side.
When a < 0, we have to show that

∫_{t1}^{t2} x(t) δ(at + b) dt = ∫_{t1}^{t2} x(t) (1/|a|) δ(t + b/a) dt

Using the sifting property, we evaluate the right-hand side. The result is

(1/|a|) x(-b/a)

For the left-hand side, we use the transformation τ = at + b, so that

dt = (1/a) dτ = -(1/|a|) dτ

and the range of τ becomes -|a|t1 + b ≥ τ ≥ -|a|t2 + b, resulting in

∫_{-|a|t1+b}^{-|a|t2+b} x((τ - b)/a) δ(τ) (-1/|a|) dτ = (1/|a|) ∫_{-|a|t2+b}^{-|a|t1+b} x((τ - b)/a) δ(τ) dτ = (1/|a|) x(-b/a)

Notice that before using the sifting property in the last step, we interchanged the limits of integration and changed the sign of the integrand, since -|a|t2 + b < -|a|t1 + b.
Example 1.6.5
Consider the Gaussian pulse

p(t) = (1/√(2πε)) exp[-t²/(2ε)]

The area under this pulse is always 1; that is,

∫_{-∞}^{∞} (1/√(2πε)) exp[-t²/(2ε)] dt = 1

It can be shown that p(t) approaches δ(t) as ε → 0. (See Problem 1.19.) Let a > 1 be any constant. Then p(at) is a compressed version of p(t). It can be shown that the area under p(at) is 1/a, and as ε approaches 0, p(at) approaches δ(at).
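A rough numerical illustration of the scaling property, with the Gaussian pulse above standing in for δ(t), can be written in a few lines of Python/NumPy; the values of ε, a, b and the test signal cos t are arbitrary illustrative choices.

import numpy as np

# Numerical illustration of the scaling property (1.6.12) using the
# Gaussian pulse of Example 1.6.5 as a stand-in for delta(t).
eps = 1e-3
a, b = 2.5, 1.0
t = np.linspace(-10.0, 10.0, 400001)
dt = t[1] - t[0]

p = lambda u: np.exp(-u**2 / (2 * eps)) / np.sqrt(2 * np.pi * eps)

x = np.cos(t)                            # arbitrary smooth test signal
lhs = np.sum(x * p(a * t + b)) * dt      # integral of x(t) p(at + b)
rhs = np.cos(-b / a) / abs(a)            # (1/|a|) x(-b/a) predicted by (1.6.12)
area = np.sum(p(a * t)) * dt             # area under p(at), close to 1/|a|
print(lhs, rhs, area)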
Example 1.6.6
Suppose we want to evaluate the following integrals:

(a) ∫_{-2}^{1} (t + t²) δ(t - 3) dt
(b) ∫_{-2}^{4} (t + t²) δ(t - 3) dt
(c) ∫_{0}^{3} exp[t - 2] δ(2t - 4) dt
(d) ∫_{-∞}^{t} δ(τ) dτ
(a) Using the sifting property yields

∫_{-2}^{1} (t + t²) δ(t - 3) dt = 0

since t = 3 is not in the interval -2 < t < 1.
(b) Using the sifting property yields

∫_{-2}^{4} (t + t²) δ(t - 3) dt = 3 + 3² = 12

since t = 3 is within the interval -2 < t < 4.
(c) Using the scaling property and then the sifting property yields

∫_{0}^{3} exp[t - 2] δ(2t - 4) dt = (1/2) ∫_{0}^{3} exp[t - 2] δ(t - 2) dt = (1/2) exp[0] = 1/2

(d) Consider the following two cases:

Case 1: t < 0
In this case, the point τ = 0 is not within the interval -∞ < τ < t, and the result of the integral is zero.

Case 2: t > 0
In this case, τ = 0 lies within the interval -∞ < τ < t, and the value of the integral is 1.

Summarizing, we obtain

∫_{-∞}^{t} δ(τ) dτ = 1,  t > 0
                   = 0,  t < 0

But this is by definition a unit step function; therefore, the functions δ(t) and u(t) form an integral-derivative pair. That is,

d u(t)/dt = δ(t)    (1.6.13)

∫_{-∞}^{t} δ(τ) dτ = u(t)    (1.6.14)
The unit impulse is one of a class of functions known as singularity functions. Note that the definition of δ(t) in Equation (1.6.7) does not make sense if δ(t) is an ordinary function. It is meaningful only if δ(t) is interpreted as a functional, i.e., as a process of assigning the value x(0) to the signal x(t). The integral notation is used merely as a convenient way of describing the properties of this functional, such as linearity, shifting, and scaling. We now consider how to represent the derivatives of the impulse function.
1.6.5 Derivatives of the Impulse Function

The (first) derivative of the impulse function, or unit doublet, denoted by δ'(t), is defined by

∫_{t1}^{t2} x(t) δ'(t - t0) dt = -x'(t0),  t1 < t0 < t2    (1.6.15)

provided that x(t) possesses a derivative x'(t0) at t = t0. This result can be demonstrated using integration by parts as follows:

∫_{t1}^{t2} x(t) δ'(t - t0) dt = ∫_{t1}^{t2} x(t) d[δ(t - t0)]
= x(t) δ(t - t0) |_{t1}^{t2} - ∫_{t1}^{t2} x'(t) δ(t - t0) dt
= 0 - 0 - x'(t0)

since δ(t) = 0 for t ≠ 0. It can be shown that δ'(t) possesses the following properties:

1. x(t) δ'(t - t0) = x(t0) δ'(t - t0) - x'(t0) δ(t - t0)
2. ∫_{-∞}^{∞} δ'(t - t0) dt = 0
3. δ'(at + b) = (1/(a|a|)) δ'(t + b/a)

Higher order derivatives of δ(t) can be defined by extending the definition of δ'(t). For example, the nth-order derivative of δ(t) is defined by

∫_{t1}^{t2} x(t) δ^(n)(t - t0) dt = (-1)^n x^(n)(t0),  t1 < t0 < t2    (1.6.16)

provided that such a derivative exists at t = t0. The graphical representation of δ'(t) is shown in Figure 1.6.11.
Figure 1.6.11 Representation of δ'(t).
Example 1.6.7
The current through a 1-mH inductor is i(t) = 10 exp[-2t]u(t) - δ(t) amperes. The voltage drop across the inductor is given by

v(t) = 10^{-3} (d/dt)[10 exp[-2t]u(t) - δ(t)]
     = -2 × 10^{-2} exp[-2t]u(t) + 10^{-2} exp[-2t]δ(t) - 10^{-3} δ'(t)
     = -2 × 10^{-2} exp[-2t]u(t) + 10^{-2} δ(t) - 10^{-3} δ'(t) volts

where the last step follows from Equation (1.6.11). Figures 1.6.12(a) and (b) demonstrate the behavior of the inductor current i(t) and the voltage v(t), respectively.
Figure 1.6.12 (a) i(t) and (b) v(t) = L di(t)/dt for Example 1.6.7.
Note that the derivative of x(t)u(t) is obtained using the product rule of differentiation, i.e.,

(d/dt)[x(t)u(t)] = x(t)δ(t) + x'(t)u(t)

whereas the derivative of x(t)δ(t) is

(d/dt)[x(t)δ(t)] = (d/dt)[x(0)δ(t)] = x(0)δ'(t)

This result cannot be obtained by direct differentiation of the product, because δ(t) is interpreted as a functional rather than an ordinary function.
Example 1.6.8
We will evaluate the following integrals:

(a) ∫_{t1}^{t2} (t - 2) δ'(-t/2 + 1/2) dt,  t1 < 1 < t2
(b) ∫_{t1}^{t2} t exp[-2t] δ''(t - 1) dt,  t1 < 1 < t2

For (a), the scaling property of δ' gives δ'(-t/2 + 1/2) = -4δ'(t - 1), so that

∫_{t1}^{t2} (t - 2) δ'(-t/2 + 1/2) dt = -4 ∫_{t1}^{t2} (t - 2) δ'(t - 1) dt = -4 [-(d/dt)(t - 2)]|_{t=1} = 4

For (b), we have

∫_{t1}^{t2} t exp[-2t] δ''(t - 1) dt = (d²/dt²)[t exp[-2t]]|_{t=1} = (4t - 4) exp[-2t]|_{t=1} = 0
1.7 OTHER TYPES OF SIGNALS

There are many other types of signals that electrical engineers work with very often. Signals can be classified broadly as random and nonrandom, or deterministic. The study of random signals is well beyond the scope of this text, but some of the ideas and techniques that we discuss are basic to more advanced topics. Random signals do not have the kind of totally predictable behavior that deterministic signals do. Voice, music, computer output, TV, and radar signals are neither pure sinusoids nor pure periodic waveforms. If they were, then by knowing one period of the signal, we could predict what the signal would look like for all future time. Any signal that is capable of carrying meaningful information is in some way random. In other words, in order to contain information, a signal must, in some manner, change in a nondeterministic fashion.
Signals can also be classified as analog or digital signals. In science and engineering, the word "analog" means to act similarly, but in a different domain. For example, the electric voltage at the output terminals of a stereo amplifier varies in exactly the same way as does the sound that activated the microphone that is feeding the amplifier. In other words, the electric voltage v(t) at every instant of time is proportional (analogous) to the air pressure that is rapidly varying with time. Simply put, an analog signal is a physical quantity that varies with time, usually in a smooth and continuous fashion.
The values of a discrete-time signal can be continuous or discrete. If a discrete-time signal takes on all possible values on a finite or infinite range, it is said to be a continuous-amplitude discrete-time signal. Alternatively, if the discrete-time signal takes on values from a finite set of possible values, it is said to be a discrete-amplitude discrete-time signal, or, simply, a digital signal. Examples of digital signals are digitized images, computer input, and signals associated with digital information sources.
Most of the signals that we encounter in nature are analog signals. A basic reason for this is that physical systems cannot respond instantaneously to changing inputs. Moreover, in many cases, the signal is not available in electrical form, thus requiring the use of a transducer (mechanical, electrical, thermal, optical, and so on) to provide an electrical signal that is representative of the system signal. Transducers generally cannot respond instantaneously to changes and tend to smooth out the signals.
Digital signal processing has developed rapidly over the past two decades, chiefly because of significant advances in digital computer technology and integrated-circuit fabrication. In order to process signals digitally, the signals must be in digital form (discrete in time and discrete in amplitude). If the signal to be processed is in analog form, it is first converted to a discrete-time signal by sampling at discrete instants in time. The discrete-time signal is then converted to a digital signal by a process called quantization.
Quantization is the process of converting a continuous-amplitude signal into a discrete-amplitude signal and is basically an approximation procedure. The whole procedure is called analog-to-digital (A/D) conversion, and the corresponding device is called an A/D converter.
1.8 SUMMARY

• Signals can be classified as continuous-time or discrete-time signals.
• Continuous-time signals that satisfy the condition x(t) = x(t + T) are periodic with fundamental period T.
• The fundamental radian frequency of the periodic signal is related to the fundamental period T by the relationship

ω0 = 2π/T

• The complex exponential x(t) = exp[jω0 t] is periodic with period T = 2π/ω0 for all ω0.
• Harmonically related continuous-time exponentials

x_k(t) = exp[jkω0 t]

are periodic with common period 2π/ω0.
• The energy E of the signal x(t) is defined by

E = lim_{T→∞} ∫_{-T}^{T} |x(t)|² dt

• The power P of the signal x(t) is defined by

P = lim_{T→∞} (1/2T) ∫_{-T}^{T} |x(t)|² dt

• The signal x(t) is an energy signal if 0 < E < ∞.
• The signal x(t) is a power signal if 0 < P < ∞.
• The signal x(t - t0) is a time-shifted version of x(t). If t0 > 0, then the signal is delayed by t0 seconds. If t0 < 0, then x(t - t0) represents an advanced replica of x(t).
• The signal x(-t) is obtained by reflecting x(t) about t = 0.
• The signal x(αt) is a scaled version of x(t). If α > 1, then x(αt) is a compressed version of x(t), whereas if 0 < α < 1, then x(αt) is an expanded version of x(t).
• The signal x(t) is even symmetric if

x(t) = x(-t)

• The signal x(t) is odd symmetric if

x(t) = -x(-t)

• Unit impulse, unit step, and unit ramp functions are related by

u(t) = ∫_{-∞}^{t} δ(τ) dτ

r(t) = ∫_{-∞}^{t} u(τ) dτ

• The sifting property of the δ function is

∫_{t1}^{t2} x(t) δ(t - t0) dt = x(t0),  t1 < t0 < t2
                             = 0,  otherwise

• The sampling property of the δ function is

x(t) δ(t - t0) = x(t0) δ(t - t0)
1.9 CHECKLIST OF IMPORTANT TERMS
Aperiodic signals
Continuous-time signals
Discrete-time signals
Elementary signals
Energy signals
Periodic signals
Power signals
Rectangular pulse
Reflection operation
Sampling function
Scaling operation
Shifting operation
Signum function
Sinc function
Unit impulse function
Unit ramp function
Unit step function
1.10
PROBLEMS
1.1. Find the fundamental period T of each of the following signals: cos(πt), sin(2πt), cos(3πt), sin(4πt),
(l
,). t'"
(;
,),
-,(T,),',"(T,),*.(X,),.,"(?,),*,('1,)
1.2. Sketch the follorving signals:
(a)
r(,)='*(;r+zo')
=t+e1' Ost=2
(t+2
ts-2
(c).r(r)= {o
-2=t<2
[,-z z<,
(d) r(r) = 2 exp[-rl, 0' l s l' and -r(t + l) = '111; 1tlr
(b) r(t)
1.3. Show that if x(t) is periodic with period T, then it is also periodic with period nT, n = 2, 3, ... .
1.4. Show that if x1(t) and x2(t) have period T, then x3(t) = ax1(t) + bx2(t) (a, b constant) has the same period T.
1.5. Use Euler's form, exp[jωt] = cos ωt + j sin ωt, to show that exp[jωt] is periodic with period T = 2π/ω.
1.6. Are the following signals periodic? If so, find their periods.
=.,,(1,) * r*'(8{,)
I 71,]1{ cxPL,
[.5n I
(b) .t(t)
(a) r(r)
=
(c) x(t) =
"*o[i o
e,l
[7rrl
t5 I
e*pli e t.l + exn[u t]
[5zrl
?,]*
(d).r(t) = exPli
lrr I
"*Pl.6,1
/3rr \
/3\
(e) .r(t) = 2sin(:* ,/ + cos\or/
1.7. If x(t) is a periodic signal with period T, show that x(at), a > 0, is a periodic signal with period T/a, and x(t/b), b > 0, is a periodic signal with period bT. Verify these results for x(t) = sin t, a = b = 2.
1.8. Determine whether the following signals are power or energy signals or neither. Justify your answers.
(e).r()=4.1nr, -@<r<e
O) r(1 = A[u(t - a) - u(t + a)l
(c) .r(t) = r(t) - r(t - 1)
(d)
(e)
rO
= exp[-ar]a(r), a>
r(t) = tuo
(I) :(t) = 21r;
(g).r0)=Aexplbtl, D>0
1.9. Repeat Problem 1.8 for the following signals:
(a) r(r) =
r..(;,
O) :0) = exp[-2lrl]sin(rrr)
(c) r(r) = exP[4lrl]
(d).tO =
(e)
.r0):
"*[r?],
r*"(+r"..(?,
(f).r()=1'
r<0
exp[3r], 0 s,
L10. Show that if .r(r)
is periodic
with period
I,
then
[""<oa'l=tlfi
where P is the average power of the signal.
LlL IJt
:(t)= -r*r,
,,
2,
0,
(a) Sketch.tO.
(b) Sketch.r(r
-1sr<0
O=t<z
2=t<3
otherwise
-2),x(t+ 3),r(-3, -zl,anax(Jt* j)*afradtheanalyticarexpres-
sions for these functions.
LtL
Repeat Problem 1.11 for
r(t):21 a2,
2r-2,
Ll3.
Sketch the following signals:
(8) rr(r) = u(r) + 5z( 1) tu(t 2l
O) +(t) = r(r) r(r 1) u(t 2)
-
-
-
(c) .rr(r) = exp[-r]z(r)
(d) r4(r) = ?tt(t) + 6(, -
-
1)
-
-
-
-lsr<O
0sr<1
(e) xr(r) = u(t)u(t - a),
(f) r50) = u(t)u(a - t),
(g) .tz0) = a(cosr)
Ll4
gz
a>0
a>
0
ftl :,tr>,(r + ])
,, ,,(-; * l),,t, - rt
() rr()xr(2 - t)
(a) Show that
I
x"(t)=ilx(t)+.r(-r)l
is an even signal.
(b) Show that
1
*"(t)-i[.r0)-.r(-r)]
is an odd signal.
Llli.
Consider the simple FIvI stereo transmitter shown in Figure Pl.l-5.
(e) Sketch the signals L + R and L R.
(b) If the outputs of the two adders are added. sketch the resulring waveform.
(c) If signal L R is inverted and added to signal L + R, skctclr thc resutting waveform.
-
-
fi(,)
l
I
L+R
0
-l
L_R
l
I
0
-l
Flgure P1.15
L16
For each of the signals shown in Figure P1.16, write an expression in terms of unit step and
unit ramp basic functions.
1.17. If the duration of x(t) is defined as the time at which x(t) drops to 1/e of the value at the origin, find the duration of the following signals:
(a) x1(t) = A exp[-t/T]u(t)
(b) x2(t) = x1(3t)
Representing
38
.tr
(,)
x2
Signals
Chapter
1
(I)
:d(,)
rs (r)
r6 (r)
o+b
-o-b
I
Ftgure PL16
Ll&
(c) r3(t) = xJtl2)
(d) .ro(t) = blt)
The signal x0) = rca(/2)
is transmitted through the atmosphere and is reflected by different objects located at different distances. The received signal is
v(r) =.r0) + o.s:(r
Signal
y(t)
-
)
* o.x,1, -
r>>2
is processed as shown in Fig. P1.18.
y(t) for I = 10.
Sketch zO for T = 10.
1.19. Check whether each of the following can be used
(a)
(b)
n,
Sketch
function:
(a) pr(t) =
1
lm zr"
"..r,tr1"l
as a mathematical model
of a delta
Flgure
(b) pr(,) =
(c) p,(r)
Is
Pl.lt
J**r[;rt']
=!,\#.7
(d) po(r):
H
+;ri"
(e) p50) = lim
e
(o
,1":l
poo) =
!,$
exp[-elrll
1.20. Evaluate the following integrals:
[. (3,- ])ut, - rra,
or f' tr - ,)'(3,-;)"
o J" [",nt-, * ,l *.in(f ,)]r(,- ;)"
n,
*
ry
*.in(],)]r(,
{ar
f'
[*nt-r
(e)
/-
exp[-sr + l]6'(,
-
-;)"
s)d,
1.21. The probability that a random variable x is less than a is found by integrating the probability density function f(x) to obtain

P(x ≤ a) = ∫_{-∞}^{a} f(x) dx

Given that

f(x) = 0.2δ(x + 2) + 0.3δ(x) + 0.2δ(x - 1) + 0.1[u(x - 3) - u(x - 5)]

find
(a) P(x ≤ -3)
(b) P(x ≤ 1.5)
(d) P(x ≤ 6)
1.22. The velocity of a 1-g mass is

v(t) = exp[-(t + 1)]u(t + 1) + δ(t - 1)

(a) Plot v(t).
(b) Evaluate the force f(t) = m dv(t)/dt.
(c) If there is a spring connected to the mass with constant k = 1 N/m, find the spring force

f_s(t) = k ∫_{-∞}^{t} v(τ) dτ

1.23. Sketch the first and second derivatives of the following signals:
(a) x(t) = u(t) + 5u(t - 1) - 2u(t - 2)
(b) x(t) = r(t) - r(t - 1) + 2u(t - 2)
(c) x(t) =
COMPUTER PROBLEMS

1.24. The integral

∫_{t1}^{t2} x(t) y(t) dt

can be approximated by a summation of rectangular strips, each of width Δt, as follows:

∫_{t1}^{t2} x(t) y(t) dt ≈ Σ_{n=1}^{N} x(nΔt) y(nΔt) Δt

Here, Δt = (t2 - t1)/N. Write a program to verify that

(1/√(2πΔ)) exp[-t²/(2Δ)]

can be used as a mathematical model for the delta function by approximating the following integrals by a summation:
l-i o)"
ru /_,0+ rrffiexn[-#I]"
r"r /', t, + rr ffiexn
cr
J't,+r)\ffiexpl-#).
Repeat Problem 1.24 for the following integrals:
(al
(b)
tcr
J',expt-rl
A*+"
/',exn[-r;
j+#A +1,dl
exp[-r1
j+#i=frdt
J2
Chapter 2

Continuous-Time Systems

2.1 INTRODUCTION
Every physical system is broadly characterized by its ability to accept an input such as voltage, force, pressure, displacement, etc., and to produce an output in response to this input. For example, a radar receiver is an electronic system whose input is the reflection of an electromagnetic signal from the target and whose output is a video signal displayed on the radar screen. Similarly, a robot is a system whose input is an electric control signal and whose output is a motion or action on the part of the robot. A third example is a filter, whose input is a signal corrupted by noise and interference and whose output is the desired signal. In brief, a system can be viewed as a process that results in transforming input signals into output signals.
We are interested in both continuous-time and discrete-time systems. A continuous-time system is a system in which continuous-time input signals are transformed into continuous-time output signals. Such a system is represented pictorially as shown in Figure 2.1.1(a), where x(t) is the input and y(t) is the output. A discrete-time system is a system that transforms discrete-time inputs into discrete-time outputs. (See Figure 2.1.1(b).) Continuous-time systems are treated in this chapter, and discrete-time systems are discussed in Chapter 6.
In studying the behavior of systems, the procedure is to model mathematically each element that comprises the system and then to consider the interconnection of elements. The result is described mathematically either in the time domain, as in this chapter, or in the frequency domain, as in Chapters 3 and 4.
In this chapter, we show that the analysis of linear systems can be reduced to the study of the response of the system to basic input signals.
Figure 2.1.1 Examples of continuous-time and discrete-time systems.
2.2 CLASSIFICATION OF CONTINUOUS-TIME SYSTEMS

Our intent in this section is to lend additional substance to the concept of systems by discussing their classification according to the way the system interacts with the input signal. This interaction, which defines the model for the system, can be linear or nonlinear, time invariant or time varying, memoryless or with memory, causal or noncausal, stable or unstable, and deterministic or nondeterministic. For the most part, we are concerned with linear, time-invariant, deterministic systems. In this section, we briefly examine the properties of each of these classes.
2.2.1 Linear and Nonlinear Systems

When the system is linear, the superposition principle can be applied. This important fact is precisely the reason that the techniques of linear-system analysis have been so well developed. Superposition simply implies that the response resulting from several input signals can be computed as the sum of the responses resulting from each input signal acting alone. Mathematically, the superposition principle can be stated as follows: Let y1(t) be the response of a continuous-time system to an input x1(t), and let y2(t) be the response corresponding to the input x2(t). Then the system is linear (follows the principle of superposition) if

1. the response to x1(t) + x2(t) is y1(t) + y2(t); and
2. the response to ax1(t) is ay1(t), where a is any arbitrary constant.

The first property is referred to as the additivity property; the second is referred to as the homogeneity property. These two properties defining a linear system can be combined into a single statement as

αx1(t) + βx2(t) → αy1(t) + βy2(t)    (2.2.1)

where the notation x(t) → y(t) represents the input/output relation of a continuous-time system. A system is said to be nonlinear if Equation (2.2.1) is not valid for at least one set of x1(t), x2(t), α, and β.
Example 2.2.1
Consider the voltage divider shown in Figure 2.2.1 with R1 = R2. For input x(t) and output y(t), this is a linear system. The input/output relation can be explicitly written as

y(t) = [R2/(R1 + R2)] x(t) = (1/2) x(t)

Figure 2.2.1 System for Example 2.2.1.

i.e., the transformation involves only multiplication by a constant. To prove that the system is indeed linear, one has to show that Equation (2.2.1) is satisfied. Consider the input x(t) = a x1(t) + b x2(t). The corresponding output is

y(t) = (1/2) x(t) = (1/2)[a x1(t) + b x2(t)] = a (1/2) x1(t) + b (1/2) x2(t) = a y1(t) + b y2(t)

where

y1(t) = (1/2) x1(t)  and  y2(t) = (1/2) x2(t)

On the other hand, if R1 is a voltage-dependent resistor such that R1 = R2 x(t), then the system is nonlinear. The input/output relation can then be written as

y(t) = [R2/(R1 + R2)] x(t) = x(t)/(x(t) + 1)

For an input of the form

x(t) = a x1(t) + b x2(t)

the output is

y(t) = [a x1(t) + b x2(t)] / [a x1(t) + b x2(t) + 1]

This system is nonlinear because

[a x1(t) + b x2(t)] / [a x1(t) + b x2(t) + 1] ≠ a x1(t)/(x1(t) + 1) + b x2(t)/(x2(t) + 1)

for some x1(t), x2(t), a, and b (try x1(t) = x2(t) and a = 1, b = 2).
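The superposition test of Equation (2.2.1) is easy to carry out numerically. The Python/NumPy sketch below applies it to the two divider relations of this example, y = x/2 and y = x/(x + 1); the test inputs and the constants a, b are arbitrary illustrative choices (the inputs are kept small so that x + 1 never vanishes).

import numpy as np

# Compare the response to a*x1 + b*x2 with a*S(x1) + b*S(x2) for each system.
t = np.linspace(0.0, 1.0, 1001)
x1 = 0.3 * np.sin(2 * np.pi * t)
x2 = 0.2 * np.cos(6 * np.pi * t)
a, b = 1.0, 2.0

linear = lambda x: x / 2.0            # fixed divider
nonlinear = lambda x: x / (x + 1.0)   # voltage-dependent divider

for name, S in (("y = x/2", linear), ("y = x/(x+1)", nonlinear)):
    lhs = S(a * x1 + b * x2)          # response to the combined input
    rhs = a * S(x1) + b * S(x2)       # weighted sum of individual responses
    print(name, np.max(np.abs(lhs - rhs)))   # ~0 for the linear system only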
Example 2.2.2
Suppose we want to determine which of the following systems is linear:

(a) y(t) = K dx(t)/dt    (2.2.2)
(b) y(t) = exp[x(t)]    (2.2.3)

For part (a), consider the input

x(t) = a x1(t) + b x2(t)    (2.2.4)

The corresponding output is

y(t) = K (d/dt)[a x1(t) + b x2(t)]

which can be written as

y(t) = Ka dx1(t)/dt + Kb dx2(t)/dt = a y1(t) + b y2(t)

where

y1(t) = K dx1(t)/dt  and  y2(t) = K dx2(t)/dt

so that the system described by Equation (2.2.2) is linear.
Comparing Equation (2.2.2) with

v(t) = L di(t)/dt

we conclude that an ideal inductor with input i(t) (current through the inductor) and output v(t) (voltage across the inductor) is a linear system (element). Similarly, we can show that a system that performs integration is a linear system. (See Problem 2.1(f).) Hence, an ideal capacitor is a linear system (element).
For part (b), we investigate the response of the system to the input in Equation (2.2.4):

y(t) = exp[a x1(t) + b x2(t)] = exp[a x1(t)] exp[b x2(t)] ≠ a y1(t) + b y2(t)

Therefore, the system characterized by Equation (2.2.3) is nonlinear.
Example 2.2.3
Consider the RL circuit shown in Figure 2.2.2. This circuit can be viewed as a continuous-time system with input x(t) equal to voltage source e(t) and with output y(t) equal to the current in the inductor. Assume that at time t0, i_L(t0) = y(t0) = y0.

Figure 2.2.2 RL circuit for Example 2.2.3.

Applying Kirchhoff's current law at node a, we obtain

[v_a(t) - x(t)]/R1 + v_a(t)/R2 + y(t) = 0

Since

v_a(t) = L dy(t)/dt

it follows that

[L(R1 + R2)/(R1R2)] dy(t)/dt + y(t) = x(t)/R1

so that

dy(t)/dt + [R1R2/(L(R1 + R2))] y(t) = [R2/(L(R1 + R2))] x(t)    (2.2.5)
The differential equation, Equation (2.2.5), is called the input/output differential equation describing the system. To compute an explicit expression for y(t) in terms of x(t), we must solve the differential equation for an arbitrary input x(t) applied for t ≥ t0. The complete solution is of the form

y(t) = y(t0) exp[-(R1R2/(L(R1 + R2)))(t - t0)] + [R2/(L(R1 + R2))] ∫_{t0}^{t} exp[-(R1R2/(L(R1 + R2)))(t - τ)] x(τ) dτ,  t ≥ t0    (2.2.6)

According to Equation (2.2.1), this system is nonlinear unless y(t0) = 0. To prove this, consider the input x(t) = αx1(t) + βx2(t). The corresponding output is

y(t) = y(t0) exp[-(R1R2/(L(R1 + R2)))(t - t0)]
     + α [R2/(L(R1 + R2))] ∫_{t0}^{t} exp[-(R1R2/(L(R1 + R2)))(t - τ)] x1(τ) dτ
     + β [R2/(L(R1 + R2))] ∫_{t0}^{t} exp[-(R1R2/(L(R1 + R2)))(t - τ)] x2(τ) dτ
     ≠ αy1(t) + βy2(t)

This may seem surprising, since inductors and resistors are linear elements. However, the system in Figure 2.2.2 violates a very important property of linear systems, namely, that zero input should yield zero output. Therefore, if y0 = 0, then the system is linear.
The concept of linearity is very important in systems theory. The principle of superposition can be invoked to determine the response of a linear system to an arbitrary input if that input can be decomposed into the (possibly infinite) sum of several basic signals. The response to each basic signal can be computed separately and added to obtain the overall system response. This technique is used repeatedly throughout the text and in most cases yields closed-form mathematical results, which is not possible for nonlinear systems.
Many physical systems, when analyzed in detail, demonstrate nonlinear behavior. In such situations, a solution for a given set of initial conditions and excitation can be found either analytically or with the aid of a computer. Frequently, it is required to determine the behavior of the system in the neighborhood of this solution. A common technique of treating such problems is to approximate the system by a linear model that is valid in the neighborhood of the operating point. This technique is referred to as linearization. Some important examples are the small-signal analysis technique applied to transistor circuits and the small-signal model of a simple pendulum.
2.2.2 Time-Varying and Time-Invariant Systems

A system is said to be time invariant if a time shift in the input signal causes an identical time shift in the output signal. Specifically, if y(t) is the output corresponding to input x(t), a time-invariant system will have y(t - t0) as the output when x(t - t0) is the input. That is, the rule used to compute the system output does not depend on the time at which the input is applied.
The procedure for testing whether a system is time invariant is summarized in the following steps:

1. Let y1(t) be the output corresponding to x1(t).
2. Consider a second input, x2(t), obtained by shifting x1(t),

x2(t) = x1(t - t0)

and find the output y2(t) corresponding to the input x2(t).
3. From step 1, find y1(t - t0) and compare with y2(t).
4. If y2(t) = y1(t - t0), then the system is time invariant; otherwise it is a time-varying system.

Example 2.2.4
We wish to determine whether the systems described by the following equations are time invariant:

(a) y(t) = cos x(t)
(b) dy(t)/dt = -t y(t) + x(t),  t ≥ 0,  y(0) = 0

Consider the system in part (a), y(t) = cos x(t). From the steps listed before:

1. For input x1(t), the output is

y1(t) = cos x1(t)    (2.2.7)

2. Consider the second input, x2(t) = x1(t - t0). The corresponding output is

y2(t) = cos x2(t) = cos x1(t - t0)    (2.2.8)

3. From Equation (2.2.7),

y1(t - t0) = cos x1(t - t0)    (2.2.9)

4. Comparison of Equations (2.2.8) and (2.2.9) shows that the system y(t) = cos x(t) is time invariant.
Now consider the system in Part (b).
1. If the input is x1(t), it can be easily verified by direct substitution in the differential equation that the output y1(t) is given by

y1(t) = ∫_0^t exp[-(t² - τ²)/2] x1(τ) dτ    (2.2.10)

2. Consider the input x2(t) = x1(t - t0). The corresponding output is

y2(t) = ∫_0^t exp[-(t² - τ²)/2] x2(τ) dτ = ∫_0^t exp[-(t² - τ²)/2] x1(τ - t0) dτ    (2.2.11)

3. From Equation (2.2.10),

y1(t - t0) = ∫_0^{t-t0} exp[-((t - t0)² - τ²)/2] x1(τ) dτ    (2.2.12)

4. Comparison of Equations (2.2.11) and (2.2.12) leads to the conclusion that the system is not time invariant.
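The four-step test can also be run numerically on sampled signals. The Python/NumPy sketch below applies it to system (a), y(t) = cos x(t), and, for contrast, to a hypothetical time-varying system y(t) = t x(t) that is not part of this example; the input x1(t), the shift t0, and the grid are arbitrary illustrative choices.

import numpy as np

# Numerical version of the four-step time-invariance test of Section 2.2.2.
dt = 1e-3
t = np.arange(-5.0, 5.0, dt)
t0 = 1.25
n0 = int(round(t0 / dt))              # shift expressed in samples

x1 = np.exp(-t**2) * np.sin(3 * t)    # step 1: some input x1(t) (~0 at the edges)
y1 = np.cos(x1)

x2 = np.roll(x1, n0)                  # step 2: x2(t) = x1(t - t0)
y2 = np.cos(x2)                       #         response to the shifted input
y1_shifted = np.roll(y1, n0)          # step 3: y1(t - t0)
print(np.max(np.abs(y2 - y1_shifted)))          # step 4: ~0, so time invariant

y1b = t * x1                          # hypothetical time-varying system y = t x(t)
print(np.max(np.abs(t * x2 - np.roll(y1b, n0))))  # clearly nonzero: time varying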
2.2.3 Systems with and without Memory
For most systems, the inputs and outputs are functions of the independent variable. A system is said to be memoryless, or instantaneous, if the present value of the output depends only on the present value of the input. For example, a resistor is a memoryless system, since with input x(t) taken as the current and output y(t) taken as the voltage, the input/output relationship is

y(t) = R x(t)

where R is the resistance. Thus, the value of y(t) at any instant depends only on the value of x(t) at that instant. On the other hand, a capacitor is an example of a system with memory. With input taken as the current and output as the voltage, the input/output relationship in the case of the capacitor is

y(t) = (1/C) ∫_{-∞}^{t} x(τ) dτ

where C is the capacitance. It is obvious that the output at any time t depends on the entire past history of the input.
If a system is memoryless, or instantaneous, then the input/output relationship can be written in the form

y(t) = f(x(t))    (2.2.13)

For linear systems, this relation reduces to

y(t) = k(t) x(t)

and if the system is also time invariant, we have

y(t) = k x(t)

where k is a constant.
An example of a linear, time-invariant, memoryless system is the mechanical damper. The linear dependence between force f(t) and velocity v(t) is

v(t) = (1/D) f(t)

where D is the damping constant.
A system whose response at the instant t is completely determined by the input signals over the past T seconds (the interval from t - T to t) is a finite-memory system having a memory of length T units of time.
Example 2.2.5
The output of a communication channel y(t) is related to its input x(t) by

y(t) = Σ_{i=0}^{N} a_i x(t - T_i)

It is clear that the output y(t) of the channel at time t depends not only on the input at time t, but also on the past history of x(t), e.g.,

y(0) = a_0 x(0) + a_1 x(-T_1) + ... + a_N x(-T_N)

Therefore, this system has a finite memory of T = max_i (T_i).
2.2.4 Causal Systems
A system is causal, or nonanticipatory (also known as physically realizable), if the output at any time t0 depends only on values of the input for t ≤ t0. Equivalently, if two inputs to a causal system are identical up to some time t0, the corresponding outputs must also be equal up to this same time, since a causal system cannot predict if the two inputs will be different after t0 (in the future). Mathematically, if

x1(t) = x2(t),  t ≤ t0

and the system is causal, then

y1(t) = y2(t),  t ≤ t0

A system is said to be noncausal, or anticipatory, if it is not causal. Causal systems are also referred to as physically realizable systems.
Example 2.2.6
In several applications, we are interested in the value of a signal x(t), not at the present time t, but at some time in the future, t + α, or at some time in the past, t - α. The signal y(t) = x(t + α) is called a prediction of x(t), while the signal y(t) = x(t - α) is the delayed version of x(t). The first system is called an ideal predictor, while the second system is an ideal delay.
Clearly, the predictor is noncausal, since the output depends on future values of the input. We can also verify this mathematically as follows. Consider the inputs

x1(t) = 1,  t ≤ 5
      = exp(-t),  t > 5

and

x2(t) = 1,  t ≤ 5
      = 0,  t > 5

so that x1(t) and x2(t) are identical up to t0 = 5.
Suppose α = 3. The corresponding outputs are

y1(t) = 1,  t ≤ 2
      = exp[-(t + 3)],  t > 2

and

y2(t) = 1,  t ≤ 2
      = 0,  t > 2

If the system is causal, y1(t) = y2(t) for all t ≤ 5. But y1(3) = exp(-6), while y2(3) = 0. Thus the system is noncausal.
The ideal delay is causal, since its output depends only on past values of the input signal.
Example 2.2.7
We are often required to determine the average value of a signal at each time instant t. We do this by defining the running average x_av(t) of signal x(t). x_av(t) can be computed in several ways, for example,

x_av(t) = (1/T) ∫_{t-T}^{t} x(τ) dτ    (2.2.14)

or

x_av(t) = (1/T) ∫_{t-T/2}^{t+T/2} x(τ) dτ    (2.2.15)

Let x1(t) and x2(t) be two signals which are identical for t ≤ t0 but are different from each other for t > t0. Then, for the system of Equation (2.2.14),

x1,av(t0) = (1/T) ∫_{t0-T}^{t0} x1(τ) dτ = (1/T) ∫_{t0-T}^{t0} x2(τ) dτ = x2,av(t0)    (2.2.16(a))

Thus this system is causal.
For the system of Equation (2.2.15),

x1,av(t0) = (1/T) ∫_{t0-T/2}^{t0+T/2} x1(τ) dτ    (2.2.16(b))

which is not equal to

x2,av(t0) = (1/T) ∫_{t0-T/2}^{t0+T/2} x2(τ) dτ    (2.2.16(c))

since x1(t) and x2(t) are not the same for t > t0. This system is, therefore, noncausal.
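The same comparison can be made numerically with sampled signals. In the Python/NumPy sketch below, the trailing average of Equation (2.2.14) and the centered average of Equation (2.2.15) are applied to two inputs that agree for t ≤ 0 and differ afterwards; T, the inputs, and the grid are arbitrary illustrative choices.

import numpy as np

# Trailing average: causal; centered average: noncausal.
dt = 1e-3
t = np.arange(-2.0, 2.0, dt)
T = 0.5
w = int(round(T / dt))

x1 = np.sin(2 * np.pi * t)
x2 = np.where(t <= 0.0, x1, 0.0)      # identical to x1 only for t <= 0

def trailing_avg(x):                  # (1/T) * integral over (t - T, t)
    c = np.cumsum(x) * dt
    return (c - np.concatenate((np.zeros(w), c[:-w]))) / T

def centered_avg(x):                  # (1/T) * integral over (t - T/2, t + T/2)
    return np.convolve(x, np.ones(w) / w, mode="same")

i0 = np.searchsorted(t, 0.0)          # index of t = 0
print(np.max(np.abs(trailing_avg(x1)[:i0] - trailing_avg(x2)[:i0])))  # ~0: causal
print(np.max(np.abs(centered_avg(x1)[:i0] - centered_avg(x2)[:i0])))  # > 0: noncausal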
2.2.5 Invertibility and Inverse Systems
A system is invertible if, by observing the output, we can determine its input. That is, we can construct an inverse system that, when cascaded with the given system, as illustrated in Figure 2.2.3, yields an output equal to the original input to the given system. In other words, the inverse system "undoes" what the given system does to input x(t). So the effect of the given system can be eliminated by cascading it with its inverse system. Note that if two different inputs result in the same output, then the system is not invertible. The inverse of a causal system is not necessarily causal; in fact, it may not exist at all in any conventional sense. The use of the concept of a system inverse in the following chapters is primarily for mathematical convenience and does not require that such a system be physically realizable.

Figure 2.2.3 Concept of an inverse system.
Example 2.2.8
We want to determine if each of the following systems is invertible. If it is, we will construct the inverse system. If it is not, we will find two input signals to the system that have the same output.

(a) y(t) = 2x(t)
(b) y(t) = cos x(t)
(c) y(t) = ∫_{-∞}^{t} x(τ) dτ,  y(-∞) = 0
(d) y(t) = x(t + 1)

For part (a), system y(t) = 2x(t) is invertible with the inverse

z(t) = (1/2) y(t)

This idea is demonstrated in Figure 2.2.4.

Figure 2.2.4 Inverse system for part (a) of Example 2.2.8.

For part (b), system y(t) = cos x(t) is noninvertible, since x(t) and x(t) + 2π give the same output.
For part (c), system y(t) = ∫_{-∞}^{t} x(τ) dτ, y(-∞) = 0, is invertible, and the inverse system is the differentiator

z(t) = dy(t)/dt

For part (d), system y(t) = x(t + 1) is invertible, and the inverse system is the one-unit delay

z(t) = y(t - 1)

In some applications, it is necessary to perform preliminary processing on the received signal to transform it into a signal that is easy to work with. If the preliminary processing is invertible, it can have no effect on the performance of the overall system (see Problem 2.13).
2.2.6 Stable Systems

One of the most important concepts in the study of systems is the notion of stability. Whereas many different types of stability can be defined, in this section we consider only one type, namely, bounded-input bounded-output (BIBO) stability. BIBO stability involves the behavior of the output response resulting from the application of a bounded input.
Signal x(t) is said to be bounded if its magnitude does not grow without bound, i.e.,

|x(t)| ≤ A < ∞,  for all t

A system is BIBO stable if, for any bounded input x(t), the response y(t) is also bounded. That is,

|x(t)| ≤ B1 < ∞  implies  |y(t)| ≤ B2 < ∞
Example 2.2.9
We want to determine which of these systems is stable:

(a) y(t) = exp[x(t)]
(b) y(t) = ∫_{-∞}^{t} x(τ) dτ

For the system of part (a), a bounded input x(t) such that |x(t)| ≤ B results in an output y(t) with magnitude

|y(t)| = |exp[x(t)]| = exp[x(t)] ≤ exp[B] < ∞

Therefore, the output is also bounded, and the system is stable.
For part (b), consider as input x(t) the unit step function u(t). The output y(t) is then equal to

y(t) = ∫_{-∞}^{t} u(τ) dτ = r(t)

Thus the bounded input u(t) produces an unbounded output r(t), and the system is not stable.

This example serves to emphasize that for a system to be stable, all bounded inputs must give rise to bounded outputs. If we can find even one bounded input for which the output is not bounded, the system is unstable.
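A numerical illustration of Example 2.2.9 is given below in Python/NumPy: the bounded unit-step input drives the running integrator along an ever-growing ramp, while exp[x(t)] of the same input stays below exp(1). The grid and the length of the observation interval are arbitrary illustrative choices.

import numpy as np

# Bounded input, bounded vs. unbounded outputs.
dt = 1e-3
t = np.arange(-1.0, 50.0, dt)
x = (t >= 0).astype(float)            # unit step input, |x(t)| <= 1

y_int = np.cumsum(x) * dt             # y(t) = integral of x up to t
y_exp = np.exp(x)                     # y(t) = exp[x(t)]

print(y_exp.max())                    # bounded by exp(1)
print(y_int.max())                    # ~50 here, and grows without bound as the
                                      # observation interval is extended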
2.3 LINEAR TIME-INVARIANT SYSTEMS

In the previous section, we discussed a number of basic system properties. Two of these, linearity and time invariance, play a fundamental role in signal and system analysis because of the many physical phenomena that can be modeled by linear time-invariant systems and because a mathematical analysis of the behavior of such systems can be carried out in a fairly straightforward manner. In this section, we develop an important and useful representation for linear time-invariant (LTI) systems. This forms the foundation for linear-system theory and the different transforms encountered throughout the text.
A fundamental problem in system analysis is determining the response to some specified input. Analytically, this can be answered in many different ways. One obvious way is to solve the differential equation describing the system, subject to the specified input and initial conditions. In the following section, we introduce a second method that exploits the linearity and time invariance of the system. This development results in an important integral known as the convolution integral. In Chapters 3 and 4, we consider frequency-domain techniques to analyze LTI systems.

2.3.1 The Convolution Integral
Linear systems are governed by the superposition principle. Let the responses of the system to two inputs x1(t) and x2(t) be y1(t) and y2(t), respectively. The system is linear if the response to the input x(t) = a1x1(t) + a2x2(t) is equal to y(t) = a1y1(t) + a2y2(t). More generally, if the input x(t) is the weighted sum of any set of signals x_i(t), and if the response to x_i(t) is y_i(t), then, if the system is linear, the output y(t) will be the weighted sum of the responses y_i(t). That is, if

x(t) = a1x1(t) + a2x2(t) + ... + aNxN(t) = Σ_i a_i x_i(t)

we will have

y(t) = a1y1(t) + a2y2(t) + ... + aNyN(t) = Σ_i a_i y_i(t)
In Section 1.6, we demonstrated that the unit-step and unit-impulse functions can be used as building blocks to represent arbitrary signals. In fact, the sifting property of the δ function,

x(t) = ∫_{-∞}^{∞} x(τ) δ(t - τ) dτ    (2.3.1)

shows that any signal x(t) can be expressed as a continuum of weighted impulses.
Now consider a continuous-time system with input x(t). Using the superposition property of linear systems (Equation (2.2.1)), we can express output y(t) as a linear combination of the responses of the system to shifted impulse signals; that is,

y(t) = ∫_{-∞}^{∞} x(τ) h(t, τ) dτ    (2.3.2)

where h(t, τ) denotes the response of a linear system to the shifted impulse δ(t - τ). In other words, h(t, τ) is the output of the system at time t in response to input δ(t - τ) applied at time τ. If, in addition to being linear, the system is also time invariant, then h(t, τ) should depend not on τ, but rather on t - τ; i.e., h(t, τ) = h(t - τ). This is because the time-invariance property implies that if h(t) is the response to δ(t), then the response to δ(t - τ) is simply h(t - τ). Thus, Equation (2.3.2) becomes

y(t) = ∫_{-∞}^{∞} x(τ) h(t - τ) dτ    (2.3.3)

The function h(t) is called the impulse response of the LTI system and represents the output of the system at time t due to a unit-impulse input occurring at t = 0 when the system is relaxed (zero initial conditions).
The integral relationship expressed in Equation (2.3.3) is called the convolution integral of signals x(t) and h(t) and relates the input and output of the system by means of the system impulse response. This operation is represented symbolically as

y(t) = x(t) * h(t)    (2.3.4)

One consequence of this representation is that the LTI system is completely characterized by its impulse response. It is important to know that the convolution

y(t) = x(t) * h(t)

does not exist for all possible signals. The sufficient conditions for the convolution of two signals x(t) and h(t) to exist are:

1. Both x(t) and h(t) must be absolutely integrable over the interval (-∞, 0].
2. Both x(t) and h(t) must be absolutely integrable over the interval [0, ∞).
3. Either x(t) or h(t) or both must be absolutely integrable over the interval (-∞, ∞).

The signal x(t) is called absolutely integrable over the interval [a, b] if

∫_a^b |x(t)| dt < ∞    (2.3.5)

For example, the convolutions sin ωt * cos ωt, exp[t] * exp[t], and exp[t] * exp[-t] do not exist.
Continuous-time convolution satisfies the following important properties:

Commutativity.

x(t) * h(t) = h(t) * x(t)

This property is proved by substitution of variables. The property implies that the roles of the input signal and the impulse response are interchangeable.

Associativity.

x(t) * h1(t) * h2(t) = [x(t) * h1(t)] * h2(t) = x(t) * [h1(t) * h2(t)]

This property is proved by changing the orders of integration. Associativity implies that a cascade combination of LTI systems can be replaced by a single system whose impulse response is the convolution of the individual impulse responses.

Distributivity.

x(t) * [h1(t) + h2(t)] = [x(t) * h1(t)] + [x(t) * h2(t)]

This property follows directly as a result of the linear property of integration. Distributivity states that a parallel combination of LTI systems is equivalent to a single system whose impulse response is the sum of the individual impulse responses in the parallel configuration. All three properties are illustrated in Figure 2.3.1.
Some interesting and useful additional properties of convolution integrals can be obtained by considering convolution with singularity signals, particularly the unit step, unit impulse, and unit doublet. From the defining relationships given in Chapter 1, it can be shown that

x(t) * δ(t) = ∫_{-∞}^{∞} x(τ) δ(t - τ) dτ = x(t)    (2.3.6)

Therefore, an LTI system with impulse response h(t) = δ(t) is the identity system. Now

x(t) * u(t) = ∫_{-∞}^{∞} x(τ) u(t - τ) dτ = ∫_{-∞}^{t} x(τ) dτ    (2.3.7)
Figure 2.3.1 Properties of continuous-time convolution.
Consequently, an LTI system with impulse response h(t) = u(t) is a perfect integrator. Also,

x(t) * δ'(t) = ∫_{-∞}^{∞} x(τ) δ'(t - τ) dτ = x'(t)    (2.3.8)

so that an LTI system with impulse response h(t) = δ'(t) is a perfect differentiator. The previous discussions and the discussions in Chapter 1 point out the differences between the following three operations:

x(t) δ(t - a) = x(a) δ(t - a)

∫_{-∞}^{∞} x(t) δ(t - a) dt = x(a)

x(t) * δ(t - a) = x(t - a)

The result of the first (sampling property of the delta function) is a δ-function with strength x(a). The result of the second (sifting property of the delta function) is the value of the signal x(t) at t = a, and the result of the third (convolution property of the delta function) is a shifted version of x(t).
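On a uniform grid, the convolution integral can be approximated by np.convolve scaled by the step dt. The Python/NumPy sketch below uses a narrow unit-area pulse as a stand-in for δ(t - a) and confirms that convolving with it simply shifts the signal, as stated above; the signal, the shift a, and the grid are arbitrary illustrative choices.

import numpy as np

# Sampled convolution: y ~ np.convolve(x, h) * dt.
dt = 1e-3
t = np.arange(-5.0, 5.0, dt)
a = 1.0

x = np.exp(-t**2)
d_a = np.where(np.abs(t - a) < dt / 2, 1.0 / dt, 0.0)   # unit-area pulse at t = a

y = np.convolve(x, d_a) * dt                  # full convolution on the grid
t_y = 2 * t[0] + dt * np.arange(len(y))       # its time axis

x_shifted = np.exp(-(t - a)**2)               # the expected result x(t - a)
y_on_t = np.interp(t, t_y, y)
print(np.max(np.abs(y_on_t - x_shifted)))     # small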
Example 2.3.1
Let the signal x(t) = aδ(t) + bδ(t - t0) be input to an LTI system with impulse response h(t) = K exp[-ct]u(t). The input is thus the weighted sum of two shifted δ-functions. Since the system is linear and time invariant, it follows that the output, y(t), can be expressed as the weighted sum of the responses to these δ-functions. By definition, the response of the system to a unit impulse input is equal to h(t), so that

y(t) = a h(t) + b h(t - t0) = aK exp[-ct]u(t) + bK exp[-c(t - t0)]u(t - t0)
233
The output y(t) of an optimum receiver in a communicatiotr system is related to its input
r(r) by
y(i=[r(r)s(I-t+.t)d.t,
Jt-T
where s(r) is a known signal with duration
tion (2.3.3) yields
h(t
- r) =r(I-,
= 0,
L
+
0s,=r
es.s)
Comparison of Equation (2.3.9) with Equa-
r),
0< I - t < T
elsewhere
or
&(t) =511
- 11' 0<r<r
=0,
elsewhere
Such a system is called a matched flter. The sptem impulse response is s(r) reflected a.nd
(sysrem is marched ro s(r)).
shifted by
I
Example 2.3.3
Consider the system described by

y(t) = (1/T) ∫_{t-T/2}^{t+T/2} x(τ) dτ

As noted earlier, this system computes the running average of signal x(t) over the interval (t - T/2, t + T/2). We now let x(t) = δ(t) to find the impulse response of this system as

h(t) = (1/T) ∫_{t-T/2}^{t+T/2} δ(τ) dτ = 1/T,  -T/2 < t < T/2
                                      = 0,  otherwise

where the last step follows from the sifting property, Equation (1.6.8), of the impulse function.
Example 2.3.4
Consider the LTI system with impulse response

h(t) = exp[-at]u(t),  a > 0

If the input to the system is

x(t) = exp[-bt]u(t),  b ≠ a

then the output y(t) is

y(t) = ∫_{-∞}^{∞} exp[-bτ]u(τ) exp[-a(t - τ)]u(t - τ) dτ

Note that

u(τ)u(t - τ) = 1,  0 < τ < t
             = 0,  otherwise

Therefore,

y(t) = ∫_0^t exp[-at] exp[(a - b)τ] dτ = [1/(a - b)] [exp(-bt) - exp(-at)] u(t)
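The closed-form result of Example 2.3.4 can be checked against a direct numerical convolution, as in the Python/NumPy sketch below; the values of a and b and the grid are arbitrary illustrative choices, and the small residual is discretization error.

import numpy as np

# Numerical check of Example 2.3.4.
dt = 1e-3
t = np.arange(0.0, 10.0, dt)
a, b = 3.0, 1.0

h = np.exp(-a * t)                    # h(t) = exp(-a t) u(t) on t >= 0
x = np.exp(-b * t)                    # x(t) = exp(-b t) u(t)

y_num = np.convolve(x, h)[:len(t)] * dt
y_exact = (np.exp(-b * t) - np.exp(-a * t)) / (a - b)
print(np.max(np.abs(y_num - y_exact)))   # small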
Example 2.3.5
Let us find the impulse response of the system shown in Figure 2.3.2 if

h1(t) = exp[-2t]u(t)
h2(t) = 2 exp[-t]u(t)
h3(t) = exp[-3t]u(t)
h4(t) = 4δ(t)

By using the associative and distributive properties of the impulse response, it follows that h(t) for the system of Figure 2.3.2 is

h(t) = h1(t) * h2(t) + h3(t) * h4(t)
     = 2[exp(-t) - exp(-2t)]u(t) + 4 exp(-3t)u(t)

where the last step follows from Example 2.3.4 and the fact that x(t) * δ(t) = x(t).

Figure 2.3.2 System of Example 2.3.5.
Example 2.3.6
The convolution has the property that the area of the convolution integral is equal to the product of the areas of the two signals entering into the convolution. The area can be computed by integrating Equation (2.3.3) over the interval -∞ < t < ∞, giving

∫_{-∞}^{∞} y(t) dt = ∫_{-∞}^{∞} ∫_{-∞}^{∞} x(τ) h(t - τ) dτ dt

Interchanging the orders of integration results in

∫_{-∞}^{∞} y(t) dt = ∫_{-∞}^{∞} x(τ) [∫_{-∞}^{∞} h(t - τ) dt] dτ
                   = ∫_{-∞}^{∞} x(τ) [area under h(t)] dτ
                   = area under x(t) × area under h(t)

This result is generalized later when we discuss Fourier and Laplace transforms, but for the moment, we can use it as a tool to quickly check the answer of a convolution integral.
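The area property is convenient to verify numerically as well. In the Python/NumPy sketch below, the area of x * h computed on a grid matches the product of the individual areas; the two pulses chosen are arbitrary illustrative choices.

import numpy as np

# Area of the convolution = product of the areas.
dt = 1e-3
t = np.arange(-5.0, 5.0, dt)

x = np.where(np.abs(t) <= 1.0, 2.0, 0.0)       # area 4
h = np.exp(-t) * (t >= 0)                       # area ~1 (tail truncated at t = 5)

y = np.convolve(x, h) * dt
print(np.sum(x) * dt, np.sum(h) * dt, np.sum(y) * dt)
# the last number is close to the product of the first two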
2.3.2 Graphical Interpretation of Convolution

Calculating x(t) * h(t) is conceptually no more difficult than ordinary integration when the two signals are continuous for all t. Often, however, one or both of the signals is defined in a piecewise fashion, and the graphical interpretation of convolution becomes especially helpful. We list in what follows the steps of this graphical aid to computing the convolution integration. These steps demonstrate how the convolution is computed graphically in the interval t_{i-1} ≤ t ≤ t_i, where the interval [t_{i-1}, t_i] is chosen such that the product x(τ)h(t - τ) is described by the same mathematical expression over the interval.

Step 1. For an arbitrary, but fixed, value of t in the interval [t_{i-1}, t_i], plot x(τ), h(t - τ), and the product g(t, τ) = x(τ)h(t - τ) as a function of τ. Note that h(t - τ) is a folded and shifted version of h(τ) and is equal to h(-τ) shifted to the right by t seconds.

Step 2. Integrate the product g(t, τ) as a function of τ. Note that the integrand g(t, τ) depends on t and τ, the latter being the variable of integration, which disappears after the integration is completed and the limits are imposed on the result. The integration can be viewed as the area under the curve represented by the integrand.

This procedure is illustrated by the following four examples.
Example 2.3.7
Consider the signals in Figure 2.3.3(a), where

x(t) = A exp[-t],  0 ≤ t < ∞
h(t) = 1,  0 ≤ t ≤ T

Figure 2.3.3(b) shows x(τ), h(t - τ), and x(τ)h(t - τ) with t < 0. The value of t always equals the distance from the origin of x(τ) to the shifted origin of h(-τ) indicated by the dashed line. We see that the signals do not overlap; hence, the integrand equals zero, and

x(t) * h(t) = 0,  t < 0

When 0 ≤ t ≤ T, as shown in Figure 2.3.3(c), the signals overlap for 0 ≤ τ ≤ t, so t becomes the upper limit of integration, and

x(t) * h(t) = ∫_0^t A exp[-τ] dτ = A[1 - exp[-t]],  0 ≤ t ≤ T

Finally, when t > T, as shown in Figure 2.3.3(d), the signals overlap for t - T ≤ τ ≤ t, and

x(t) * h(t) = ∫_{t-T}^{t} A exp[-τ] dτ = A[1 - exp[-T]] exp[-(t - T)],  t > T

The complete result is plotted in Figure 2.3.3(e). For this example, the plot shows that convolution is a smoothing operation in the sense that x(t) * h(t) is smoother than either of the original signals.

Figure 2.3.3 Graphical interpretation of the convolution for Example 2.3.7.
Example 2.3.8
Let us determine the convolution

y(t) = rect(t/2a) * rect(t/2a)

Figure 2.3.4 illustrates the overlapping of the two rectangular pulses for different values of t and the resulting signal y(t). The result is expressed analytically as

y(t) = 0,  t < -2a
     = 2a + t,  -2a ≤ t < 0
     = 2a - t,  0 ≤ t < 2a
     = 0,  t ≥ 2a

or, in more compact form,

y(t) = 2a - |t|,  |t| ≤ 2a
     = 0,  |t| > 2a
     = 2a Λ(t/2a)

This signal appears frequently in our discussion and is called the triangular signal. We use the notation Λ(t/2a) to denote the triangular signal that is of unit height, centered around t = 0, and has base of length 4a.
Figure 2.3.4 Graphical solution of Example 2.3.8.
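The triangular result of Example 2.3.8 can be reproduced numerically, as in the Python/NumPy sketch below; the value of a and the grid are arbitrary illustrative choices.

import numpy as np

# rect(t/2a) * rect(t/2a) = 2a * Lambda(t/2a).
dt = 1e-3
t = np.arange(-6.0, 6.0, dt)
a = 1.5

rect = np.where(np.abs(t) < a, 1.0, 0.0)      # rect(t/2a): width 2a, height 1
y = np.convolve(rect, rect) * dt
t_y = 2 * t[0] + dt * np.arange(len(y))       # time axis of the full convolution

tri = np.clip(2 * a - np.abs(t_y), 0.0, None) # 2a - |t| for |t| <= 2a, else 0
print(np.max(np.abs(y - tri)))                # small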
Example 2.3.9
Let us compute the convolution x(t) * h(t), where x(t) and h(t) are the signals shown in Figure 2.3.5: x(t) is a unit-height rectangular pulse on -1 ≤ t ≤ 0, and h(t) rises linearly from 0 at t = -1 to 1 at t = 0 and stays at 1 until t = 1.
Figure 2.3.5 demonstrates the overlapping of the two signals x(τ) and h(t - τ). We can see that for t < -2, the product x(τ)h(t - τ) is always zero. For -2 ≤ t < -1, the product is a triangle with base t + 2 and height t + 2; therefore, the area is

y(t) = (1/2)(t + 2)²,  -2 ≤ t < -1

For -1 ≤ t < 0, the product is shown in Figure 2.3.5(c), and the area is

y(t) = 1 - t²/2,  -1 ≤ t < 0

For 0 ≤ t < 1, the product is a rectangle with base 1 - t and height 1; therefore, the area is

y(t) = 1 - t,  0 ≤ t < 1

For t ≥ 1, the product is always zero. Summarizing, we have

y(t) = 0,  t < -2
     = (t + 2)²/2,  -2 ≤ t < -1
     = 1 - t²/2,  -1 ≤ t < 0
     = 1 - t,  0 ≤ t < 1
     = 0,  t ≥ 1

Figure 2.3.5 Graphical interpretation of x(t) * h(t) for Example 2.3.9.
Ssc.
2.3
63
UnearTime-lnvadant Syst€ms
Exanple 2.3.10
The convolution of the two signals shown in the following figure is evaluated using graph'
ical interpretation.
From Figure 2.3.6, we can see that for t < 0, the product
.t
(r)ft (r
-
t)
is alwayszero
for
allctheref6re,y(r)=0.Foro=t<l,theproductisatrianglewithbase'ardheightc
therefore, yO = t2/2.Eot I = t <z,lhe area under the product is equal to
y(r) = t
-
fl(,
- lf
1=t<2
+ l(2 -,)'zl,
For2sr<3,theproductisatriangtewithbase3-randheight3-4therefore,y(t)=
(3 - t)212. For I >3, the product r(7) h(t - r) is atways zero. Summarizing, we have
t-l
,-
1
(a)
t-l
I
,< o
I
a3.6
,+l
o) 0<r<I
(d)
a+l
Convolution of
t
(f)
,(r)
and
t+l
2<r<3
t2
?
(€) ,>3
Hgure
,
t-l
, 2 t+l
(c) | s t<2
t-l
l
,t(r) in Example 2,3.10.
a
..
Continuous-Time
0,
l<0
t2
0<r<l
1'
*-r-),
v(t) =
(3
-
t)z
Chapt€r 2
l=t<2
2st<3
2'
,>3
0,
2.4 PROPERTIES OF LINEAR, TIME-INVARIANT SYSTEMS
The impulse response of an LTI system represents a complete description of the characteristics of the system. In this section, we examine the system properties discussed in Section 2.2 in terms of the system impulse response.

2.4.1 Memoryless LTI Systems

In Section 2.2.3, we defined a system to be memoryless if its output at any time depends only on the value of the input at the same time. There we saw that a memoryless, time-invariant system obeys an input/output relation of the form

y(t) = K x(t)    (2.4.1)

for some constant K. By setting x(t) = δ(t) in Equation (2.4.1), we see that this system has the impulse response

h(t) = K δ(t)    (2.4.2)
2.4.2 Causal LTI Systems

As was mentioned in Section 2.2.4, the output of a causal system depends only on the present and past values of the input. Using the convolution integral, we can relate this property to a corresponding property of the impulse response of an LTI system. Specifically, for a continuous-time system to be causal, y(t) must not depend on x(τ) for τ > t. From Equation (2.3.3), we can see that this will be so if

h(t) = 0 for t < 0    (2.4.3)

In this case, the convolution integral becomes

y(t) = ∫_{-∞}^{t} x(τ) h(t - τ) dτ = ∫_{0}^{∞} h(τ) x(t - τ) dτ    (2.4.4)
As an example, the system h(t) = u(t) is causal, but the system h(t) = u(t + t0), t0 > 0, is noncausal.
In general, x(t) is called a causal signal if

x(t) = 0,  t < 0

The signal is anticausal if x(t) = 0 for t ≥ 0. Any signal that does not contain any singularities (a delta function or its derivatives) at t = 0 can be written as the sum of a causal part x+(t) and an anticausal part x-(t), i.e.,

x(t) = x+(t) + x-(t)

For example, the exponential x(t) = exp[-t] can be written as

x(t) = exp[-t]u(t) + exp[-t]u(-t)

where the first term represents the causal part of x(t) and the second term represents the anticausal part of x(t). Note that multiplying the signal by the unit step ensures that the resulting signal is causal.
2.4.3 Invertible LTI Systems

Consider a continuous-time LTI system with impulse response h(t). In Section 2.2.5, we mentioned that a system is invertible only if we can design an inverse system that, when connected in cascade with the original system, yields an output equal to the system input. If h_i(t) represents the impulse response of the inverse system, then in terms of the convolution integral, we must, therefore, have

y(t) = h_i(t) * h(t) * x(t) = x(t)

From Equation (2.3.6), we conclude that h_i(t) must satisfy

h_i(t) * h(t) = h(t) * h_i(t) = δ(t)    (2.4.5)

As an example, the LTI system h_i(t) = δ(t + t0) is the inverse of the system h(t) = δ(t - t0).
2.4.4 Stable LTI Systems

A continuous-time system is stable if and only if every bounded input produces a bounded output. In order to relate this property to the impulse response of LTI systems, consider a bounded input x(t), i.e., |x(t)| < B for all t. Suppose that this input is applied to an LTI system with impulse response h(t). Using Equation (2.3.3), we find that the magnitude of the output is

|y(t)| = |∫_{-∞}^{∞} h(τ) x(t - τ) dτ| ≤ ∫_{-∞}^{∞} |h(τ)| |x(t - τ)| dτ ≤ B ∫_{-∞}^{∞} |h(τ)| dτ    (2.4.6)

Therefore, the system is stable if

∫_{-∞}^{∞} |h(τ)| dτ < ∞    (2.4.7)

i.e., a sufficient condition for bounded-input, bounded-output stability of an LTI system is that its impulse response be absolutely integrable.
This is also a necessary condition for stability. That is, if the condition is violated, we can find bounded inputs for which the corresponding outputs are unbounded. For instance, let us fix t and choose as input the bounded signal x(τ) = sgn[h(t - τ)] or, equivalently, x(t - τ) = sgn[h(τ)]. Then

y(t) = ∫_{-∞}^{∞} h(τ) sgn[h(τ)] dτ = ∫_{-∞}^{∞} |h(τ)| dτ

Clearly, if h(t) is not absolutely integrable, y(t) will be unbounded.
As an example, the system with h(t) = exp[-t]u(t) is stable, whereas the system with h(t) = u(t) is unstable.
Example 2.4.1
We will determine whether the systems with the impulse responses shown are causal or noncausal, with or without memory, and stable or unstable:

(i) h1(t) = 10 exp[-2t]u(t) + exp[3t]u(-t) + δ(t - 1)
(ii) h2(t) = -3 exp[2t]u(t)
(iii) h3(t) = 5δ(t + 5)
(iv) h4(t) = 2u(5 - |t|)

Systems (i), (iii), and (iv) are noncausal, since for t < 0, h_i(t) ≠ 0, i = 1, 3, 4. Thus, only system (ii) is causal.
Since h(t) is not of the form Kδ(t) for any of the systems, it follows that all the systems have memory.
To determine which of the systems are stable, we note that

∫_{-∞}^{∞} |h1(t)| dt = ∫_0^∞ 10 exp[-2t] dt + ∫_{-∞}^0 exp[3t] dt + 1 = 5 + 1/3 + 1 < ∞

∫_{-∞}^{∞} |h2(t)| dt = 3 ∫_0^∞ exp[2t] dt, which is unbounded

∫_{-∞}^{∞} |h3(t)| dt = 5

∫_{-∞}^{∞} |h4(t)| dt = 2 ∫_{-5}^{5} dt = 20

Thus, Systems (i), (iii), and (iv) are stable, while System (ii) is unstable.
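The stability test of Equation (2.4.7) for these impulse responses can be approximated numerically, as in the Python/NumPy sketch below. Impulses are handled separately by adding their absolute strengths, and the finite integration limits stand in for (-∞, ∞); all grid choices are arbitrary.

import numpy as np

# Approximate absolute integrals of the impulse responses of Example 2.4.1.
dt = 1e-4
t = np.arange(-40.0, 40.0, dt)

h1_smooth = 10 * np.exp(-2 * t) * (t >= 0) + np.exp(3 * t) * (t < 0)
h2 = -3 * np.exp(2 * t) * (t >= 0)
h4 = 2.0 * (np.abs(t) <= 5)

print(np.sum(np.abs(h1_smooth)) * dt + 1.0)   # ~ 5 + 1/3 + 1 : stable
print(np.sum(np.abs(h2)) * dt)                # huge, and diverging as the limit grows
print(5.0)                                    # |h3| contributes only its impulse strength
print(np.sum(np.abs(h4)) * dt)                # ~ 20 : stable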
2.5 SYSTEMS DESCRIBED BY DIFFERENTIAL EQUATIONS

The response of the RL circuit in Example 2.2.3 was described in terms of a differential equation. In fact, the response of many physical systems can be described by differential equations. Examples of such systems are electric networks comprising ideal resistors, capacitors, and inductors, and mechanical systems made of springs, dampers, and the like. In Section 2.5.1, we consider systems with linear input/output differential equations with constant coefficients and show that such systems can be realized (or simulated) using adders, multipliers, and integrators. We shall also give some examples to demonstrate how to determine the impulse response of LTI systems described by linear, constant-coefficient differential equations. In Chapters 4 and 5, we shall present an easier, more straightforward method of determining the impulse response of LTI systems, namely, transforms.
2.6.1 Linear, Constant-Coefficient Ilifierential Equations
Consider the continuous-time systcm described by the inpuUoutput differential equation
,r#, *
=
Zo,r' ip
F_ "!to:t
where the coefficients a,, i = 1,2, ..., N - 1, bi, i= 1, ..., ,DI are real constants and
(2.s.t)
N > M.In operator form, thc last equation can be written
(r" -
i
as
o,o,)t(t)= (Sr,r,),r,r
where D represents the differentiation operator that transforms -v(t) into
y'(r). To solve Equation (2.5.2), one needs the N initial conditions
y(ro). y'00), ...,
(2.s.2)
is derivative
y('-')(ro)
where ro is some instant at which the input.r(l) is applied to thc system and yi(t) is the
ith derivative of /(r).
The integer N is the order or dimension of the system. Note that if the ith derivative of the input r(t) contains an impulse or a derivative of an impulse, then, to solve
Equation (2.5.2) for, > ,0, it is necessary to knorv the initial conditions at time t = 6.
The reason is that the output.1,(t) and its derivatives up to order N - l can change
instantaneously at time r = Io. So initial conditions must be taken just prior to time lu.
Although we assume that the reader has some exPosure to solulion techniques for
ordinary linear differential cquatitrns, rve rvork out a first-ordcr case (N: 1) to review
the usuaI method of solving linear, constant-coefficient differential equations.
Example 2.6.1
Considcr the first-order LTI system that is described by the first-order differential equation
4v$) + ay(,) = bx(t)
dt
(2.s.3)
68.
Conilnuous-Time
Systems
Chapter 2
where a and D are arbitrary constants. The complete solution of Equation (2.5.3) consists
of the sum of the particular solution, yr(r), and the homogeneous solution, yrO:
y(t)=yr(t)+yr(t)
(2.5.1)
The homogeneous differential eriuation
dv(t\
riz+oY(t)=o
has a solulion in the form
Yr(t) =
c exPl-atl
Using the integrating factor method, we frnd that the particular solution is
,,111
'
= Ju[exn[- a(t-,r)lbx(t)dr, ,=ro
Therefore. the general solution is
y(r)=Cexp[ -ar1 + ['exp1-
a(r-r)lbr|)d1
,>h
(2S.5)
Note that in Equation (2.5.5), the constant C has not been determined yet. In order to
have the output completely determiued, we have to know the initial condition y(lo). Let
y0J = yo
Then, from Equation (2.5.5),
Yo
= CexP[-aro]
Therefore,forr>6,
y(r) = y6exp[-a(,
If. for t <
to,
:(t)
-
ro)]
+ ['exp[-a(r
Jh
- t)lbtr)dt
= 0, then the solution consists of only the homogeneous part:
y(t) =yoexp[-a(r
- ro)], r(lo
Combining the solutions for I > to and , < lo, we have
y0) = yoexp[-a(
- r0)l* {/'*r1-rO -,r)lbik)&}a(, -,0)
e.s.6l
Since a linear system has the property that zero input produces zero output, the previous system is nonlinear if yo # 0. This can tre easily seen by lelting r(,) = 0 in Equation
(25.6) to yield
y(r) =
Ifyo = 6, 16.
2.62
yo exp
[-a(, -
,o)]
system is not only linear, but also time invariant. (Verify this.)
Basic System Conponents
Any finite-dimensional, linear, time-invariant. continuous-time system given by Equation (25.1) with M < N can be realized or simulated using adders, subtractors, scalar
multipliers, and integrators. These components can be implemented using resistors,
capacitors, and operational amplifiers. .
Sec.
2.5
Syst€ms Described by Difler€ntal Equations
,u,{f-*
r tt)
=
reot
+i
69
xkt at
ro
Flgure
2S.l
The integrator.
The Integrator. A basic element in the theory and practice ofsystem engineering is the integrator. Mathematically, the input/output relation describing the integrator. shown in Figure 2.5.1, is
y(t) = y\i +
[' xe)dr,
Jh
!E
ro
e.s.l)
The input/output differential equation of the integrator is
dv(rl
,(r)
-i'=
(2.5.8)
If y0d = 0, then the integrator is said to be at rest.
Adders, subtractors, and scalar
Multiplierc.
trated in Figure 2.5.2.
.r, (r) + 12
.:1(l)
(r)
These operators are illus-
r, (r)
:s (l)
-
.r1(r)
x2(()
(a)
,,,,-{IFye)-Kx(')
(c)
Flgure 2.52 Basic components: (a) adder, (b) subtracror, and (c) scalar
multiplier.
Erample 2.62
We will find the differential equation describing the syslem of Figure 2.5.3. [.et us denote
the output of the first summer as zr,(t). that of the second summer as ur(r) and that of the
first integrator as y, (r). Then
4x(t)
Dr(t) = y'(t) = .yr(r) + 4y(t) +
Differentiale this equation and note that yi() = ?rr(r) to ger
which on substituting
y'(t) = ai(\ = or(r) + 4y'(t) + 4x'Q)
u,(l) = -y(t) + 2r(r) yields
y"(t) = 4y'(t) - y(,) + 4x'(t\ + Lt(t)
(2.5.9)
(25.10)
(2.5.11)
Coniinuous-Time
70
Hgure
253
2.5J
Systems
Chapter 2
Realization of the system in Example 2.5.2.
Simulation Diagrane for Continuous"Time Systeme
Consider the linear, time-invariant system described by Equation (2'5.2). This system
can be realized in several different ways. Depending on the application, a particular
one of these realizations may be preferable. In this section, we derive two different
canonical realizations; each canonical form leads to a dilferent realization, but the two
are equivalent. To derive the fint canonical form, we assume that M = N and rewrite
Equation (2.5.2) as
DNO
-
+ aN-t1an-r! - bn-,r) +
Drr) + aov - Dox : 0
Dn4)
+ D(ary
-
"'
(2.s.12',)
Multiplying by D-n and rearranging gives
|
= bxx + D-r(bN-rr - a,v-rI) + "'
* ,-trv-t)(D,.r - a.1,) + D-r(b*
-
oO)
(2.s.13)
from which the flow diagram in Figure 2.5,4 can be drawn' starting with output y(t) at
the right and working to the left. The operator D-r stands for integrating & times.
Another useful simulation diagram can be obtained by converting the Nth-order differential equation into two equivalent equations. Let
("'.
a,Di)u(t): x(t)
(2.s.14)
(i.','')'1a
(2.s.ls)
,?,
Then
v0) =
Sec.
2.5
71
Systems Described by Ditlerential Equations
Flgure
2S.4
Simutation diagram using the frrst canonical form.
To verify that these two equations are equivalent to Equation (2.5,2), we substitute
Equation (2.5.15) into the left side of Equation (2.5.2) to obtain
(,, *
\,,u)t<o:
(io
r,r".'
* 5]
,,
i
,,r"."),1,1
(i ''('".' * i'''o"-"))'<'l
= (i
'''')'t'r
=
The variables o('v- r)(r), ..., ?(t) that are used in constructing .v(r ) and .r(t) in Equations
(2.5.14) and (2.5.15), respectively, are produced by successively integrating u(n)(t)'Th"
iimulaiion diagram corrisponding to Equations (2.5.141and (2.5.15) is given in Figure
2.5.5. We reteito this form of rePresentation as the second canonical form'
Note that in the second canonical form, the input of any inlegrator is exactly the
same as the output of the preceding integrator. For example, if the outPuts of two successive integratbrs (counting from the right-hand side) are dcnoted by a. and a., 1,
respectively, then
a;(t) = a",. r(t)
This fact is used in Section 2.6,4 to develop state-variable representations that have
useful properties.
Exanple 254
We obrain a simulation diagram for the LTI system described hy the linear constanl-coe[ficient differential equation:
o
st
(.1
E
o
tr
I
!
tr
o
(.l
o)
!)
c)
tr
t
E
(!
hb
a!
t
tr
o
(!
E
tt)
vl
arl
C'
a,
E
E!
lz
72
S6c.
2.5
Systems Described by Ditlerontial Equations
v'(r) + 3v'(r) + 4lQ) = LY"(t)
73
-
3x'(t) + x(t
)
l(t) is the output, and x(r) is the input to the system. To gct the first canonic form.
we rewrite this equation as
where
Dzy
(t) =
b(t) + D-tl-3.r(r) - 3y(r)l + D-'z[.r'(r) -
+y
(t)]
Integrating both sides twice with resPect to , felds
y(t)
=u(t) + D-'[-3.r0) -
3y(r)l + D-2[.r1r)
- qy$)l
The simulation diagram for this repres€ntation is given in Figure 2.5.6(a). For the second
canonic form. we set
u'(r) + 3o'0) + 4u(t) =:(1)
and
v(t) =
21)',(t)
-
3a'
(rl + a(t)
which leads to the simulation diagram of Figure 2.5.6(b).
In Section 2.6, we demonstrate how to use the two canonical forms just described to
derive two different state-variable representations,
2.6.4 Finrring the Impulse Response
The system impulse response can be determined from the differential equation describing the system. In later chapters, we find the impulse response using transform techniques. We defined the impulse response lr(r) as the response y(r) when r0) = 6(t)
and y(t) = 0, cc < , < 0. The following examples demonslrate the procedure for
determining the impulse response from the system differential equation.
-
Example 2.8.4
Consider the system governed by
Setting.r(t) = 6(l) results
2y'(t)+ty1t1 =3v111
in the response y(t) = h(t). Thereiorc. l(t)
should satisfy the
differential equation
2h'(\ + ah?) =
36(t)
(2s.t6)
The homogeneous part of the solution to this first-order differential equation is
h(t) = c
"*or-rrurr't
(2.5.r7)
We predict that the panicular solution is zero, the motivation for this being that h(t) cannot contain a delta function. Otherwise. /r'(r) would have a derivative of a delta function
that is not a part of the right-hand side of Equation (2.5.16). To find the constant C' we
substitute Equation (2..5.17) into Equation (2.5.15) to get
z
!,C
expl-zrlu0)) + 4cexp[-Z]a(r) = 3 E(,)
Simplifying this expression results in
74
Continuous-Time
Systems
(b)
Flgure
25.6
Simulation diagram of the second-order system in Example
2.53.
2C
expl-2tl6(,) = -j6(,)
which. after applying the sampling property of the 6 function, is equivalent to
2C 6(t) = 3511;
Chaptsr 2
Sec.
2.5
Systems Described by Ditlerentlal Equations
so that C
=
75
1.5, We therefore have
h(t) =
In general, it can be shown that for
(2.5.2) is of the form
1.5
exp[-2t]z(t)
r(t) :
6(t), the particular solution of Equation
Coto0r,
he() =
{r
M
z
N
M<N
where 6{i)1r) is the ith derivative of the 6 function. Since, in most cases of practical
interest, N > M, it follows that the particular solution is at most a 6 function.
E=n'rrple 2.6.6
C-onsider the first-order system
r'(t)+ry1t1
=211,1
The system impulse response should satis$ the follovirrg rlifferential equation:
h'(t)+3h(t)=26(,)
The homogeneous solution of this equation is of the form C,exp [-31]a(l). Let us assume
a particular solution of the form ftr(l) = Cr60). The general solution is therefore
h(t) = Crexp[-3r]u(t) + c260)
Substituting in the differential equation gives
C,[-3exp ( -3t)z(r) + erp(-3r)6(r)]
+ Cz6'(t) + 3[C,
Equate coefficients of 6(l) and
&function to get
exp
a()
(-3r) u(t) + Cr6(t)l
:
26(t)
on both sides and use thc sifting prop€rty of the
Cr=
2,
Cz= 0
This is in conformity with our previous discussion since M < N in rhis example and so we
should expect that Cz = O. We therefore have
h(t) = Ze*O1-r,tur,,
(25.18)
f,'.rarnple 2.6.0
Consider the second-order system
y(t)+2y'(,)+2y(t): i/0) +3r'0)
+ 3r(,)
The characteristic roots of this differential equation are equal to - 1 + 71 so that the homogeneous solution is of the form [C,exp(-l)cost + C;exp(-r)sin llzO. Since M = N
this case, we should expect that ir(t) = Ca60). Thus the impulse response is of the form
n
.:it, t:,lrji;g'i:i
t4
,T
'I
tq
* il
,': ,rr li; ,i rll
r.T
,{
.4,
fl li
Continuous-Time
Systems
Chaptor 2
ft(r) = [C,exp(-t)cosr + Crexp(-t)sint]u(r) + Cr6(r)
so
that
h'(t)=[C,
-
C,lexp(-t)cosru(t)
- lcz+ CJexp(-t)sintz(t)
+ Cr60) + Ca6'(r)
and
h'
(t) = - 2gr"*p ( - t)cos t u () + 2C,exp ( - l)sint
+ (Cz - C,)D(r) + Cr6'0) + CaE"(r)
u
()
We now substitute these in the system differential equation. Collecting like terms in 6(r),
8(t) and 8(t) and solving for the coefficients C, gives
Ct=1, Cz=0. and Cr=l
so that the impulse response is
a0) = exp[-tlcos, u(t) + 6(r)
In Chapters 4 and
easier manner.
5, we use
2,6 STATE-VARIABLE
(2.s.1e)
transform methods to find the impulse resPonse in a much
REPRESENTATION
In our previous discussions, we characterized linear time-invariant systems either by
their impulse response functions or by differential equations relating their inputs and
outputs. Frequently, the impulse response is the most convenient method of descriF
ing a system. By knowing the input of the system over the interval - @ < , < @, we can
obtain the output of the system by forming the convolution integral. In the case of the
differential-equation representation, the output is determined in terms of a set of initial conditions. If we want to find the output over some interval lo - I ( 11, w€ rlltst
know not only the input over this interval, but alss a certain number of initial conditions that must be sufEcient to describe how any past inputs (i.e., for , < ,o) affect the
output of the system during that interval.
In this section, we discuss the method of state-variable representation of systems.
The representation of systems in this form has many advantages:
I. It
provides an insight into the behavior of the system that neither the impulse
response nor the differential-equation method does.
2. It can be easily adapted for solution by analog or digital computer techniques.
3. It can be extended to nonlinear and time-varying systems.
4. It allows us to handle systems with many inputs and outputs.
The computer solution feature by itself is the reason that state-variable methods are
widely used in analyzing highly complex systems.
We define the state of the system as the minimal amount of information that is sufficient to determine the output of the system for all t > tr, provided that the input to
the system is also known for all times , > ,0. The variables that contain this informa-
Sec.2.6
State-VariableFlepresentation
tion are called the state variables. Given the state of the.system at ,0 and the input ftom
,0 to ,r, we can find both the output and the state at r]lNote that t"his definition of the
state of the system applies only to causal systems (systems in which future inputs cannot affect the output).
z.AJ
State Equations
Consider the single-input, single-output, second-order, continuous-time system
described by Equation (2.5.11). Figure 2.5.3 depicts a realization of the system. Since
integlators are elements with memory (i.e,, they contain infornration about the past
history of the system), it is natural to choose the outputs of integrators as the state of
the system at any time ,. Note that a continuous-Iime system of dimension N is realized by N integrators and is, therefore, completely represented by N state variables. It
often advantageous to think of the state variables as the components of an Ndimensional vector referred to as state vector v(r). Throughout the book, boldface lowercase
letters are used to denote vectors, and boldface uppercase letters are used to denote
matrices. In the example under consideration, we defrne the components of the state
vector v(t) as
is
ar(t) = y(t)
Expressing
ar(t)
=
a'r(t)
= -n "'r1t) - art;r(t) + box(t)
v'(l) in terms of v(t) yields
[;;l;l]
The output
uiQ)
y(l)
=
[-'.. -: ] [;:l]l . [],].,,,
(2.6.1')
can be expressed in terms of the state vector v(r) as
y(r)=r1
,[;j[:]]
(2.6.2)
ln this representation, Equation (2.6.Ij is called the state equation and Equation (2.6.2)
is called the output equation.
In general, a state-variable description of an N-dimensional. single-input. single-output linear, time-invariant system is written in the form
v'(,)=Av(r)+bro
(2.6.3)
y(t)=cv(t)+dx(t)
(2.6.4)
where A is an N X N square matrix that has constant elements. b is an N X 1 column
vector, and c is a I x N row veclor. In the most general case. A, b. c, and d are func'
tions of time, so that we have.a timc-varying system. The solution of the state equation
for such systems generally requires the use of a computer. In this book, we restrict our
attention to time-invariant systems in which all the coefficients arc constant.
Contlnuous-Time
78
Systems
Chapter 2
+
v(tl
ffgre Z6.f
RLC circuit for
Example 2.6.1.
Exanple 2.8.1
Consider the RLC series circuit shos'n in Figure 2.6.1. By choosing the voltage a(r6s the
capacitor and the current through the inductor as the state variables, we obtain the fol'
lowing sute equations:
cuf;
= wtt)
'ry=:(r)-Ro'o-u'(r)
v(t)
:
u,(t)
In matrix form, these become
v'(r) =
If we assume
[-', ]'"],u,. i:1,u,
LzJ
Lz t)
v(') =
[t
v'(r) =
t-? -?]'t'r . [f]'t't
Y(t) =
[1
o]vo
that C = I /2 and L =R=l,wehave
0]v(t)
2.62 Tlne-Dornain Solution of t:he State Equatione
consider the single-input, single-output, linear, time-invariant, continuous-time system
described by the following state equations:
v'(r)=a"1r;+b:o
(2.6.s)
y(t)=cv(t)+dx(t)
(2.6.6)
The stare vecror v(r)is an explicit function of time, but it also depends implicitly on the
initial srate v(ro) = vu, the initial time ru. and the input r(t). Solving the state equations
Se.
2.6
State-Variable
Repr€sentation
79
means finding that functional dependence. wc can then compute the outPut y(r) by
using Equation (2.6.6).
As a natural generalization of the solution to the scalar first-order differential equation, we would Jxpect the solution to the homogeneous matrix-differential equation to
be of the form
v(r) = exp [At]vo
where exp [Ar] is an N X N matrix exponential of functions of time and is defined by
the matrix power series
exp[Ar] =
I + A, *
+
^'*.* ^'f
"' + Ar i;
.
(2.6.7)
where I is the N X N identity matrix. Using this definition, we can establish the following properties:
exp[Al,]exp[Arr]
= exp[-Ar]
(2.6.8)
exp[A(r, + ,z)] =
[exp[Ar]l-'
Q-6.9)
To prove Equation (2.6.8), we expand exp[Ar,] and exp [Atr] in power series and
multiply out terns to obtain
exp[At,]exp[Atr] =
[t
[,
* nr, *
*
*, *
= exp[A(tr +
BY setting tz
: -\
^'*.*
"'+ l*l;i
n'*.+ A'4
+
+
"'+
l.
"-f
.. .]
tr) ]
= ,, it follows that
exp[-Ar]exp[At] = I
so that
exp[-Ar] = [exp[tu]l-'
posis well known that the scalar exponential exp[al] is the only function which
with
functionssesses the property that its derivative and integral are also exponential
scaea amititudes. This observation holds true for the matrix exponential as'well. We
require that A have an inverse for the integrat to exist. To show that the derivative
of ixp[Al] is also a matrix exponential, we differentiate Equation (2.6.7) with respect
to , to get
It
fiexplst)=
:
*X.n'* Tn' * "' * \,' Ao * "'
+"' + Art' .']"
* n, * n'7,.
0+A
[,
"'iI
=n[I+A,+A2'i.*n'f *
"*"-;i.
']
80
Conlinuous-Time
Systoms
Chapter 2
Thus,
ftexplt^tl=
explArlA = aexplArl
Nou multiplying Equation (2.65) on the left by exp [-Al]
exp[-Ar][v'(t)
-
A
"(t)]
and rearrangingterms,we obtain
= exp[-At]bxo
Using Equation (2.6.10), we can write the last equation
,4
(2.6.10)
as
(""R[-a4r(r)) = exp[-Ar]hr(r)
Integrating both sides of Equation (2.6.11) between
exp[-Ar]vO + exp[-tuotvo =
ro
(2.6.1t)
and I gives
['exp1-A"]hx(t)dr
J4
(2.6.12)
Multiplying Equation (2.6.12) by exp [At] and rearranging terms, we obtain the complete solution of Equation (2.6.5) in the form
vO = exp[A(r
- ro)]vo + f exptl( - t)lbx(r)dr
(2.6.t3)
The matrix exponential exp [At] is called the state transition matrix and is denoted by
O(t). The complete output response y(t) is obtained by substituting Equation (2.6.13)
into Equation (2.6.6). The result is
y(r)
=co(r-
ro)vo+
['"o1r-t)bx(r)dr + dx(t), t>h
th
(2.6.14)
Using the sifting property of the unit impulse 6(r), we can rewrite Equation (2.6.14) as
y(t):cO(-6)vo+ |rl lctD(t-r)b+d6(r-r)lr(t)dt,
Jro
t>to
(2.6.15)
Observe that the ccmplete solution is the sum of two terms. The first term is the
response when the input.rO is zero and is called the zero-input response. The second
term is the response when the initial state vo is zero and is called the zero-state
response. Further inspection of the zero-state response r€veals that this term is the convolution of input .r(r) with cO(r) + d 6(r). Comparing this result wirh Equation (2.3.3),
we conclude that the impulse response of the system is
,(,)= Ico(t)b+d6(,) r>o
otherwise
t0
(2.6.16)
That is, the impulse response is composed of two terms. The first term is due to the
contribution of the state-transition matrix, and the second term is a straight-through
path frorq input to output. Equation (2.6.16) can be used to compute the impulse
response directly from the coefficient matrices of the state model of the system.
S€c.2.6
8l
State-VadableRepresentation
Erample2.63
Consider the linear. time-invariant. continuous-time system described by the differen-
tial
equation
v"(tr
+
y'{t) -zy(t1 =
111y
The state-space model for this system is
,,(,) =
[: _ l]
y(r) =
[t 0]
,u,
.
[?] ,u,
v0)
so that
"=[: -l]' '=[l]'
and
c=r,
or
To determine the zero-input response of the system to a specified initial-condition vector
",= [;]
we have to calculate
O(t). The powers of ihe matrix A are
* =l-i -l]
"'= t
so that
o0, =
[l :]. [;
'.,
;]. [
L-r'
: :] "'= [-,: ,;]
; ] t ; #'1.
fl.
T) [.,'
L" +".1.L,j" *".J.'
l+r'-r*a*... ,-1*i-ut'*.1I
=l u + f 1,0 *... r t +3r,'
-1,t*]i,"*
l" fFt4t2tr5
._l
The zero-input response of the system is
vo) =
rr
,
[, -, ),'.,,:
ii,- , ,.'r,j-.ri,.:;,',: ] tl
=t+tz_:_;.
The impulse response of the system is
Continuous-Timo
s2
,. u-;-;."'
|
,r(r)=u 0l I
s
lr-,'+tr-lrrto+"'
Systems
Chapter 2
*r. tl [rl
+3r,' -2,' *fit *'l L,l
,-'i. *'i -
r
-
t
tj 5,
=t- t2z+rur+"'
Note that
for.r() = 0, the slate at time t is
v(') = o(') vo
I
_t
t4
- ll
*. .l
t* '"r*z
-1"-u +f-l1to
.l
Exanple 2.63
Given the continuous-time sYstem
l--r o ol
ltl
<,r
',u,= s _i ;l 'ar. [1]
L
y(r) =
I-t 2 0l vo
wecomPutethetransitionmatrixandtheimpulseresponseofthesystem.ByusingEqua.
tion (2.6.7). we have
.', =
[i
ool [-r 0 ol
r ol* lo-a,
o ,-l Lo -l 1].
1
t:
2
0
0
00
6tz -atz
?rz
+ ..'
-?J2
- 4,+ 6t2 +... 4, - Er2 + "'
-t+2t2+... l-2t2+"'
Using Equation (2.6.16), we find the system impulse response to be
Irl
no)=t-r 2 ot oG)lil=r-,,*3jP+"'
Lr-l
(2.6.13), (2.6.15), and (2.6.16) that in order to-determine'
is clear from Equations
-have
to first obtain exp [Ar]. The preceding two examples demonv(r), y(r), or ft (t), we
It
S€c.2.6
5tl
State-VariableRepreseniation
strate how to use the power serics method to find O(r) = exp [Atl. Although the
method is straightforward and the form is acceptable, the major problem is that it is
usually not possible to recognize a closed form corresponding to this solution. Another
method that can be used comes from the Cayley-Hamilton theorem, which states that
any arbitrary N x N matrix A satisfies its characteristic equation, that is,
det(A
-,u) :0
The Cayley-Hamilton theorem gives a means of expressing any power of a matrix A
in terms of a linear combination of A for m = 0, 1, ..., N - l.
Exanpte 2.6.4
Given
^=[i ;]
it follows that
det(A-[)=,t2-7i+6
and the given matrix satisfies
Az-7A+6I:o
Therefore, A2 can bc expressed in terms of A and I by
A2 = 7A
- 6I
(2.6.171
Also. Ar can be found by multiplying Equation (2.6.17) by A and then using Equation
(2.6.17) again:
- 6A:7(7A - 6I) - 6A
= 43A - 421
A3 = 7A2
Similarly, any power of A can be found as a linear combination of A and I by this method.
Further, we can determine A-r. if it exists, by multiplying Equation (2.6.17) by A-r and
rearranging terms to obtain
1
A-'=;[7r -
A]
It follows from our previous discussion that we can use thc Cayley-Hamilton theorem to write exp[Ar] as a linear combination of the terms (Ar)i, i = 0, 1,2, ...,
N - 1, so that
exp[Ar]
If A
=
Ar- t
)
i=0
r,(r)
l'
(2.6.18)
has distinct eigeuvalues ,1,. we can obtain 7,(t) by solving the set of equations
N-t
exp[I,r]=)r,ft)'r.i i=1,...,ts
i=0
(2.6.19)
U
Continuous-Time
Systems
Chapter 2
For ihe case of repeated eigenvalues, the procedure is a little more complex, as we will
learn later. (See Appendix C for details.)
&anple
2"6.6
Suppose that we want to 6ad the transition matrix for the system with
[-r I
A=l' I -3
ol
ol
o o -3_J
L
nsing the Cayley-Hamilton method. First, we calculate the eigenvalues of A as.l, = -2,
t', = -3, and 13 = -4. It follows from the Cayley-Hamilton theorem that we can write
erp [Al] as
erp[Ar] = 7o(r)t + n(r)A + 7r()A2
where the coefficiens 7oO, n(l), and 'y2() arc the solution of the set of equations
expl-Al = ro(t) - 21 r(l + 41 rQ)
exp[-3r] = .yo(t) - 3rr(r) + hr(t)
exp[-4r] = ro(t) - 4tr(r) + 16.y2(r)
ftom which
exp[-4t]
q1
-
1o(t) =
3
1,(t) =
iexp[-4tlI
,r1t1 =
i(exn[-
4tl
Sexp[-3t] + 6 exp[-a]
6expl-3t] +
-
rexp[-2t]
2expl-lt] + exp[-2.r1)
Thus, exp [At] is
exp
[r o ol [-s r ol l-ro
[Ar] = 1o(,)lo r ol+.y,1r11 r -3 ol+1,1111-o
Loorl
Lo o-3J Lo
l"*f-oO
*).a1-21
00
oeJ
0
exp
EwarrFle 2.0.8
us repeat Example 2.6.5
10 0l
-rrexpl-tt'l +rrexpl-al0
-lexefefl + )expt-zrl l "*,-oO + lexpl-t:l
[-et
-5 0l
for the system with
[-r o ol
A=l o -4 4l
Lo
-r
o_j
[-
3t]
Sec.
2.6
State.Variable
Bepresentation
85
= - I, and ,1, = ,lr = -2. Thus,
O0) = exp[Arl = 7o(r) I + 7,(r) A + 7r(r) Ar
This matrix has ,1,
The coefficients yo(t), yr(l), and 7r(l) are obtained by using
exp[,rrl = 7o(r) + 7,(),t
However, when we use
+ yr(tl[2
I = -1, -2, and -2 in this equalion,
(2.6.20)
we get
- ]r0) + r20)
exp[-2rf = ro(t) - z.rlt) + 4.\120)
exp[-2t] = ro(t) - 21,(t) + 41r(t)
exp[-4 ='yo(r)
Since one eigenvalue is repeated. we have only two equations in threc unknowns. To completely determine 7o(t), 1(t), and 7r(t). we need anolher equation. which we catr gener-
ate by differentiating Equation (2.6.20) with respecl
t exp
to,l to obtain
[,ltf = 7,() + 2yr()A
Thus, the coefficients 70(l), 7r(r), and yr(t) are obtained as the solution to the following
three equations:
exp[-tl = ro(t)
- 1'O + 1r(t)
exp[-2t] = ro(t) - 21 lt) + 412()
t expl-?tl : 1,(t) - 41r(l)
Solving for 7(t)
lelds
- 3exp[-2t] - 2rexpl-2rl
rr(r) = 4 exp[-r] - 4 exp[-2tl - 3rexp[-2t]
rz(t) = exp[-t] - exp[-2t] - texp[-2t]
ro(r) = 4exP[-t]
so that
ool [t o o-l
ftool
[-t
o(r)=ro(r)10 I 0l+r,(r)l 0 -4 al+1r(r)10 12 -16
Loool Lo-rol
[o 4 -4)
I
[-exp[-r]
o
[O
=|
o
I
exp[-2r] -2texpl-2rl 4texpl-2tl
-4exp[-r] + 4exp[-2,] + 4texpl-2rl)
-rexp[-2r]
0
I
Other methods for calculating O(t) also are available. The readcr should keep in mind
that no one method is easiest for all applications.
The state transition matrix possesses several properties, some of which are as follows:
1. Transition property
o(r,
-
lo) =
o(r,
-
tr)o(lr
- to)
Q'6'21)
Continuous-Time
86
Systems
Chapter 2
2. Inversion property
- r): O-t(t -
lo)
(2.6.22)
O(t-to)=O(,)O-'(,J
(2.6.21)
O(ro
3. Separation property
These properties can be easily established by using the properties of the matrix exponentiat exp[Ar], namely, Equations (2.6.8) and (2.6.9). For instance, the transition
proprty follows from
o0z-b)=exPlA(q-to)l
= exp[A(lz - t' + t, - to)]
: exp[A(, - t,)]exp[A(rr
: .b(tz - tr)o(rr - ,o)
-
,o)]
The inversion prop€rty follows directly from Equation (2.5.9). Finally, the separation
property is obtained by substituting ,z = t and ,r = 0 in Equation (2.6.21) and then using
the inversion property.
2.63
State Equatione in First Canonical Form
In Section 2.5, we discussed techniques for deriving two different canonical simulation
diagrams for an LTI system. These diagrams can be used to develop two state-variable
representations. The state equation in the first canonical form is obtained by choosing
as a state variable the output of each integrator in Figure 2.5.4. In this case, the state
equations have the form
y(t)=o,(t)+bn.r(r)
oi1t1 - -on-ry(r) + ?,r(l) + bn-rr(r)
oi[) = - ar-ry() + zr(t) + bn-rx(t)
:
: -aty0) + o,v0) + brx(r)
oi(): -aov[)+box(t)
oi-,(t)
(2.6.24)
By using the first equation in Equation (2.6.24) to eliminate y(t), the differential equations for the state variables can be written in the matrix form
i(t )
ai;('i.)
a1
,t
'"i r)
,U,,
-att-t I O "'01
-ax-z 0l "' Ol
: : : : :l
-o,
--ao
il
;;':
0 o "'Ol
ur
(r)
az()
rriv-
r(t)
- ","(r) -
bn-,
br
-, b,
_
-
-
bo-
an
-rbr
ar
-rbn
r(r)
(2.6.25)
arbn
aobr
tains ones above the diagonal, and the first column of matrix
A
consists of the nega'
Sec.
2.6
Slai€-Variable
Representation
rives of the cocfficients a,. Also. the output
tor v(t)
87
y(t) can bc writtcn in tcrms of the state vec-
as
v(,)
t1o
=
,
+
[:ll]
bNx(,, (2626)
Observe that this form of state-variable rePresentation can be written down directly
from the original Equation (2.5.1)
Example 2.6.7
The [irst-canonical-form state-variable representation of the LTI system described by
zy"(t) + Ay'(t) + 3y0) = Ax'(t\ + zx(t)
t;ir:rt
=
t; lr;;r;tl.
.v(r)=rr
kample
rit',,,
,[;;t]l
2.6.8
rhe Lrr
"""*
o"*;l;ol
,nur+ v'(r) + av(r) = r11; *
haslhennt**"'i;l;:""'i1
L;tBl:
r or[o,61r I
5''1',
zr
[-l I ;]L;:lll.L-11.."'
Irr,(r)l
y(r) =
[r o oll,,i,i l*,t,1
L,,,r(r)l
2.6.4 State Equations in Second Canonical Form
Another state-variable form can be obtained from the simulation diagram of Figure
2.5.5. Here, again, the state variables are chosen to be the output of each integratorThe equations for the state variables are now
Continuous-Time
Systems Chapter 2
uiQ) = ur(t)
a''(t) = tt'(t)
oh-r()=
0,v(0
= -an-run() - an-ran-r(t) - ... - aoo,(t)
y(t) = boar(t) + brar(t) + ..' + Dx-r on(t) +
?r.ln(r)
6"(r(t)
-
auur(t)
-
arar(t)
In matrix form, Equation (2.6.27) can be written
Ir,(,)l
4l ',t'l I -
" 1,r,,-l
0 I o
001:0
-'.' -
a,r-, o,r(r))
as
0-
;;;:i
-a2
_-%
-ar
-att_t_
t3].[l..,
Ir,o)
y(r)
:
[(bo
-
+.ro
asby)(b1- arbp)...(b,,-r
(2628,
I
- a*-,bill "1" | + 0,".r(r)
Lr,t,ll
(2.6.2s)
This representation is called the second canonical form. Note that here the ones are
above the diagonal, but the a's go across the bottom row of the N x N transition
matrix. The second canonical state representation form can be written directly upon
inspection of the original differential equation describing the system.
Elrarnple 2.63
The second canonical form of the state equation of the system described by
y'(t)
-
zy"(t) +
y'() + at() = 1'1,1* trr,,
is
si
i
L,io)l L-+ -r
[;i[;i.l=
,(,) = [-3
ol [o,(r)
r ll ,,(,)
2l Lu,(r)
l.[l]..,,
Ir,,(r)l
-1 zll a,@ |
+
r(t)
Lr,(,ll
The frrst and second canonical forms are only two of many possible state-variable
representations of a continuous-time system. [n other words, the state-variable representation of a continuous-time system is not unique. For an N-dimensional system,
Sec.
2.6
Slate-Variable
Representation
39
there are an infinite number of state models that represent thal systcm. Howcvci, aii
N-dimensional state models are equivalent in thc scnse that lhey ltavc cxactly thc sarnc
input/output relationship. Mathenratically, a set of state equation'i \\'ith strtc vector
v1i; can be lransformed to a new sct with state v:ctor q(r) by using a transtorlrlation
P such that
q(t) =
P
(2.6.30)
v(t)
where P is an invertible N x N matrix so that Y(r) can be obtaincd trom q(t). It can be
shown (see Problem 2.34) that the new state and output equations arc
q'(r) = Ar q(t) + b,-r(t)
y(r) = cr q(l) + d, r(t)
(2.6.31)
Ar=PAP-r, b,=Pb, c,=6P-r,7, =ri
(2.6.33)
(2.6.32)
The only restriction on P is that its inverse exist. Since there are an infinite number of
such matrices, we conclude that rve can generate an infinite numhcr of equivalent Ndimensional state models.
If we envisage v(r) as a vector with N coordinates. the transformation in Equation
transformation that takes the old state coordinates and
(2.6.30) ..pr"."--nt.
".*rdinate
mapS them to thc new statc Coordinates. The new state model can have one or more
of the coefficients A,, b,, and c, in a special form. Such forms result in a significant simplification in the solution of certain classes of problems: examples of these forms are
the diagonal form and the two canonical forms discussed in this chapter'
Example
2.6.1O
The state equations of a certain system are given by
[;;ll]
=
[: :r [;:[]l . [l].,,,
We need to tind the state equations for this system in terrns of lhc new state vadablcs qt
and qr. where
[;lll]
=
[l l][;:l;l]
The equation for the state variable q is given by Equation (2 611)' rvherc
^,
=
'nt-'= [l
=[l
ilt;;ltl-ll
[r
rl
iltiilliil
lz :l
_[o ol
-Lo
,)
90
Continuous-Time
Sysiems
Chapter 2
and
br=Pb=
tllltll= til
E=ernple 2.6.11
Let us find the matrix P that transforms the second-canonical-form state equations
[;;8]
=
: ll[l[;i].
t
[?].u,
into the fint-canonical-form state equations
[;;8]=
[-i iJ[;:[i]. []1.,,,
We desire the transformation such that PAP-|
= Ar or
pA = Arp
Substituling for A and Aq, we obtain
3:,:^
i:lt :
ll= [-l il1::,
i:]
Equating the four elements on the two sides yields
-2pp= -lpn *
pzr
pt - 3pn= -3pnt
pu
-2Pa = -\Pn
Pa - 3Pd= -2Pn
The reader will immediately recognize that the second and third equations are identical.
Similarly, lhe first and fourth equations are identical. Hence, two equations may be discarded. This leaves us with only two equations and four unknowns. Note, however, that
the constraint Pb = br provides us with the following two additional equalions:
prz=7
pz=2
Solving the four equations simultaneously yields
,=[-3 ]l
Exanple 2.6.12
If A, is a diagonal malrir with entries 1,, il can easily be verified that the transilion malrix
exp[A,t] is also a diagonal matrix with entries exp [,t,rl and is hence easily evaluated. We
can use this result to find lhe fransition matrix for any other representation with A =
PA,P-r, since
Sec.2.6
91
Stat€-VariableRopresentation
exp[Arf =
I + Ar +
], n',' * "'
= I + PArP-rr +
1l;tAiP-'r'+...
=PII*n,,* ,1ntu' * ]r-' = P exp [ArrlP-l
For the matrices A and A, of Example 2.6.10, it follows that
exp
IA,t]
0
I
-_ l-exp(6r)
exp(2r)l
o
L
so that
exp
IAt]
0 I
[exp(6r)
L 0 exp(z)l
_ lf eu + e'l t* - "'1
_
2lee
- eL
eu +
e')
2.6.6 StabilityConeiderations
Earlier in this section, we found a general expression for the state vector v(r) of the
system with state matrix A and initial state vo. The solution of this system consisS.bf
two components, the first (zero-input) due to the initial state vo and the second (zerostate) due to input r(r). For the continuous-time system to be stable, it is required that
not only the oulput, but also all signals internal to the system, remain bounded when
a bounded input is applied. If at least one of the state variables grows without bound,
then the system is unstable.
Since the set of eigenvalues of the matrix A determines the behavior of exp [Al]' and
since exp[At] is used in evaluating the two components in the expression for the state
vector v(l), we expect the eigenvalues of A to play an important role in determining
the stability of the system. Indeed, there exists a technique to tcst lhe stability of continuous-time systems wlthout solving for the state vector. This tcchnique follows from
the Cayley-Hamilton theorem. We saw earlier that, using this thcorem, we can write
the elements of exp [At], and hence the components of the state vector, as functions of
the exponintials exp[,1,t], exp[Arr], ..., exp [,trt], where t,,i = 1.2...., N, are the eigenvalues of the matrix A. For thesc terms to be bounded, the real part of rlu i: 1,2, ...,
N, must be negative. Thus, the condition for stability of a continuous-time system is
that all eigenvalue$sf the state-transition matrix should have negative real parts.
The foregoing conclusion also follows from the fact that the eigenvalues of A are
identical with the roots of the characteristic equation associated with the differential
equation describing the model.
92
Continuous-Time
Syslems
Chaptor 2
Example 2.6.13
Consider the continuous-time system whose state matrix is
[z -rl
n=Lo
-rJ
The eigenvalues of A are
\ = -2 and lz = l, and hence. the system is unstable.
Exanple 2.&14
Consider the system described by the equations
,,(,) =
[-1 -:]"u,. [l],u,
y0) =
[
llv(r)
A simulation diagram of this system is shown in Figure 2.6.2. The system can thus be considered as the cascade ofthe two systems shown inside the dashed lines.
The eigenvalues of A are lr = 1 and Az = -2. Henc€, the system is unstable. The transition matrix of the system is
(2.6.v)
exp[Arl = yo(r)I + zr0)A
where yo(l) and
1(l)
are the solutions of
explrl
:
1o(l) + 1,(l)
exp[-21 = 1o(t)
-
21,(r)
Solving these two equations simultaneously yields
ro(0 = JexPPl* |"*P1-21
r,(r) = |exptrl
-
|expt-zrl
Substituting into Equation (2.6.y), we obtain
exPl't1=
l-exPl'l
L-exp[r] +
erp[-2r]
o
' ^ '-l
exp[-z]J
[Jt
us now look at the responlte of the sptem to a unit-step input when the system is
initially (at time 6 = 0) relaxed, i.e., the initial state vector vo is the zero vector. The output of the system is then
y1r) =
=
f
cexptA(r
- r)lbr(t)dt
(; - ; exet- t:t)u(t)
The state vector at any time l > 0 is
\o
e.l
q)
c.
E
E1
C
OJ
q,)
oo
(!
!
o
.A
6l
\o
N
tu
E!
E,
93
94
Continuous-Timo
v0):
f
erptn (r
Systems
Chapter 2
- t)l b.r(t)dr
- l)z(r)
I
t/2
exp[r]
expl
aD4l))
l<ttz [1exp[r]
It is clear by looking at y(r) that the output of the system is bounded, whereas inspection
of the state variables reveals that the internal signals in the system are not bounded. To
pave the way to explain what has happened, let us look at the inpuuoutput differential
equation. From the output and state equations of the model, we have
,,,,__::f
,:i,:ii!,,u,._,,1,,i?u,,,,,,,,
= -zy(t) + x(t)
The solution of the last first-order differential equation does not contain any terms that
grow without bouad. It is thus clear that the unstable term exp [ll thal appeaIs in the state
variables o,(t) and ur(t) does not appear in the output yO. This term has, in some sense.
been *cancelled out' at the output of the second system.
The preceding example demonstrates again the importance of the state-variable
representation. State-variable models allow us to examine the internal nature of the
system. Often, many important aspects of the system may go unnoticed in the computation or observation of only the output variable. In short, the state-variable techniques
have the advantage that all internal components of the system can be made apparent.
2,7
SUMMARY
A continuous-time system is a transformation that operates on
a
I
a
a
a
a
a continuous-time
input signal to produce a continuous-time output sigral.
A sptem is linear if it follows the principle of superposition.
A system is time invariant if a time shift in the input signal causes an identical time
shift in the output.
A system is memoryless if the present value of the output y(r) depends only on the
present value of the input r(t).
A system is causal if the output y(10) depends only on values of the input x(l) for r s ro.
A system is invertible if, by observing the output, we can determine the input.
A system is BIBO stable if bounded inputs result in bounded outputs.
A linear, time-invariant (LTI) system is completely characterized by its impulse
re.sponse i (t).
The output y(l) of an LTI system is the convolution of the input r(r) with the
impulse response of the system:
Ssc.
2.7 Summary
95
/(r) = .r(,) t, h(t) = J|_._ xft)h(t - t)ttt
.
o
.
The convolution operation gives only the zero-state responsc of the system.
The convolution operator is commutative, associative. and distributive.
The step response of a linear system with impulse responsc /r(t) is
,14
e
o
=
['_- n6'1a,
J
An LTI system is causal if h(t) = 0 for t < 0. The system is stable if and only if
.
f.ln{"ila, -.
An LTI system is described by a linear, constant-coefficient, differential equation
of the form
(r, * !.
o,o,)r(t)= (5 r,r)..r,r
o A simulation
.
.
diagram is a block-diagram rePresentation of a system with oomPonents consisting of scalar multipliers (amplifiers), summers. or integrators.
A system can be simulated or realized in several different ways. All these realizations are cquivalent. Dcpcnding on the application, a particular one of these realizations may be preferable.
The state equation of an LTI system in state-variable form is
v'(t) = 4 v(r) + br(,)
o
The output equation of an LTI system in state-variable form
is
v(r) = c v(,) + dr(,)
r
.
The matrix O(r) = .*O 14r] is called the statc-transition matrix.
The state-transition matrix has the following properties:
- tn) = tD(tz - t')O(t, Inversion ProPerty: O(ru - ,) = O-'(t - to)
Separation property: O(, - h) = o(l)O-t(1,)
Transition property: O(t,
o
The time-domain solution of lhe state equation is
y(r) = co(r
.
o
t,)
-
to)vu
+ ['co(r.- t)bx(r)dr + d't(r). l>lo
rD(l) can be evaluated using the Cayley-Hamilton theorem. which states that any
matrix A satisfies its own characteristic equation.
The matrix A in the first canonical form coniains ones above the diagonal, and the
first column consists of the negatives of the coefficients a, in Equation (2'5.2)'
!.
r
.
2.8
The matrix A in the second canonical form contains ones above the diagonal, and
the a,'s go across the bottom row.
A continuous-time system is stable if and only if all the eigenvalues of the transition
matrix A have negative real parts.
CHECKLIST OF IMPORTANT TERMS
Caueal syotem
Cayley4lamllton theorem
Convoluton lntegral
Flrst canonlcal lorm
lmpulse responso ot llnear syetem
Impulse r€sponse ol LTI system
!ntegrstor
lnvsnse aystem
Unear system
Unear, Ume.lnyadant system
Memoryloee oysiem
2.9
Multpller
Output equatlon
Scalar multlpller
Second canonlcal lorm
Slmulatlon dlagram
Stable system
Stale-transluon mstdr
State varlable
Subtractor
Summer
Tlmc'lnyarlant systgm
PROBLEMS
2.L
Determine whether the systems described by the following input/ourpur relationships are
linear or nonlinear, causal or noncausal, time invariant or time variant, and memorylass
or with memory.
(a)
(b)
(c)
(d)
y(t) = 7:(t) + 3
vG) = bz(t) + 3x(,)
Y(t) = Ar(tl
y(t) = AaQ)
(e) y(r) =
{:,;l;,,
;::
tt
(O .y(t) =
l__x(tldr
t'
(oy(r)=lorft)a,.,>0
(h) Y(t) = r(t - 5)
(l) y(r) = exp [.r(t)]
0) v(r) = x(t) x(t - 2)
(k) y(r)
=
il,')i,,nro,
0 ++zy(t)=bz(t)
Sec.
aA
2,9
Probl€ms
97
Use the model for 5(t) given by
s(,) = l$-l
Lj.
recr
(,/a
)
to prove Equation (2.3.I )
Evaluate the following convolutions:
(a) rect (t - a/a)t 6(, - 1r)
(b) rect (t/a) * rect(tla)
(c) rect (/a) * a(r)
(d) rect (t/a) * 5gn(t)
(e) u(t) * x(r)
(I) t[u(t) - a(, - l)l* a(t)
(gl rect(t/al * r(1'1
(h) r(t) * [sgn(t) + u(-t - l)l
(i) [u(t + l) - u(t - l)lsgn(r) r a(r1
0) u(t) * 6'1;;
2.4
Graphically deiermine the convolution of the pairs of signals shown in Figure P24.
(b)
(c)
Figure P2.4
Continuous-Time
Use the convolution integral to find the response
response ft() to input r(t):
(a)
(b)
(c)
(d)
(e)
(f)
a6.
.r(t)
:
exp[-rlzo
h(t)
h(t)
h(t)
h(t)
h(t)
fr(t)
r(t) = t exp [-tlzo
.r(t) = exp[-4u0) + u(t)
.r(t) = z(t)
-rO = exP[-at]zo
.r(t) = 6(, - l) + exP[-t]z(t)
Sysiems
y(t) of an LTI
Chapter 2
system with impulse
= t*'1-or"r',
=
"1'7
= u(t)
= eapl-Altt () + 6(r)
= u(t) - expl-ulu(t - b)
= exP [-u]z(t)
The cross correlation of two different signals is defined as
R
,0)
:
f--,Ol
Y<,
-
t)dr =
I- ,n * t) Y(rldt
(a) Show that
R,y(r):r(t)*y(-t)
Show that the cross correlation does not obey the commutative law.
Show that R r(r) is symmetric (R rG) = Ry,(-r)).
Find the cross correlation between a signal .r(t) and the signal y(t) = r(t
B/A = 0,O.1, and 1, where.r(r) and z(t) are as shown in Figure P2.7.
O)
(c)
e7.
- l) + z(t) for
Figore HL7
a&
The autocorrelation is a special case of cross correlation with
R,(r) =
y() : :(t). In this case.
t"
R,0) = | r(r)r(r + r)dr
t__
(a) Show that
R,(0) = 5,
the energy of
r(r)
(b) Show that
R,(r) s
R,(0)
(use the Schwarz inequality)
z(t) = :0) + y(t) is
R.() = R,(r) + Ry(r) + &,0) + R,(r)
(c) Show that the autoconelation of
Sec.
2.9
Problems
99
a9. . Consider an LTI sysrem whose impulse response
output of the system. respcclively. Show that
is
R,(t) = R.(0 * h(t) *
al0.
rl(r). Ler r(, ) and
y()
be the input and
h(-t)
The input to an LTI system with impulse response i (r) is the complex exponenlial
exp !'r,rll. Show that the corresponding ourput is
y(t) = expljutl H(a)
where
H(@) =
2ll.
r'
l__h(t)
exp[-yrorldr
Determine whether the continuous-time LTI systems characterized by the following
impulse responses ale causal or noncausal, and stable or unstable. Justify your answers.
(a)
(b)
(c)
(d)
(e)
A(t) = exp[-3tl sin(t)r(r)
,,(r) = exp [4tlu( -t)
ft(t) = (-r) exp[-rla(-r)
(') = exP[-lzl]
'',r(r)
= l(r - ztlexp[-lzrll
(f) ,(r) = rcctlt,lzl
(g) ft(t) = 6O + exp[-3t]u(t)
(h) ,(r) 6'(t) + exp [-2rl
(i) fi(,) = 6'(,) + exp[-l2rl]
0) ,t(r) = (l - r) rect(r/3)
:
212
For each of the following impulse rcsponses, determine whethcr it is invertible. For those
that are, find the inverse system.
(al h(t) = 511 a 21
(b) l'(t) = uo
(c) ft(t) = 611 - 3;
(d) n(,) = rcct(t/ )
(e) It(r) = exp [- rlz (r )
2.13. Consider the two syslems shown in Figures P2.13(a) and P2.l-3(b). System I op€rates on
r(l) to give an output /t (r) that is optimum according to some dcsired criterion. System II
first operates on r(t) with an invertible operation (subsystem I) to obtain z0) and then
operates on z(t) to obtain an output y20) by an operation that is optimum according to
the same criterion as in system I.
(a) Can system II perform better than system I? (Remember the assumption that system
I is the optimum operation on r(t).)
(b) Replace the optimum operation on z(r) by two subsystems, as shown in Figure
P2.13(c). Now the overall system works as rvell as system I. Can the new system be
better than system II? (Remember thal system II perfornts the optimum operation
on z (r ).)
(c) What do you conclude from parts (a) and (b)?
(d) Does the system have to be linear for part (c) to be true?
Continuous-Time
100
r--'------l
Syst€ms
Chapter 2
---------'l
System
System I
ll
I
I
I
.r
(r)
i----------J
I
I
I
t'
I
(b)
(a)
z
vt0l
)r(,)
(r't
(c)
Figure
L14
8
.13
Delermine whether the system in Figure P2.14 is BIBO slable.
Figure EL14
h,(t) = expl-2rlu(tl
hr(tl = exPl-7tlu(r)
,rr(t) = exp[- t]u(t)
440)
:
,rs() =
2"15.
D(,)
exP
[- 3tlu(t)
and outPul y(r) of a linear, time-invariant system are as shown in Figure
P2.15. Sketch the resF)nses to the following inputs:
The input
r(r)
(s) :0 + 2)
O) 2r(r) + 3x(-l)
(c) .r(, - l/2) - x(t + t/21
... &(t)
(o,
a
S€c.
2.9
Problems
101
ffgure PZIS
Lt6" Find the impulse
response of the
inilially relaxed system shown in Figure
P2.16.
i-----;------l
.t(r)'
l,(, )
/(r)
=
r,R
(r)
ll
l___________-J
Flgure HLf6
217. Find the impulse
response of the initially relaxed system shown in Figure P2.17. Use this
find
result to
the output of the system when the input is
/
(e) r. (,
-
0\
2/
or,(. l)
(c)
rect
0/0), where d = l/RC
i;1
r(r)
)=
= u(r)
uc
(r)
ri
rl
l-----------J
Ftgure YLIT
Continuous-Time
102
Systems
Chapter 2
2.18. Repeat Problem 2.17 for the circuit shown in Figure P2.18.
I
I
I
x(r)
.
€(r)
v
I
I
L- _--________J
(r) = rh (r)
I
I
Bigure
HLlt
2.19. Show that any system that can be described by a differential equation of the form
W.p*"or #=2r,<offf)
is linear. (Assume zero initial conditions.)
220. Show that any system that can be described by the differential equation in Problem 2.19
is time invarianl. Assume that all the coefficients are constants.
22L A vehicle of mass M is traveling on a paved surface with coefficient of friction k. Assume
that lhe position of the car at time ,, relative lo some reference, is y(t) and the driving force
applied to the vehicle is r(r). Use Newton's second law of motion to wrile the differential
equation describing the system Show that the system is an LTI system. Can this system
22L
be time varying?
Consider a pendulum of length I and mass M as shown in Figwe V|'.X\.T\e displacement
from the equilibrium position is ld; hence, the acceleratioh is Id'. The input x(t) is the
force applied to the ma$s M tangential to the direction of motion of the mass. The restoring force is the tangential component Mg sin 0. Neglect the mass of the rod and the air
resistance. Use Newton's second law of motion to write the differential equation describing lhe system. Is this system linear? As an approximation, assume that dis small enough
,. Is the system now linear?
that sin ,
:
Mass
ll
Figue P222
Sec.
2.9 Problems
tgg
2.23. For the system realized by the interconnection shown in Figure P2.23. find the differential
equation relating the input .r(r) lo the oulput y(r).
Flgure P2.Zl
2.4.
For the system simulated by the diagram shorvn in Figtre Y2.24, (lctermine the differential equation describing the system.
tigure
P2.24
2.25. Consider the series RLC circuit shown in Figure P2.25.
(a) Derive the second-order differential equation that dcscribes the system.
(b) Determine the [irst- and second-canonical-form simulation diagrams.
1U
Continuous-Tims
Systems
Chapter 2
r (t)
Itgore P:225
2,26. Givet an LTI system described by
y-(t) + 3f(t)
,Jr.
r(r)
- y'(,) - zy(t) = 31'111- ,r,,
Find the first- and second-canonical-forrr simulation diagrams.
Find the imprrlse resfpnse of the initially relaxed system shown in Figure p2.27.
= u(r)
y (r) = up(r)
,u,--.-|-lr,*n
l-----rar
ii
Flgure HL27
L&
Find the state equations in the first and second canonical forms for the system described
by the dilferential equation
y"(t) + 2.sy'(t) + y(t) =.r'0) +
r(,)
2.$. For the circuit
shown in Figure P2.29, choose the inductor current and the capacitor vottage as state variables, and write the state equations.
L=2H
Ifgure HL29
230. Repeat Problem
2.28 for the s]'stem described by the differential equation
f(t) + f(t) - zy(t): x,(t) - u(t)
Sec.
Z3l.
2.9
Problems
105
Calculate exp[Arl for the following matrices. Use both the serics-cxpansion and CayleyHamilton methods.
[-r o ol
(a)A=ltt 0 -z 0l
L o o -31
[-r
2 -11
tt
(b)A=l o -r ol
L o o -ll
[-r 1 -ll
tcll=l o t -rl
L o'-31
2.32
Using state-variable techniques, find the impulse respoirse for the system described by the
differential equation
y"(t) + 6y'(t) + 8y(r) = r'(r) + r(r)
Assume that the system is initially relaxed, i.e., y'(0) = 0 and y"(0) =
243.
Use state-variable techniques to find the impulse response of lhc system described by
y'(t) + 7y'(t) + lzy(t) = /(r)
23.
0.
-
3r'(r) + 4.r(I)
Assume that the system is initially relaxed, i.e., y'(0) : 0 and y(0) = 0.
Consider the system describcd by
v'0) = A v(t) + b.r(t)
y0)=cv(t)+dr0)
Select the change of variable given by
z(t)
:
P
v(l)
where P is a square matrix with inverse P-1. Show that the new state equations are
z'(t) = Ar z14 + b' r(t)
y(t) = cr z(t) + drx(t)
where
Ar = PAP-r
br=Pb
ct = cP-l
dt= d
235. Consider
the system described by the differential equation
y"(t) + 3y'(t) + 21'() =.r'(r) -.r(r)
(a) Write the state equations in the fint canonical form.
(b) Write the state equations in the second canonical form.
(c) Use Problem 2.34 to find the matrix P which will transform the firsl canonical form
into the second.
(d) Find the state equations if we transform the second canonical form usiitg the matrix
p=l'
L-l
t-l
-ll
Chapter 3
Fourier Series
3.1
INTRODUCTION
As we have seen in the previous chapter, we can obtain the response of a linear system to an arbitrary input by representing it in terms of basic signals. The specific signals used were the shifted 6-functions. Often, it is convenient to choose a set of
orthogonal waveforms as the basic signals. There are several reasons for doing this.
First, it is mathematically convenient to represent an arbitrary signal as a weighted sum
of orthogonal waveforms, since many of the calculations involving signals are simpliIied by using such a relresentation. Second, it is possible to visualize the signal as a
vector in an orthogonal coordinate system, with the orthogonal waveforms being coordinates. Finally, representation in terms of orthogonal basis functions provides a convenient means of solving for the response of linear systems to arbitrary inpus. In this
chapter, we will consider the representation of an arbitrary signal ove: a finite interval
in terms of some set of orthogonal basis functions.
For periodic signals, a convenient choice for an orthogonal basis is the set of harmonically related complex exponentials. The choice of these waveforms is appropriate. since such complex exponentials are periodic, are relatively easy to manipulate
mathematically. and yield results that have a meaningful physical interpretation. The
representation of a periodic signal in terms of complex exponentials, or equivalently,
in terms of sine and cosine waveforms, leads to the Fourier series that are used extensively in all fields of science and engineering. The Fourier series is named after the
French physicist Jean Baptiste Fourier (1768-1830), who was the first to suggest that
periodic sigrals could be represented by a sum of sinusoids.
So far, we have only considered time-domain descriptions of continuous-time signals
and systems. In this chapter, we introduce the concept of frequency-domain reprisen106
Sec.
3.2
Orthogonal Bepresentations ot
Signals
1O7
tations. We learn how to decompose periodic signals into their [requency components.
The results can be extended to aperiodic signals, as will be shorvn in chapter 4.
periodic signals occur in a rvide range of physical phenomcna. A few examples of
vertical dissuch signals alre acoustic and electromagnetic rvaves of most types, the
placem-ent of a mechanical pendulum, the periodic vihrations o[ musical instruments,
and the beautiful pattems of crystal structures.
In the present lhapter, we discuss basic concepts, facts, and tcchniques in connection with itourier serLs. Illustrative examples and some importirnt engineering appliin Section 3'2'
cations are included. We begin by considering orthogonal basis functions
In Section 3.3, rve consider l.rioaic signals and develop procedurcs for_resolving such
3'4, we
iignals into a iinear combination of complex exponential functi.ns. In Section
represcnted in terms of a
di-scuss the sufficient condilions for a periodic signal to bc
all
Fourier series. These conditions are known as the Dirichlet conditions. Fortunately,
any
with
the periodic signals that we deal with in practice obey these conditions. As
propproperties'
These
other mathema-ticat tool, Fourier series possess several useful
helP_s us. move eas
erties are developed in Section 3.5. Understanding such properties
itf frorn the time domain to the frequency domain and vice ve rsa. In Section 3.6, we
periodic
ule tne properties of the Fourier series to find the response of LTI systems to
phenomenon are dissignals. The effects of truncating the Fourier series and the Gibbs
a disconcrissed in Section 3.7. We will iee that whenever we attempt to reconstruct
form of
in
the
iinuou, signal from its Fourier series, we encounter a strange behavior
go away even
signal overshoot at the discont in uities. This overshoot effecl does not
,r-h"n *a increase the number of terms used in reconslructing the signal.
3,2 ORTHOGONAL REPRESENTATIONS
OF SIGNAL
engt-
orthogonal representations of signals are of general importance in solving many
*.rin"g proUf.*s. Two of the ,"uion, this is so are that it is mathcnlatically convenient
i. ,"pr".,I* arbitrary signals as a weighted sum of orthogonal waveforms, since many
and
of thl calculations involving signals aie simplified by using such a representation
ihat it is possible to visualizi thi signal as a vector in an orthogonll coordinate system,
with the orthogonal waveforms being the unit coordinates'
Asetofsign'also,,i=0,-t-1,-r2,...,issaidtobeorthogonalovcraninterval(a,D)if
I r,,,,.1r, = {f-'
',
-
k)
= Er6(l
-oo
where Sf (r) stands for the complex conjugate of the signal and 6(/
Kronecker delta function, is defined as
t=k
[t6(r-k)=10
t+k
(3.2.1)
-
&)' called the
(3.2.2)
Fourier
108
Series
Chapler 3
If $,(l) corresponds to a voltage or a current waveform associated with a l-ohm resistive load, then, from Equation (1.4.1), Ek is the energy dissipated in the load in b - a
seconds due to signal QrG). If the constants E1 are all equal to I, the 6i(r) are said to
be orthonormal pignals. Normalizing any set of signals g,(t) is achieved by dividing
each signal by
V4.
Exanple 32.1
The signals Q,,(r) = sir,at,nr =
1,2,3,..., form an orrhogonal set on the interval
-rr<r<zrbecause
ar
dr
[", o^<,lo: <o = /-" Ginzrrxsinnr)
=iL,*r^ - ")tdt -;f,
cas(m + n)tdt
_[", m=n
[0, m* n
Since the energy in each sigml equals tr, the following set of signals constitutes an ortho< ,rr:
normal set over the interval
-t <,
sinr sin2r
\r;'
-G-
sin 3r
' \F'"
Example 8.2.2
The signals go() = exp[i
interval (0, I) because
(2rkt)/Tl, k =
J' o,{,)0,'o),, = I,'
=
'and
hence, the signals O
intewal
0<r<f.
0,
*1,
=2,...,
form an orthogonal set on rhe
*rlffil *, ['94] "
{l
ll-D expl]2r
kt)/Tl constitute an orthonormal set over the
"iI
trernple 893
The three signals shown in Figure 3.2.1 are orthonormal, since they are mutually orthogonal and each has unit energy.
Orthonormal sets are useful in that they lead to a series representation of signals in a relatively simple fashion. Let φ_i(t) be an orthonormal set of signals on an interval a < t < b, and let x(t) be a given signal with finite energy over the same interval. We can represent x(t) in terms of {φ_i} by a convergent series as
Figure 3.2.1 Three orthonormal signals.
    x(t) = Σ_{i=-∞}^{∞} c_i φ_i(t)                                             (3.2.3)

where

    c_k = ∫_a^b x(t) φ_k*(t) dt,   k = 0, ±1, ±2, ...                          (3.2.4)

Equation (3.2.4) follows by multiplying Equation (3.2.3) by φ_k*(t) and integrating the result over the range of definition of x(t). Note that the coefficients can be computed independently of each other. If the set φ_i(t) is only an orthogonal set, then Equation (3.2.4) takes the form (see Problem 3.5)

    c_k = (1/E_k) ∫_a^b x(t) φ_k*(t) dt                                        (3.2.5)

The series representation of Equation (3.2.3) is called a generalized Fourier series of x(t), and the constants c_i, i = 0, ±1, ±2, ..., are called the Fourier coefficients with respect to the orthogonal set {φ_i(t)}.
respect to the orthogonal set [0,(r)].
In general, the representalion of an arbitrary signal in a series expansion of the form
of Equation (3.2.3) requires that the sum on the right side be an infinite sum. In prac.
tice, however, we can use only a finite number of terms on the right side. When we truncate the infinite sum on the right to a finite number of terms, we get an approximation
i(r) to the original signal r(r). When we use only M terms, the representation enor is
M
enU) = x(r)
- r=l
) c,6,0)
(3.2.6)
The energy in this error is
E,(M) =
It can be shown that for
rh
rb
M
J.lrrQ)l'a = J lrG) - )c,6,(r)l'zdr
(3.2.7)
aoy M, the choice of co according to Equation (3.2.4) minimizes the energy in the error er. (See Problem 3.4.)
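As a quick numerical check of Equations (3.2.4) and (3.2.7), the following Python sketch (the sampling grid, the complex-exponential basis on (0, 1), and the test signal are all assumptions made purely for illustration) approximates the coefficient integrals by sampling and shows the error energy shrinking as more terms are retained.

```python
import numpy as np

# Orthonormal basis on (0, 1): phi_k(t) = exp(j*2*pi*k*t), k = -M..M
# (assumed interval and basis; E_k = 1 for every k on this interval).
a, b, N = 0.0, 1.0, 4000
t = np.linspace(a, b, N, endpoint=False)
dt = (b - a) / N

x = np.where(t < 0.5, 1.0, -1.0)          # an arbitrary finite-energy test signal

def coeff(k):
    """c_k = integral of x(t) * conj(phi_k(t)) dt, Equation (3.2.4)."""
    phi = np.exp(1j * 2 * np.pi * k * t)
    return np.sum(x * np.conj(phi)) * dt

for M in (1, 5, 21):
    ks = range(-M, M + 1)
    xhat = sum(coeff(k) * np.exp(1j * 2 * np.pi * k * t) for k in ks)
    err_energy = np.sum(np.abs(x - xhat) ** 2) * dt   # Equation (3.2.7)
    print(f"{2*M+1:2d} terms, error energy = {err_energy:.4f}")
```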
Certain classes of signals (finite-length digital communication signals, for example) permit expansion in terms of a finite number of orthogonal functions {φ_i(t)}. In this case, i = 1, 2, ..., N, where N is the dimension of the set of signals. The series representation is then reduced to

    x(t) = x^T Φ(t)                                                            (3.2.8)

where the vectors x and Φ(t) are defined as

    x = [c_1, c_2, ..., c_N]^T
    Φ(t) = [φ_1(t), φ_2(t), ..., φ_N(t)]^T

and the superscript T denotes vector transposition. The normalized energy of x(t) over the interval a < t < b is

    E_x = ∫_a^b |x(t)|² dt = ∫_a^b |Σ_{i=1}^{N} c_i φ_i(t)|² dt
        = Σ_{i=1}^{N} Σ_{k=1}^{N} c_i c_k* ∫_a^b φ_i(t) φ_k*(t) dt             (3.2.9)
        = Σ_{i=1}^{N} |c_i|² E_i                                               (3.2.10)

This result relates the energy of the signal x(t) to the sum of the squares of the orthogonal-series coefficients, modified by the energy in each coordinate, E_i. If orthonormal signals are used, we have E_i = 1, and Equation (3.2.10) reduces to

    E_x = Σ_{i=1}^{N} |c_i|²

In terms of the coefficient vector x, this can be written as

    E_x = (x*)^T x = x† x                                                      (3.2.11)

where † denotes the complex-conjugate transpose [( )*]^T. This is a special case of what is known as Parseval's theorem, which we discuss in more detail in Section 3.5.6.
Example 3.2.4

In this example, we examine the representation of a finite-duration signal in terms of an orthogonal set of basis signals. Consider the four signals defined over the interval (0, 3), as shown in Figure 3.2.2(a). These signals are not orthogonal, but it is possible to represent them in terms of the three orthogonal signals shown in Figure 3.2.2(b), since combinations of these three basis signals can be used to represent any of the four signals in Figure 3.2.2(a).

The coefficients that represent the signal x_1(t), obtained by using Equation (3.2.4), are

    c_11 = ∫_0^3 x_1(t) φ_1*(t) dt = 2
    c_12 = ∫_0^3 x_1(t) φ_2*(t) dt = 0
    c_13 = ∫_0^3 x_1(t) φ_3*(t) dt = 1
Figure 3.2.2 Orthogonal representations of digital signals.
In vector notation, x_1 = [2, 0, 1]^T. Similarly, we can calculate the coefficients for x_2(t), x_3(t), and x_4(t), and these become

    x_21 = 1,  x_22 = 1,  x_23 = 0,   or  x_2 = [1, 1, 0]^T
    x_31 = 0,  x_32 = 1,  x_33 = 1,   or  x_3 = [0, 1, 1]^T
    x_41 = 1,  x_42 = -1, x_43 = 2,   or  x_4 = [1, -1, 2]^T

Since only three basis signals are required to completely represent x_i(t), i = 1, 2, 3, 4, we now can think of these four signals as vectors in three-dimensional space. We would like to emphasize that the choice of the basis is not unique, and many other possibilities exist. For example, if we choose

    φ_1(t) = (1/√2) x_2(t),   φ_3(t) = (1/√6) x_4(t)

and φ_2(t) as a unit-energy signal orthogonal to both, then

    x_1 = [√2, -1/√3, 4/√6]^T,   x_2 = [√2, 0, 0]^T,
    x_3 = [1/√2, 2/√3, 1/√6]^T,  x_4 = [0, 0, √6]^T
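The vector point of view of Example 3.2.4 can be reproduced numerically. The sketch below assumes, for illustration only, that the basis signals of Figure 3.2.2(b) are unit-height pulses on (0, 1), (1, 2), and (2, 3); with that assumption it rebuilds x_1(t) from its coefficient vector and recovers [2, 0, 1] by applying Equation (3.2.4).

```python
import numpy as np

# Assumed basis: unit pulses on (0,1), (1,2), (2,3), sampled on a fine grid.
t = np.linspace(0, 3, 3000, endpoint=False)
dt = t[1] - t[0]
phi = np.array([(t >= k) & (t < k + 1) for k in range(3)], dtype=float)

# x1(t) expressed through its coefficient vector [2, 0, 1].
x1_coeffs = np.array([2.0, 0.0, 1.0])
x1 = x1_coeffs @ phi                       # x(t) = x^T Phi(t), Equation (3.2.8)

# Recover the coefficients with Equation (3.2.4); E_k = 1 for these pulses.
recovered = phi @ x1 * dt
print(np.round(recovered, 3))              # -> [2. 0. 1.]
```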
In closing this section, we should emphasize that the results presented are general, and the main purpose of the section is to introduce the reader to a way of representing signals in terms of other bases in a formal way. In Chapter 4, we will see that if the signal satisfies some restrictions, then we can write it in terms of an orthonormal basis (interpolating signals), with the series coefficients being samples of the signal obtained at appropriate time intervals.
3.3 THE EXPONENTIAL FOURIER SERIES
Recall from Chapter 1 that a signal is periodic if, for some positive nonzero value of T,

    x(t) = x(t + nT),   n = 1, 2, ...                                          (3.3.1)

The quantity T for which Equation (3.3.1) is satisfied is referred to as the fundamental period, whereas 2π/T is referred to as the fundamental radian frequency and is denoted by ω0. The graph of a periodic signal is obtained by periodic repetition of its graph in any interval of length T, as shown in Figure 3.3.1. From Equation (3.3.1), it follows that 2T, 3T, ..., are also periods of x(t).

Figure 3.3.1 Periodic signal.

As was demonstrated in Chapter 1, if two signals, x_1(t) and x_2(t), are periodic with period T, then

    x_3(t) = a x_1(t) + b x_2(t)                                               (3.3.2)

is also periodic with period T.
Familiar examples of periodic signals are the sine, cosine, and complex exponential functions. Note that a constant signal x(t) = c is also a periodic signal in the sense of the definition, because Equation (3.3.1) is satisfied for every positive T.

In this section, we consider the representation of periodic signals by an orthogonal set of basis functions. We saw in Section 3.2 that the set of complex exponentials φ_n(t) = exp[j2πnt/T] forms an orthogonal set. If we select such a set as basis functions, then, according to Equation (3.2.3),

    x(t) = Σ_{n=-∞}^{∞} c_n exp[j 2πnt/T]                                      (3.3.3)

where, from Equation (3.2.4), the c_n are complex constants and are given by

    c_n = (1/T) ∫_0^T x(t) exp[-j 2πnt/T] dt                                   (3.3.4)

Each term of the series has a period T and fundamental radian frequency 2π/T = ω0. Hence, if the series converges, its sum is periodic with period T. Such a series is called the complex exponential Fourier series, and the c_n are called the Fourier coefficients. Note that because of the periodicity of the integrand, the interval of integration in Equation (3.3.4) can be replaced by any other interval of length T, for instance, by the interval t_0 ≤ t ≤ t_0 + T, where t_0 is arbitrary. We denote integration over an interval of length T by the symbol ∫_{⟨T⟩}. We observe that even though an infinite number of frequencies are used to synthesize the original signal in the Fourier-series expansion, they do not constitute a continuum; each frequency is an integer multiple of ω0 = 2π/T. The frequency corresponding to n = 1 is called the fundamental, or first, harmonic; n = 2 corresponds to the second harmonic, and so on. The coefficients c_n define a complex-valued function of the discrete frequencies nω0, where n = 0, ±1, ±2, ....

The dc component, or the full-cycle time average, of x(t) is equal to c_0 and is obtained by setting n = 0 in Equation (3.3.4). Calculated values of c_0 can be checked by inspecting x(t), a recommended practice to test the validity of the result obtained by integration. The plot of |c_n| versus nω0 displays the amplitudes of the various frequency components constituting x(t). Such a plot is therefore called the amplitude, or magnitude, spectrum of the periodic signal x(t). The locus of the tips of the magnitude lines is called the envelope of the magnitude spectrum. Similarly, the phase of the sinusoidal components making up x(t) is equal to ∠c_n, and the plot of ∠c_n versus nω0 is called the phase spectrum of x(t). In sum, the amplitude and phase spectra of any given periodic signal are defined in terms of the magnitude and phase of c_n. Since the spectra consist of a set of lines representing the magnitude and phase at ω = nω0, they are referred to as line spectra.
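Equation (3.3.4) translates directly into a short numerical routine. The sketch below (the sampling grid and the particular period-2 test signal are illustrative choices) evaluates a few coefficients c_n and prints the magnitude and phase that would appear as lines in the spectra.

```python
import numpy as np

# Numerical Fourier coefficients: c_n = (1/T) * integral over one period of
# x(t) * exp(-j*n*w0*t) dt, Equation (3.3.4).
T = 2.0
w0 = 2 * np.pi / T
t = np.linspace(0, T, 4000, endpoint=False)
dt = T / t.size
x = np.where(t < 1.0, 1.0, -1.0)           # any period-T signal will do

def c(n):
    return np.sum(x * np.exp(-1j * n * w0 * t)) * dt / T

for n in range(-3, 4):
    cn = c(n)
    print(f"n = {n:+d}  |c_n| = {abs(cn):.4f}  angle = {np.angle(cn):+.3f} rad")
```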
For real-valued (noncomplex) signals, the complex conjugate of c_n is

    c_n* = [ (1/T) ∫_{⟨T⟩} x(t) exp[-j 2πnt/T] dt ]*
         = (1/T) ∫_{⟨T⟩} x(t) exp[j 2πnt/T] dt
         = c_{-n}                                                              (3.3.5)

Hence,

    |c_{-n}| = |c_n|   and   ∠c_{-n} = -∠c_n                                   (3.3.6)

which means that the amplitude spectrum has even symmetry and the phase spectrum has odd symmetry. This property for real-valued signals allows us to regroup the exponential series into complex-conjugate pairs, except for c_0, as follows:

    x(t) = c_0 + Σ_{n=-∞}^{-1} c_n exp[j 2πnt/T] + Σ_{n=1}^{∞} c_n exp[j 2πnt/T]
         = c_0 + Σ_{n=1}^{∞} ( c_n* exp[-j 2πnt/T] + c_n exp[j 2πnt/T] )
         = c_0 + Σ_{n=1}^{∞} 2 Re{ c_n exp[j 2πnt/T] }
         = c_0 + Σ_{n=1}^{∞} ( 2 Re{c_n} cos(2πnt/T) - 2 Im{c_n} sin(2πnt/T) )   (3.3.7)
Here, Re{·} and Im{·} denote the real and imaginary parts of the arguments, respectively. Equation (3.3.7) can be written as

    x(t) = a_0 + Σ_{n=1}^{∞} [ a_n cos(2πnt/T) + b_n sin(2πnt/T) ]             (3.3.8)

The expression for x(t) in Equation (3.3.8) is called the trigonometric Fourier series for the periodic signal x(t). The coefficients a_0, a_n, and b_n are given by

    a_0 = c_0 = (1/T) ∫_{⟨T⟩} x(t) dt                                          (3.3.9a)
    a_n = 2 Re{c_n} = (2/T) ∫_{⟨T⟩} x(t) cos(2πnt/T) dt                        (3.3.9b)
    b_n = -2 Im{c_n} = (2/T) ∫_{⟨T⟩} x(t) sin(2πnt/T) dt                       (3.3.9c)

In terms of the magnitude and phase of c_n, the real-valued signal x(t) can be expressed as
    x(t) = c_0 + Σ_{n=1}^{∞} 2|c_n| cos(2πnt/T + ∠c_n)
         = c_0 + Σ_{n=1}^{∞} A_n cos(2πnt/T + θ_n)                             (3.3.10)

where

    A_n = 2|c_n|                                                               (3.3.11)

and

    θ_n = ∠c_n                                                                 (3.3.12)

Equation (3.3.10) represents an alternative form of the Fourier series that is more compact and meaningful than Equation (3.3.8). Each term in the series represents an oscillator needed to generate the periodic signal x(t).
A display of |c_n| and ∠c_n versus n or nω0 for both positive and negative values of n is called a two-sided amplitude spectrum. A display of A_n and θ_n versus positive n or nω0 is called a one-sided spectrum. Two-sided spectra are encountered most often in theoretical treatments because of the convenient nature of the complex Fourier series. It must be emphasized that the existence of a line at a negative frequency does not imply that the signal is made of negative-frequency components, since, for every component c_n exp[j2πnt/T], there is an associated one of the form c_{-n} exp[-j2πnt/T]. These complex signals combine to create the real component a_n cos(2πnt/T) + b_n sin(2πnt/T). Note that, from the definition of a definite integral, it follows that if x(t) is continuous or even merely piecewise continuous (continuous except for finitely many jumps in the interval of integration), the Fourier coefficients exist, and we can compute them by the indicated integrals.

Let us illustrate the practical use of the previous equations by the following examples. We will see numerous other examples in subsequent sections.
Example 3.3.1

Suppose we want to find the line spectra for the periodic signal shown in Figure 3.3.2. The signal x(t) has the analytic representation

    x(t) = { -K, -1 < t < 0
           {  K,  0 < t < 1

Figure 3.3.2 Signal x(t) for Example 3.3.1.

and x(t + 2) = x(t). Therefore, ω0 = 2π/2 = π. Signals of this type can occur as external forces acting on mechanical systems, as electromotive forces in electric circuits, etc. The Fourier coefficients are

    c_n = (1/2) ∫_{-1}^{1} x(t) exp[-jnπt] dt
        = (1/2) [ ∫_{-1}^{0} (-K) exp[-jnπt] dt + ∫_0^1 K exp[-jnπt] dt ]
        = (K/(j2nπ)) ( 2 - exp[jnπ] - exp[-jnπ] )                              (3.3.13)
        = { 2K/(jnπ), n odd
          { 0,        n even                                                   (3.3.14)

The amplitude spectrum is

    |c_n| = { 2K/(nπ), n odd
            { 0,       n even

The dc component, or the average value of the periodic signal x(t), is obtained by setting n = 0. When we substitute n = 0 into Equation (3.3.13), we obtain an undefined result. This can be circumvented by using l'Hôpital's rule, yielding c_0 = 0. This can be checked by noticing that x(t) is an odd function and that the area under the curve represented by x(t) over one period of the signal is zero.

The phase spectrum of x(t) is given by

    ∠c_n = { -π/2, n = (2m - 1),   m = 1, 2, ...
           {  0,   n = 2m,         m = 0, 1, 2, ...
           {  π/2, n = -(2m - 1),  m = 1, 2, ...

The line spectra of x(t) are displayed in Figure 3.3.3. Note that the amplitude spectrum has even symmetry, the phase spectrum odd symmetry.
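The closed-form result (3.3.14) can be verified numerically; the sketch below, with the arbitrary choice K = 1, compares the sampled coefficient integral against 2K/(jnπ) for a few values of n.

```python
import numpy as np

# Check of Equation (3.3.14): for the signal of Example 3.3.1 (amplitude K,
# period 2), c_n = 2K/(j*n*pi) for odd n and 0 for even n.
K, T = 1.0, 2.0
w0 = 2 * np.pi / T
t = np.linspace(-1, 1, 20000, endpoint=False)
dt = T / t.size
x = np.where(t < 0, -K, K)

for n in (1, 2, 3, 4, 5):
    cn = np.sum(x * np.exp(-1j * n * w0 * t)) * dt / T
    closed = 2 * K / (1j * n * np.pi) if n % 2 else 0.0
    print(f"n={n}: numeric {np.round(cn, 4)}, closed form {np.round(closed, 4)}")
```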
Figure 3.3.3 Line spectra for x(t) of Example 3.3.1. (a) Magnitude spectrum and (b) phase spectrum.

Example 3.3.2

A sinusoidal voltage E sin ω0t is passed through a half-wave rectifier that clips the negative portion of the waveform, as shown in Figure 3.3.4.

Figure 3.3.4 Signal x(t) for Example 3.3.2.

Such signals may be encountered
in rectifier design problems. Rectifiers are circuits that produce direct current (dc) from alternating current (ac).

The analytic representation of x(t) is

    x(t) = { 0,          when -π/ω0 < t < 0
           { E sin ω0t,  when 0 < t < π/ω0

and x(t + 2π/ω0) = x(t). Since x(t) = 0 when -π/ω0 < t < 0, we obtain, from Equation (3.3.4),

    c_n = (ω0/2π) ∫_0^{π/ω0} E sin ω0t exp[-jnω0t] dt
        = (ω0 E/2π) ∫_0^{π/ω0} (1/2j) ( exp[jω0t] - exp[-jω0t] ) exp[-jnω0t] dt
        = (ω0 E/4πj) ∫_0^{π/ω0} ( exp[-jω0(n - 1)t] - exp[-jω0(n + 1)t] ) dt
        = [ E/(2π(1 - n²)) ] ( exp[-jnπ] + 1 )
        = [ E/(π(1 - n²)) ] cos(nπ/2) exp[-jnπ/2],   n ≠ ±1                    (3.3.15)
        = { E/(π(1 - n²)),  n even
          { 0,              n odd, n ≠ ±1                                      (3.3.16)

Setting n = 0, we obtain the dc component, or the average value of the periodic signal, as c_0 = E/π. This result can be verified by calculating the area under one half cycle of a sine wave and dividing by T. To determine the coefficients c_1 and c_{-1}, which correspond to the first harmonic, we note that we cannot substitute n = ±1 in Equation (3.3.15), since this yields an indeterminate quantity. So we use Equation (3.3.4) instead with n = ±1, which results in

    c_1 = E/(4j)   and   c_{-1} = -E/(4j)

The line spectra of x(t) are displayed in Figure 3.3.5.
Figure 3.3.5 Line spectra for x(t) of Example 3.3.2. (a) Magnitude spectrum and (b) phase spectrum.

In general, a rectifier is used to convert an ac signal to a dc signal. Ideally, the rectified output x(t) should consist only of a dc component. Any ac component contributes to the ripple (deviation from pure dc) in the signal. As can be seen from Figure 3.3.5, the amplitudes of the harmonics decrease rapidly as n increases, so that the main contribution to the ripple comes from the first harmonic. The ratio of the amplitude of the first harmonic to the dc component can be used as a measure of the amount of ripple in the rectified signal. In this example, the ratio is equal to π/4. More complex circuits can be used that produce less ripple. (See Example 3.6.4.)
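The π/4 ripple figure quoted above is easy to confirm numerically; the short sketch below (the amplitude and fundamental frequency are arbitrary choices) computes c_0 and c_1 of the half-wave rectified sine directly from Equation (3.3.4).

```python
import numpy as np

# Ripple measure for the half-wave rectified sine of Example 3.3.2:
# the ratio |c_1| / c_0 should come out as pi/4.
E, w0 = 1.0, 2 * np.pi            # values assumed for illustration
T = 2 * np.pi / w0
t = np.linspace(0, T, 200000, endpoint=False)
dt = T / t.size
x = np.maximum(E * np.sin(w0 * t), 0.0)

c0 = np.sum(x) * dt / T
c1 = np.sum(x * np.exp(-1j * w0 * t)) * dt / T
print(abs(c1) / c0, np.pi / 4)    # both ~0.7854
```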
Example 3.3.3

Consider the square-wave signal shown in Figure 3.3.6. The analytic representation of x(t) is

    x(t) = { 0,  when -T/2 < t < -τ/2
           { K,  when -τ/2 < t <  τ/2
           { 0,  when  τ/2 < t <  T/2

and x(t + T) = x(t). Signals of this type can be produced by pulse generators and are used extensively in radar and sonar systems. From Equation (3.3.4), we obtain
Figure 3.3.6 Signal x(t) for Example 3.3.3.
    c_n = (1/T) ∫_{-T/2}^{T/2} x(t) exp[-j 2πnt/T] dt
        = (K/T) ∫_{-τ/2}^{τ/2} exp[-j 2πnt/T] dt
        = (K/(nπ)) sin(nπτ/T)
        = (Kτ/T) sinc(nτ/T)                                                    (3.3.17)

where sinc(λ) = sin(πλ)/(πλ). The sinc function plays an important role in Fourier analysis and in the study of LTI systems. It has a maximum value at λ = 0 and approaches zero as λ approaches infinity, oscillating through positive and negative values. It goes through zero at λ = ±1, ±2, ....
Let us investigate the effect of changing T on the frequency spectrum of x(t). For fixed τ, increasing T reduces the amplitude of each harmonic as well as the fundamental frequency and, hence, the spacing between harmonics. However, the shape of the spectrum depends only on the shape of the pulse and does not change as T increases, except for the amplitude factor. A convenient measure of the frequency spread (known as the bandwidth) is the distance from the origin to the first zero crossing of the sinc function. This distance is equal to 2π/τ and is independent of T. Other measures of the frequency width of the spectrum are discussed in detail in Section 4.5.

We conclude that as the period increases, the amplitude becomes smaller and the spectrum becomes denser, whereas the shape of the spectrum remains the same and does not depend on the repetition period T. The amplitude spectra of x(t) with τ = 1 and T = 5, 10, and 15 are displayed in Figure 3.3.7.

Figure 3.3.7 Line spectra for the x(t) in Example 3.3.3. (a) Magnitude spectrum for τ = 1 and T = 5. (b) Magnitude spectrum for τ = 1 and T = 10. (c) Magnitude spectrum for τ = 1 and T = 15.
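The behavior just described (smaller, denser lines under a fixed sinc envelope) can be tabulated with a few lines of Python; note that numpy.sinc uses exactly the sin(πx)/(πx) normalization of Equation (3.3.17). The values of τ and T below follow Figure 3.3.7; K = 1 is an arbitrary choice.

```python
import numpy as np

# Spectrum of the rectangular pulse train of Example 3.3.3:
# c_n = (K*tau/T) * sinc(n*tau/T), with sinc(x) = sin(pi x)/(pi x).
K, tau = 1.0, 1.0
for T in (5.0, 10.0, 15.0):
    n = np.arange(-15, 16)
    c = (K * tau / T) * np.sinc(n * tau / T)
    first_zero = T / tau          # harmonic index at the first spectral null
    print(f"T={T:4.1f}: c_0={c[15]:.3f}, first null near n={first_zero:.0f}")
```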
Example 3.3.4

In this example, we show that x(t) = t², -π < t < π, with x(t + 2π) = x(t), has the Fourier series representation

    x(t) = π²/3 - 4 ( cos t - (1/4) cos 2t + (1/9) cos 3t - ... )              (3.3.18)

Note that x(t) is periodic with period 2π and fundamental frequency ω0 = 1. The complex Fourier series coefficients are
    c_n = (1/2π) ∫_{-π}^{π} t² exp[-jnt] dt

Integrating by parts twice yields

    c_n = (2 cos nπ)/n²,   n ≠ 0

The term c_0 is obtained from

    c_0 = (1/2π) ∫_{-π}^{π} t² dt = π²/3

From Equations (3.3.9b) and (3.3.9c),

    a_n = 2 Re{c_n} = (4/n²) cos nπ
    b_n = -2 Im{c_n} = 0

because c_n is real. Substituting into Equation (3.3.8), we obtain Equation (3.3.18).
Example 3.3.5

Consider

    x(t) = 1 + 2 sin((7π/3)t) - (1/2) cos(7πt) - sin(7πt) + cos((28π/3)t)

It can easily be verified that this signal is periodic with period T = 6/7 s, so that ω0 = 7π/3. Thus 7π = 3ω0 and 28π/3 = 4ω0. While we can find the Fourier series coefficients for this example by using Equation (3.3.4), it is much easier in this case to represent the sine and cosine signals in terms of exponentials and write x(t) in the form

    x(t) = 1 + (1/j) exp[jω0t] - (1/j) exp[-jω0t]
             - (1/4) exp[j3ω0t] - (1/4) exp[-j3ω0t]
             - (1/2j) exp[j3ω0t] + (1/2j) exp[-j3ω0t]
             + (1/2) exp[j4ω0t] + (1/2) exp[-j4ω0t]

Comparison with Equation (3.3.3) yields

    c_0 = 1
    c_1 = -j,   c_{-1} = j
    c_3 = c_{-3}* = -(1/4 + 1/(2j))
    c_4 = c_{-4} = 1/2

with all other c_n being zero.

Since the amplitude spectrum of x(t) contains only a finite number of components, it is called a band-limited signal.
3.4 DIRICHLET CONDITIONS
The work of Fourier in representing a periodic signal as a trigonometric series is a remarkable accomplishment. His results indicate that a periodic signal, such as the signal with discontinuities in Figure 3.3.2, can be expressed as a sum of sinusoids. Since sinusoids are infinitely smooth signals (i.e., they have ordinary derivatives of arbitrarily high order), it is difficult to believe that these discontinuous signals can be expressed in that manner. Of course, the key here is that the sum is an infinite sum, and the signal has to satisfy some general conditions. Fourier believed that any periodic signal could be expressed as a sum of sinusoids. However, this turned out not to be the case. Fortunately, the class of functions which can be represented by a Fourier series is large and sufficiently general that most conceivable periodic signals arising in engineering applications do have a Fourier-series representation.

For the Fourier series to converge, the signal x(t) must possess the following properties, which are known as the Dirichlet conditions, over any period:

1. x(t) is absolutely integrable; that is,

       ∫_{t_0}^{t_0+T} |x(t)| dt < ∞

2. x(t) has only a finite number of maxima and minima.

3. The number of discontinuities in x(t) must be finite.

These conditions are sufficient, but not necessary. Thus, if a signal x(t) satisfies the Dirichlet conditions, then the corresponding Fourier series is convergent and its sum is x(t), except at any point t_0 at which x(t) is discontinuous. At the points of discontinuity, the sum of the series is the average of the left- and right-hand limits of x(t) at t_0; that is,

    x(t_0) = (1/2) [ x(t_0^+) + x(t_0^-) ]                                     (3.4.1)
Example 3.4.1

Consider the periodic signal in Example 3.3.1. The trigonometric Fourier series coefficients are given by

    a_n = 2 Re{c_n} = 0
    b_n = -2 Im{c_n} = (2K/(nπ)) (1 - cos nπ)
        = { 4K/(nπ), n odd
          { 0,       n even

so that x(t) can be written as

    x(t) = (4K/π) [ sin πt + (1/3) sin 3πt + ... + (1/n) sin nπt + ... ]       (3.4.2)

We notice that at t = 0 and t = 1, two points of discontinuity of x(t), the sum in Equation (3.4.2) has a value of zero, which is equal to the arithmetic mean of the values -K and K of x(t). Furthermore, since the signal satisfies the Dirichlet conditions, the series converges and x(t) is equal to the sum of the infinite series. Setting t = 1/2 in Equation (3.4.2), we obtain

    K = (4K/π) ( 1 - 1/3 + 1/5 - ... + (-1)^{m-1}/(2m - 1) + ... )

or

    Σ_{m=1}^{∞} (-1)^{m-1}/(2m - 1) = π/4
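The series obtained by setting t = 1/2 converges rather slowly; a minimal sketch of its partial sums, included only as an illustration, makes this concrete.

```python
import math

# Partial sums of the series from Example 3.4.1:
# 1 - 1/3 + 1/5 - 1/7 + ...  ->  pi/4.
for terms in (10, 100, 10000):
    s = sum((-1) ** m / (2 * m + 1) for m in range(terms))
    print(terms, s, math.pi / 4)
```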
Example 3.4.2

Consider the periodic signal in Example 3.3.3 with τ = 1 and T = 2. The trigonometric Fourier-series coefficients are given by

    a_n = 2 Re{c_n} = (2K/(nπ)) sin(nπ/2)
    b_n = -2 Im{c_n} = 0

Thus, a_0 = K/2, a_n = 0 when n is even, a_n = 2K/(nπ) when n = 1, 5, 9, ..., a_n = -2K/(nπ) when n = 3, 7, 11, ..., and b_n = 0 for n = 1, 2, .... Hence, x(t) can be written as

    x(t) = K/2 + (2K/π) [ cos πt - (1/3) cos 3πt + (1/5) cos 5πt - ... ]       (3.4.3)

Since x(t) satisfies the Dirichlet conditions over the interval [-1, 1], the sum in Equation (3.4.3) converges at all points in that interval, except at t = ±1/2, the points of discontinuity of the signal. At the points of discontinuity, the right-hand side in Equation (3.4.3) has the value K/2, which is the arithmetic average of the values K and zero of x(t).
Example 3.4.3

Consider the periodic signal x(t) in Example 3.3.4. The trigonometric Fourier-series coefficients are

    a_0 = π²/3
    a_n = (4/n²) cos nπ,   n ≠ 0
    b_n = 0

Hence, x(t) can be written as

    x(t) = π²/3 + 4 Σ_{n=1}^{∞} [(-1)^n/n²] cos nt                             (3.4.4)

For this example, the Dirichlet conditions are satisfied. Further, x(t) is continuous at all t. Thus, the sum in Equation (3.4.4) converges to x(t) at all points. Evaluating x(t) at t = π gives

    x(π) = π² = π²/3 + 4 Σ_{n=1}^{∞} [(-1)^n/n²] (-1)^n

so that

    Σ_{n=1}^{∞} 1/n² = π²/6
It is important to realize that the Fourier-series representation of a periodic signal x(t) cannot be differentiated term by term to give the representation of dx(t)/dt. To demonstrate this fact, consider the signal in Example 3.3.3. The derivative of this signal in one period is a pair of oppositely directed δ functions. These are not obtainable by direct differentiation of the Fourier-series representation of x(t). However, a Fourier series can be integrated term by term to yield a valid representation of ∫ x(t) dt.

3.5 PROPERTIES OF FOURIER SERIES

In this section, we consider a number of properties of the Fourier series. These properties provide us with a better understanding of the notion of the frequency spectrum of a continuous-time signal. In addition, many of the properties are often useful in reducing the complexity involved in computing the Fourier-series coefficients.
3.5.1 Least Squares Approximation Property

If we were to construct a periodic signal x(t) from a set of exponentials, how many terms must we use to obtain a reasonable approximation? If x(t) is a band-limited signal, we can use a finite number of exponentials. Otherwise, using only a finite number of terms results in an approximation of x(t). The difference between x(t) and its approximation is the error in the approximation. We want the approximation to be "close" to x(t) in some sense. For the best approximation, we must minimize some measure of the error. A useful and mathematically tractable criterion is the average of the total squared error over one period, also known as the mean-squared value of the error. This criterion of approximation is also known as the approximation of x(t) by least squares. The least-squares approximation property of the Fourier series relates quantitatively the energy of the difference signal to the error between the specified signal x(t) and its truncated Fourier-series approximation. Specifically, the property shows that the Fourier-series coefficients are the best choice (in the mean-square sense) for the coefficients of the truncated series.

Now suppose that x(t) can be approximated by a truncated series of exponentials in the form

    x_N(t) = Σ_{n=-N}^{N} d_n exp[jnω0t]                                       (3.5.1)

We want to select coefficients d_n such that the error, x(t) - x_N(t), has a minimum mean-square value. If we use the Fourier series representation for x(t), we can write the error signal as

    e(t) = x(t) - x_N(t)
         = Σ_{n=-∞}^{∞} c_n exp[jnω0t] - Σ_{n=-N}^{N} d_n exp[jnω0t]           (3.5.2)

Let us define coefficients
    g_n = { c_n - d_n,  -N ≤ n ≤ N
          { c_n,        |n| > N                                                (3.5.3)

so that Equation (3.5.2) can be written as

    e(t) = Σ_{n=-∞}^{∞} g_n exp[jnω0t]                                         (3.5.4)

Now, e(t) is a periodic signal with period T = 2π/ω0, since each term in the summation is periodic with the same period. It therefore follows that Equation (3.5.4) represents the Fourier-series expansion of e(t). As a measure of how well x_N(t) approximates x(t), we use the mean-square error, defined as

    MSE = (1/T) ∫_{⟨T⟩} |e(t)|² dt

Substituting for e(t) from Equation (3.5.4), we can write

    MSE = (1/T) ∫_{⟨T⟩} ( Σ_n g_n exp[jnω0t] ) ( Σ_m g_m* exp[-jmω0t] ) dt
        = Σ_n Σ_m g_n g_m* { (1/T) ∫_{⟨T⟩} exp[j(n - m)ω0t] dt }               (3.5.5)

Since the term in braces on the right-hand side is zero for n ≠ m and is 1 for m = n, Equation (3.5.5) reduces to

    MSE = Σ_{n=-∞}^{∞} |g_n|² = Σ_{n=-N}^{N} |c_n - d_n|² + Σ_{|n|>N} |c_n|²    (3.5.6)

Each term in Equation (3.5.6) is positive; so, to minimize the MSE, we must select

    d_n = c_n                                                                  (3.5.7)

This makes the first summation vanish, and the resulting error is

    (MSE)_min = Σ_{|n|>N} |c_n|²                                               (3.5.8)

Equation (3.5.8) demonstrates the fact that the mean-square error is minimized by selecting the coefficients d_n in the finite exponential series of Equation (3.5.1) to be identical with the Fourier-series coefficients c_n. That is, if the Fourier-series expansion of the signal x(t) is truncated at any given value of N, it approximates x(t) with smaller mean-square error than any other exponential series with the same number of terms. Furthermore, since the error is the sum of positive terms, the error decreases monotonically as the number of terms used in the approximation increases.
Example 3.5.1

Consider the approximation of the periodic signal x(t) shown in Figure 3.3.2 by a set of 2N + 1 exponentials. In order to see how the approximation error varies with the number of terms, we consider the approximation of x(t) based on three terms, then seven terms, then nine terms, and so on. (Note that x(t) contains only odd harmonics.) For N = 1 (three terms), the minimum mean-square error is

    (MSE)_min = Σ_{|n|>1} |c_n|²
              = 2 Σ_{m=2}^{∞} [ 2K/((2m - 1)π) ]²
              = (8K²/π²) (π²/8 - 1)
              ≈ 0.189 K²

Similarly, for N = 3, it can be shown that

    (MSE)_min = K² [ 1 - (8/π²)(1 + 1/9) ] ≈ 0.1 K²
3.5.2 Effects of Symmetry

Unnecessary work (and corresponding sources of errors) in determining Fourier coefficients of periodic signals can be avoided if the signals possess any type of symmetry. The important types of symmetry are:

1. even symmetry, x(t) = x(-t),
2. odd symmetry, x(t) = -x(-t),
3. half-wave odd symmetry, x(t) = -x(t - T/2).

Each of these is illustrated in Figure 3.5.1.

Recognizing the existence of one or more of these symmetries simplifies the computation of the Fourier-series coefficients. For example, the Fourier series of an even signal x(t) having period T is a "Fourier cosine series,"

    x(t) = a_0 + Σ_{n=1}^{∞} a_n cos(2πnt/T)

with coefficients

    a_0 = (2/T) ∫_0^{T/2} x(t) dt   and   a_n = (4/T) ∫_0^{T/2} x(t) cos(2πnt/T) dt

whereas the Fourier series of an odd signal x(t) having period T is a "Fourier sine series,"
Figure 3.5.1 Types of symmetry.
    x(t) = Σ_{n=1}^{∞} b_n sin(2πnt/T)

with coefficients

    b_n = (4/T) ∫_0^{T/2} x(t) sin(2πnt/T) dt

The effects of these symmetries are summarized in Table 3.1, in which entries such as a_0 ≠ 0 and b_{2n+1} ≠ 0 are to be interpreted to mean that these coefficients are not necessarily zero, but may be so in specific examples.

In Example 3.3.1, x(t) is an odd signal, and therefore, the c_n are imaginary (a_n = 0), whereas in Example 3.3.3 the c_n are real (b_n = 0) because x(t) is an even signal.
TABLE 3.1  Effects of Symmetry

    Symmetry         a_n                      b_n                      Remarks
    Even             a_n ≠ 0                  b_n = 0                  Integrate over T/2 only, and multiply the coefficients by 2.
    Odd              a_n = 0                  b_n ≠ 0                  Integrate over T/2 only, and multiply the coefficients by 2.
    Half-wave odd    a_{2n} = 0,              b_{2n} = 0,              Integrate over T/2 only, and multiply the coefficients by 2.
                     a_{2n+1} ≠ 0             b_{2n+1} ≠ 0
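The entries of Table 3.1 can be spot-checked numerically: for an even signal the c_n come out real (so b_n = 0), and for an odd signal they come out purely imaginary (so a_n = 0). The sketch below uses a centered pulse and a square wave as illustrative test signals.

```python
import numpy as np

# Symmetry check: even periodic signal -> c_n real; odd signal -> c_n imaginary.
T = 2.0
w0 = 2 * np.pi / T
t = np.linspace(-1, 1, 40000, endpoint=False)
dt = T / t.size

even_x = (np.abs(t) < 0.5).astype(float)   # centered pulse, even symmetry
odd_x = np.sign(t)                         # square wave, odd symmetry

for name, x in (("even", even_x), ("odd", odd_x)):
    c3 = np.sum(x * np.exp(-1j * 3 * w0 * t)) * dt / T
    print(name, np.round(c3, 5))
```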
Figure 3.5.2 Signal x(t) for Example 3.5.2.
Example 3.5.2

Consider the signal

    x(t) = { A - (4A/T) t,   0 < t < T/2
           { (4A/T) t - 3A,  T/2 < t < T

which is shown in Figure 3.5.2.

Notice that x(t) is both an even and a half-wave odd signal. Therefore, a_0 = 0, and we expect to have no even harmonics. Computing a_n, we obtain

    a_n = (4/T) ∫_0^{T/2} ( A - (4A/T) t ) cos(2πnt/T) dt
        = (4A/(nπ)²) (1 - cos nπ)
        = { 8A/(nπ)², n odd
          { 0,        n even

Observe that a_0, which corresponds to the dc term (the zero harmonic), is zero because the area under one period of x(t) evaluates to zero.
3.5.3 Linearity

Suppose that x(t) and y(t) are periodic with the same period. Let their Fourier-series expansions be given by

    x(t) = Σ_{n=-∞}^{∞} β_n exp[jnω0t]                                         (3.5.9a)
    y(t) = Σ_{n=-∞}^{∞} γ_n exp[jnω0t]                                         (3.5.9b)

and let

    z(t) = k_1 x(t) + k_2 y(t)

where k_1 and k_2 are arbitrary constants. Then we can write

    z(t) = Σ_{n=-∞}^{∞} (k_1 β_n + k_2 γ_n) exp[jnω0t]
         = Σ_{n=-∞}^{∞} α_n exp[jnω0t]

The last equation implies that the Fourier coefficients of z(t) are

    α_n = k_1 β_n + k_2 γ_n                                                    (3.5.10)
3.5.4 Product of Two Signals

If x(t) and y(t) are periodic signals with the same period, as in Equation (3.5.9), their product is

    z(t) = x(t) y(t)
         = Σ_{n=-∞}^{∞} β_n exp[jnω0t] Σ_{m=-∞}^{∞} γ_m exp[jmω0t]
         = Σ_{n=-∞}^{∞} Σ_{m=-∞}^{∞} β_n γ_m exp[j(n + m)ω0t]
         = Σ_{l=-∞}^{∞} ( Σ_{m=-∞}^{∞} β_{l-m} γ_m ) exp[jlω0t]                (3.5.11)

The sum in parentheses is known as the convolution sum of the two sequences β_m and γ_m. (More on the convolution sum is presented in Chapter 6.) Equation (3.5.11) indicates that the Fourier coefficients of the product signal z(t) are equal to the convolution sum of the two sequences generated by the Fourier coefficients of x(t) and y(t). That is,

    α_l = Σ_{m=-∞}^{∞} β_{l-m} γ_m

If y(t) is replaced by y*(t) = Σ_m γ_m* exp[-jmω0t], we obtain

    z(t) = Σ_{l=-∞}^{∞} ( Σ_{m=-∞}^{∞} β_{l+m} γ_m* ) exp[jlω0t]

and

    Σ_{m=-∞}^{∞} β_{l+m} γ_m* = (1/T) ∫_{⟨T⟩} x(t) y*(t) exp[-jlω0t] dt        (3.5.12)
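Equation (3.5.11) states that multiplying two periodic signals convolves their coefficient sequences; the sketch below checks this with numpy.convolve on a pair of simple cosines (the particular signals and the finite index window are illustrative assumptions, adequate because these coefficient sequences are finite).

```python
import numpy as np

# Fourier coefficients of a product z(t) = x(t) y(t) as the convolution sum
# of the coefficient sequences, Equation (3.5.11).
T = 2.0
w0 = 2 * np.pi / T
t = np.linspace(0, T, 40000, endpoint=False)
dt = T / t.size

x = np.cos(w0 * t)                  # beta_{+-1} = 1/2
y = np.cos(2 * w0 * t)              # gamma_{+-2} = 1/2

def coeffs(sig, N):
    return np.array([np.sum(sig * np.exp(-1j * n * w0 * t)) * dt / T
                     for n in range(-N, N + 1)])

N = 6
beta, gamma = coeffs(x, N), coeffs(y, N)
alpha_direct = coeffs(x * y, N)
alpha_conv = np.convolve(beta, gamma)[N:3 * N + 1]   # keep indices -N..N
print(np.round(alpha_direct.real, 3))
print(np.round(alpha_conv.real, 3))
```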
3.5.5 Convolution of Two Signals

For periodic signals with the same period, a special form of convolution, known as periodic or circular convolution, is defined by the integral

    z(t) = (1/T) ∫_{⟨T⟩} x(τ) y(t - τ) dτ                                      (3.5.13)

where the integral is taken over one period T. It is easy to show that z(t) is periodic with period T and that the periodic convolution is commutative and associative. (See Problem 3.22.) Thus, we can write z(t) in a Fourier-series representation with coefficients

    α_n = (1/T) ∫_{⟨T⟩} z(t) exp[-jnω0t] dt
        = (1/T²) ∫_{⟨T⟩} ∫_{⟨T⟩} x(τ) y(t - τ) exp[-jnω0t] dt dτ
        = (1/T) ∫_{⟨T⟩} x(τ) { (1/T) ∫_{⟨T⟩} y(t - τ) exp[-jnω0t] dt } dτ      (3.5.14)

Using the change of variables σ = t - τ in the inner integral, we obtain

    α_n = (1/T) ∫_{⟨T⟩} x(τ) exp[-jnω0τ] { (1/T) ∫_{⟨T⟩} y(σ) exp[-jnω0σ] dσ } dτ

Since y(t) is periodic, the inner integral is independent of the shift τ and is equal to the Fourier-series coefficient of y(t), γ_n. It follows that

    α_n = β_n γ_n                                                              (3.5.15)

where β_n are the Fourier-series coefficients of x(t).
Example 3.5.3

In this example, we compute the Fourier-series coefficients of the product and of the periodic convolution of the two signals shown in Figure 3.5.3.

Figure 3.5.3 Signals x(t) and y(t) for Example 3.5.3.

The analytic representation of x(t) is

    x(t) = t,  0 < t < 4,   x(t) = x(t + 4)

The Fourier-series coefficients of x(t) are

    β_n = (1/4) ∫_0^4 t exp[-j 2πnt/4] dt = 2j/(nπ),   n ≠ 0

For the signal y(t), the analytic representation is given in Example 3.3.3 with τ = 2 and T = 4. The Fourier-series coefficients are

    γ_n = (1/4) ∫_{-1}^{1} K exp[-j 2πnt/4] dt = (K/(nπ)) sin(nπ/2) = (K/2) sinc(n/2)

From Equation (3.5.15), the Fourier coefficients of the convolution signal are

    α_n = (2jK/(nπ)²) sin(nπ/2)

and from Equation (3.5.11), the coefficients of the product signal are

    c_n = Σ_{m=-∞}^{∞} β_{n-m} γ_m = (2jK/π²) Σ_{m=-∞}^{∞} [1/(m(n - m))] sin(mπ/2)
3.5.6 Parseval's Theorem

In Chapter 1, it was shown that the average power of a periodic signal x(t) is

    P = (1/T) ∫_0^T |x(t)|² dt

The square root of the average power, called the root-mean-square (or rms) value of x(t), is a useful measure of the amplitude of a complicated waveform. For example, the complex exponential signal x(t) = c_n exp[jnω0t] with frequency nω0 has |c_n|² as its average power. The relationship between the average power of a periodic signal and the power in its harmonics is one form (the conventional one) of Parseval's theorem.

We have seen that if x(t) and y(t) are periodic signals with the same period T and Fourier-series coefficients β_n and γ_n, respectively, then the product of x(t) and y*(t) has Fourier-series coefficients (see Equation (3.5.12))

    α_n = Σ_{m=-∞}^{∞} β_{n+m} γ_m*

The dc component, or the full-cycle average of the product over time, is

    α_0 = (1/T) ∫_{⟨T⟩} x(t) y*(t) dt = Σ_{m=-∞}^{∞} β_m γ_m*                  (3.5.16)

If we let y(t) = x(t) in this expression, then β_m = γ_m, and Equation (3.5.16) becomes

    (1/T) ∫_{⟨T⟩} |x(t)|² dt = Σ_{m=-∞}^{∞} |β_m|²                             (3.5.17)

The left-hand side is the average power of the periodic signal x(t). The result indicates that the total average power of x(t) is the sum of the average power in each harmonic component. Even though power is a nonlinear quantity, we can use superposition of average powers in this particular situation, provided that all the individual components are harmonically related.

We now have two different ways of finding the average power of any periodic signal x(t): in the time domain, using the left-hand side of Equation (3.5.17), and in the frequency domain, using the right-hand side of the same equation.
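Parseval's relation (3.5.17) is easily confirmed numerically; in the sketch below (an arbitrary three-term test signal is assumed), the time-domain average power and the sum of |c_n|² agree.

```python
import numpy as np

# Parseval check, Equation (3.5.17): average power in the time domain equals
# the sum of |c_n|^2 over the harmonics.
T = 2.0
w0 = 2 * np.pi / T
t = np.linspace(0, T, 40000, endpoint=False)
dt = T / t.size
x = 1.0 + np.cos(w0 * t) + 0.5 * np.sin(3 * w0 * t)

time_power = np.sum(np.abs(x) ** 2) * dt / T
freq_power = sum(abs(np.sum(x * np.exp(-1j * n * w0 * t)) * dt / T) ** 2
                 for n in range(-10, 11))
print(time_power, freq_power)      # both ~1.625
```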
3.5.7 Shift in Time

If x(t) has the Fourier-series coefficients c_n, then the signal x(t - τ) has coefficients d_n, where

    d_n = (1/T) ∫_{⟨T⟩} x(t - τ) exp[-jnω0t] dt
        = exp[-jnω0τ] (1/T) ∫_{⟨T⟩} x(σ) exp[-jnω0σ] dσ
        = c_n exp[-jnω0τ]                                                      (3.5.18)

Thus, if the Fourier-series representation of a periodic signal x(t) is known relative to one origin, the representation relative to another origin shifted by τ is obtained by adding the linear phase shift -nω0τ to the phase of the Fourier coefficients of x(t).
n=anple 3.6.4
Consider the periodic signal r(l) shown in Figure 3.5.4. The sig.nal can be written as the
sum of the two periodic signals rr (r) and rr (r), each with period 2n/r,ro, where .r, (r) is the
.r(r)=flsin..,)orl
tigure 3S.4 Signal -r(t) for Example
3.5..1.
134
Fourier
Series
Chapl€r 3
half-wave rectified signal of Example 3.3.2 and x2(r) = :r (, - r/(r0)' Therefore. if p, and
'y, are thc Fourier coefficients of .r, (r) and -rr(r), respectively, then, according to Equation
(3.s.18),
,,
= B,
=
p,,
"*p
a]
[-1n..
exp[-lnrr] = (-
lfp,
From Equation (3.5.10). the Fourier-series coefficients of .r(t) are
d,=p,+(-1f9,
: [2P,,
[0.
n even
n odd
where the Fourier-series coefficients of the periodic signal .r1(t) can be determined as io
Equation (3.3.16) as
U, =
*f
Esinorsrexp[7ho6t]dt
(e
,, even
;o--;'r'
|
n=tt
l-ilE.
lo
otherwise
|.0,
Thus,
2E
r(l - n') o"=
0.
n even
n odd
This result can be verifred by directly computing the Fourier-series coefficients
ofr(,).
3.5.8 Integration of Periodic Signals

If a periodic signal contains a nonzero average value (c_0 ≠ 0), then the integration of this signal produces a component that increases linearly with time, and therefore, the resultant signal is aperiodic. However, if c_0 = 0, then the integrated signal is periodic, but might contain a dc component. Integrating both sides of Equation (3.3.3) yields

    ∫_{-∞}^{t} x(τ) dτ = Σ_{n=-∞, n≠0}^{∞} [ c_n/(jnω0) ] exp[jnω0t]           (3.5.19)

The relative amplitudes of the harmonics of the integrated signal compared with its fundamental are less than those for the original, unintegrated signal. In other words, integration attenuates (deemphasizes) the magnitude of the high-frequency components of the signal. High-frequency components of the signal are the main contributors to its sharp details, such as those occurring at the points of discontinuity or at discontinuous derivatives of the signal. Hence, integration smooths the signal, and this is one of the reasons it is sometimes called a smoothing operation.
3.6 SYSTEMS WITH PERIODIC INPUTS

Consider a linear, time-invariant, continuous-time system with impulse response h(t). From Chapter 2, we know that the response resulting from an input x(t) is

    y(t) = ∫_{-∞}^{∞} h(τ) x(t - τ) dτ

For complex exponential inputs of the form

    x(t) = exp[jωt]

the output of the system is

    y(t) = ∫_{-∞}^{∞} h(τ) exp[jω(t - τ)] dτ = exp[jωt] ∫_{-∞}^{∞} h(τ) exp[-jωτ] dτ

By defining

    H(ω) = ∫_{-∞}^{∞} h(τ) exp[-jωτ] dτ                                        (3.6.1)

we can write

    y(t) = H(ω) exp[jωt]                                                       (3.6.2)

H(ω) is called the system (transfer) function and is a constant for fixed ω. Equation (3.6.2) is of fundamental importance because it tells us that the system response to a complex exponential is also a complex exponential, with the same frequency ω, scaled by the quantity H(ω). The magnitude |H(ω)| is called the magnitude function of the system, and ∠H(ω) is known as the phase function of the system. Knowing H(ω), we can determine whether the system amplifies or attenuates a given sinusoidal component of the input and how much of a phase shift the system adds to that particular component.

To determine the response y(t) of an LTI system to a periodic input x(t) with the Fourier-series representation of Equation (3.3.3), we use the linearity property and Equation (3.6.2) to obtain

    y(t) = Σ_{n=-∞}^{∞} H(nω0) c_n exp[jnω0t]                                  (3.6.3)

Equation (3.6.3) tells us that the output signal is the summation of exponentials with coefficients
    d_n = H(nω0) c_n                                                           (3.6.4)

These coefficients are the outputs of the system in response to c_n exp[jnω0t]. Note that since H(nω0) is a complex constant for each n, it follows that the output is also periodic with Fourier-series coefficients d_n. In addition, since the fundamental frequency of y(t) is ω0, which is the fundamental frequency of the input x(t), the period of y(t) is equal to the period of x(t). Hence, the response of an LTI system to a periodic input with period T is periodic with the same period.
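Equation (3.6.4) reduces the computation of the output spectrum to a pointwise multiplication. The sketch below assumes, purely for illustration, a first-order lowpass H(ω) = 1/(1 + jωτ) and a few input coefficients taken from Example 3.3.1, and prints how each harmonic is attenuated.

```python
import numpy as np

# Response of an LTI system to a periodic input, Equation (3.6.4):
# each Fourier coefficient is scaled by H(n*w0). H below is an assumed
# example system, not one taken from the text.
tau = 0.5
H = lambda w: 1.0 / (1.0 + 1j * w * tau)

w0 = np.pi                                       # fundamental of the input
c = {1: 2 / (1j * np.pi), 3: 2 / (3j * np.pi)}   # a few input coefficients (K = 1)
c.update({-n: np.conj(v) for n, v in c.items()})

d = {n: H(n * w0) * cn for n, cn in c.items()}   # output coefficients
for n in sorted(d):
    print(f"n={n:+d}  |c_n|={abs(c[n]):.3f}  |d_n|={abs(d[n]):.3f}")
```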
Example 3.6.1

Consider the system described by the input/output differential equation

    y^(N)(t) + Σ_{i=0}^{N-1} p_i y^(i)(t) = Σ_{i=0}^{M} q_i x^(i)(t)

For input x(t) = exp[jωt], the corresponding output is y(t) = H(ω) exp[jωt]. Since every input and output should satisfy the system differential equation, substituting into the latter yields

    [ (jω)^N + Σ_{i=0}^{N-1} p_i (jω)^i ] H(ω) exp[jωt] = Σ_{i=0}^{M} q_i (jω)^i exp[jωt]

Solving for H(ω), we obtain

    H(ω) = Σ_{i=0}^{M} q_i (jω)^i / [ (jω)^N + Σ_{i=0}^{N-1} p_i (jω)^i ]
Example 3.6.2

Let us find the output voltage y(t) of the system shown in Figure 3.6.1 if the input voltage is the periodic signal

    x(t) = 4 cos t - 2 cos 2t

Figure 3.6.1 System for Example 3.6.2.

Applying Kirchhoff's voltage law to the circuit yields

    L dy(t)/dt + R y(t) = R x(t)

If we set x(t) = exp[jωt] in this equation, the output voltage is y(t) = H(ω) exp[jωt]. Using the system differential equation, we obtain

    jωL H(ω) exp[jωt] + R H(ω) exp[jωt] = R exp[jωt]

Solving for H(ω) yields

    H(ω) = (R/L) / (jω + R/L)

At any frequency ω = nω0, the system function is

    H(nω0) = (R/L) / (jnω0 + R/L)

For this example, ω0 = 1 and R/L = 1, so that the output is

    y(t) = (1 - j) exp[jt] + (1 + j) exp[-jt] - [(1 - 2j)/5] exp[j2t] - [(1 + 2j)/5] exp[-j2t]
         = 2√2 cos(t - π/4) - (2/√5) cos(2t - tan⁻¹ 2)
Example 3.6.3

Consider the circuit shown in Figure 3.6.2. The differential equation governing the system is

    i(t) = C dv(t)/dt + (1/R) v(t)

Figure 3.6.2 Circuit for Example 3.6.3.

For an input of the form i(t) = exp[jωt], we expect the output v(t) to be v(t) = H(ω) exp[jωt]. Substituting into the differential equation yields

    exp[jωt] = C jω H(ω) exp[jωt] + (1/R) H(ω) exp[jωt]

Canceling the exp[jωt] term and solving for H(ω), we have

    H(ω) = 1 / (jωC + 1/R)

Let us investigate the response of the system to a more complex input. Consider an input that is given by the periodic signal x(t) in Example 3.3.1. The input signal is periodic with period 2 and ω0 = π, and we have found that

    c_n = { 2K/(jnπ), n odd
          { 0,        n even

From Equation (3.6.3), the output of the system in response to this periodic input is

    v(t) = Σ_{n odd} [ 2K/(jnπ) ] [ 1/(1/R + jnπC) ] exp[jnπt]
Example 3.6.4

Consider the system shown in Figure 3.6.3. Applying Kirchhoff's voltage law, we find that the differential equation describing the system is

    x(t) = L d/dt [ C dy(t)/dt + y(t)/R ] + y(t)

which can be written as

    LC y''(t) + (L/R) y'(t) + y(t) = x(t)

Figure 3.6.3 Circuit for Example 3.6.4.

For an input in the form x(t) = exp[jωt], the output voltage is y(t) = H(ω) exp[jωt]. Using the system differential equation, we obtain

    (jω)² LC H(ω) + jω (L/R) H(ω) + H(ω) = 1

Solving for H(ω) yields

    H(ω) = 1 / ( 1 + jωL/R - ω² LC )

with

    |H(ω)| = 1 / √[ (1 - ω²LC)² + (ωL/R)² ]
and

    ∠H(ω) = -tan⁻¹ [ (ωL/R) / (1 - ω²LC) ]

Now, suppose that the input is the half-wave rectified signal in Example 3.3.2. Then the output of the system is periodic, with the Fourier-series representation given by Equation (3.6.3). Let us investigate the effect of the system on the harmonics of the input signal x(t). Suppose that ω0 = 120π, LC = 1.0 × 10⁻⁴, and L/R = 1.0 × 10⁻⁴. For these values, the amplitude and phase of H(nω0) can be approximated respectively by

    |H(nω0)| ≈ 1/(n²ω0²LC)   and   ∠H(nω0) ≈ -π + 1/(nω0RC)

Note that the amplitude of H(nω0) decreases as rapidly as 1/n². The amplitudes of the first few components d_n, n = 0, 1, 2, 3, 4, in the Fourier-series representation of y(t) are as follows:

    |d_0| = E/π
    |d_1| ≈ 7.6 × 10⁻² (E/4)
    |d_2| ≈ 1.8 × 10⁻² (E/3π)
    |d_3| = 0
    |d_4| ≈ 4.4 × 10⁻³ (E/15π)

The dc component of the input x(t) has been passed without any attenuation, whereas the first- and higher-order harmonics have had their amplitudes reduced. The amount of reduction increases as the order of the harmonic increases. As a matter of fact, the function of this circuit is to attenuate all the ac components of the half-wave rectified signal. Such an operation is an example of smoothing, or filtering. The ratio of the amplitudes of the first harmonic and the dc component is 7.6 × 10⁻² π/4, in comparison with a value of π/4 for the unfiltered half-wave rectified waveform. As we mentioned before, complex circuits can be designed to produce better rectified signals. The designer is always faced with a trade-off between complexity and performance.
We have seen so far that when a signal x(t) is transmitted through an LTI system (a communication system, an amplifier, etc.) with transfer function H(ω), the output y(t) is, in general, different from x(t) and is said to be distorted. In contrast, an LTI system is said to be distortionless if the shapes of the input and the output are identical, to within a multiplicative constant. A delayed output that retains the shape of the input signal is also considered distortionless. Thus, the input/output relationship for a distortionless LTI system should satisfy the equation

    y(t) = K x(t - t_d)                                                        (3.6.5)

The corresponding transfer function H(ω) of the distortionless system will be of the form

    H(ω) = K exp[-jωt_d]                                                       (3.6.6)
Figure 3.6.4 Magnitude and phase characteristics of a distortionless system.

Thus, the magnitude |H(ω)| is constant for all ω, while the phase shift is a linear function of frequency of the form -t_d ω.

Let the input to a distortionless system be a periodic signal with Fourier-series coefficients c_n. It follows from Equation (3.6.4) that the corresponding Fourier-series coefficients for the output are given by

    d_n = K exp[-jnω0 t_d] c_n                                                 (3.6.7)

Thus, for a distortionless system, the quantities |d_n|/|c_n| and [∠c_n - ∠d_n]/n must be constant for all n.

In practice, we cannot have a system that is distortionless over the entire range -∞ < ω < ∞. Figure 3.6.4 shows the magnitude and phase characteristics of an LTI system that is distortionless in the frequency range -ω_c < ω < ω_c.
Example 3.6.5

The input and output of an LTI system are

    x(t) = 8 exp[j(ω0t + 30°)] + 6 exp[j(3ω0t - 15°)] - 2 exp[j(5ω0t + 45°)]
    y(t) = 4 exp[j(ω0t - 15°)] - 3 exp[j(3ω0t + 30°)] + exp[j(5ω0t)]

We want to determine whether these two signals have the same shape. Note that the ratio of the magnitudes of corresponding harmonics, |d_n|/|c_n|, has a value of 1/2 for all the harmonics. To compare the phases, we note that the quantity [∠c_n - ∠d_n]/n evaluates to 30° - (-15°) = 45° for the fundamental, (-15° - 30° + 180°)/3 = 45° for the third harmonic, and (45° + 180°)/5 = 45° for the fifth harmonic. It therefore follows that the two signals x(t) and y(t) have the same shape, except for a scale factor of 1/2 and a phase shift of π/4. This phase shift corresponds to a time shift of t_d = π/(4ω0). Hence, y(t) can be written as

    y(t) = (1/2) x(t - π/(4ω0))

The system is therefore distortionless for this choice of x(t).
Example 3.6.6

Let x(t) and y(t) be the input and the output, respectively, of the simple RC circuit shown in Figure 3.6.5. Applying Kirchhoff's voltage law, we obtain

    dy(t)/dt + (1/RC) y(t) = (1/RC) x(t)

Figure 3.6.5 Circuit for Example 3.6.6 (R = 10 kΩ).

Setting x(t) = exp[jωt] and recognizing that y(t) = H(ω) exp[jωt], we have

    jω H(ω) exp[jωt] + (1/RC) H(ω) exp[jωt] = (1/RC) exp[jωt]

Solving for H(ω) yields

    H(ω) = (1/RC) / (jω + 1/RC) = 1 / (1 + jω/ω_c)

where

    ω_c = 1/RC = 10⁷ s⁻¹

Hence,

    |H(ω)| = 1 / √(1 + (ω/ω_c)²)
    ∠H(ω) = -tan⁻¹(ω/ω_c)

The amplitude and phase spectra of H(ω) are shown in Figure 3.6.6. Note that for ω << ω_c,

    H(ω) ≈ 1   and   ∠H(ω) ≈ -ω/ω_c

Figure 3.6.6 Magnitude and phase spectra of H(ω).

That is, the magnitude and phase characteristics are practically ideal. For example, for input

    x(t) = A exp[j10⁴ t]

the system is practically distortionless, with output

    y(t) = H(10⁴) A exp[j10⁴ t] ≈ A exp[j10⁴ (t - 10⁻⁷)]

Hence, the time delay is 10⁻⁷ s.
3.7 THE GIBBS PHENOMENON

Consider the signal in Example 3.3.1, where we have shown that x(t) could be expressed as

    x(t) = (2K/jπ) Σ_{n odd} (1/n) exp[jnπt]

We wish to investigate the effect of truncating the infinite series. For this purpose, consider the truncated series

    x_N(t) = (2K/jπ) Σ_{n odd, |n| ≤ N} (1/n) exp[jnπt]

The truncated series is shown in Figure 3.7.1 for N = 3 and 5.

Figure 3.7.1 Signals x_3(t) and x_5(t).

Note that even with N = 3, x_3(t) resembles the pulse train in Figure 3.3.2. Increasing N to 39, we obtain the approximation shown in Figure 3.7.2. It is clear that, except for the overshoot at the points of discontinuity, the latter figure is a much closer approximation to the pulse train x(t) than is x_3(t). In general, as N increases, the mean-square error between the approximation and the given signal decreases, and the approximation to the given signal improves everywhere except in the immediate vicinity of a finite discontinuity. In the neighborhood of points of discontinuity in x(t), the Fourier-series representation fails to converge, even though the mean-square error in the representation approaches zero. A careful examination of the plot in Figure 3.7.2 reveals that the magnitude of the overshoot is approximately 9% higher than the signal x(t). In fact, the 9% overshoot is always present and is independent of the number of terms used to approximate the signal x(t). This observation was first made by the mathematical physicist Josiah Willard Gibbs.

To obtain an explanation of this phenomenon, let us consider the general form of a truncated Fourier series:

    x_N(t) = Σ_{n=-N}^{N} c_n exp[jnω0t]
           = Σ_{n=-N}^{N} { (1/T) ∫_0^T x(τ) exp[-jnω0τ] dτ } exp[jnω0t]
           = (1/T) ∫_0^T x(τ) { Σ_{n=-N}^{N} exp[jnω0(t - τ)] } dτ             (3.7.1)

It can be shown (see Problem 3.39) that the sum in braces is equal to
Figure 3.7.2 Signal x_39(t).

    g(t - τ) = Σ_{n=-N}^{N} exp[jnω0(t - τ)] = sin[(N + 1/2)ω0(t - τ)] / sin[ω0(t - τ)/2]   (3.7.2)

The signal g(α) with N = 6 is plotted in Figure 3.7.3. Notice the oscillatory behavior of the signal and the peaks at the points α = 2πm/ω0, m = 0, 1, 2, ....

Substituting into Equation (3.7.1) yields

    x_N(t) = (1/T) ∫_0^T x(τ) g(t - τ) dτ
           = (1/T) ∫_0^T x(t - α) { sin[(N + 1/2)ω0α] / sin[ω0α/2] } dα        (3.7.3)
Figure 3.7.3 Signal g(α) for N = 6.

In Section 3.5.1, we showed that x_N(t) converges to x(t) (in the mean-square sense) as N → ∞. In particular, for suitably large values of N, x_N(t) should be a close approximation to x(t). Equation (3.7.3) demonstrates the Gibbs phenomenon mathematically, by showing that truncating a Fourier series is the same as convolving the given x(t) with the signal g(t) defined in Equation (3.7.2). The oscillating nature of the signal g(t) causes the ripples at the points of discontinuity.

Notice that, for any signal, the high-frequency components (high-order harmonics) of its Fourier series are the main contributors to the sharp details, such as those occurring at the points of discontinuity or at discontinuous derivatives of the signal.
3.8 SUMMARY

• Two functions φ_i(t) and φ_j(t) are orthogonal over an interval (a, b) if

      ∫_a^b φ_i(t) φ_j*(t) dt = { E_i, i = j
                                 { 0,   i ≠ j

  and are orthonormal over an interval (a, b) if E_i = 1 for all i.

• Any arbitrary signal x(t) can be expanded over an interval (a, b) in terms of the orthogonal basis functions {φ_i(t)} as

      x(t) = Σ_{i=-∞}^{∞} c_i φ_i(t)

  where

      c_i = (1/E_i) ∫_a^b x(t) φ_i*(t) dt

• The complex exponentials

      φ_n(t) = exp[j 2πnt/T]

  are orthogonal over the interval [0, T].

• The fundamental radian frequency of a periodic signal is related to the fundamental period by

      ω0 = 2π/T

• A periodic signal x(t), of period T, can be expanded in an exponential Fourier series as

      x(t) = Σ_{n=-∞}^{∞} c_n exp[j 2πnt/T]

  The coefficients c_n are called Fourier-series coefficients and are given by

      c_n = (1/T) ∫_{⟨T⟩} x(t) exp[-j 2πnt/T] dt

• The fundamental frequency ω0 is called the first harmonic frequency, the frequency 2ω0 is the second harmonic frequency, and so on.

• The plot of |c_n| versus nω0 is called the magnitude spectrum. The locus of the tips of the magnitude lines is called the envelope of the magnitude spectrum.

• The plot of ∠c_n versus nω0 is called the phase spectrum.

• For periodic signals, both the magnitude and phase spectra are line spectra. For real-valued signals, the magnitude spectrum has even symmetry, and the phase spectrum has odd symmetry.

• If signal x(t) is a real-valued signal, then it can be expanded in a trigonometric series of the form

      x(t) = a_0 + Σ_{n=1}^{∞} [ a_n cos(2πnt/T) + b_n sin(2πnt/T) ]

• The relation between the trigonometric-series coefficients and the exponential-series coefficients is given by

      a_0 = c_0
      a_n = 2 Re{c_n}
      b_n = -2 Im{c_n}
      c_n = (1/2)(a_n - j b_n)

• An alternative form of the Fourier series is

      x(t) = A_0 + Σ_{n=1}^{∞} A_n cos(2πnt/T + θ_n)

  with
      A_0 = c_0

  and

      A_n = 2|c_n|,   θ_n = ∠c_n

• For the Fourier series to converge, the signal x(t) must be absolutely integrable, have only a finite number of maxima and minima, and have a finite number of discontinuities over any period. This set of conditions is known as the Dirichlet conditions.

• If the signal x(t) has even symmetry, then

      b_n = 0,  n = 1, 2, ...
      a_0 = (2/T) ∫_0^{T/2} x(t) dt
      a_n = (4/T) ∫_0^{T/2} x(t) cos(2πnt/T) dt

• If the signal x(t) has odd symmetry, then

      a_n = 0,  n = 0, 1, 2, ...
      b_n = (4/T) ∫_0^{T/2} x(t) sin(2πnt/T) dt

• If the signal x(t) has half-wave odd symmetry, then

      a_{2n} = 0,  n = 0, 1, ...
      a_{2n+1} = (4/T) ∫_0^{T/2} x(t) cos[2π(2n + 1)t/T] dt
      b_{2n} = 0,  n = 1, 2, ...
      b_{2n+1} = (4/T) ∫_0^{T/2} x(t) sin[2π(2n + 1)t/T] dt
• If β_n and γ_n are, respectively, the exponential Fourier-series coefficients of two periodic signals x(t) and y(t) with the same period, then the Fourier-series coefficients of z(t) = k_1 x(t) + k_2 y(t) are

      α_n = k_1 β_n + k_2 γ_n

  whereas the Fourier-series coefficients of z(t) = x(t) y(t) are

      α_n = Σ_{m=-∞}^{∞} β_{n-m} γ_m

• For periodic signals x(t) and y(t) with the same period T, the periodic convolution is defined as

      z(t) = (1/T) ∫_{⟨T⟩} x(τ) y(t - τ) dτ
• The Fourier-series coefficients of the periodic convolution of x(t) and y(t) are

      α_n = β_n γ_n

• One form of Parseval's theorem states that the average power in the signal x(t) is related to the Fourier-series coefficients β_n as

      P = Σ_{n=-∞}^{∞} |β_n|²

• The system (transfer) function of an LTI system is defined as

      H(ω) = ∫_{-∞}^{∞} h(τ) exp[-jωτ] dτ

• The magnitude of H(ω) is called the magnitude function (magnitude characteristic) of the system, and ∠H(ω) is known as the phase function (phase characteristic) of the system.

• The response y(t) of an LTI system to the periodic input x(t) is

      y(t) = Σ_{n=-∞}^{∞} H(nω0) c_n exp[jnω0t]

  where ω0 is the fundamental frequency and c_n are the Fourier-series coefficients of the input x(t).

• Representing x(t) by a finite series results in an overshoot behavior at the points of discontinuity. The magnitude of the overshoot is approximately 9%. This phenomenon is known as the Gibbs phenomenon.

3.9 CHECKLIST OF IMPORTANT TERMS

Absolutely integrable signal          Mean-square error
Dirichlet conditions                  Minimum mean-square error
Distortionless system                 Odd harmonic
Even harmonic                         Orthogonal functions
Exponential Fourier series            Orthonormal functions
Fourier coefficients                  Parseval's theorem
Gibbs phenomenon                      Periodic convolution
Half-wave odd symmetry                Periodic signals
Laboratory form of Fourier series     Phase spectrum
Least squares approximation           Transfer function
Magnitude spectrum                    Trigonometric Fourier series

3.10 PROBLEMS
3.1. Express the set of signals shown in Figure P3.1 in terms of the orthonormal basis signals φ_1(t) and φ_2(t).

3.2. Given an arbitrary set of functions x_i(t), i = 1, 2, ..., defined over an interval [t_0, t_1], we can generate a set of orthogonal functions ψ_i(t) by following the Gram-Schmidt orthogonalization procedure. Let us choose as the first basis function
Figure P3.1
    ψ_1(t) = x_1(t)

We then choose as our second basis function

    ψ_2(t) = x_2(t) + a_1 ψ_1(t)

where a_1 is determined so as to make ψ_2(t) orthogonal to ψ_1(t). We can continue this procedure by choosing

    ψ_3(t) = x_3(t) + b_1 ψ_1(t) + b_2 ψ_2(t)

with b_1 and b_2 determined from the requirement that ψ_3(t) must be orthogonal to both ψ_1(t) and ψ_2(t). Subsequent functions can be generated in a similar manner.

For any two signals x(t), y(t), let

    ⟨x(t), y(t)⟩ = ∫_{t_0}^{t_1} x(t) y(t) dt

and let E_i = ⟨x_i(t), x_i(t)⟩.

(a) Verify that the coefficients a_1, b_1, and b_2 are given by

    a_1 = - ⟨x_1(t), x_2(t)⟩ / E_1

    b_1 = - ⟨x_1(t), x_3(t)⟩ / E_1

    b_2 = [ ⟨x_1(t), x_2(t)⟩ ⟨x_1(t), x_3(t)⟩ - E_1 ⟨x_2(t), x_3(t)⟩ ] / [ ⟨x_1(t), x_2(t)⟩² - E_1 E_2 ]

(b) Use your results from Part (a) to generate a set of orthogonal functions from the set of signals shown in Figure P3.2.

(c) Obtain a set of orthonormal functions φ_i(t), i = 1, 2, 3, from the set ψ_i(t) that you determined in Part (b).
Figure P3.2
3.3. Consider the set of functions

    x_1(t) = e^{-t} u(t),   x_2(t) = e^{-2t} u(t),   x_3(t) = e^{-3t} u(t)

(a) Use the method of Problem 3.2 to generate a set of orthonormal functions φ_i(t) from x_i(t), i = 1, 2, 3.

(b) Let x̂(t) = Σ_{i=1}^{3} c_i φ_i(t) be the approximation of x(t) = 3e^{-4t} u(t) in terms of φ_i(t), and let e(t) denote the approximation error. Determine the accuracy of x̂(t) by computing the ratio of the energies in e(t) and x(t).

3.4. (a) Assuming that all φ_i are real-valued functions, prove that Equation (3.2.4) minimizes the energy in the error given by Equation (3.2.7). (Hint: Differentiate Equation (3.2.7) with respect to some particular c_i, set the result equal to zero, and solve.)

(b) Can you extend the result in Part (a) to complex functions?

3.5. Show that if the set φ_k(t), k = 0, ±1, ±2, ..., is an orthogonal set over the interval (0, T) and

    x(t) = Σ_k c_k φ_k(t)                                                      (P3.5)

then

    c_k = (1/E_k) ∫_0^T x(t) φ_k*(t) dt

where

    E_k = ∫_0^T |φ_k(t)|² dt
3.6. Walsh functions are a set of orthonormal functions defined over the interval [0, 1) that take on values of ±1 over this interval. Walsh functions are characterized by their sequency, which is defined as one-half the number of zero-crossings of the function over the interval [0, 1). Figure P3.6 shows the first seven Walsh-ordered Walsh functions wal_w(k, t), arranged in order of increasing sequency.

Figure P3.6

(a) Verify that the Walsh functions shown are orthonormal over [0, 1).

(b) Suppose we want to represent the signal x(t) = t [u(t) - u(t - 1)] in terms of the Walsh functions as

    x_N(t) = Σ_{k=0}^{N} c_k wal_w(k, t)

Find the coefficients c_k for N = 6.

(c) Sketch x_N(t) for N = 3 and 6.
3.7. For the periodic signal

    x(t) = 2 + (1/2) cos(2t + 45°) + 2 cos(3t) - 2 sin(4t + 30°)

(a) Find the exponential Fourier series.

(b) Sketch the magnitude and phase spectra as a function of ω.
3.8. The signal shown in Figure P3.8 is created when a cosine voltage or current waveform is rectified by a single diode, a process known as half-wave rectification. Deduce the exponential Fourier-series expansion for the half-wave rectified signal.

3.9. Find the trigonometric Fourier-series expansion for the signal in Problem 3.8.

3.10. The signal shown in Figure P3.10 is created when a sine voltage or current waveform is rectified by a circuit with two diodes, a process known as full-wave rectification. Deduce the exponential Fourier-series expansion for the full-wave rectified signal.

3.11. Find the trigonometric Fourier-series expansion for the signal in Problem 3.10.
Figure P3.8: half-wave rectified cosine waveform.

Figure P3.10: full-wave rectified sine waveform.
3.12. Find the exponential Fourier-series representations of the signals shown in Figure P3.12. Plot the magnitude and phase spectrum for each case.

3.13. Find the trigonometric Fourier-series representations of the signals shown in Figure P3.12.

3.14. (a) Show that if a periodic signal is absolutely integrable, then |cₙ| < ∞.
(b) Does the periodic signal x(t) = sin 4t have a Fourier-series representation? Why?
(c) Does the periodic signal x(t) = tan 2πt have a Fourier-series representation? Why?
3.15. (a) Show that x(t) = t², −π < t ≤ π, x(t + 2π) = x(t), has the Fourier series

x(t) = π²/3 − 4(cos t − (1/4) cos 2t + (1/9) cos 3t − ···)

(b) Set t = 0 to obtain

Σ_{n=1}^{∞} (−1)^(n+1)/n² = π²/12
3.16. The Fourier coefficients of a periodic signal with period T are

cₙ = …

Does this represent a real signal? Why or why not? From the form of cₙ, deduce the time signal x(t). Hint: Use

∫ exp[−jnω₀t] δ(t − t₁) dt = exp[−jnω₀t₁]
Figure P3.12: the signals (a)–(h) for Problems 3.12 and 3.13.
3.17. (a) Plot the signal

x(t) = 1 + Σ_{n=1}^{M} (4/nπ) sin(nπ/2) cos(2nπt)

for M = 1, 3, and 5.
(b) Predict the form of x(t) as M → ∞.
3.18. Find the exponential Fourier series for the impulse trains shown in Figure P3.18.

3.19. The waveforms in Problem 3.18 can be considered to be periodic with period N for N any integer. Find the exponential Fourier-series coefficients for the case N = 3.
Figure P3.18: impulse trains for Problems 3.18 and 3.19.

3.20. The Fourier-series coefficients for a periodic signal x(t) with period T are

cₙ = sin(nπ/T)/(nπ)

(a) Find T such that c₀ = 1/150 if T is large, so that sin(nπ/T) ≈ nπ/T.
(b) Determine the energy in x(t) and in

x̂(t) = Σ_{n=−2}^{2} cₙ exp[jnω₀t]
3.21. Specify the types of symmetry for the signals shown in Figure P3.21. Specify also which terms in the trigonometric Fourier series are zero.

Figure P3.21: the signals (a)–(e).
3.22. Periodic or circular convolution is a special case of general convolution. For periodic signals with the same period T, periodic convolution is defined by the integral

z(t) = (1/T) ∫_T x(τ) y(t − τ) dτ

(a) Show that z(t) is periodic. Find its period.
(b) Show that periodic convolution is commutative and associative.

3.23. Find the periodic convolution z(t) = x(t) ⊛ y(t) of the two signals shown in Figure P3.23. Verify Equation (3.5.15) for these signals.
Figure P3.23: the signals x(t) and y(t).
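Problems 3.22 and 3.23 lend themselves to a numerical check: sample one period of each signal and approximate the periodic-convolution integral by a circular sum. The waveforms below are placeholders (the actual signals are defined by Figure P3.23), so the sketch only illustrates the mechanics, not the specific answer.

```python
import numpy as np

T, N = 2.0, 1000                       # assumed period and samples per period
dt = T / N
tau = np.arange(N) * dt

# Placeholder periodic signals (one period each).
x = np.where(tau < 1.0, 1.0, 0.0)
y = np.where(tau < 1.0, tau, 0.0)

def periodic_conv(x, y, T):
    """z(t) = (1/T) * integral over one period of x(tau) y(t - tau) dtau."""
    N = len(x)
    dt = T / N
    z = np.empty(N)
    for n in range(N):
        # y(t - tau) sampled with periodic (circular) indexing
        z[n] = np.sum(x * y[(n - np.arange(N)) % N]) * dt / T
    return z

z = periodic_conv(x, y, T)
print(z[:5])
# Commutativity check, Part (b) of Problem 3.22:
print(np.allclose(z, periodic_conv(y, x, T)))
```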
3.24. Consider the periodic signal x(t) that has the exponential Fourier-series expansion

x(t) = Σ_{n=−∞}^{∞} cₙ exp[jnω₀t],    c₀ = 0

(a) Integrate term by term to obtain the Fourier-series expansion of y(t) = ∫ x(t) dt, and show that y(t) is periodic, too.
(b) How do the amplitudes of the harmonics of y(t) compare to the amplitudes of the harmonics of x(t)?
(c) Does integration deemphasize or accentuate the high-frequency components?
(d) From Part (c), is the integrated waveform smoother than the original waveform?
3.25. The Fourier-series representation of the triangular signal in Figure P3.25(a) is

x(t) = (8/π²)[sin t − (1/9) sin 3t + (1/25) sin 5t − (1/49) sin 7t + ···]

Use this result to obtain the Fourier series for the signal in Figure P3.25(b).

Figure P3.25: the signals (a) and (b).
3.26. A voltage x(t) is applied to the circuit shown in Figure P3.26. If the Fourier coefficients of x(t) are given by

cₙ = (1/(n² + 1)) exp[jnπ/2]

(a) Prove that x(t) must be a real signal of time.
(b) What is the average value of the signal?
(c) Find the first three nonzero harmonics of y(t).
(d) What does the circuit do to the high-frequency terms of the input?
(e) Repeat Parts (c) and (d) for the case where y(t) is the voltage across the resistor instead.

Figure P3.26: RC circuit with R = 1 Ω; y(t) is taken across the capacitor.
3.27. Find the voltage y(t) across the capacitor in Figure P3.26 if the input is

x(t) = 1 + 3 cos(t + 30°) + cos(2t)
3.28. The input

x(t) = Σ_{n=−10}^{10} cₙ exp[jnω₀t]

is applied to four different systems. The outputs are

y₁(t) = Σ_n |cₙ| exp[j(nω₀t + θₙ − 3nω₀)]
y₂(t) = Σ_n cₙ exp[j(nω₀t − 3nω₀)]
y₃(t) = Σ_n exp[−ω₀|n|] cₙ exp[j(nω₀t − 3nω₀)]
y₄(t) = Σ_n exp[−jω₀|n|] cₙ exp[jnω₀t]

where cₙ = |cₙ| exp[jθₙ]. Determine what type of distortion, if any, each system has.
3.29. For the circuit shown in Figure P3.29,
(a) Determine the transfer function H(ω).
(b) Sketch both |H(ω)| and ∠H(ω).
(c) Consider the input x(t) = 10 exp[jωt]. What is the highest frequency ω you can use such that

|Y(ω) − X(ω)| / |X(ω)| < 0.01?

(d) What is the highest frequency ω you can use such that ∠H(ω) deviates from the ideal linear characteristic by less than 0.02?

Figure P3.29
3.30. Nonlinear devices can be used to generate harmonics of the input frequency. Consider the nonlinear system described by

y(t) = A x(t) + B x²(t)

Find the response of the system to x(t) = a₁ cos ω₁t + a₂ cos ω₂t. List all new harmonics generated by the system, along with their amplitudes.
3.31. The square-wave signal of Example 3.3.3 with A = 1, T = 500 μs, and τ = 100 μs is passed through an ideal low-pass filter with cutoff f = 4.2 kHz and applied to the LTI system whose frequency response H(ω) is shown in Figure P3.31. Find the response of the system.

Figure P3.31: |H(ω)| and ∠H(ω).
3.32. The triangular waveform of Example 3.5.2 with period T = 4 and peak amplitude A = 10 is applied to a series combination of a resistor R = 100 Ω and an inductor L = 0.1 H. Determine the power dissipated in the resistor.
3.33. A first-order system is modeled by the differential equation

dy(t)/dt + 2y(t) = x(t)

If the input is the waveform of Example 3.3.2, find the amplitudes of the first three harmonics in the output.

3.34. Repeat Problem 3.33 for the system

y″(t) + 3y′(t) + 2y(t) = x′(t) + x(t)
3.35. For the system shown in Figure P3.35, the input x(t) is periodic with period T. Show that at any time t > T₁ after the input is switched on, y_c(t) and y_s(t) approximate Re[cₙ] and Im[cₙ], respectively. Indeed, if T₁ is an integer multiple of the period T of the input signal x(t), then the outputs are precisely equal to the desired values. Discuss the outputs for the following cases:
(a) T₁ = T
(b) T₁ = nT
(c) T₁ >> T, but T₁ not an integer multiple of T

Figure P3.35: x(t) is multiplied by cos nω₀t and by sin nω₀t (ω₀ = 2π/T), and each product is averaged over T₁ to produce y_c(t) and y_s(t).
3.36. Consider the circuit shown in Figure P3.36. The input is the half-wave rectified signal of Problem 3.8. Find the amplitude of the second and fourth harmonics of the output y(t).

Figure P3.36: R₁ = 500 Ω, C = 100 μF, R₂ = 500 Ω.

3.37. Consider the circuit shown in Figure P3.37. The input is the half-wave rectified signal of Problem 3.8. Find the amplitude of the second and fourth harmonics of the output y(t).
Figure P3.37: L = 0.1 H, C = 100 μF, R = 1 kΩ.
3.38. (a) Determine the dc component and the amplitude of the second harmonic of the output signal y(t) in the circuits in Figures P3.36 and P3.37 if the input is the full-wave rectified signal of Problem 3.10.
(b) Find the first harmonic of the output signal y(t) in the circuits in Figures P3.36 and P3.37 if the input is the triangular waveform of Problem 3.32.

3.39. Show that the following are identities:
(a) Σ_{n=−N}^{N} exp[jnω₀t] = sin[(N + ½)ω₀t] / sin(ω₀t/2)
3.40. For the signal x(t) depicted in Example 3.3.3, keep T fixed and discuss the effect of varying τ (with the restriction τ < T) on the Fourier coefficients.

3.41. Consider the signal x(t) shown in Figure 3.3.6. Determine the effect on the amplitude of the second harmonic of x(t) when there is a very small error in measuring τ. To do this, let τ = τ₀ − ε, where ε << τ₀, and find the second-harmonic dependence on ε. Find the percentage change in |c₂| when T = 10, τ = 1, and ε = 0.1.
3.42. A truncated sinusoidal waveform is shown in Figure P3.42.
(a) Determine the Fourier-series coefficients.
(b) Calculate the amplitude of the third harmonic for B = A/2.
(c) Solve for t₀ such that |c₃| is maximum. This method is used to generate harmonic content from a sinusoidal waveform.

Figure P3.42: truncated sinusoid A sin ωt.
Fourier
34j.
Serles
Chapter 3
For the signal .r(l) shown in Figurc P3.43, Iind the following:
(a) Determine the Fourier-series coefficients.
(D) Solve for the optimum value of lo for which lc. I is maximum.
(c) Compare the result with part (c) of Problem 3.,12.
lsinr
-'--1
-2r -2tt + to -t
-n+
/'O
,0
lo+n
,o
Flgure P3.43
3.44. The signal x(t) shown in Figure P3.44 is the output of a smoothed half-wave rectified signal. The constants ωt₁, ωt₂, and A satisfy the following relations:

ωt₁ = π − tan⁻¹(ωRC)
A = sin(ωt₁) exp[ωt₁/(ωRC)]
A exp[−ωt₂/(ωRC)] = sin(ωt₂)
RC = 0.1 s
ω = 2π × 60 = 377 rad/s

(a) Verify that ωt₁ = 1.5973 rad, A = 1.0429, and ωt₂ = 7.316 rad.
(b) Determine the exponential Fourier-series coefficients.
(c) Find the ratio of the amplitudes of the first harmonic and the dc component.

Figure P3.44: sine-wave segments joined by the decaying exponential A exp(−10t); the level 1.0 and the points ωt₁ and ωt₂ are marked.
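The constants in Part (a) can be checked with a few lines of numerical root finding. The sketch below assumes the relations as written above and uses plain bisection rather than any particular library solver; the bracketing interval for ωt₂ is an assumption based on the figure.

```python
import numpy as np

RC = 0.1
w = 2 * np.pi * 60            # 377 rad/s
wRC = w * RC

wt1 = np.pi - np.arctan(wRC)
A = np.sin(wt1) * np.exp(wt1 / wRC)

# Solve A*exp(-wt2/wRC) = sin(wt2) for wt2 in (2*pi + 0.5, 2*pi + 1.5) by bisection.
f = lambda wt2: A * np.exp(-wt2 / wRC) - np.sin(wt2)
lo, hi = 2 * np.pi + 0.5, 2 * np.pi + 1.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
wt2 = 0.5 * (lo + hi)

print(wt1, A, wt2)            # approximately 1.5973, 1.0429, 7.316
```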
COMPUTER PROBLEMS

3.45. The Fourier-series coefficients can be computed numerically. This becomes advantageous when an analytical expression for x(t) is not known and x(t) is available as numerical data, or when the integration is difficult to perform. Show that
a₀ ≈ (1/M) Σ_{m=0}^{M−1} x(mΔt)

aₙ ≈ (2/M) Σ_{m=0}^{M−1} x(mΔt) cos(2πmn/M)

bₙ ≈ (2/M) Σ_{m=0}^{M−1} x(mΔt) sin(2πmn/M)

where x(mΔt) are M equally spaced data points representing x(t) over (0, T) and Δt is the interval between data points such that

Δt = T/M

(Hint: Approximate the integral with a summation of rectangular strips, each of width Δt.)
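A direct transcription of these sums, under the assumption that the samples x(mΔt), m = 0, ..., M − 1, cover exactly one period, might look like the sketch below; the square-wave test signal is an arbitrary choice used only to compare against known coefficients.

```python
import numpy as np

def fourier_coeffs(x, n_max):
    """Trigonometric Fourier coefficients from M equally spaced samples of one period.

    Implements the rectangular-strip sums of Problem 3.45 and returns
    a0 together with arrays a[n] and b[n] for n = 1, ..., n_max.
    """
    M = len(x)
    m = np.arange(M)
    a0 = x.sum() / M
    a = np.array([2.0 / M * np.sum(x * np.cos(2 * np.pi * m * n / M))
                  for n in range(1, n_max + 1)])
    b = np.array([2.0 / M * np.sum(x * np.sin(2 * np.pi * m * n / M))
                  for n in range(1, n_max + 1)])
    return a0, a, b

# Check on a square wave of period T = 2, for which b_n = 4/(n*pi) for odd n and 0 for even n.
M, T = 1000, 2.0
t = np.arange(M) * T / M
x = np.where(t < 1.0, 1.0, -1.0)
a0, a, b = fourier_coeffs(x, 5)
print(a0, b)      # b[0], b[2], b[4] should be close to 4/pi, 4/(3*pi), 4/(5*pi)
```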
3.46. Consider the triangular signal of Example 3.5.2 with A = π/2 and T = 2π.
(a) Use the method of Problem 3.45 to compute numerically the first five harmonics from N equally spaced points per period of this waveform. Assume that N = 100.
(b) Compare the numerical values obtained in (a) with the actual values.
(c) Repeat for values of N = 20, 40, 60, and 80. Comment on your results.
3.47. The signal of Figure P3.47 can be represented as

x(t) = Σ_{n odd} (4/nπ) sin nπt

Using the approximation

x_N(t) = Σ_{n=1, n odd}^{N} (4/nπ) sin nπt

write a computer program to calculate and sketch the error function

e_N(t) = x(t) − x_N(t)

from t = 0 to t = 2 for N = 1, 3, 5, and 7.

Figure P3.47: the signal x(t).
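One possible program for Problem 3.47 is sketched below; the exact square wave is generated directly as sgn(sin πt), and plotting (for example with matplotlib) is left to the reader.

```python
import numpy as np

def x_N(t, N):
    """Partial Fourier sum x_N(t) = sum over odd n <= N of (4/(n*pi)) sin(n*pi*t)."""
    s = np.zeros_like(t)
    for n in range(1, N + 1, 2):
        s += 4.0 / (n * np.pi) * np.sin(n * np.pi * t)
    return s

t = np.linspace(0.0, 2.0, 2001)
x = np.sign(np.sin(np.pi * t))       # the square wave itself (0 at the jumps)
for N in (1, 3, 5, 7):
    e = x - x_N(t, N)                # error function e_N(t)
    print(N, np.max(np.abs(e)))      # or plot e versus t for each N
```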
3.48. The integral-squared error (error energy) remaining in the approximation of Problem 3.47 after N terms is

∫₀² |e_N(t)|² dt = ∫₀² |x(t)|² dt − Σ_{n=1, n odd}^{N} bₙ²

Calculate the integral-squared error for N = 11, 21, 31, 41, 51, 101, and 201.
3.49. Write a program to compute numerically the coefficients of the series expansion in terms of wal_W(k, t), 0 ≤ k ≤ 6, of the signal x(t) = t[u(t) − u(t − 1)]. Compare your results with those of Problem 3.6.
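One way to attack Problem 3.49 is sketched below: build the Walsh functions on 2^m sample points from a Sylvester–Hadamard matrix, reorder the rows by sequency (number of sign changes), and approximate each coefficient cₖ = ∫₀¹ x(t) wal_W(k, t) dt by an average over the samples. The Hadamard construction is an assumption about how the functions in Figure P3.6 are generated, not something stated in the text.

```python
import numpy as np

def walsh_functions(num_points):
    """Walsh functions in sequency order, sampled on num_points = 2**m points of [0, 1)."""
    H = np.array([[1.0]])
    while H.shape[0] < num_points:
        H = np.block([[H, H], [H, -H]])                # Sylvester-Hadamard construction
    sequency = (np.diff(H, axis=1) != 0).sum(axis=1)   # count sign changes in each row
    return H[np.argsort(sequency)]

M = 64
t = (np.arange(M) + 0.5) / M          # midpoints of the M subintervals of [0, 1)
W = walsh_functions(M)
x = t.copy()                          # x(t) = t on [0, 1)

# c_k = integral of x(t) wal_W(k, t) dt, approximated by the mean of the sampled product
c = (W @ x) / M
print(c[:7])                          # coefficients for k = 0, ..., 6
```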
Chapter 4
The Fourier Transform
4.1
INTRODUCTION
We saw in Chapter 3 that the Fourier series is a powerful tool in treating various problems involving periodic signals. We first illustrated this fact in Section 3.6, where we demonstrated how an LTI system processes a periodic input to produce the output response. More precisely, at any frequency nω₀, we showed that the amplitude of the output is equal to the product of the amplitude of the periodic input signal, |cₙ|, and the magnitude of the system function |H(ω)| evaluated at ω = nω₀, and the phase of the output is equal to the sum of the phase of the periodic input signal, ∠cₙ, and the system phase ∠H(ω) evaluated at ω = nω₀.

In Chapter 3, we were able to decompose any periodic signal with period T in terms of infinitely many harmonically related complex exponentials of the form exp[jnω₀t]. All such harmonics have the common period T = 2π/ω₀. In this chapter, we consider another powerful mathematical technique, called the Fourier transform, for describing both periodic signals and nonperiodic signals for which no Fourier series exists. Like the Fourier-series coefficients, the Fourier transform specifies the spectral content of a signal, thus providing a frequency-domain description of the signal. Besides being useful in analytically representing aperiodic signals, the Fourier transform is a valuable tool in the analysis of LTI systems.

It is perhaps difficult to see how some typical aperiodic signals, such as

u(t),   exp[−t]u(t),   rect(t/T)

could be made up of complex exponentials. The problem is that complex exponentials exist for all time and have constant amplitudes, whereas typical aperiodic signals do not possess these properties. In spite of this, we will see that such aperiodic signals do
have harmonic content; that is, they can be expressed as the superposition of harmonically related exponentials.

In Section 4.2, we use the Fourier series as a stepping-stone to develop the Fourier transform and show that the latter can be considered an extension of the Fourier series. In Section 4.3, we consider the properties of the Fourier transform that make it useful in LTI system analysis and provide examples of the calculation of some elementary transform pairs. In Section 4.4, we discuss some applications related to the use of Fourier-transform theory in communication systems, signal processing, and control systems. In Section 4.5, we introduce the concepts of bandwidth and duration of a signal and discuss several measures for these quantities. Finally, in the same section, the uncertainty principle is developed and its significance is discussed.
4.2
THE CONTINUOUS-TIME FOURIER
TRANSFORM
In Chapter 3, we presented the Fourier series as a method for analyzing periodic signals. We saw that the representation of a periodic signal in terms of a weighted sum of complex exponentials was useful in obtaining the steady-state response of stable, linear, time-invariant systems to periodic inputs. Fourier-series analysis has somewhat limited application in that it is restricted to inputs which are periodic, while many signals of interest are aperiodic. We can develop a method, known as the Fourier transform, for representing aperiodic signals by decomposing such signals into a set of weighted exponentials, in a manner analogous to the Fourier-series representation of periodic signals. We will use a heuristic development invoking physical arguments where necessary, to circumvent rigorous mathematics. As we see in the next subsection, in the case of aperiodic signals, the sum in the Fourier series becomes an integral, and each exponential has essentially zero amplitude, but the totality of all these infinitesimal exponentials produces the aperiodic signal.
4.2.1 Development of the Fourier Transform

The generalization of the Fourier series to aperiodic signals was suggested by Fourier himself and can be deduced from an examination of the structure of the Fourier series for periodic signals as the period T approaches infinity. In making the transition from the Fourier series to the Fourier transform, where necessary, we use a heuristic development invoking physical arguments to circumvent some very subtle mathematical concepts. After taking the limit, we will find that the magnitude spectrum of an aperiodic signal is not a line spectrum (as with a periodic signal), but instead occupies a continuum of frequencies. The same is true of the corresponding phase spectrum.

To clarify how the change from discrete to continuous spectra takes place, consider the periodic signal x̃(t) shown in Figure 4.2.1. Now think of keeping the waveform of one period of x̃(t) unchanged, but carefully and intentionally increase T. In the limit as T → ∞, only a single pulse remains because the nearest neighbors have been moved to infinity. We saw in Chapter 3 that increasing T has two effects on the spectrum of x̃(t): the amplitude of the spectrum decreases as 1/T, and the spacing between lines decreases as 2π/T. As T approaches infinity, the spacing between lines approaches zero. This means that the spectral lines move closer, eventually becoming a continuum. The overall shapes of the magnitude and phase spectra are determined by the shape of the single pulse that remains in the new signal x(t), which is aperiodic.

Figure 4.2.1  Allowing the period T to increase to obtain the aperiodic signal.

To investigate what happens mathematically, we use the exponential form of the Fourier-series representation for x̃(t); i.e.,

x̃(t) = Σ_{n=−∞}^{∞} cₙ exp[jnω₀t]          (4.2.1)
where

cₙ = (1/T) ∫_{−T/2}^{T/2} x̃(t) exp[−jnω₀t] dt          (4.2.2)

In the limit as T → ∞, we see that ω₀ = 2π/T becomes an infinitesimally small quantity, dω, so that

1/T → dω/2π

and we argue that in the limit, nω₀ should be a continuous variable. Then, from Equation (4.2.2), the Fourier coefficients per unit frequency interval are

cₙT = ∫_{−∞}^{∞} x̃(t) exp[−jωt] dt          (4.2.3)

Substituting Equation (4.2.3) into Equation (4.2.1), and recognizing that in the limit the sum becomes an integral and x̃(t) approaches x(t), we obtain

x(t) = (1/2π) ∫_{−∞}^{∞} [∫_{−∞}^{∞} x(t) exp[−jωt] dt] exp[jωt] dω          (4.2.4)

The inner integral, in brackets, is a function of ω only, not t. Denoting this integral by X(ω), we can write Equation (4.2.4) as

x(t) = (1/2π) ∫_{−∞}^{∞} X(ω) exp[jωt] dω          (4.2.5)

where
X(ω) = ∫_{−∞}^{∞} x(t) exp[−jωt] dt          (4.2.6)
Equations (4.2.5) and (4.2.6) constitute the Fourier-transform pair for aperiodic signals that most electrical engineers use. (Some communications engineers prefer to write the frequency variable in hertz rather than rad/s; this can be done by an obvious change of variables.) X(ω) is called the Fourier transform of x(t) and plays the same role for aperiodic signals that cₙ plays for periodic signals. Thus, X(ω) is the spectrum of x(t) and is a continuous function defined for all values of ω, whereas cₙ is defined only for discrete frequencies. Therefore, as mentioned earlier, an aperiodic signal has a continuous spectrum rather than a line spectrum. X(ω) specifies the weight of the complex exponentials used to represent the waveform in Equation (4.2.5) and, in general, is a complex function of the variable ω. Thus, it can be written as

X(ω) = |X(ω)| exp[jφ(ω)]          (4.2.7)

The magnitude of X(ω) plotted against ω is called the magnitude spectrum of x(t), and |X(ω)|² is called the energy spectrum. The angle of X(ω) plotted versus ω is called the phase spectrum.

In Chapter 3, we saw that for any periodic signal x(t), there is a one-to-one correspondence between x(t) and the set of Fourier coefficients cₙ. Here, too, it can be shown that there is a one-to-one correspondence between x(t) and X(ω), denoted by

x(t) ↔ X(ω)

which is meant to imply that for every x(t) having a Fourier transform, there is a unique X(ω) and vice versa. Some sufficient conditions for the signals to have a Fourier transform are discussed later. We emphasize that while we have used a real-valued signal x(t) as an artifice in the development of the transform pair, the Fourier-transform relations hold for complex signals as well. With few exceptions, however, we will be concerned primarily with real-valued signals of time.

As a notational convenience, X(ω) is often denoted by ℱ{x(t)} and is read "the Fourier transform of x(t)." In addition, we adhere to the convention that the Fourier transform is represented by a capital letter that is the same as the lowercase letter denoting the time signal. For example,

ℱ{h(t)} = H(ω) = ∫_{−∞}^{∞} h(t) exp[−jωt] dt

Before we examine further the general properties of the Fourier transform and its physical meaning, let us introduce a set of sufficient conditions for the existence of the Fourier transform.
4.2.2 Existence of the Fourier Transform

The signal x(t) is said to have a Fourier transform in the ordinary sense if the integral in Equation (4.2.6) converges (i.e., exists). Since |x(t) exp[−jωt]| = |x(t)| and |exp[−jωt]| = 1, it follows that the integral in Equation (4.2.6) exists if

1. x(t) is absolutely integrable and
2. x(t) is "well behaved."

The first condition means that

∫_{−∞}^{∞} |x(t)| dt < ∞          (4.2.8)

A class of signals that satisfy Equation (4.2.8) is energy signals. Such signals, in general, are either time limited or asymptotically time limited in the sense that x(t) → 0 as t → ±∞. The Fourier transform of power signals (a class of signals defined in Chapter 1 to have infinite energy content, but finite average power) can also be shown to exist, but to contain impulses. Therefore, any signal that is either a power or an energy signal has a Fourier transform.

"Well behaved" means that the signal is not too "wiggly" or, more correctly, that it is of bounded variation. This, simply stated, means that x(t) can be represented by a curve of finite length in any finite interval of time, or alternatively, that the signal has a finite number of discontinuities, minima, and maxima within any finite interval of time. At a point of discontinuity, t₀, the inversion integral in Equation (4.2.5) converges to ½[x(t₀⁺) + x(t₀⁻)]; otherwise it converges to x(t). Except for impulses, most signals of interest are well behaved and satisfy Equation (4.2.8).

The conditions just given for the existence of the Fourier transform of x(t) are sufficient conditions. This means that there are signals that violate either one or both conditions and yet possess a Fourier transform. Examples are power signals (unit-step signal, periodic signals, etc.) that are not absolutely integrable over an infinite interval and impulse trains that are not "well behaved" and are neither power nor energy signals, but still have Fourier transforms. We can include signals that do not have Fourier transforms in the ordinary sense by generalization to transforms in the limit. For example, to obtain the Fourier transform of a constant, we consider x(t) = rect(t/τ) and let τ → ∞ after obtaining the Fourier transform.
4.2.3 Examples of the Continuous-Time Fourier Transform

In this section, we compute the transform of some commonly encountered time signals.
Example 4.2.1

The Fourier transform of the rectangular pulse x(t) = rect(t/τ) is

X(ω) = ∫_{−∞}^{∞} x(t) exp[−jωt] dt = ∫_{−τ/2}^{τ/2} exp[−jωt] dt
     = (1/jω)(exp[jωτ/2] − exp[−jωτ/2])

This can be simplified to

X(ω) = (2/ω) sin(ωτ/2) = τ sinc(ωτ/2) = τ Sa(ωτ/2)

Since X(ω) is a real-valued function of ω, its phase is zero for all ω. X(ω) is plotted in Figure 4.2.2 as a function of ω.

Figure 4.2.2  Fourier transform of a rectangular pulse: X(ω) = τ sinc(ωτ/2).

Clearly, the spectrum of the rectangular pulse extends over the range −∞ < ω < ∞. However, from Figure 4.2.2, we see that most of the spectral content of the pulse is contained in the interval −2π/τ < ω < 2π/τ. This interval is labeled the main lobe of the sinc signal. The other portion of the spectrum represents what are called the side lobes of the spectrum. Increasing τ results in a narrower main lobe, whereas a smaller τ produces a Fourier transform with a wider main lobe.
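The transform pair just derived can be checked by brute-force numerical integration of Equation (4.2.6). The snippet below is only an illustration of that check, with an arbitrary pulse width; note that NumPy's sinc uses the normalized convention sin(πx)/(πx).

```python
import numpy as np

tau = 2.0                                   # assumed pulse width
t = np.linspace(-tau, tau, 20001)
dt = t[1] - t[0]
x = np.where(np.abs(t) <= tau / 2, 1.0, 0.0)   # rect(t/tau)

def X_numeric(w):
    """Numerical evaluation of X(w) = integral of x(t) exp(-j w t) dt."""
    return np.sum(x * np.exp(-1j * w * t)) * dt

for w in (0.0, 1.0, np.pi, 2 * np.pi / tau):
    analytic = tau * np.sinc(w * tau / (2 * np.pi))   # tau * sin(w*tau/2)/(w*tau/2)
    print(w, X_numeric(w).real, analytic)
```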
Example 4.2.2

Consider the triangular pulse defined as

Λ(t/τ) = 1 − |t|/τ,   |t| ≤ τ
       = 0,           |t| > τ

This pulse is of unit height, centered about t = 0, and of width 2τ. Its Fourier transform is

X(ω) = ∫_{−∞}^{∞} Λ(t/τ) exp[−jωt] dt
     = ∫_{−τ}^{0} (1 + t/τ) exp[−jωt] dt + ∫_{0}^{τ} (1 − t/τ) exp[−jωt] dt
     = 2 ∫_{0}^{τ} (1 − t/τ) cos ωt dt

After performing the integration and simplifying the expression, we obtain

Λ(t/τ) ↔ τ sinc²(ωτ/2) = τ Sa²(ωτ/2)
Example 4.2.3

The Fourier transform of the one-sided exponential signal

x(t) = exp[−at]u(t),   a > 0

is obtained from Equation (4.2.6) as

X(ω) = ∫_{−∞}^{∞} exp[−at]u(t) exp[−jωt] dt
     = ∫_{0}^{∞} exp[−(a + jω)t] dt
     = 1/(a + jω)          (4.2.9)
Example 4.2.4

In this example, we evaluate the Fourier transform of the two-sided exponential signal

x(t) = exp[−a|t|],   a > 0

From Equation (4.2.6), the transform is

X(ω) = ∫_{−∞}^{0} exp[at] exp[−jωt] dt + ∫_{0}^{∞} exp[−at] exp[−jωt] dt
     = 1/(a − jω) + 1/(a + jω)
     = 2a/(a² + ω²)
Example 4.2.5

The Fourier transform of the impulse function is readily obtained from Equation (4.2.6) by making use of Equation (1.6.7):

ℱ{δ(t)} = ∫_{−∞}^{∞} δ(t) exp[−jωt] dt = 1

We thus have the pair

δ(t) ↔ 1          (4.2.10)

Using the inversion formula, we must clearly have

δ(t) = (1/2π) ∫_{−∞}^{∞} exp[jωt] dω          (4.2.11)

Equation (4.2.11) states that the impulse signal theoretically consists of equal-amplitude sinusoids of all frequencies. This integral is obviously meaningless, unless we interpret δ(t) as a function specified by its properties rather than an ordinary function having definite values for every t, as we demonstrated in Chapter 1. Equation (4.2.11) can also be written in the limit form

δ(t) = lim_{a→∞} (sin at)/(πt)          (4.2.12)

This result can be established by writing Equation (4.2.11) as

δ(t) = lim_{a→∞} (1/2π) ∫_{−a}^{a} exp[jωt] dω
     = lim_{a→∞} (1/2π)(2 sin at)/t
     = lim_{a→∞} (sin at)/(πt)
Example 4.2.6

We can easily show that (1/2π) ∫_{−∞}^{∞} exp[jωt] dω "behaves" like the unit-impulse function by putting it inside an integral; i.e., we evaluate an integral of the form

I = ∫_{−∞}^{∞} [(1/2π) ∫_{−∞}^{∞} exp[jωt] dω] g(t) dt

where g(t) is any arbitrary well-behaved signal that is continuous at t = 0 and possesses a Fourier transform G(ω). Interchanging the order of integration, we have

I = (1/2π) ∫_{−∞}^{∞} [∫_{−∞}^{∞} g(t) exp[jωt] dt] dω = (1/2π) ∫_{−∞}^{∞} G(−ω) dω

From the inversion formula, it follows that

I = (1/2π) ∫_{−∞}^{∞} G(ω) dω = g(0)

That is, (1/2π) ∫_{−∞}^{∞} exp[jωt] dω "behaves" like an impulse at t = 0.
Another transform pair follows from interchanging the roles of t and ω in Equation (4.2.11). The result is

δ(ω) = (1/2π) ∫_{−∞}^{∞} exp[jωt] dt

or

1 ↔ 2πδ(ω)          (4.2.13)

In words, the Fourier transform of a constant is an impulse in the frequency domain. The factor 2π arises because we are using radian frequency. If we were to write the transform in terms of frequency in hertz, the factor 2π would disappear (δ(ω) = δ(f)/2π).
Example 4.2.7

In this example, we use Equation (4.2.12) and Example 4.2.1 to prove Equation (4.2.13). By letting τ go to ∞ in Example 4.2.1, we find that the signal x(t) approaches 1 for all values of t. On the other hand, from Equation (4.2.12), the limit of the transform of rect(t/τ) becomes

lim_{τ→∞} (2 sin(ωτ/2))/ω = 2πδ(ω)
Example 4.2.8

Consider the exponential signal x(t) = exp[jω₀t]. The Fourier transform of this signal is

X(ω) = ∫_{−∞}^{∞} exp[jω₀t] exp[−jωt] dt = ∫_{−∞}^{∞} exp[−j(ω − ω₀)t] dt

Using the result leading to Equation (4.2.13), we obtain

exp[jω₀t] ↔ 2πδ(ω − ω₀)          (4.2.14)

This is expected, since exp[jω₀t] has energy concentrated at ω₀.
Periodic signals are power signals, and we anticipate, according to the discussion in Section 4.2.2, that their Fourier transforms contain impulses (delta functions). In Chapter 3, we examined the spectrum of periodic signals by computing the Fourier-series coefficients. We found that the spectrum consists of a set of lines located at nω₀, where ω₀ is the fundamental frequency of the periodic signal. In the following example, we find the Fourier transform of periodic signals and show that the spectra of periodic signals consist of trains of impulses.
Example 4.2.9

Consider the periodic signal x(t) with period T; thus, ω₀ = 2π/T. Assume that x(t) has the Fourier-series representation

x(t) = Σ_{n=−∞}^{∞} cₙ exp[jnω₀t]

Hence, taking the Fourier transform of both sides yields

X(ω) = Σ_{n=−∞}^{∞} cₙ ℱ{exp[jnω₀t]}

Using Equation (4.2.14), we obtain

X(ω) = Σ_{n=−∞}^{∞} 2πcₙ δ(ω − nω₀)          (4.2.15)

Thus, the Fourier transform of a periodic signal is simply an impulse train with impulses located at ω = nω₀, each of which has a strength 2πcₙ, and all impulses are separated from each other by ω₀. Note that because the signal x(t) is periodic, the magnitude spectrum |X(ω)| is a train of impulses of strength 2π|cₙ|, whereas the spectrum obtained through the use of the Fourier series is a line spectrum with lines of finite amplitude |cₙ|. Note that the Fourier transform is not a periodic function: even though the impulses are separated by the same amount, their weights are all different.
Example 4.2.10

Consider the periodic signal

x(t) = Σ_{n=−∞}^{∞} δ(t − nT)

which has period T. To find the Fourier transform, we first have to compute the Fourier-series coefficients. From Equation (3.3.4), the Fourier-series coefficients are

cₙ = (1/T) ∫_{−T/2}^{T/2} δ(t) exp[−jnω₀t] dt = 1/T

since x(t) = δ(t) in any interval of length T. Thus, the impulse train has the Fourier-series representation

x(t) = Σ_{n=−∞}^{∞} (1/T) exp[jnω₀t]

By using Equation (4.2.14), we find that the Fourier transform of the impulse train is

X(ω) = (2π/T) Σ_{n=−∞}^{∞} δ(ω − 2πn/T)          (4.2.16)

That is, the Fourier transformation of a sequence of impulses in the time domain yields a sequence of impulses in the frequency domain.

A brief listing of some other Fourier pairs is given in Table 4.1.
PROPERTIES OF I'HE FOURIER TRANSFORM
A
number of useful properties of the Fourier transform allorv some Problems to be
solved almost by inspection. In this section, we shall summarizc many of these properties, some of which may be more or less obvious to the reader.
4.3.1 Linearity
x,(t)
e+ X,(or)
xr(t)
<-+
Xr(a)
then
&rf
(r) + b.rr(t)
<+
aXr(a) + bX2(a)
(4.3.r)
The Fourier Translorm Chapter 4
172
TABLE 4.1  Some Selected Fourier Transform Pairs

      x(t)                                              X(ω)
 1.   1                                                 2πδ(ω)
 2.   u(t)                                              πδ(ω) + 1/(jω)
 3.   δ(t)                                              1
 4.   δ(t − t₀)                                         exp[−jωt₀]
 5.   rect(t/τ)                                         τ sinc(ωτ/2)
 6.   (W/π) sinc(Wt)                                    rect(ω/2W)
 7.   sgn t                                             2/(jω)
 8.   exp[jω₀t]                                         2πδ(ω − ω₀)
 9.   Σₙ aₙ exp[jnω₀t]                                  2π Σₙ aₙ δ(ω − nω₀)
10.   1/(πt)                                            −j sgn ω
11.   cos ω₀t                                           π[δ(ω − ω₀) + δ(ω + ω₀)]
12.   sin ω₀t                                           −jπ[δ(ω − ω₀) − δ(ω + ω₀)]
13.   (sin ω₀t)u(t)                                     (π/2j)[δ(ω − ω₀) − δ(ω + ω₀)] + ω₀/(ω₀² − ω²)
14.   (cos ω₀t)u(t)                                     (π/2)[δ(ω − ω₀) + δ(ω + ω₀)] + jω/(ω₀² − ω²)
15.   cos ω₀t rect(t/τ)                                 (τ/2){sinc[(ω − ω₀)τ/2] + sinc[(ω + ω₀)τ/2]}
16.   exp[−at]u(t),  Re[a] > 0                          1/(a + jω)
17.   [tⁿ⁻¹/(n − 1)!] exp[−at]u(t),  Re[a] > 0          1/(a + jω)ⁿ
18.   exp[−a|t|],  a > 0                                2a/(a² + ω²)
19.   t exp[−a|t|],  a > 0                              −j4aω/(a² + ω²)²
20.   1/(a² + t²),  Re[a] > 0                           (π/a) exp[−a|ω|]
21.   t/(a² + t²),  Re[a] > 0                           −jπ sgn(ω) exp[−a|ω|]
22.   exp[−at²],  a > 0                                 √(π/a) exp[−ω²/(4a)]
23.   Λ(t/τ)                                            τ sinc²(ωτ/2)
24.   Σₙ δ(t − nT)                                      (2π/T) Σₙ δ(ω − 2πn/T)
where a and b are arbitrary constants. This property is the direct result of the linearity of the operation of integration. The linearity property can be easily extended to a linear combination of an arbitrary number of components and simply means that the Fourier transform of a linear combination of an arbitrary number of signals is the same linear combination of the transforms of the individual components.
Example 4.3.1

Suppose we want to find the Fourier transform of cos ω₀t. The cosine signal can be written as a sum of two exponentials as follows:

cos ω₀t = ½(exp[jω₀t] + exp[−jω₀t])

From Equation (4.2.14) and the linearity property of the Fourier transform,

ℱ{cos ω₀t} = π[δ(ω − ω₀) + δ(ω + ω₀)]

Similarly, the Fourier transform of sin ω₀t is

ℱ{sin ω₀t} = −jπ[δ(ω − ω₀) − δ(ω + ω₀)]

4.3.2 Symmetry
If x(t) is a real-valued time signal, then

X(−ω) = X*(ω)          (4.3.2)

where * denotes the complex conjugate. This property, referred to as conjugate symmetry, follows from taking the conjugate of both sides of Equation (4.2.6) and using the fact that x(t) is real.

Now, if we express X(ω) in the polar form, we have

X(ω) = |X(ω)| exp[jφ(ω)]          (4.3.3)

Taking the complex conjugate of both sides of Equation (4.3.3) yields

X*(ω) = |X(ω)| exp[−jφ(ω)]

Replacing each ω by −ω in Equation (4.3.3) results in

X(−ω) = |X(−ω)| exp[jφ(−ω)]

By Equation (4.3.2), the left-hand sides of the last two equations are equal. It then follows that

|X(ω)| = |X(−ω)|          (4.3.4)
φ(ω) = −φ(−ω)          (4.3.5)

i.e., the magnitude spectrum is an even function of frequency, and the phase spectrum is an odd function of frequency.
Example 4.3.2

From Equations (4.3.4) and (4.3.5), the inversion formula, Equation (4.2.5), which is written in terms of complex exponentials, can be changed to an expression involving real sinusoidal signals. Specifically, for real x(t),

x(t) = (1/2π) ∫_{−∞}^{∞} X(ω) exp[jωt] dω
     = (1/2π) ∫_{−∞}^{0} X(ω) exp[jωt] dω + (1/2π) ∫_{0}^{∞} X(ω) exp[jωt] dω
     = (1/2π) ∫_{0}^{∞} |X(ω)| (exp[j(ωt + φ(ω))] + exp[−j(ωt + φ(ω))]) dω
     = (1/2π) ∫_{0}^{∞} 2|X(ω)| cos[ωt + φ(ω)] dω

Equations (4.3.4) and (4.3.5) ensure that the exponentials of the form exp[jωt] combine properly with those of the form exp[−jωt] to produce real sinusoids of frequency ω for use in the expansion of real-valued time signals. Thus, a real-valued signal x(t) can be written in terms of the amplitudes and phases of the real sinusoids that constitute the signal.
Example 4.3.3

Consider an even and real-valued signal x(t). Its transform X(ω) is

X(ω) = ∫_{−∞}^{∞} x(t) exp[−jωt] dt = ∫_{−∞}^{∞} x(t)(cos ωt − j sin ωt) dt

Since x(t) cos ωt is an even function of t and x(t) sin ωt is an odd function of t, we have

X(ω) = 2 ∫_{0}^{∞} x(t) cos ωt dt

which is a real and even function of ω. Therefore, the Fourier transform of an even and real-valued signal in the time domain is an even and real-valued signal in the frequency domain.
4.3.3 Time Shifting

If

x(t) ↔ X(ω)

then

x(t − t₀) ↔ X(ω) exp[−jωt₀]          (4.3.6a)

Similarly,

x(t) exp[jω₀t] ↔ X(ω − ω₀)          (4.3.6b)

The proofs of these properties follow from Equation (4.2.6) after suitable substitution of variables. Using the polar form, Equation (4.3.3), in Equation (4.3.6a) yields

ℱ{x(t − t₀)} = |X(ω)| exp[j(φ(ω) − ωt₀)]

The last equation indicates that shifting in time does not alter the amplitude spectrum of the signal. The only effect of such shifting is to introduce a phase shift in the transform that is a linear function of ω. The result is reasonable because we have already seen that, to delay or advance a sinusoid, we have only to adjust the phase. In addition, the energy content of a waveform does not depend on its position in time.
4.3.4 Time Scaling

If

x(t) ↔ X(ω)

then

x(at) ↔ (1/|a|) X(ω/a)          (4.3.7)

where a is a real constant. The proof of this follows directly from the definition of the Fourier transform and the appropriate substitution of variables.

Aside from the amplitude factor of 1/|a|, linear scaling in time by a factor of a corresponds to linear scaling in frequency by a factor of 1/a. The result can be interpreted physically by considering a typical signal x(t) and its Fourier transform X(ω), as shown in Figure 4.3.1. If |a| < 1, x(at) is expanded in time, and the signal varies more slowly (becomes smoother) than the original. These slower variations deemphasize the high-frequency components and manifest themselves in more appreciable low-frequency sinusoidal components. That is, expansion in the time domain implies compression in the frequency domain and vice versa. If |a| > 1, x(at) is compressed in time and must vary rapidly. Faster variations in time are manifested by the presence of higher frequency components.

Figure 4.3.1  Examples of the time-scaling property: (a) the original signal and its magnitude spectrum, (b) the time-expanded signal and its magnitude spectrum, and (c) the time-compressed signal and the resulting magnitude spectrum.

The notion of time expansion and frequency compression has found application in areas such as data transmission from space probes to receiving stations on Earth. To reduce the amount of noise superimposed on the required signal, it is necessary to keep the bandwidth of the receiver as small as possible. One means of accomplishing this is to reduce the bandwidth of the signal, store the data collected by the probe, and then play the data back at a slower rate. Because the time-scaling factor is known, the signal can be reproduced at the receiver.
Example 4.3.4

Suppose we want to determine the Fourier transform of the pulse x(t) = a rect(at/τ), a > 0. The Fourier transform of rect(t/τ) is, by Example 4.2.1,

ℱ{rect(t/τ)} = τ sinc(ωτ/2)

By Equation (4.3.7), the Fourier transform of a rect(at/τ) is

ℱ{a rect(at/τ)} = τ sinc(ωτ/2a)

Note that as we increase the value of the parameter a, the rectangular pulse becomes narrower and higher and approaches an impulse as a → ∞. Correspondingly, the main lobe of the Fourier transform becomes wider, and in the limit X(ω) approaches a constant value for all ω. On the other hand, as a approaches zero, the rectangular signal approaches 1 for all t, and the transform approaches a delta signal. (See Example 4.2.7.)

The inverse relationship between time and frequency is encountered in a wide variety of science and engineering applications. In Section 4.5, we will cover one application of this relationship, namely, the uncertainty principle.
4.3.5 Differentiation

If

x(t) ↔ X(ω)

then

dx(t)/dt ↔ jωX(ω)          (4.3.8)

The proof of this property is obtained by direct differentiation of both sides of Equation (4.2.5) with respect to t. The differentiation property can be extended to yield

dⁿx(t)/dtⁿ ↔ (jω)ⁿX(ω)          (4.3.9)

We must be careful when using the differentiation property. First of all, the property does not ensure the existence of ℱ{dx(t)/dt}. However, if the transform exists, it is given by jωX(ω). Second, one cannot always infer that X(ω) = ℱ{dx(t)/dt}/jω.

Since differentiation in the time domain corresponds to multiplication by jω in the frequency domain, one might conclude that integration in the time domain should involve division by jω in the frequency domain. However, this is true only for a certain class of signals. To demonstrate it, consider the signal y(t) = ∫_{−∞}^{t} x(τ) dτ. With Y(ω)
178
as
its transform, we conclude from dy(t)/dt = r(l) and Equation (4.3.8) that
iroY(o) =
X(<o). For Y(to) to exist,
y(t) should satisfy the conditions listed in Section
4.Z.2.Thisis equivalent toy(co) = 0, i.e.,
I_- x@)dT = X(0) = 0. In this case,
J
(4.3.r0)
f__x@ar--1x1,;
This equation implies that integration in the time domain attenuates (deemphasizes)
the magnitude of the high-frequency components of the signal. Hence, an integrated
signal is smoother than the original signal. This is why integration is sometimes called
a smoothing operation.
If X(0) + 0, then signal x(t) has a dc component, so that according to Equation
(4.2.13), the transform will contain an impulse. As we will show later (see Example
4.3.10), in this case
f-.rt
1o, er rrX(0)6(to) +
tL
x(,)
(4.3.1 l )
Example 4.3.5

Consider the unit-step function. As we saw in Section 1.6, this function can be written as

u(t) = ½ + ½ sgn t

The first term has πδ(ω) as its transform. Although sgn t does not have a derivative in the regular sense, in Section 1.6 we defined the derivatives of discontinuous signals in terms of the delta function. As a consequence,

d/dt [½ sgn t] = δ(t)

Since sgn t has a zero dc component (it is an odd signal), applying Equation (4.3.10) yields

jω ℱ{½ sgn t} = 1

or

ℱ{½ sgn t} = 1/(jω)          (4.3.12)

By the linearity of the Fourier transform, we obtain

u(t) ↔ πδ(ω) + 1/(jω)          (4.3.13)

Therefore, the Fourier transform of the unit-step function contains an impulse at ω = 0 corresponding to the average value of ½. It also has all the high-frequency components of the signum function, reduced by one-half.
4.3.6 Energy of Aperiodic Signals

In Section 3.5.6, we related the total average power of a periodic signal to the average power of each frequency component in the Fourier series of the signal. We did this through Parseval's theorem. Now we would like to find the analogous relationship for aperiodic signals, which are energy signals. Thus, in this section, we show that the energy of aperiodic signals can be computed from their transform X(ω). The energy is defined as

E = ∫_{−∞}^{∞} |x(t)|² dt = ∫_{−∞}^{∞} x(t) x*(t) dt

Using Equation (4.2.5) in this equation results in

E = ∫_{−∞}^{∞} x(t) [(1/2π) ∫_{−∞}^{∞} X*(ω) exp[−jωt] dω] dt

Interchanging the order of integration gives

E = (1/2π) ∫_{−∞}^{∞} X*(ω) [∫_{−∞}^{∞} x(t) exp[−jωt] dt] dω = (1/2π) ∫_{−∞}^{∞} |X(ω)|² dω

We can therefore write

∫_{−∞}^{∞} |x(t)|² dt = (1/2π) ∫_{−∞}^{∞} |X(ω)|² dω          (4.3.14)

This relation is Parseval's relation for aperiodic signals. It says that the energy of an aperiodic signal can be computed in the frequency domain by computing the energy per unit frequency, ℰ(ω) = |X(ω)|²/2π, and integrating over all frequencies. For this reason, ℰ(ω) is often referred to as the energy-density spectrum, or, simply, the energy spectrum of the signal, since it measures the frequency distribution of the total energy of x(t). We note that the energy spectrum of a signal depends on the magnitude of the spectrum and not on the phase. This fact implies that there are many signals that may have the same energy spectrum. However, for a given signal, there is only one energy spectrum. The energy in an infinitesimal band of frequencies dω is, then, ℰ(ω) dω, and the energy contained within a band ω₁ ≤ ω ≤ ω₂ is

ΔE = ∫_{ω₁}^{ω₂} (|X(ω)|²/2π) dω          (4.3.15)

That is, |X(ω)|² not only allows us to calculate the total energy of x(t) using Parseval's relation, but also permits us to calculate the energy in any given frequency band. For real-valued signals, |X(ω)|² is an even function, and Equation (4.3.14) can be reduced to

E = (1/π) ∫_{0}^{∞} |X(ω)|² dω          (4.3.16)
Periodic signals, as defined in Chapter 1, have infinite energy, but finite average power. A function that describes the distribution of the average power of the signal as a function of frequency is called the power-density spectrum, or, simply, the power spectrum. In the following, we develop an expression for the power spectral density of power signals, and in Section 4.3.9 we give an example to demonstrate how to compute the power spectral density of a periodic signal. Let x(t) be a power signal, and define x_τ(t) as

x_τ(t) = x(t),  |t| < τ
       = 0,     otherwise
       = x(t) rect(t/2τ)

We also assume that x_τ(t) ↔ X_τ(ω). The average power in the signal is

P = lim_{τ→∞} (1/2τ) ∫_{−τ}^{τ} |x(t)|² dt = lim_{τ→∞} (1/2τ) ∫_{−∞}^{∞} |x_τ(t)|² dt          (4.3.17)

where the last equality follows from the definition of x_τ(t). Using Parseval's relation, we can write Equation (4.3.17) as

P = lim_{τ→∞} (1/2τ) [(1/2π) ∫_{−∞}^{∞} |X_τ(ω)|² dω] = (1/2π) ∫_{−∞}^{∞} lim_{τ→∞} [|X_τ(ω)|²/2τ] dω
  = (1/2π) ∫_{−∞}^{∞} S(ω) dω          (4.3.18)

where

S(ω) = lim_{τ→∞} [|X_τ(ω)|²/2τ]          (4.3.19)

S(ω) is referred to as the power-density spectrum, or, simply, power spectrum, of the signal x(t) and represents the distribution, or density, of the power of the signal with frequency ω. As in the case of the energy spectrum, the power spectrum of a signal depends only on the magnitude of the spectrum and not on the phase.
Example 4.3.6

Consider the one-sided exponential signal

x(t) = exp[−t]u(t)

From Equation (4.2.9),

|X(ω)|² = 1/(1 + ω²)

The total energy in this signal is equal to ½ and can be obtained by using either Equation (1.4.2) or Equation (4.3.14). The energy in the frequency band −4 < ω < 4 is

ΔE = (2/2π) ∫_{0}^{4} 1/(1 + ω²) dω = (1/π) tan⁻¹ ω |₀⁴ = 0.422

Thus, approximately 84% of the total energy content of the signal lies in the frequency band −4 < ω < 4. Note that the previous result could not be obtained with a knowledge of x(t) alone.
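Both sides of this calculation can be reproduced numerically; a minimal sketch, with arbitrary grid sizes and truncation limits, is shown below.

```python
import numpy as np

# Time-domain energy of x(t) = exp(-t)u(t): integral of exp(-2t) from 0 to infinity = 1/2
t = np.linspace(0.0, 40.0, 400001)
E_time = np.trapz(np.exp(-2.0 * t), t)

# Frequency-domain energy in |omega| < 4 using |X(w)|^2 = 1/(1 + w^2)
w = np.linspace(-4.0, 4.0, 80001)
E_band = np.trapz(1.0 / (1.0 + w**2), w) / (2.0 * np.pi)

print(E_time, E_band, E_band / E_time)   # ~0.5, ~0.422, ~0.844
```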
4.3.7 Convolution

Convolution plays an important role in the study of LTI systems and their applications. The property is expressed as follows: If

x(t) ↔ X(ω)

and

h(t) ↔ H(ω)

then

x(t) * h(t) ↔ X(ω)H(ω)          (4.3.20)

The proof of this statement follows from the definition of the convolution integral, namely,

ℱ{x(t) * h(t)} = ∫_{−∞}^{∞} [∫_{−∞}^{∞} x(τ)h(t − τ) dτ] exp[−jωt] dt

Interchanging the order of integration and noting that x(τ) does not depend on t, we have

ℱ{x(t) * h(t)} = ∫_{−∞}^{∞} x(τ) [∫_{−∞}^{∞} h(t − τ) exp[−jωt] dt] dτ

By the shifting property, Equation (4.3.6a), the bracketed term is simply H(ω) exp[−jωτ]. Thus,

ℱ{x(t) * h(t)} = ∫_{−∞}^{∞} x(τ) exp[−jωτ] H(ω) dτ = H(ω) ∫_{−∞}^{∞} x(τ) exp[−jωτ] dτ = H(ω)X(ω)

Hence, convolution in the time domain is equivalent to multiplication in the frequency domain, which, in many cases, is convenient and can be done by inspection. The use of the convolution property for LTI systems is demonstrated in Figure 4.3.2. The amplitude and phase spectrum of the output y(t) are related to those of the input x(t) and the impulse response h(t) in the following manner:
|Y(ω)| = |X(ω)| |H(ω)|
∠Y(ω) = ∠X(ω) + ∠H(ω)

Figure 4.3.2  Convolution property of LTI system response: y(t) = x(t) * h(t), Y(ω) = X(ω)H(ω).

Thus, the amplitude spectrum of the input is modified by |H(ω)| to produce the amplitude spectrum of the output, and the phase spectrum of the input is changed by ∠H(ω) to produce the phase spectrum of the output.

The quantity H(ω), the Fourier transform of the system impulse response, is generally referred to as the frequency response of the system.
As we have seen in Section 4.2.2, for H(ω) to exist, h(t) has to satisfy two conditions. The first condition requires that the impulse response be absolutely integrable. This, in turn, implies that the LTI system is stable. Thus, assuming that h(t) is "well behaved," as are essentially all signals of practical significance, we conclude that the frequency response of a stable LTI system exists. If, however, an LTI system is unstable, that is, if

∫_{−∞}^{∞} |h(t)| dt = ∞

then the response of the system to complex exponential inputs may be infinite, and the Fourier transform may not exist. Therefore, Fourier analysis is used to study LTI systems with impulse responses that possess Fourier transforms. Other, more general transform techniques are used to examine those unstable systems that do not have finite-value frequency responses. In Chapter 5, we discuss the Laplace transform, which is a generalization of the continuous-time Fourier transform.
Example 4.3.7

In this example, we demonstrate how to use the convolution property of the Fourier transform. Consider an LTI system with impulse response

h(t) = exp[−at]u(t)

whose input is the unit step function u(t). The Fourier transform of the output is

Y(ω) = ℱ{u(t)} ℱ{exp[−at]u(t)}
     = [πδ(ω) + 1/(jω)] (1/(a + jω))
     = (π/a)δ(ω) + 1/(jω(a + jω))
     = (1/a)[πδ(ω) + 1/(jω)] − (1/a)(1/(a + jω))

Taking the inverse Fourier transform of both sides results in

y(t) = (1/a)u(t) − (1/a) exp[−at]u(t)
     = (1/a)(1 − exp[−at])u(t)
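The step response just obtained can also be used to illustrate the convolution property numerically: convolve u(t) with h(t) on a time grid and compare with (1/a)(1 − e^(−at))u(t). The value of a below is an arbitrary choice.

```python
import numpy as np

a = 2.0
dt = 1e-3
t = np.arange(0.0, 10.0, dt)

h = np.exp(-a * t)                   # h(t) = exp(-a t) u(t), t >= 0
x = np.ones_like(t)                  # u(t), t >= 0

# y(t) = x(t) * h(t); np.convolve gives the discrete sum, dt converts it to an integral
y = np.convolve(x, h)[: len(t)] * dt
y_analytic = (1.0 - np.exp(-a * t)) / a

print(np.max(np.abs(y - y_analytic)))   # small discretization error
```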
Example 4.3.8

The Fourier transform of the triangular signal Λ(t/τ) can be obtained by observing that the signal is the convolution of the rectangular pulse (1/√τ) rect(t/τ) with itself; that is,

Λ(t/τ) = (1/√τ) rect(t/τ) * (1/√τ) rect(t/τ)

From Example 4.2.1 and Equation (4.3.20), it follows that

ℱ{Λ(t/τ)} = [(1/√τ) ℱ{rect(t/τ)}]² = τ sinc²(ωτ/2)
Example 4.3.9

An LTI system has an impulse response

h(t) = exp[−at]u(t)

and output

y(t) = [exp[−bt] − exp[−ct]]u(t)

Using the convolution property, we find that the transform of the input is

X(ω) = Y(ω)/H(ω) = (c − b)(jω + a) / [(jω + b)(jω + c)]
     = D/(jω + b) + E/(jω + c)

where

D = a − b   and   E = c − a

Therefore,

x(t) = [(a − b) exp[−bt] + (c − a) exp[−ct]]u(t)
Example 4.3.10

In this example, we use the relation

∫_{−∞}^{t} x(τ) dτ = x(t) * u(t)

and the transform of u(t) to prove the integration property, Equation (4.3.11). From Equation (4.3.13) and the convolution property, we have

ℱ{∫_{−∞}^{t} x(τ) dτ} = ℱ{x(t) * u(t)} = X(ω)[πδ(ω) + 1/(jω)]
                      = πX(0)δ(ω) + X(ω)/(jω)

The last equality follows from the sampling property of the delta function.
Another important relation follows as a consequence of using the convolution property to represent the spectrum of the output of an LTI system; that is,

Y(ω) = X(ω)H(ω)

We then have

|Y(ω)|² = |X(ω)H(ω)|² = |X(ω)|²|H(ω)|²          (4.3.21)

This equation shows that the energy-spectrum density of the response of an LTI system is the product of the energy-spectrum density of the input signal and the square of the magnitude of the system function. The phase characteristic of the system does not affect the energy-spectrum density of the output, in spite of the fact that, in general, H(ω) is a complex quantity.
4.3.8 Duality

We sometimes have to find the Fourier transform of a time signal that has a form similar to an entry in the transform column in the table of Fourier transforms. We can find the desired transform by using the table backwards. To accomplish that, we write the inversion formula in the form

∫_{−∞}^{∞} X(ω) exp[−jωt] dω = 2π x(−t)

Notice that there is a symmetry between this equation and Equation (4.2.6): the two equations are identical except for a sign change in the exponential, a factor of 2π, and an interchange of the variables involved. This type of symmetry leads to the duality property of the Fourier transform. This property states that if x(t) has a transform X(ω), then

X(t) ↔ 2π x(−ω)          (4.3.22)

We prove Equation (4.3.22) by replacing t with −t in Equation (4.2.5) to obtain

2π x(−t) = ∫_{−∞}^{∞} X(ω) exp[−jωt] dω = ∫_{−∞}^{∞} X(τ) exp[−jτt] dτ

since ω is just a dummy variable for integration. Now replacing t by ω and τ by t gives Equation (4.3.22).
Example 4.3.11

Consider the signal

x(t) = Sa(ω_B t/2) = sinc(ω_B t/2)

From Equation (4.2.6),

ℱ{sinc(ω_B t/2)} = ∫_{−∞}^{∞} sinc(ω_B t/2) exp[−jωt] dt

This is a very difficult integral to evaluate directly. However, we found in Example 4.2.1 that

rect(t/τ) ↔ τ sinc(ωτ/2)

Then, according to Equation (4.3.22),

ℱ{sinc(ω_B t/2)} = (2π/ω_B) rect(−ω/ω_B) = (2π/ω_B) rect(ω/ω_B)

because the rectangular pulse is an even signal. Note that the transform X(ω) is zero outside the range −ω_B/2 ≤ ω ≤ ω_B/2, but that the signal x(t) is not time limited. Signals with Fourier transforms that vanish outside a given frequency band are called band-limited signals (signals with no spectral content above a certain maximum frequency, in this case, ω_B/2). It can be shown that time limiting and frequency limiting are mutually exclusive phenomena; i.e., a time-limited signal x(t) always has a Fourier transform that is not band limited. On the other hand, if X(ω) is band limited, then the corresponding time signal is never time limited.
Example 4.3.12

Differentiating Equation (4.2.6) n times with respect to ω, we readily obtain

(−jt)ⁿ x(t) ↔ dⁿX(ω)/dωⁿ          (4.3.23)

that is, multiplying a time signal by t is equivalent to differentiating the frequency spectrum, which is the dual of differentiation in the time domain.

The previous two examples demonstrate that, in addition to its consequences in reducing the complexity of the calculation involved in determining some Fourier transforms, duality also implies that every property of the Fourier transform has a dual.
4.3.9 Modulation

If

x(t) ↔ X(ω)
m(t) ↔ M(ω)

then

x(t)m(t) ↔ (1/2π)[X(ω) * M(ω)]          (4.3.24)

Convolution in the frequency domain is carried out exactly like convolution in the time domain. That is,

X(ω) * H(ω) = ∫_{−∞}^{∞} X(σ)H(ω − σ) dσ = ∫_{−∞}^{∞} H(σ)X(ω − σ) dσ

This property is a direct result of combining two properties, the duality and the convolution properties, and it states that multiplication in the time domain corresponds to convolution in the frequency domain. Multiplication of the desired signal x(t) by m(t) is equivalent to altering or modulating the amplitude of x(t) according to the variations in m(t). This is the reason that the multiplication of two signals is often referred to as modulation. The symmetrical nature of the Fourier transform is clearly reflected in Equations (4.3.20) and (4.3.24): convolution in the time domain is equivalent to multiplication in the frequency domain, and multiplication in the time domain is equivalent to convolution in the frequency domain. The importance of this property is that the spectrum of a signal such as x(t) cos ω₀t can be easily computed. These types of signals arise in many communications systems, as we shall see later. Since

cos ω₀t = ½(exp[jω₀t] + exp[−jω₀t])

it follows that

ℱ{x(t) cos ω₀t} = ½[X(ω − ω₀) + X(ω + ω₀)]

This result constitutes the fundamental property of modulation and is useful in the spectral analysis of signals obtained from multipliers and modulators.
Example 4.3.13

Consider the signal

x_s(t) = x(t)p(t)

where p(t) is the periodic impulse train with equal-strength impulses, as shown in Figure 4.3.3. Analytically, p(t) can be written as

p(t) = Σ_{n=−∞}^{∞} δ(t − nT)

Figure 4.3.3  Periodic impulse train used in Example 4.3.13.

Using the sampling property of the delta function, we obtain

x_s(t) = Σ_{n=−∞}^{∞} x(nT) δ(t − nT)

That is, x_s(t) is a train of impulses spaced T seconds apart, the strength of the impulses being equal to the sample values of x(t). Recall from Example 4.2.10 that the Fourier transform of the periodic impulse train p(t) is itself a periodic impulse train; specifically,

P(ω) = (2π/T) Σ_{n=−∞}^{∞} δ(ω − 2πn/T)

Consequently, from the modulation property,

X_s(ω) = (1/2π)[X(ω) * P(ω)] = (1/T) Σ_{n=−∞}^{∞} X(ω) * δ(ω − 2πn/T) = (1/T) Σ_{n=−∞}^{∞} X(ω − 2πn/T)

That is, X_s(ω) consists of a periodically repeated replica of X(ω).
Example 4.3.14

Consider the system depicted in Figure 4.3.4, where

x(t) ↔ X(ω) = rect(ω/ω_B),   p(t) = Σ_{n=−∞}^{∞} δ(t − nT),   H(ω) = rect(ω/3ω_B)

Figure 4.3.4  System for Example 4.3.14.

The Fourier transform of x(t) is the rectangular pulse with width ω_B, and the Fourier transform of the product x(t)p(t) consists of the periodically repeated replica of X(ω), as shown in Figure 4.3.5. Similarly, the Fourier transform of h(t) is a rectangular pulse with width 3ω_B. According to the convolution property, the transform of the output of the system is

Y(ω) = X_s(ω)H(ω) = X(ω)

or

y(t) = x(t)

Figure 4.3.5  Spectra associated with signals for Example 4.3.14.

Note that since the system h(t) blocked (i.e., filtered out) all the undesired components of x_s(t) in order to obtain a scaled version of x(t), we refer to such a system as a filter. Filters are important components of any communication or control system. In Chapter 10, we study the design of both analog and digital filters.
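The spectrum replication of Example 4.3.13 and the low-pass recovery of Example 4.3.14 can be visualized with the FFT, which here stands in for the continuous Fourier transform. The sketch below samples a single tone with an impulse train and recovers it with an ideal low-pass cut; the tone frequency, rates, and cutoff are arbitrary illustrative choices.

```python
import numpy as np

fs = 1000.0                        # dense grid approximating continuous time (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * 5 * t)      # "band-limited" signal: a 5 Hz tone

# Impulse-train sampling: keep every 20th grid point (sampling rate 50 Hz)
T_ratio = 20
p = np.zeros_like(t)
p[::T_ratio] = 1.0
xs = x * p                         # x_s(t) = x(t) p(t)

Xs = np.fft.rfft(xs)
f = np.fft.rfftfreq(len(t), 1.0 / fs)

# The sampled spectrum shows copies of the 5 Hz line at 50k +/- 5 Hz, k = 0, 1, 2, ...
peaks = f[np.argsort(np.abs(Xs))[-8:]]
print(np.sort(peaks))

# Ideal low-pass "filter": keep f < 25 Hz, zero the rest, then invert.
# The gain T_ratio compensates the 1/T scaling of the replicas.
Xr = np.where(f < 25.0, Xs, 0.0) * T_ratio
xr = np.fft.irfft(Xr, n=len(t))
print(np.max(np.abs(xr - x)))      # close to zero: y(t) = x(t), as in Example 4.3.14
```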
Example 4.3.15

In this example, we use the modulation property to show that the power spectrum of the periodic signal x(t) with period T is

S(ω) = 2π Σ_{n=−∞}^{∞} |cₙ|² δ(ω − nω₀)

where cₙ are the Fourier coefficients of x(t) and ω₀ = 2π/T.

We begin by defining the truncated signal x_τ(t) as the product x(t) rect(t/2τ). Using the modulation property, we find that

X_τ(ω) = (1/2π)[2τ Sa(ωτ) * X(ω)] = (1/2π) ∫_{−∞}^{∞} 2τ Sa(ρτ) X(ω − ρ) dρ

Substituting Equation (4.2.15) for X(ω) and forming the function |X_τ(ω)|²/2τ, we have

|X_τ(ω)|²/2τ = 2τ Σ_n Σ_m cₙ c_m* Sa[(ω − nω₀)τ] Sa[(ω − mω₀)τ]

The power-density spectrum of the periodic signal x(t) is obtained by taking the limit of the last expression as τ → ∞. It has been observed earlier that as τ → ∞, the transform of the rectangular signal approaches δ(ω); therefore, we anticipate that the two sampling functions in the previous expression approach δ(ω − kω₀), where k = m and n. Also, observing that

δ(ω − nω₀)δ(ω − mω₀) = δ(ω − nω₀),  n = m
                      = 0,           otherwise

we calculate that the power-density spectrum of the periodic signal is

S(ω) = lim_{τ→∞} |X_τ(ω)|²/2τ = 2π Σ_{n=−∞}^{∞} |cₙ|² δ(ω − nω₀)
For convenience, a summary of the foregoing properties of the Fourier transform is given in Table 4.2. These properties are used repeatedly in this chapter, and they should be thoroughly understood.

TABLE 4.2  Some Selected Properties of the Fourier Transform

 1.  Linearity              Σ_{n=1}^{N} aₙxₙ(t)        Σ_{n=1}^{N} aₙXₙ(ω)             (4.3.1)
 2.  Complex conjugation    x*(t)                      X*(−ω)                          (4.2.6)
 3.  Time shift             x(t − t₀)                  X(ω) exp[−jωt₀]                 (4.3.6a)
 4.  Frequency shift        x(t) exp[jω₀t]             X(ω − ω₀)                       (4.3.6b)
 5.  Time scaling           x(at)                      (1/|a|) X(ω/a)                  (4.3.7)
 6.  Differentiation        dⁿx(t)/dtⁿ                 (jω)ⁿ X(ω)                      (4.3.9)
 7.  Integration            ∫_{−∞}^{t} x(τ) dτ         πX(0)δ(ω) + X(ω)/(jω)           (4.3.11)
 8.  Parseval's relation    ∫_{−∞}^{∞} |x(t)|² dt      (1/2π) ∫_{−∞}^{∞} |X(ω)|² dω    (4.3.14)
 9.  Convolution            x(t) * h(t)                X(ω)H(ω)                        (4.3.20)
10.  Duality                X(t)                       2π x(−ω)                        (4.3.22)
11.  Multiplication by t    (−jt)ⁿ x(t)                dⁿX(ω)/dωⁿ                      (4.3.23)
12.  Modulation             x(t)m(t)                   (1/2π)[X(ω) * M(ω)]             (4.3.24)
4.4 APPLICATIONS OF THE FOURIER TRANSFORM
The continuous-time Fourier transform and its discrete counterpart, the discrete-time Fourier transform, which we study in detail in Chapter 7, are tools that find extensive applications in communication systems, signal processing, control systems, and many other physical and engineering disciplines. The important processes of amplitude modulation and frequency multiplexing provide examples of the use of Fourier-transform theory in the analysis and design of communication systems. The sampling theorem is considered to have had the most profound effect on information transmission and signal processing, especially in the digital area. The design of filters and compensators that are employed in control systems cannot be done without the help of the Fourier transform. In this section, we discuss some of these applications in more detail.
4.4.1 Amplitude Modulation
The goal of all communication systems is to convey information from one point to another. Prior to sending the information signal through the transmission channel, the signal is converted to a useful form through what is known as modulation. Among the many reasons for employing this type of conversion are the following:
1. to transmit information efficiently
2. to overcome hardware limitations
3. to reduce noise and interference
4. to utilize the electromagnetic spectrum efficiently
Consider the signal multiplier shown in Figure 4.4.1. The output is the product of the information-carrying signal x(t) and the signal m(t), which is referred to as the carrier signal. This scheme is known as amplitude modulation, which has many forms, depending on m(t). We concentrate only on the case where m(t) = cos ω_0 t, which represents a practical form of modulation and is referred to as double-sideband (DSB) amplitude modulation. We will now examine the spectrum of the output (the modulated signal) in terms of the spectra of both x(t) and m(t).
The output of the multiplier is
$$y(t) = x(t)\cos\omega_0 t$$
Since y(t) is the product of two time signals, convolution in the frequency domain can
be used to obtain its spectrum. The result is
Figure 4.4.1 Signal multiplier.

Figure 4.4.2 Magnitude spectra of the information signal and the modulated signal.
$$Y(\omega) = \frac{1}{2\pi}X(\omega) * \pi\bigl[\delta(\omega-\omega_0) + \delta(\omega+\omega_0)\bigr] = \frac{1}{2}\bigl[X(\omega-\omega_0) + X(\omega+\omega_0)\bigr]$$
The magnitude spectra of x(t) and y(t) are illustrated in Figure 4.4.2. The part of the spectrum of Y(ω) centered at +ω_0 is the result of convolving X(ω) with δ(ω − ω_0), and the part centered at −ω_0 is the result of convolving X(ω) with δ(ω + ω_0). This process of shifting the spectrum of the signal by ω_0 is necessary because low-frequency (baseband) information signals cannot be propagated easily by radio waves.
The process of extracting the information signal from the modulated signal is referred to as demodulation. In effect, demodulation shifts the message spectrum back to its original low-frequency location. Synchronous demodulation is one of several techniques used to perform amplitude demodulation. A synchronous demodulator consists of a signal multiplier, with the multiplier inputs being the modulated signal and cos ω_0 t. The output of the multiplier is
$$z(t) = y(t)\cos\omega_0 t$$
Hence,
$$Z(\omega) = \frac{1}{2}\bigl[Y(\omega-\omega_0) + Y(\omega+\omega_0)\bigr] = \frac{1}{2}X(\omega) + \frac{1}{4}X(\omega-2\omega_0) + \frac{1}{4}X(\omega+2\omega_0)$$
The result is shown in Figure 4.4.3(a). To extract the original information signal x(t), the signal z(t) is passed through the system with frequency response H(ω) shown in Figure 4.4.3(b). Such a system is referred to as a low-pass filter, since it passes only low-frequency components of the input signal and filters out all frequencies higher than ω_1, the cutoff frequency of the filter. The output of the low-pass filter is illustrated in Figure 4.4.3(c).
Figure 4.4.3 Demodulation process: (a) magnitude spectrum of z(t); (b) the low-pass-filter frequency response; and (c) the extracted information spectrum.
Note that if |H(ω)| = 1 for |ω| < ω_1 and there were no transmission losses involved, then the energy of the final signal is one-fourth that of the original signal, because the total demodulated signal contains energy located at ω = ±2ω_0 that is eventually discarded by the receiver.
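The DSB chain above can be checked numerically. The sketch below is a hypothetical illustration (the message, carrier frequency, and FFT-based ideal low-pass filter are assumptions, not the text's method) showing that synchronous demodulation followed by low-pass filtering returns x(t)/2, consistent with the factor of 1/2 in Z(ω).

```python
import numpy as np

# Hypothetical sketch: DSB modulation followed by synchronous demodulation,
# with an ideal low-pass filter applied in the frequency domain.
fs = 10_000.0                         # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1/fs)
x = np.cos(2*np.pi*50*t)              # baseband information signal (50 Hz, assumed)
f0 = 1000.0                           # carrier frequency, Hz (assumed)
y = x * np.cos(2*np.pi*f0*t)          # DSB modulated signal
z = y * np.cos(2*np.pi*f0*t)          # synchronous demodulator: multiply by the carrier again

# Ideal low-pass filter with cutoff well below 2*f0: keep only |f| < 200 Hz
Z = np.fft.fft(z)
f = np.fft.fftfreq(len(z), 1/fs)
Z[np.abs(f) > 200] = 0.0
x_rec = np.fft.ifft(Z).real           # recovered signal

print(np.allclose(x_rec, 0.5*x, atol=1e-2))   # True: amplitude is half of x(t), energy one-fourth
```

The recovered amplitude is half of the original, so the recovered energy is one-fourth, matching the discussion above.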
4.4.2 Multiplexing
A very useful technique for simultaneously transmitting several information signals involves assigning a distinct portion of the available frequency spectrum to each signal. This technique is known as frequency-division multiplexing (FDM), and we encounter it almost daily, often without giving it much thought. Larger cities usually have several AM radio and television stations, fire engines, police cruisers, taxicabs, mobile telephones, citizen-band radios, and many other sources of radio waves. All these sources are frequency multiplexed into the radio spectrum by means of assigning distinct frequency bands to each signal. FDM is very similar to amplitude modulation. Consider three
band-limited signals with Fourier transforms as shown in Figure 4.4.4. (Extension to n signals follows in a straightforward manner.)

Figure 4.4.4 Magnitude spectra for x_1(t), x_2(t), and x_3(t) for the FDM system.
If we modulate x_1(t) with cos ω_1 t, x_2(t) with cos ω_2 t, and x_3(t) with cos ω_3 t, then, summing the three modulated signals, we obtain
$$y(t) = x_1(t)\cos\omega_1 t + x_2(t)\cos\omega_2 t + x_3(t)\cos\omega_3 t$$
The frequency spectrum of y(t) is
$$Y(\omega) = \tfrac{1}{2}\bigl[X_1(\omega-\omega_1) + X_1(\omega+\omega_1)\bigr] + \tfrac{1}{2}\bigl[X_2(\omega-\omega_2) + X_2(\omega+\omega_2)\bigr] + \tfrac{1}{2}\bigl[X_3(\omega-\omega_3) + X_3(\omega+\omega_3)\bigr]$$
which has a spectrum similar to that in Figure 4.4.5. It is important here to make sure that the spectra do not overlap; that is, that ω_1 + W_1 < ω_2 − W_2 and ω_2 + W_2 < ω_3 − W_3. At the receiving end, some operations must be performed to recover the individual spectra.
Because of the form of |Y(ω)|, in order to capture the spectrum of x_1(t), we would need a system whose frequency response is equal to 1 for ω_1 − W_1 ≤ ω ≤ ω_1 + W_1 and zero otherwise. Such a system is called a band-pass filter, since it passes only frequencies in the band ω_1 − W_1 ≤ ω ≤ ω_1 + W_1 and suppresses all other frequencies.
Figure 4.4.5 Magnitude spectrum of y(t) for the FDM system.
Figure 4.4.6 Frequency-division multiplexing (FDM) system. BPF = band-pass filter; LPF = low-pass filter.
The output of this filter is then processed as in the case of synchronous amplitude
demodulation. A similar procedure can be used to extract x_2(t) or x_3(t). The overall
system of modulation, multiplexing, transmission, demultiplexing, and demodulation
is illustrated in Figure 4.4.6.
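A compact numerical sketch of the FDM idea is given below. It is a hypothetical illustration (the messages, carrier frequencies, and FFT-based ideal filters are assumptions, not the text's system): two band-limited signals are placed on separate carriers, summed, and one of them is recovered by band-pass filtering followed by synchronous demodulation.

```python
import numpy as np

# Hypothetical sketch of frequency-division multiplexing of two signals.
fs = 20_000.0
t = np.arange(0, 1.0, 1/fs)
x1 = np.cos(2*np.pi*30*t)                      # message 1 (30 Hz, assumed)
x2 = np.sin(2*np.pi*70*t)                      # message 2 (70 Hz, assumed)
f1, f2 = 2000.0, 5000.0                        # carrier frequencies, Hz (assumed)
y = x1*np.cos(2*np.pi*f1*t) + x2*np.cos(2*np.pi*f2*t)   # multiplexed signal

def ideal_filter(sig, f_lo, f_hi):
    """Keep only frequency components with f_lo <= |f| <= f_hi (ideal filter)."""
    S = np.fft.fft(sig)
    f = np.abs(np.fft.fftfreq(len(sig), 1/fs))
    S[(f < f_lo) | (f > f_hi)] = 0.0
    return np.fft.ifft(S).real

band1 = ideal_filter(y, f1 - 100, f1 + 100)    # band-pass filter around the first carrier
z = band1 * np.cos(2*np.pi*f1*t)               # synchronous demodulation
x1_rec = ideal_filter(z, 0.0, 100.0)           # low-pass filter to the message band
print(np.allclose(x1_rec, 0.5*x1, atol=1e-2))  # True: x1 recovered with the usual 1/2 scale factor
```

As in ordinary DSB demodulation, the recovered message appears with a scale factor of 1/2; the second message is removed entirely by the band-pass filter.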
4.4.3 The Sampling Theorem
Of all the theorems and techniques pertaining to the Fourier transform, the one that has had the most impact on information transmission and processing is the sampling theorem. For a low-pass signal x(t) that is band limited such that it has no frequency components above ω_B rad/s, the sampling theorem says that x(t) is uniquely determined by its values at equally spaced points in time, T seconds apart, provided that T ≤ π/ω_B. The sampling theorem allows us to completely reconstruct a band-limited signal from instantaneous samples taken at a rate ω_s = 2π/T, provided that ω_s is at least as large as 2ω_B, which is twice the highest frequency present in the band-limited signal x(t). The minimum sampling rate 2ω_B is known as the Nyquist rate.
The process of obtaining a set of samples from a continuous function of time x(t) is referred to as sampling. The samples can be considered to be obtained by passing x(t) through a sampler, which is a switch that closes and opens instantaneously at the sampling instants nT. When the switch is closed, we obtain a sample x(nT). At all other times, the output of the sampler is zero. This ideal sampler is a fictitious device, since, in practice, it is impossible to obtain a switch that closes and opens instantaneously. We denote the output of the sampler by x_s(t).
In order to arrive at the sampling theorem, we model the sampler output as
$$x_s(t) = x(t)\,p(t) \qquad (4.4.1)$$
where
$$p(t) = \sum_{n=-\infty}^{\infty}\delta(t-nT) \qquad (4.4.2)$$
Figure 4.4.7 The ideal sampling process.
is the periodic impulse train. We provide a justification of this model later, in Chapter 8, where we discuss the sampling of continuous-time signals in greater detail. As can be seen from the equation, the sampled signal is considered to be the product (modulation) of the continuous-time signal x(t) and the impulse train p(t) and, hence, is usually referred to as the impulse-modulation model for the sampling operation. This is illustrated in Figure 4.4.7.
From Example 4.2.10, it follows that
$$P(\omega) = \frac{2\pi}{T}\sum_{n=-\infty}^{\infty}\delta\!\left(\omega-\frac{2\pi n}{T}\right) = \omega_s\sum_{n=-\infty}^{\infty}\delta(\omega-n\omega_s) \qquad (4.4.3)$$
and hence,
$$X_s(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty}X(\sigma)\,P(\omega-\sigma)\,d\sigma = \frac{1}{T}\sum_{n=-\infty}^{\infty}X(\omega-n\omega_s) \qquad (4.4.4)$$
The signals x(t), p(t), and x_s(t) are depicted together with their magnitude spectra in Figure 4.4.8, with x(t) being a band-limited signal; that is, X(ω) is zero for |ω| > ω_B. As can be seen, x_s(t), which is the sampled version of the continuous-time signal x(t), consists of impulses spaced T seconds apart, each having an area equal to the sampled value of x(t) at the respective sampling instant. The spectrum X_s(ω) of the sampled signal is obtained as the convolution of the spectrum X(ω) with the impulse train P(ω) and, hence, consists of the periodic repetition, at intervals ω_s, of X(ω), as shown in the figure. For the case shown, ω_s is large enough so that the different components of X_s(ω) do not overlap. It is clear that if we pass the sampled signal x_s(t) through an ideal low-pass filter which passes only those frequencies contained in x(t), the spectrum of the filter output will be identical to X(ω), except for an amplitude scale factor of 1/T introduced by the sampling operation. Thus, to recover x(t), we pass x_s(t) through a filter with frequency response
$$H(\omega) = \begin{cases} T, & |\omega| < \omega_B\\ 0, & \text{otherwise}\end{cases} = T\,\mathrm{rect}\!\left(\frac{\omega}{2\omega_B}\right) \qquad (4.4.5)$$
Figure 4.4.8 Time-domain signals and their respective magnitude spectra.
This filter is called an ideal reconstruction filter.
As the sampling frequency is reduced, the different components in the spectrum of X_s(ω) start coming closer together and eventually will overlap. As shown in Figure 4.4.9(a), if ω_s − ω_B > ω_B, the components do not overlap, and the signal x(t) can be recovered from x_s(t) as described previously. If ω_s − ω_B = ω_B, the components just touch each other, as shown in Figure 4.4.9(b). If ω_s − ω_B < ω_B, the components will overlap, as shown in Figure 4.4.9(c). Then the resulting spectrum obtained by adding the overlapping components together no longer resembles X(ω) (Figure 4.4.9(d)), and x(t) can no longer be recovered from the sampled signal. Thus, to recover x(t) from the sampled signal, it is clear that the sampling rate should be such that
$$\omega_s - \omega_B > \omega_B$$
Hence, the signal x(t) can be recovered from its samples only if
$$\omega_s \geq 2\omega_B \qquad (4.4.6)$$
This is the sampling theorem (usually called the Nyquist theorem) that we referred to earlier. The minimum permissible value of ω_s is called the Nyquist rate.
The maximum time spacing between samples that can be used is
$$T = \frac{\pi}{\omega_B} \qquad (4.4.7)$$
Figure 4.4.9 Effect of reducing the sampling frequency on X_s(ω).
If T does not satisfy Equation (4.4.7), the different components of X_s(ω) overlap, and we will not be able to recover x(t) exactly. This is referred to as aliasing. If x(t) is not band limited, there will always be aliasing, irrespective of the chosen sampling rate.
Example 4.4.1
The spectrum of a signal (for example, a speech signal) is essentially zero for all frequencies above 5 kHz. The Nyquist sampling rate for such a signal is
$$\omega_s = 2\omega_B = 2(2\pi\times 5\times 10^3) = 2\pi\times 10^4\ \text{rad/s}$$
The sample spacing T is equal to 2π/ω_s = 0.1 ms.
Example 4.4.2
Instead of sampling the previous signal at the Nyquist rate of 10 kHz, let us sample it at a rate of 8 kHz. That is,
$$\omega_s = 2\pi\times 8\times 10^3\ \text{rad/s}$$
The sampling interval T is equal to 2π/ω_s = 0.125 ms. If we filter the sampled signal x_s(t) using a low-pass filter with a cutoff frequency of 4 kHz, the output spectrum contains high-frequency components of x(t) superimposed on the low-frequency components; i.e., we have aliasing, and x(t) cannot be recovered.
In theory, if a signal x(t) is not band limited, we can eliminate aliasing by low-pass filtering the signal before sampling it. Clearly, we will then need to use a sampling frequency that is twice the bandwidth, ω_c, of the filter. In practice, however, aliasing cannot be completely eliminated because, first, we cannot build a low-pass filter that cuts off all frequency components above a certain frequency and, second, in many applications, x(t) cannot be low-pass filtered without removing information from it. In such cases, we can reduce aliasing effects by sampling the signal at a high enough frequency that aliased components do not seriously distort the reconstructed signal. In some cases, the sampling frequency can be as large as 8 or 10 times the signal bandwidth.
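The effect of sampling below the Nyquist rate can be seen directly in a short numerical sketch. The example below is hypothetical (the tone frequency and sampling rate are assumptions chosen for illustration): a 6 kHz tone sampled at 8 kHz shows up at the alias frequency of 2 kHz.

```python
import numpy as np

# Hypothetical illustration of aliasing: a tone above fs/2 folds back into the band.
f_sig = 6000.0                     # signal frequency, Hz (above fs/2)
fs = 8000.0                        # sampling rate, Hz (below the Nyquist rate of 12 kHz)
n = np.arange(1024)
x_samples = np.cos(2*np.pi*f_sig*n/fs)

X = np.abs(np.fft.rfft(x_samples))
f = np.fft.rfftfreq(len(n), 1/fs)
print(f[np.argmax(X)])             # prints 2000.0: the tone aliases to fs - f_sig = 2 kHz
```

Once the samples are taken, the 6 kHz component is indistinguishable from a genuine 2 kHz component, which is why aliasing cannot be undone after sampling.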
Example 4.4.3
An analog bandpass signal x_i(t), which is band limited to the range 800 < f < 1200 Hz, is input to the system in Figure 4.4.10(a), where H(ω) is an ideal low-pass filter with cutoff frequency 200 Hz. Assume that the spectrum of x_i(t) has a triangular shape symmetric about the center frequency, as shown in Figure 4.4.10(b).
Figure 4.4.10(c) shows X_m(ω), the spectrum of the modulated signal x_m(t), while X_b(ω), the spectrum of the output of the low-pass filter (the baseband signal) x_b(t), is shown in Figure 4.4.10(d). If we now sample x_b(t) at intervals T with T < 1/400 s, as discussed earlier, the resulting spectrum X_s(ω) consists of replicas of X_b(ω), that is, a set of triangular-shaped pulses centered at the frequencies ω = 2πk/T, k = 0, ±1, ±2, etc. If one of these pulses is centered at 2π × 1000 rad/s, we can clearly recover X_i(ω), and hence x_i(t), by passing the sampled signal through an ideal bandpass filter with center frequency 2000π rad/s and bandwidth 800π rad/s. Figure 4.4.10(e) shows the spectrum of the sampled signal for T = 1 ms.
In general, we can recover x_i(t) from the sampled signal by using a band-pass filter if 2πk/T = ω_i for some integer k, that is, if 1/T is an integer submultiple of the center frequency in Hz.
The fact that a band-limited signal that has been sampled at the Nyquist rate can be recovered from its samples can also be illustrated in the time domain using the concept of interpolation. From our previous discussion, we have seen that, since x(t) can be obtained by passing x_s(t) through the ideal reconstruction filter of Equation (4.4.5), we can write
$$X(\omega) = H(\omega)\,X_s(\omega) \qquad (4.4.8)$$
The impulse response corresponding to H(ω) is
$$h(t) = \frac{T\omega_B}{\pi}\,\mathrm{Sa}(\omega_B t)$$

Figure 4.4.10 Spectra of the signals of Example 4.4.3.
Taking the inverse Fourier transform of both sides of Equation (4.4.8), we obtain
$$x(t) = x_s(t) * h(t) = \left[\sum_{n=-\infty}^{\infty} x(nT)\,\delta(t-nT)\right] * \frac{T\omega_B}{\pi}\,\mathrm{Sa}(\omega_B t)$$
$$= \frac{T\omega_B}{\pi}\sum_{n=-\infty}^{\infty} x(nT)\,\mathrm{Sa}\bigl(\omega_B(t-nT)\bigr) = \sum_{n=-\infty}^{\infty} x(nT)\,\mathrm{Sa}\bigl(\omega_B(t-nT)\bigr) \qquad (4.4.9)$$
where the last step follows because sampling at the Nyquist rate gives T = π/ω_B.
Equation (4.4.9) can be interpreted as using interpolation to reconstruct x(t) from its samples x(nT). The functions Sa[ω_B(t − nT)] are called interpolating, or sampling, functions. Interpolation using sampling functions, as in Equation (4.4.9), is commonly referred to as band-limited interpolation.
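A direct numerical evaluation of the interpolation sum in Equation (4.4.9) is sketched below. It is a hypothetical illustration (the test signal, bandwidth, and the truncation of the infinite sum are assumptions): a band-limited cosine sampled at the Nyquist spacing is rebuilt from its samples with Sa functions.

```python
import numpy as np

# Hypothetical illustration of band-limited (sinc) interpolation, Equation (4.4.9).
w_B = 2*np.pi*50.0                     # assumed bandwidth, rad/s (50 Hz)
T = np.pi / w_B                        # Nyquist spacing, s
n = np.arange(-200, 201)               # finite set of samples (truncation of the infinite sum)
x = lambda t: np.cos(2*np.pi*30.0*t)   # band-limited test signal (30 Hz < 50 Hz)

t_dense = np.linspace(-0.5, 0.5, 2001)
# x_hat(t) = sum_n x(nT) * Sa(w_B (t - nT)),  with Sa(u) = sin(u)/u
x_hat = np.zeros_like(t_dense)
for k in n:
    x_hat += x(k*T) * np.sinc(w_B*(t_dense - k*T)/np.pi)   # np.sinc(u) = sin(pi u)/(pi u)

print(np.max(np.abs(x_hat - x(t_dense))))   # small reconstruction error on the interior interval
```

The residual error comes only from truncating the sum to finitely many samples; with the full infinite sum the reconstruction of a band-limited signal is exact.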
4.4.4 Signal Filtering
Filtering is the process by which the essential and useful part of a signal is separated from extraneous and undesirable components that are generally referred to as noise. The term "noise" as used here refers either to the undesired part of the signal, as in the case of amplitude modulation, or to interference signals generated by the electronic devices themselves.
The idea of filtering using LTI systems is based on the convolution property of the Fourier transform discussed in Section 4.3, namely, that for LTI systems, the Fourier transform of the output is the product of the Fourier transform of the input and the frequency response of the system. An ideal frequency-selective filter is a filter that passes certain frequencies without any change and stops the rest. The range of frequencies that pass through is called the passband of the filter, whereas the range of frequencies that do not pass is referred to as the stop band. In the ideal case, |H(ω)| = 1 in a passband, while |H(ω)| = 0 in a stop band. Frequency-selective filters are classified according to the functions they perform. The most common types of filters are the following:
1. Low-pass filters are characterized by a passband that extends from ω = 0 to ω = ω_c, where ω_c is called the cutoff frequency of the filter. (See Figure 4.4.11(a).)
2. High-pass filters are characterized by a stop band that extends from ω = 0 to ω = ω_c and a passband that extends from ω = ω_c to infinity. (See Figure 4.4.11(b).)
3. Band-pass filters are characterized by a passband that extends from ω = ω_1 to ω = ω_2, and all other frequencies are stopped. (See Figure 4.4.11(c).)
4. Band-stop filters stop frequencies extending from ω_1 to ω_2 and pass all other frequencies. (See Figure 4.4.11(d).)
Figure 4.4.11 Most common classes of filters.
As is usual with spectra of real-valued signals, in Figure 4.4.11 we have shown H(ω) only for values of ω > 0, since |H(ω)| = |H(−ω)| for such signals.
Example 4.4.4
Consider the ideal low-pass filter with frequency response
$$H_{lp}(\omega) = \begin{cases} 1, & |\omega| < \omega_c\\ 0, & \text{otherwise}\end{cases}$$
The impulse response of this filter corresponds to the inverse Fourier transform of the frequency response H_lp(ω) and is given by
$$h_{lp}(t) = \frac{\sin\omega_c t}{\pi t}$$
Clearly, this filter is noncausal and, hence, is not physically realizable.
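The noncausality can be seen numerically by computing the inverse transform of an ideal low-pass response on a discrete grid. The sketch below is a hypothetical illustration (the grid, sampling rate, and cutoff are assumptions) showing that the impulse response has significant values before t = 0.

```python
import numpy as np

# Hypothetical check: the inverse DFT of an ideal low-pass response is a sinc-like
# sequence that is nonzero for t < 0, which is why the ideal filter is noncausal.
N = 1024
fs = 1000.0                              # sampling rate used for the grid, Hz (assumed)
f = np.fft.fftfreq(N, 1/fs)
fc = 100.0                               # cutoff frequency, Hz (assumed)
H = (np.abs(f) < fc).astype(float)       # ideal low-pass frequency response

h = np.fft.ifft(H).real                  # impulse-response samples
h = np.fft.fftshift(h)                   # place t = 0 in the middle of the array
t = (np.arange(N) - N//2) / fs
print(np.max(np.abs(h[t < 0])) > 1e-3)   # True: significant response before t = 0
```

Practical filters trade the perfectly sharp cutoff for a causal, realizable response, which is the motivation for the transition bands discussed next.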
The filters described so far are referred to as ideal filters because they pass one set of frequencies without any change and completely stop others. Since it is impossible to realize filters with characteristics like those shown in Figure 4.4.11, with abrupt changes from passband to stop band and vice versa, most of the filters we deal with in practice have some transition band, as shown in Figure 4.4.12.
Example 4.4.6
Consider the following RC circuit:
The impulse response of this circuit is (see Problem 2.17)
$$h(t) = \frac{1}{RC}\exp\!\left[-\frac{t}{RC}\right]u(t)$$
and the frequency response is
$$H(\omega) = \frac{1}{1 + j\omega RC}$$
The amplitude spectrum is given by
$$|H(\omega)|^2 = \frac{1}{1 + (\omega RC)^2}$$
and is shown in Figure 4.4.13. It is clear that the RC circuit with the output taken as the voltage across the capacitor performs as a low-pass filter. The frequency ω_c at which the amplitude spectrum equals |H(0)|/√2 (3 dB below |H(0)|) is called the band edge, or the 3-dB cutoff frequency, of the filter. (The transition between the passband and the stop band occurs near ω_c.) Setting |H(ω)| = 1/√2, we obtain
$$\omega_c = \frac{1}{RC}$$
Figure 4.4.13 Magnitude spectrum of a low-pass RC circuit.
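The RC low-pass result can be checked with a few lines of Python. The component values below are arbitrary assumptions, not from the text; the sketch simply evaluates |H(ω)| and confirms the 3-dB point at ω_c = 1/RC.

```python
import numpy as np

# Hypothetical numerical check of the RC low-pass amplitude response.
R = 1.0e3            # ohms (assumed)
C = 1.0e-6           # farads (assumed)
w_c = 1.0 / (R * C)  # cutoff frequency, rad/s

w = np.logspace(1, 6, 501)                   # frequency grid, rad/s
H_mag = 1.0 / np.sqrt(1.0 + (w * R * C)**2)  # |H(w)| for the low-pass RC circuit

H_at_cutoff = 1.0 / np.sqrt(1.0 + (w_c * R * C)**2)
print(np.isclose(H_at_cutoff, 1/np.sqrt(2)))  # True: response is 1/sqrt(2), i.e. -3 dB, at w_c
print(20*np.log10(H_mag[-1]))                 # large attenuation well above w_c
```

Above ω_c the response falls off at roughly 20 dB per decade, the familiar single-pole roll-off.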
If we interchange the positions of the capacitor and the resistor, we obtain a system with impulse response (see Problem 2.18)
$$h(t) = \delta(t) - \frac{1}{RC}\exp\!\left[-\frac{t}{RC}\right]u(t)$$
and frequency response
$$H(\omega) = \frac{j\omega RC}{1 + j\omega RC}$$
The amplitude spectrum is given by
$$|H(\omega)|^2 = \frac{(\omega RC)^2}{1 + (\omega RC)^2}$$
and is shown in Figure 4.4.14. It is clear that the RC circuit with the output taken as the voltage across the resistor performs as a high-pass filter. Again, by setting |H(ω)| = 1/√2, the cutoff frequency of this high-pass filter can be determined as
$$\omega_c = \frac{1}{RC}$$
Figure 4.4.14 Magnitude spectrum of a high-pass RC circuit.
Filters can be classified as passive or active. Passive filters are made of passive elements (resistors, capacitors, and inductors), and active filters use operational amplifiers together with capacitors and resistors. The decision to use a passive filter in preference to an active filter in a certain application depends on several factors, such as the following:
1. The range of frequency of operation of the filter. Passive filters can operate at higher frequencies, whereas active filters are usually used at lower frequencies.
2. The weight and size of the filter realization. Active filters can be realized as an integrated circuit on a chip. Thus, they are superior when considerations of weight and size are important. This is a factor in the design of filters for low-frequency applications, where passive filters require large inductors.
3. The sensitivity of the filter to parameter changes and stability. Components used in
circuits deviate from their nominal values due to tolerances related to their manufacture or due to chemical changes because of thermal and aging effects. Passive filters are always superior to active filters when it comes to sensitivity.
4. The availability of voltage sources for operational amplifiers. Operational amplifiers require voltage sources ranging from 1 to about 12 volts for their proper operation. Whether such voltages are available without maintenance is an important consideration.
We consider the design of analog and discrete-time filters in more detail in Chapter 10.
4.5 DURATION-BANDWIDTH RELATIONSHIPS
In Section 4.3, we discussed the time-scaling property of the Fourier transform. We
noticed that expansion in the time domain implies compression in the frequency
domain, and conversely. In the current section, we give a quantitative measure to this
observation. The width of the signal, in time or in frequency, can be formally defined
in many different ways. No one way or set of ways is best for all purposes. As long as
we use the same definition when working with several signals, we can compare their
durations and spectral widths. If we change definitions, "conversion factors" are needed to compare the durations and spectral widths involved. The principal purpose of this section is to show that the width of a time signal in seconds (duration) is inversely related to the width of the Fourier transform of the signal in hertz (bandwidth). The spectral width of signals is a very important concept in communication systems and signal processing, for two main reasons. First, more and more users are being assigned to increasingly crowded radio-frequency (RF) bands, so that the spectral width required for each band has to be considered carefully. Second, the spectral width of signals is important from the equipment-design viewpoint, since the circuits have to have sufficient bandwidth to accommodate the signal but reject the noise. The remarkable observation is that, independent of shape, there is a lower bound on the duration-bandwidth product of a given signal; this relationship is known as the uncertainty principle.
4.5.1 Definitions of Duration and Bandwidth
As we mentioned earlier, spectral representation is an efficient and convenient method
of representing physical signals. Not only does it simplify some operations, but it also
reveals the frequency content of the signal. One characterization of the signal is its
spread in the frequency domain, or, simply, its bandwidth.
We will give some engineering definitions for the bandwidth of an arbitrary real-valued time signal. Some of these definitions are fairly generally applicable, and others are restricted to a particular application. The reader should keep in mind that there are also other definitions that might be useful, depending on the application.
The signal x(t) is called a baseband (low-pass) signal if |X(ω)| = 0 for |ω| > ω_B and is called a band-pass signal centered at ω_0 if |X(ω)| = 0 for |ω − ω_0| > ω_B. (See
Figure 4.5.1.) For baseband signals, we measure the bandwidth in terms of the positive-frequency portion only.

Figure 4.5.1 Amplitude spectra for baseband and band-pass signals.
Absolute Bandwidth. This notion is used in conjunction with band-limited signals and is defined as the region outside of which the spectrum is zero. That is, if x(t) is a baseband signal and |X(ω)| is zero outside the interval |ω| ≤ ω_B, then
$$B = \omega_B \qquad (4.5.1)$$
But if x(t) is a band-pass signal and |X(ω)| is zero outside the interval ω_1 ≤ ω ≤ ω_2, then
$$B = \omega_2 - \omega_1 \qquad (4.5.2)$$
Example 4.5.1
The signal x(t) = sin ω_B t/(πt) is a baseband signal and has the Fourier transform rect(ω/2ω_B). The bandwidth of this signal is then ω_B.
3-dB (Half-Power) Bandwidth. This idea is used with baseband signals that have only one maximum, located at the origin. The 3-dB bandwidth is defined as the frequency ω_1 such that
$$\frac{|X(\omega_1)|}{|X(0)|} = \frac{1}{\sqrt{2}} \qquad (4.5.3)$$
Note that inside the band 0 ≤ ω ≤ ω_1, the magnitude |X(ω)| falls no lower than 1/√2 of its value at ω = 0. The 3-dB bandwidth is also known as the half-power bandwidth, because a voltage or current attenuation of 3 dB is equivalent to a power attenuation by a factor of 2.
Example 4.5.2
The signal x(t) = exp[−t/τ]u(t) is a baseband signal and has the Fourier transform
$$X(\omega) = \frac{1}{1/\tau + j\omega}$$
The magnitude spectrum of this signal is shown in Figure 4.5.2. Clearly, X(0) = τ, and the 3-dB bandwidth is
$$B = \frac{1}{\tau}$$

Figure 4.5.2 Magnitude spectrum for the signal in Example 4.5.2.
Equivalent Bandwidth. This definition is used in association with band-pass signals with unimodal spectra whose maxima are at the center of the frequency band. The equivalent bandwidth is the width of a fictitious rectangular spectrum such that the energy in that rectangular band is equal to the energy associated with the actual spectrum. In Section 4.3.6, we saw that the energy density is proportional to the square of the magnitude of the signal spectrum. If ω_0 is the frequency at which the magnitude spectrum has its maximum, we let the energy in the equivalent rectangular band be
$$\text{Equivalent energy} = 2B_{eq}\,|X(\omega_0)|^2 \qquad (4.5.4)$$
The actual energy in the signal is
$$\text{Actual energy} = \frac{1}{2\pi}\int_{-\infty}^{\infty}|X(\omega)|^2\,d\omega = \frac{1}{\pi}\int_{0}^{\infty}|X(\omega)|^2\,d\omega \qquad (4.5.5)$$
Setting Equation (4.5.4) equal to Equation (4.5.5), we have the formula that gives the equivalent bandwidth in hertz:
$$B_{eq} = \frac{1}{2\pi\,|X(\omega_0)|^2}\int_{0}^{\infty}|X(\omega)|^2\,d\omega \qquad (4.5.6)$$
Example 4.5.3
The equivalent bandwidth of the signal in Example 4.5.2 is
$$B_{eq} = \frac{1}{2\pi\tau^2}\Bigl[\tau\tan^{-1}\omega\tau\Bigr]_0^{\infty} = \frac{1}{4\tau}$$
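The two bandwidth measures defined so far can be evaluated numerically. The sketch below is a hypothetical illustration (τ = 1 and the frequency grid are assumptions) that computes the 3-dB and equivalent bandwidths of the spectrum from Example 4.5.2 directly from the definitions above.

```python
import numpy as np

# Hypothetical numerical check for X(w) = 1/(1/tau + j*w), with tau = 1 assumed.
tau = 1.0
w = np.linspace(0.0, 1e4, 2_000_001)        # fine grid over positive frequencies
X_mag2 = 1.0 / (1.0/tau**2 + w**2)          # |X(w)|^2

# 3-dB bandwidth: first frequency where |X(w)|^2 drops to half of |X(0)|^2
B_3dB = w[np.argmax(X_mag2 <= 0.5 * X_mag2[0])]
print(B_3dB)                                # approximately 1/tau

# Equivalent bandwidth in hertz: (1/(2*pi*|X(0)|^2)) * integral_0^inf |X(w)|^2 dw
B_eq = np.trapz(X_mag2, w) / (2*np.pi * X_mag2[0])
print(B_eq)                                 # approximately 1/(4*tau) = 0.25
```

The different definitions give different numbers for the same signal, which is why a consistent definition must be used when comparing signals.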
Null-to-Null (Zero-Crossing) Bandwidth. This concept applies to non-band-limited signals and is defined as the distance between the first null in the envelope of the magnitude spectrum above ω_0 and the first null in the envelope below ω_0, where ω_0 is the radian frequency at which the magnitude spectrum is maximum. For baseband signals, the spectrum maximum is at ω = 0, and the bandwidth is the distance between the first null and the origin.
Example 4.5.4
In Example 4.2.1, we showed that the signal x(t) = rect(t/τ) has the Fourier transform
$$X(\omega) = \tau\,\mathrm{Sa}\!\left(\frac{\omega\tau}{2}\right)$$
The magnitude spectrum of this signal is shown in Figure 4.5.3. From the figure, the null-to-null bandwidth is
$$B = \frac{2\pi}{\tau}$$

Figure 4.5.3 Magnitude spectrum for the signal in Example 4.5.4.
z% Bandwidth. This is defined such that
$$\int_{-\omega_z}^{\omega_z}|X(\omega)|^2\,d\omega = \frac{z}{100}\int_{-\infty}^{\infty}|X(\omega)|^2\,d\omega \qquad (4.5.7)$$
For example, z = 99 defines the frequency band in which 99% of the total energy resides. This is similar to the Federal Communications Commission (FCC) definition of the occupied bandwidth, which states that the energy above the upper band edge ω_u is 0.5% and the energy below the lower band edge ω_l is 0.5%, leaving 99% of the total energy within the occupied band. The z% bandwidth is implicitly defined.
RMS (Gabor) Bandwidth. Probably the most analytically useful definitions of bandwidth are given by various moments of X(ω) or, even better, of |X(ω)|². The rms bandwidth of the signal is defined as
$$B_{rms}^2 = \frac{\int_{-\infty}^{\infty}\omega^2\,|X(\omega)|^2\,d\omega}{\int_{-\infty}^{\infty}|X(\omega)|^2\,d\omega} \qquad (4.5.8)$$
a signal r(t) can be given in terms of its duralion T, which
is a measure of the extent of x (t ) in the time domain. As with banduddth, duration can
be defined in several ways. The particular definition to be used depends on the application. Three of the more common definitions are as follows:
A dual characterization of
l.
Distance between successive zeros. As an example, the signal
nt
'(t)--V4+
hasdurationT=llW.
2. Time at which x(t) drops
to a given value. For example, the exponential sigpal
x(r)
-
"*01-r/Llu(t)
duration I = A, measured as the time at whichx(t) drops to l/e of its value at, = 0.
3. Raditu of gyration. This measure is used with signals that are concentrated around
I = 0 and is defined as
has
T=2
x
radius ofgyration
(4.s.e)
lxQ)12 dt
For example, the signal
,(,)=-\+z,N*r[#]
has a duration
of
,:,W;/F,,^,1
=
t/i
a,
4.5.2 The Uncertainty Principle
The uncertainty principle states that for any real signal x(t) that vanishes at infinity faster than 1/√t, that is,
$$\lim_{|t|\to\infty}\sqrt{t}\,x(t) = 0 \qquad (4.5.10)$$
and for which the duration is defined as in Equation (4.5.9) and the bandwidth is defined as in Equation (4.5.8), the product TB should satisfy the inequality
$$TB \geq 1 \qquad (4.5.11)$$
In words, T and B cannot simultaneously be arbitrarily small: a short duration implies a large bandwidth, and a small-bandwidth signal must last a long time. This constraint
has a wide domain of applications in communication systems, radar, and signal and speech processing.
The proof of Equation (4.5.11) follows from Parseval's formula, Equation (4.3.14), and Schwarz's inequality,
$$\left|\int_{-\infty}^{\infty} y_1(t)\,y_2(t)\,dt\right|^2 \leq \int_{-\infty}^{\infty}|y_1(t)|^2\,dt\int_{-\infty}^{\infty}|y_2(t)|^2\,dt \qquad (4.5.12)$$
where the equality holds if and only if y_2(t) is proportional to y_1(t), that is,
$$y_2(t) = k\,y_1(t) \qquad (4.5.13)$$
Schwarz's inequality can be easily derived from
$$0 \leq \int_{-\infty}^{\infty}\bigl[\theta\,y_1(t) - y_2(t)\bigr]^2\,dt = \theta^2\int_{-\infty}^{\infty}y_1^2(t)\,dt - 2\theta\int_{-\infty}^{\infty}y_1(t)y_2(t)\,dt + \int_{-\infty}^{\infty}y_2^2(t)\,dt$$
This equation is a nonnegative quadratic form in the variable θ. For the quadratic to be nonnegative for all values of θ, its discriminant must be nonpositive. Setting this condition establishes Equation (4.5.12). If the discriminant equals zero, then for some value θ = k, the quadratic equals zero. This is possible only if k y_1(t) − y_2(t) = 0, and Equation (4.5.13) follows.
By using Parseval's formula, we can write the bandwidth of the signal as
$$B^2 = \frac{\int_{-\infty}^{\infty}|x'(t)|^2\,dt}{\int_{-\infty}^{\infty}|x(t)|^2\,dt} \qquad (4.5.14)$$
Combining Equation (4.5.14) with Equation (4.5.9) gives
$$(TB)^2 = 4\,\frac{\int_{-\infty}^{\infty}t^2|x(t)|^2\,dt\,\int_{-\infty}^{\infty}|x'(t)|^2\,dt}{\left[\int_{-\infty}^{\infty}|x(t)|^2\,dt\right]^2} \qquad (4.5.15)$$
We apply Schwarz's inequality to the numerator of Equation (4.5.15) to obtain
$$TB \geq 2\,\frac{\left|\int_{-\infty}^{\infty}t\,x(t)\,x'(t)\,dt\right|}{\int_{-\infty}^{\infty}|x(t)|^2\,dt} \qquad (4.5.16)$$
But the fraction on the right in Equation (4.5.16) is identically equal to 1/2 (as can be seen by integrating the numerator by parts and noting that x(t) must vanish faster than 1/√t as t → ±∞), which gives the desired result.
To obtain equality in Schwarz's inequality, we must have
$$\frac{dx(t)}{dt} = k\,t\,x(t)$$
or
$$\frac{dx(t)}{x(t)} = k\,t\,dt$$
Integrating, we have
$$\ln[x(t)] = \frac{k t^2}{2} + \text{constant}$$
or
$$x(t) = c\,\exp\!\left[\frac{k t^2}{2}\right] \qquad (4.5.17)$$
If k is a negative real number, x(t) is an acceptable pulselike signal and is referred to as the Gaussian pulse. Thus, among all signals, the Gaussian pulse has the smallest duration-bandwidth product in the sense of Equations (4.5.8) and (4.5.9).
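This extremal property of the Gaussian pulse can be verified numerically. The sketch below is a hypothetical illustration (the grid and σ are assumptions): it evaluates T from Equation (4.5.9) and B from Equation (4.5.14) for a Gaussian and finds TB at the lower bound of 1.

```python
import numpy as np

# Hypothetical check: a Gaussian pulse attains TB = 1 under the definitions
# of Equations (4.5.8)/(4.5.14) and (4.5.9).
sigma = 1.0
t = np.linspace(-50, 50, 400_001)
x = np.exp(-t**2 / (2*sigma**2))

# Duration: T = 2 * radius of gyration of |x(t)|^2
T = 2*np.sqrt(np.trapz(t**2 * x**2, t) / np.trapz(x**2, t))

# RMS bandwidth via Parseval: B^2 = integral |x'(t)|^2 dt / integral |x(t)|^2 dt
dx = np.gradient(x, t)
B = np.sqrt(np.trapz(dx**2, t) / np.trapz(x**2, t))

print(T * B)   # approximately 1.0, the minimum allowed by the uncertainty principle
```

Any other pulse shape evaluated the same way gives a strictly larger product, in line with Equation (4.5.11).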
Example 4.5.5
Writing the Fourier transform in the polar form
$$X(\omega) = A(\omega)\exp[j\phi(\omega)]$$
we show that, among all signals with the same amplitude A(ω), the one that minimizes the duration of x(t) has zero (linear) phase. From Equation (4.3.23), we obtain
$$(-jt)x(t) \leftrightarrow \frac{dX(\omega)}{d\omega} = \left[\frac{dA(\omega)}{d\omega} + jA(\omega)\frac{d\phi(\omega)}{d\omega}\right]\exp[j\phi(\omega)] \qquad (4.5.18)$$
From Equations (4.3.14) and (4.5.18), we have
$$\int_{-\infty}^{\infty}t^2|x(t)|^2\,dt = \frac{1}{2\pi}\int_{-\infty}^{\infty}\left\{\left[\frac{dA(\omega)}{d\omega}\right]^2 + A^2(\omega)\left[\frac{d\phi(\omega)}{d\omega}\right]^2\right\}d\omega \qquad (4.5.19)$$
Since the left-hand side of Equation (4.5.19) measures the duration of x(t), we conclude that a high ripple in the amplitude spectrum or in the phase angle of X(ω) results in signals with long duration. A high ripple results in large absolute values of the derivatives of both the amplitude and phase spectra, and among all signals with the same amplitude A(ω), the one that minimizes the left-hand side of Equation (4.5.19) has zero (linear) phase.
Example 4.5.6
A convenient measure of the duration of x(t) is the quantity
$$T = \frac{1}{x(0)}\int_{-\infty}^{\infty}x(t)\,dt$$
In this formula, the duration T can be interpreted as the ratio of the area of x(t) to its height. Note that if x(t) represents the impulse response of an LTI system, then T is a measure of the rise time of the system, which is defined as the ratio of the final value of the step response to the slope of the step response at some appropriate point along the rise (t_0 = 0 in this case). If we define the bandwidth of x(t) by
$$B = \frac{1}{X(0)}\int_{-\infty}^{\infty}X(\omega)\,d\omega$$
it is easy to show that
$$BT = 2\pi$$
4.6 SUMMARY
• The Fourier transform of x(t) is defined by
$$X(\omega) = \int_{-\infty}^{\infty} x(t)\exp[-j\omega t]\,dt$$
• The inverse Fourier transform of X(ω) is defined by
$$x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(\omega)\exp[j\omega t]\,d\omega$$
• X(ω) exists if x(t) is "well behaved" and is absolutely integrable. These conditions are sufficient, but not necessary.
• The magnitude of X(ω) plotted against ω is called the magnitude spectrum of x(t), and |X(ω)|² is called the energy spectrum.
• The angle of X(ω) plotted versus ω is called the phase spectrum.
• Parseval's theorem states that
$$\int_{-\infty}^{\infty}|x(t)|^2\,dt = \frac{1}{2\pi}\int_{-\infty}^{\infty}|X(\omega)|^2\,d\omega$$
• The total energy of the aperiodic signal x(t) is
$$E = \frac{1}{2\pi}\int_{-\infty}^{\infty}|X(\omega)|^2\,d\omega$$
• The energy of x(t) within the frequency band ω_1 < ω < ω_2 is given by
$$\Delta E = \frac{1}{\pi}\int_{\omega_1}^{\omega_2}|X(\omega)|^2\,d\omega$$
• The power-density spectrum of x(t) is defined by
$$S(\omega) = \lim_{\tau\to\infty}\frac{|X_\tau(\omega)|^2}{2\tau}$$
where x_τ(t) ↔ X_τ(ω) and x_τ(t) = x(t) rect(t/2τ).
• The convolution property of the Fourier transform states that
$$y(t) = x(t) * h(t) \iff Y(\omega) = X(\omega)H(\omega)$$
• If X(ω) is the Fourier transform of x(t), then the duality property of the transform is expressed as
$$X(t) \leftrightarrow 2\pi\,x(-\omega)$$
• Amplitude modulation, multiplexing, filtering, and sampling are among the important applications of the Fourier transform.
• If x(t) is a band-limited signal such that X(ω) = 0 for |ω| > ω_B, then x(t) is uniquely determined by its values (samples) at equally spaced points in time, provided that T ≤ π/ω_B. The radian frequency ω_s = 2π/T is called the sampling frequency. The minimum sampling rate is 2ω_B and is known as the Nyquist rate.
• The bandwidth B of x(t) is a measure of the frequency content of the signal. There are several definitions for the bandwidth of the signal x(t) that are useful in particular applications.
• The duration T of x(t) is a measure of the extent of x(t) in time.
• The product of the bandwidth and the duration of x(t) is greater than or equal to a constant that depends on the definitions of both B and T.
4.7 CHECKLIST OF IMPORTANT TERMS
Aliasing
Amplitude modulation
Band-pass filter
Bandwidth
Duality
Duration
Energy spectrum
Equivalent bandwidth
Half-power (3-dB) bandwidth
High-pass filter
Low-pass filter
Magnitude spectrum
Multiplexing
Nyquist rate
Parseval's theorem
Periodic pulse train
Phase spectrum
Power-density spectrum
Rectangular pulse
RMS bandwidth
Sampling frequency
Sampling function
Sampling theorem
Signal filtering
Sinc function
Triangular pulse
Two-sided exponential
Uncertainty principle
4.8 PROBLEMS
4.1. Find the Fourier transform of the following signals in terms of X(to). the Fourier transforrn of .r().
(c) x(-r)
(b) 'r"(r) =
(c) r,(r) =
(d)
40 +'(-')
4O--
i(:')
r"0)
(e) Re(:(r)l
+-r*0)
- r0)
tD Imk(t)l
43
Determine which of the following signals have a Fourier transform. Why?
(a)
:0)
= exP[-2t]z(r)
O).tG) = lrlu(r)
(c) x(t) = cos (rrlt)
(d) .r(t) =
:
1
(e) .r(r) = t2 exPl-?tlu(t)
43.
Show that the Fourier transform of
.r(t) can be written
x(a) =
i
as
tilm,+
where
m^=
44
f-"x(tldt,
(Hinl: Expand exp [-ltor] around, = 0 and integrate termwisc.)
Using Equation (4.2.12), show that
['
45.
n = 0,1.2....
"or.ot
d,
= 2n6(r)
Use the result to prove Equation (4.2.13).
Let X((,) = 1g.1[(, l)/2]. Find rhe rransform of the following functions,. using the
prope(ies of the Fourier transform:
-
(s)
r(-t)
o)
a1t1
(c)
r(t+l)
(d).r(-2+a)
(e)(-l)r(t+l)
,rr
4#
tgti
d-t
(t\
r(z -
l) exp[-j2tl
(l) .t(t) exp[-jzr]
(h)
{j) altl expl-jal
(&) (I
(l) t'
-
J_-
.r
l).r(r
-
l) exp[-l2tl
(t ) dr
4.6. Lgt r(r,1 = .*01-rlu(r) and let.v(r) = -r(r + I) + r(r - l). Find l'(o).
4.7. Using Euler's form,
exp [ltotl = coso, + j sin t,r,
interpret the integral
;-[exeli,rtldLo=0(r)
(Hintr Think of the integral as the sum of a large nunrber of cosines and sines of various
frequencies.)
4t.
Consider ihe two related signals shown in Figure P4.8. Use linearity, time-shifting, and
integration properties. along rvith the transform of a reclangular signal. to lind X(ro) and
Y(,).
.r(r)
Flgure P4.t
4.9. Find the energy of the following signals. using Parseval's theorem.
(a) .r(t) = exP [- 2rl,I0)
(b) .r(l) = n(t) - a(r - 5)
(c) r(t) - 6(tl4)
(d)
4.10.
A
r0) = !ln;Yl)
Caussian-shaped signal
x(t) = I
exP
[-
ar 2]
is applied io a system whose input/output relationship is
Y(r) =.r2(r)
(a) Find the Fourier transform of the output y().
(b) If y(r)
is applied to a second system, which is
h(t) =
I
l'TI rvith imprrl"'response
exP[-br:]
find the output of thc second system.
(c) How does the ourPut change if we inlerchange the order of thc two systems?
4.11. The two formulas
x1o; =
/' r(r)tit and
,(o) =
21,
[' ,xt-"t'
are special cases of the Fourier-transform pair. These two formulas can be used to evaluronr" detinire integrals. choose the appropriate .t (r ) and x(r,r) to verify the following
","
idr'nlities.
r"r
(b)
j" rif
ta
exp
J_
-
ae =
r
[- rr0'lde =
t
n;"L-ft,,a,=,
,u,
; I" -r":'*Lffi o, = t
4.12. Use the relation
[-
-x(t)y.(t)dt
=
i;
I- -x(r,r)Y*'(to)z/o
and a known transform pair to show that
t"ln
t\rdt = 4oi
.-. f' sincl
-+
(b)
= I - exP[-rra]
;;
(a)
Jo 1oz +
J.
(cl
;z
trdt
r' sinll
,-at
= 3r1
r' sin{,
2t
l_-
ldl J_--;'4 dt = 1
4,13. Consider the signal
'r(r) =
e*O 1-
"""t"
(a) Find X(ro).
(b)WriteX(to)asX(o)=R(r,r)+j/(r,r),whereR(to)and/(t,r)arctherealandimaginary
parts of X(ro), resPectivelY'
(c) Take the limit as e -+ 0 of part (b), and show that
9llin1 exp[-cllu(t)l = 116(o) +
Hinr.' Note that
I
--
..
a
[0. ro*0
llm --=-------= = (
.to e" I ot
o=0
[o,
f' ." @".dr=n
J-- e" +
The signal r(r) = exp[-ar]a(l) is inpur inro a system with impulse
h(t) = sirrlVr,rnrr.
(a) Find the Fourier transform Y(ro) of the output.
(b) For what value of a does the energy h the output
response
signal equal one-half the input-sig-
nal energy?
415. A signal has Fourier transform
xkl,r
+
-+ i4,.D 2
= -ct'tj4u+3
'n2
Find the transforms of each of the following sigpats:
(a)
(b)
r(-2r + r)
:0) exp[-/]
@4#
(d) .rO sin(nt)
(e) r(t) * 6(, - 1)
(f) .r(t) *.r(t - 1)
4.1& (a) Show that if r0) is band limited, rhat
x(o) = 6,
is,
for l-l , r"
then
,(r).#=r(r),
c>ro.
(b) Use part (a) to show that
l1-
, l;'
- r) o'='l
t-r
sinct sin(r
;l_- a
Isin
r
c>1
Ir"inor, lol -r
4.17. show that with current as input and voltage as output, the frequency response of an inductor with inductance L is jrol and that of a capacitor of capacitance Cisl/jotC.
4.1& Consider the system shown in Figure P4.18 with RC = 10.
(a) Find tl(r,r) if the output is the voltage across the capacitor, and sketch lA1t.ryl
function of o.
(b) Repeat Part (a) if the output
is the resistor voltage. Commenr on your results.
as a
Flgure
P4.lt
4.19. The input to rhe syslem shown in Figure P4.19 has lhe spectrum shown' ['et
Find
p(t) = costort,
the spcctrum Y(ro) of the outPut if
sin
otu
hr(tl=-;
Consider lhe cases
o-
)
oa and ro,,
s
to
))
.l
tor.
,li (, )
,rt(r)
r(rl
to,,
pl/,l
,,
(r)
X(ro)
It, (t :l
6)0
tigure
a)
P4.19
rL20. Repeat Problem 4.19 if
x1r; = sa
3la!
) (,)
Ael. Ol The Hilbert transform of a signal.r(r) is obtained by passing
system with impulse response h(tl = l/zrt. What is H(r,r)?
(b) What is the Hilbert transform of the signal .r(r) = cosrrr?
4ZL
The autocorrelalion function of signal
R,(t) =
r(r)
rhe signal through an
is defined as
t'
+
J_".rr(r)x(r
t)dr
Show that
&(,) +, lx(o,) l'z
4.41. Suppose that r(r) is the input to a linear system with impulse response ft0).
(a) Show that
&(,): &0) + (r) r ft(-r)
where
(b)
y(l)
is the output of the system.
Show that
R."(r) <-+ | x1o1
424. For the system shown in Figure
.
r(r) =
l'z
I
a1rol
l,
P4.24, assume that
sinro,l sin ro.l
-l- * *,
ro,
(
or,
(a) Find y(t) if 0 < ro, ( to,.
(b) Findy() if ro, < or1< ot.
(c) Find y(t) if tq < r,rr.
H
(tol
,u,{_*,,,,
'btl
O
-l
tn
Flgure P4.20
425. Cbnsider the standard amplitude-modulation system shown in Figure P4.25.
(a) Sketch rhe spectrum of .r(r).
(b) Sketch the spectrum ofy(r) for the following cases:
(l) 0sro.(@o-o(ll) oo - rr,,, s ttr. < &ro
(iii) ro" > @o * o,
(c)
Sketch the spectrum of
z
(l) 0so.(00-o,,
(li) roo - ro,, s (D. ( roo
(lii)
ro.
>
0ro
+ od
(t) for the following
cases:
LTI
(d) Sketch the spectrum of u1t) if o.
(i) o1 < t'r,,,
(il) <o1 < 2an - ,on,
(lll) tor > Zan 't a,,
x
)
oo
*
ro,, and
r (r)
(r)
I;rltcr
m(t\
cos
.4
H
--b)c
b)c
(r0,
cos ojo
lrl
clu\
0
'aia
ojc (,
o;c
(r)
,
ul(,lt
(u)
0
l,
or. os
-a, O u1
-(nt
t's
@l o
Flgure P425
AM demodulation consists of multiplying the received signal y (l ) bv a replica, zl cos r,r,,I. of lhe calricr and lorv-pass filtc rins t h'r rcsultinE signal : ( ).
Such a scheme is called synchronous dcmodulation and assumcs lhllt the phase of the carrier is known at the receiver. If the carrier phase is not known. ; (t ) hecomes
426. As discussed in Section
4.4.1,
z(r) = y(t)Acos(toor + 0)
4Zr.
where 0 is the assumed phase of the carrier.
(a) Assume that the signal .r(l) is band limited to <o-. and find the output;(r) of the
demodulator.
(b) How does i(r) compare rvith the desired output r(t)?
A single-sideband, amplitude-modulated signal is generated usinu the system shown in
Figure P4.27.
(a) Sketch the spectrum ofy(r) for @t= @^.
@) Write a mathematical expression for hrQ).ls it a realizable filter?
iI(o)
lllkol
n(tl
cos LJ"
-(.,O
,
Flgure P4.27
42&
Consider the system shown in Figure P4.28(a). The systems /r1(r ) and hr(t) respectively
have frequency responses
u0
Tho Fourler
M(ul
Tranolorm
Chapter 4
E{u)
n(t,
.r1(r)
(8)
Flgurc P4Jt(a)
H,(o) =
I
)lno{, -
oo) + H6(ro + roo)l
unO
H2@t)
=
-t
4
Wo(L,l
-
oo)
-
I/o(or + roo)t
(a) Sketch the spectrum ofy(t).
O)
Repeat part (a) for the Ho(ro) eho*r in Figure P4.2t(b).
Ho(j.,.t
-qb
- .+
-arg
0
0,O
(ro
i (,h
a',
Ilguro P4rtO)
(b)
+lr,
Ler.r(t) and y(l) be low-paes eignals with bandwidthe of 150 Hz and 350 Hz' rospec'
tively, and let e1r; = .r(l)y(t). The signal e(t) ls oampled uslng an ldeal eampler at intersecs.
vale of
(q) What is the maxlmum value that I, can take wlthout inuoducing aliaring?
I
(b) rf
r(r) = sin(1$-l]),rt,l
=."(Y)
sketch the sPectrum of the samPled signal for (i) 7i = 0.5 ms and (ii) T, = 2 m*
In natural sampling, the eignal .r(t) is multiplied by a train of rectangular pulses, aB shown
in Flgure P4.30.
(r) Find and Bketch the epectrum ofr,(t),
G)
Can
r(t)
be recovered wilhout any dlBtorlion?
Sec.
4.8
21
Problems
X(ul
xr(l)
n(r)
-27
P0'l
Flgure P4.A)
rl3l.
In flat-top sampling, the amplitudc of each pulse in the pulse train .r, (r ) is consunt during
the pulse, but is determined by an instantaneous sample of r(t). as illustrated in Figure
P431(a). The instantaneous sample is chosen to occur at the center of the pulse for con'
venience. This choice is not necessary in general.
(a) Write an expression for x, (t).
O) FindX,(or),
(c) How is this r$ult different from the result in part (a) of Problenr 4.30?
(d) Ueing only a low-pass filter, can r(t) be recovered without any distortion?
(e) Show that you can recover x(l) without any distortion if another filter, H", (or)' is
added, as shown in Figure P4.31(b),
n(,)
=
where
I
{1, ki,i:;
H",(,) =
*'til;'l;1,
l,l <,"
arbitrary,
elsewhere
=
X(or)
xr(')
- tu.
(a)
rr(,
)
(b)
Flgure P4.31 Flat-top sampling of x(t).
0
ola
system thal generates the baseband signal for FM stereoThe
left-speaker
and right-speaker signals are processed to produce
phonic broadcasting.
.rr_(t) + -rr(r) and.r1.(r) - .rfl(I), respcctively,
(a) Skctch the spectrum of ) (I)
(b) Sketch the spectrum of z (r). u(t), and z,(l).
(c) Shorv how to recover both 11(r) and -r*(l).
432. Figure P4.32 diagrams the FDM
XLU,I + XRk )
X
1|t:l - X pk tl
to
lo
r,; X
r.r
103
X
103
(a)
xr (r) + rn (r)
r! (r) -
rR
(r)
LPF
h3ul
0-15 kHz
cos 2rol
l
cos
cos 2@l
oJl,
L= t9kHz
I
(b)
Figure P432
433. Show that the transform of a train of doublets
a'(,
)
-
nT)
<-+
jri,$
)
n
is a train o[ impulses:
6(or
-
zoo).
2t
roo=7
4J4. Find
(a)
the indicated bandwidth of the following signals:
3Wr
-, ' (absolute bandwidth)
sinc
(b)
(c)
exp[-3tlrr(t)
exp[-3rla(t)
fal ,/"
exP
[-
or
']
w(t)
(3-dBbandwidth)
(957e bandwidth)
(rms ban<lwidth)
435. The signal X.(o) shorvn in Figure 4.4.10(b) is a periodic signal in or.
(a) Find the Fourier-series coefficienis of lhis signal.
Sec.
4.8
(b)
223
Problems
Show that
x,(o) =,2.11
(c)
-.,r,.-o["j],]
Using Equation (4.4.1 ) along with the shifting property of the Fourier transform. shorv that
'o = .?.+-@Dtl#3:+l)t
435. Calculate
the time-bandwidth product for the following signals:
(a) r (r) =
I exP [ ,']
-,--.
Y 2tr L" I z'l'
I
I
(Use the radius oIgyration measure for
(b) .t(r1 =
Iand lhe equivalcnt bandwidth
measure for
B')
sin1ur.Wt
----'
(Use the distance between zeros as a measure of I and the absolute bandwidth as a
measure of B.)
(c) .r(r) = Aexpl-orlrr(r). (use the time ar which r(r) drops to l/e of its value at, = 0
as a measure of Iand the 3-dB bandwidth as a measure o[ r9.)
Chapter 5
The Laplace Transform
5.1 INTRODUCTION
In Chapters 3 and 4, we saw how frequency-domain methods are extremely useful in
the study of signals and LTI systems. In those chapters, we demonstrated that Fourier
analysis reduces the convolution operation required to compute the output of LTI systems to just the product of the Fourier transform of the input signal and the frequency
response of the system. One of the problems we can run into is that many of the input
signals we would like to use do not have Fourier transforms. Examples are exp[ct]u(t), c > 0; exp[−at], −∞ < t < ∞; tu(t); and other time signals that are not absolutely integrable. If we are confronted, say, with a system that is driven by a ramp-function input, is there any method of solution other than the time-domain techniques of Chapter 2? The difficulty could be resolved by extending the Fourier transform so that the signal x(t) is expressed as a sum of complex exponentials, exp[st], where the frequency variable is s = σ + jω and thus is not restricted to the imaginary axis only. This is equivalent to multiplying the signal by an exponentially convergent factor. For example, exp[−σt] exp[ct]u(t) satisfies Dirichlet's conditions for σ > c and, therefore, should have a generalized or extended Fourier transform. Such an extended
fore, should have a generalized or extended Fourier transform, Such an extended
transform is known as the bilateral Laplace transform. named after the French mathematician Pierre Simon de Laplace. In tbis chapter, we define the bilateral Laplace
transform (section 5.2) and use the definition to determine a set o[ bilateral transform
pain for eome basic signals.
As mentioned in Chapter 2, any signal .r(r) can be written as the sum of causal and
noncausal sipals, The causal part of .r(t), r(r)rr(r), has a special Laplace trangform
that we refer to as the unilateral Laplace transform or, simply, the Laplace transform.
The unilateral Laplace transform is more often used than the bitateral Laplace trans-
u4
Sec.
5.2
The Bllateral Laplaco Transtorm
form, not only because most of the signals occurring in practice are causal signals, but
also because the response of a causal LTI system to a causal input is causal. In Section
5.3, we define the unilateral Laplace transform and provide some examples to illustrate
how to evaluate such transforms. In Section 5.4, we demonstrate how to evaluate the
bilateral Laplacc transform using the unilateral Laplace transform.
As with other transforms, the Laplace,transform possesses a set of valuable properties that are used repeatedly in various applications. Because of their importance, we
devote Section 5.5 to the development of the properties of the Laplace transform and
give examples to illustrate their use.
Finding the inverse Laplace transform is as important as finding the transform itself,
The inverse Laplace transforrn is defined in terms of a contour integral. In general,
such an integral is not easy to evaluate and requires the use of some theorems ftom the
subiect of complex variables that are beyond the scope of this text. In Section 5.6, we
use the technique of partial fractions to find the inverse laplace transform for the class
of signals that have rational transforms (i,e.. that can be expressed as the ratio of two
polynomials).
In Section 5.7, we develop techniques for determining the simulation diagrams of
continuous-time systems. In Section 5.8, we discus some applications of the laplace
transform, such as in the solution of differential equations, applications to circuit analysis. and applications to control systems. In Section 5,9, we cover the solution of the
state equations in the frequency domain. Finally, in Section 5.10, we discrus the stability of LTI systems in the s domain.
5.2 THE BILATERAL LAPLACE TRANSFORM
The bilateral, or two-sided, Laplace transform of the real-valued signal x(t) is defined as
$$X_B(s) = \int_{-\infty}^{\infty} x(t)\exp[-st]\,dt \qquad (5.2.1)$$
where the complex variable s is, in general, of the form s = σ + jω, with σ and ω the real and imaginary parts, respectively. When σ = 0, s = jω, and Equation (5.2.1) becomes the Fourier transform of x(t), while with σ ≠ 0, the bilateral Laplace transform is the Fourier transform of the signal x(t) exp[−σt]. For convenience, we sometimes denote the bilateral Laplace transform in operator form as ℒ_B{x(t)} and denote the transform relationship between x(t) and X_B(s) as
$$x(t) \leftrightarrow X_B(s) \qquad (5.2.2)$$
Let us now evaluate a number of bilateral Laplace transforms to illustrate the relationship between them and Fourier transforms.
Example 5.2.1
Consider the signal x(t) = exp[−at]u(t). From the definition of the bilateral Laplace transform,
$$X_B(s) = \int_{-\infty}^{\infty}\exp[-at]\exp[-st]\,u(t)\,dt = \int_{0}^{\infty}\exp[-(s+a)t]\,dt = \frac{1}{s+a}$$
As stated earlier, we can look at this bilateral Laplace transform as the Fourier transform of the signal exp[−at] exp[−σt]u(t). This signal has a Fourier transform only if σ > −a. Thus, X_B(s) exists only if Re{s} > −a.
In general, the bilateral Laplace transform converges for some values of Re{s} and not for others. The set of values of s for which it converges, i.e., for which
$$\int_{-\infty}^{\infty}|x(t)|\exp[-\mathrm{Re}\{s\}\,t]\,dt < \infty \qquad (5.2.3)$$
is called the region of absolute convergence or, simply, the region of convergence, and is abbreviated as ROC. It should be stressed that the region of convergence depends on the given signal x(t). For instance, in the preceding example, the ROC is defined by Re{s} > −a whether a is positive or negative. Note also that even though the bilateral Laplace transform exists for all values of a, the Fourier transform exists only if a > 0.
If we restrict our attention to time signals whose Laplace transforms are rational functions of s, i.e., X_B(s) = N(s)/D(s), then clearly, X_B(s) does not converge at the zeros of the polynomial D(s) (the poles of X_B(s)), which leads us to conclude that for rational Laplace transforms, the ROC should not contain any poles.
Example 5.2.2
In this example, we show that two signals can have the same algebraic expression for their bilateral Laplace transform, but different ROCs. Consider the signal
$$x(t) = -\exp[-at]\,u(-t)$$
Its bilateral Laplace transform is
$$X_B(s) = -\int_{-\infty}^{\infty}\exp[-(s+a)t]\,u(-t)\,dt = -\int_{-\infty}^{0}\exp[-(s+a)t]\,dt$$
For this integral to converge, we require that Re{s + a} < 0, or Re{s} < −a, and the bilateral Laplace transform is
$$X_B(s) = \frac{1}{s+a}$$
In spite of the fact that the algebraic expressions for the bilateral Laplace transforms of the two signals in Examples 5.2.1 and 5.2.2 are the same, the two transforms have different ROCs. From these examples, we can conclude that, for signals that exist for positive time only, the behavior of the signal puts a lower bound on the allowable values of Re{s}, whereas for signals that exist for negative time, the behavior of the signal puts an upper bound on the allowable values of Re{s}. Thus, for a given X_B(s), there can be more than one corresponding x(t), depending on the ROC; in other words, the correspondence between x(t) and X_B(s) is not one to one unless the ROC is specified.
A convenient way to display the ROC is in the complex s plane, as shown in Figure 5.2.1. The horizontal axis is usually referred to as the σ axis, and the vertical axis is normally referred to as the jω axis. The shaded region in Figure 5.2.1(a) represents the set of points in the s plane corresponding to the region of convergence for the signal in Example 5.2.1, and the shaded region in Figure 5.2.1(b) represents the region of convergence for the signal in Example 5.2.2.
Figure 5.2.1 s-plane representation of the bilateral Laplace transform.
The ROC can also provide us with information about whether x(t) is Fourier transformable or not. Since the Fourier transform is obtained from the bilateral Laplace transform by setting σ = 0, the region of convergence in this case is a single line (the jω axis). Therefore, if the ROC for X_B(s) includes the jω axis, x(t) is Fourier transformable, and X(ω) can be obtained by replacing s in X_B(s) by jω.
Example 5.2.3
Consider the sum of two real exponentials:
$$x(t) = 3\exp[-2t]u(t) + 4\exp[t]u(-t)$$
Note that for signals that exist for both positive and negative time, the behavior of the signal for negative time puts an upper bound on the allowable values of Re{s}, and the behavior for positive time puts a lower bound on the allowable Re{s}. Therefore, we expect to obtain a strip as the ROC for such signals. The bilateral Laplace transform of x(t) is
$$X_B(s) = \int_{0}^{\infty}3\exp[-(s+2)t]\,dt + \int_{-\infty}^{0}4\exp[-(s-1)t]\,dt$$
The first integral converges for Re{s} > −2, the second integral converges for Re{s} < 1, and the algebraic expression for the bilateral Laplace transform is
$$X_B(s) = \frac{3}{s+2} - \frac{4}{s-1} = \frac{-s-11}{(s+2)(s-1)}, \qquad -2 < \mathrm{Re}\{s\} < 1$$
THE UNILATERAL LAPLACE TRANSFORM
Similar to our definition of Equation (5.2.1), we can define a unilateral or one-sided
transform of a signal r(t) as
X(s1 =
f-x(r)
exp[-sr]dr
(5.3.1)
Some texts use t : 0* or, = 0 as a lower limit. All three lower limits are equivalent if
.r(t) does not contain a singularity function at, = 0. This is because there is no contribution to the area under the function r(t) exp[-sl] at , = 0 even if r(t) is discontinuous at the origin.
The unilateral transform is of particular interest rvhen we are dealing with causal
signals. Recall from our definition in Chapter 2 that if the signal r(t) is causal, we have
r(r) = 0 for t < 0. Thus, the bilateral transform Xr(s) of a causal signal is the same as
the unilateral transform of the signal. Our discussion in Section 5.2 showed that, given
a transform Xr(s), the corresponding time function r(t) is not uniquely specified and
depends on the ROC. For causal signals, however, there is a unique correspondence
between the signal .r(t) and its unilateral transform X(s). This makes for considerable
simplification in analyzing causal signals and systems. In what follows, we will omit the
word "unilateral" and simply refer to X(s) as the Laplace transform of x(t), except
when it is not clear from the context which transform is being used.
Example 5.3.1
In this example, we find the unilateral Laplace transforms of the following signals:
$$x_1(t) = u(t), \quad x_2(t) = \delta(t), \quad x_3(t) = \exp[j2t], \quad x_4(t) = \cos 2t, \quad x_5(t) = \sin 2t$$
From Equation (5.3.1),
$$X_1(s) = \int_{0^-}^{\infty}u(t)\exp[-st]\,dt = \frac{1}{s}, \qquad \mathrm{Re}\{s\} > 0$$
$$X_2(s) = \int_{0^-}^{\infty}\delta(t)\exp[-st]\,dt = 1, \qquad \text{for all } s$$
$$X_3(s) = \int_{0^-}^{\infty}\exp[j2t]\exp[-st]\,dt = \frac{1}{s-j2} = \frac{s}{s^2+4} + j\,\frac{2}{s^2+4}, \qquad \mathrm{Re}\{s\} > 0$$
Since cos 2t = Re{exp[j2t]} and sin 2t = Im{exp[j2t]}, using the linearity of the integral operation, we have
$$X_4(s) = \mathrm{Re}\left\{\frac{1}{s-j2}\right\} = \frac{s}{s^2+4}, \qquad \mathrm{Re}\{s\} > 0$$
$$X_5(s) = \mathrm{Im}\left\{\frac{1}{s-j2}\right\} = \frac{2}{s^2+4}, \qquad \mathrm{Re}\{s\} > 0$$
Table 5.1 lisrs some of the important unilateral Laplace-transform pairs. These are
used repeatedly in applications.
5.4
BILATERAL TRANSFORMS USING
NILATERAL TRANSFORMS
The bilateral I-.aplace transform can be evaluated using the unilateral l-aptace transform if we express the signal .r(r) as the sum of two signals. The first part represents
the behavior of .r(t) in the interval (--,0), and the second parr represents the behavior of x(t) in the interval [0, cc). ln general, any signal that does not contain any singularities (a delta function or its derivatives) at l = 0 can be writtcn as the sum ofa causal
part.r.(t) and a noncausal part.r_(r), i.e.,
-r(,) = r+(,)r,(r) +.r_(l)a(-r)
(s.4.1)
Taking the bilateral Laplace transform of both sides, we have
Xa(s): X.(s) + [o ,-(r)exp[-sr]dr
J _*
Using the substitution t
: -t
yields
Xr(r) = X,(s) +
[ ,-t-')exp[st]r/t
Jo'
If r(t) does not have any singularities at, = 0, then the lower limit in the second term
can be replaced by 0-, and the bilateral Laplace transform beconres
Xr(s) = X.(s) +
9[.r_
(-t)rr(t)],_.
,
(s.4.2)
The Laplace
230
Translorm Chaptor 5
TABLE 5.1
Some Solected Unllsteral LaplacB'Tianstorm Palrc
Transtorm
Slgnal
l.
I
r (t)
2. u(r)
-
u(t
-
I-
a\
3.6(,)
4. 6(t
-
a)
[-
as ]
6.
exp[-at]uo
1.
{
8.
cos oor
u(t)
9.
sin r,ror
z(t)
I
G+ar'
Re
{sl
-;- s -s'+ @6
Rels! > 0
uro
Relsl > 0
s2
+
+ 2rl,1,,
s1s2 + 4roj;
+
-?.6-
s1s2
cos root
z(t)
[-at]
sin r,rot
u(l)
14. l cosroot z (t)
uo
-a
> -a
roo2
s2
expl-atl
15. I sin root
ls| >
nl
11. sin2 oor a(t)
Re[s] > 0
Re
s+a
10. cos2 oror a(r)
>0
for all s
[-asl
#,n=r.2....
exp[-atlu(t)
Rels|
for all s
exp
t'u(t)
13. exp
exp
I
5.
12.
Re(s| > 0
s
-
4rrlozl
s+a
Refsl
>0
Rels|>0
rf;G
Re [s]
> -a
0o(s+a)2+(l)3
Rels|
> -a
Relsl
>0
Re[sl
>0
G+
{-:-116+ rooz)2
(s2
_
2toos
(s2
+
_
0ro2)2
unilateral Laplace transform. Note that if r-(-l)z(t) has
an ROC defined by Re[s] ) o, then x-(t)u(-t) should have an ROC defined by
where9l'lstandforthe
Re
[sl
< -o.
Sec.
5.5
Properties ot the Unilaleral Laplace Transform
231
Example 6.4.1
The bilatcral Laplace transform of the signal .r(l) = exp [arlr.( -r). a > 0, is
Xa(s) = 9lexp[-at]u(t)
=
l.--,
t I \
-l
l'.r/,-,=r-o'
Note that the unilateral l:place transform of
Relslca
exp[arlu(-l)
is zero.
Exanple 6.42
According to Equation (5.4.2), the bilateral Laplace lransform of
.r(r) =
,4
exp[-ar]a(r) + Br2 exp[-br]z(-r),
a and
b>0
ts
Xs(s) =
A
r'.' o
+ :rlB?t)2exp[brlil(r)],--,
\
=ls+a*(t',t
\'("-t'J,.-,'
A28
=tttt
-a<
(' i-6;i'
where
Relsl
Re[sl
> -anRc[s]
<-b
<-b
9lB(-r)2 exp[Drlu(r)] follows from entry 7 in Table 5.1.
Not all signals possess a bilateral Laplace transform. For cxample, the periodic
exponential exp[rout]does not have a bilateral Laplace transfornr because
9r(exp[.;'rour]] =
=
cxpt-(s
/
-
i''r,,,)t)dt
li.*pt-(, - i,u)rl tt + f-exp[- (s -
J
--
Jn
1-)tldt
For the first integral to converge, we need Re[s] < 0, and for lhc second integral to
converge, we need Re ls) > 0. These two restrictions are contradictory, and there is no
value of s for which the transform converges.
In the remainder of this chapter, we restrict our attention to the unilateral l-aplace
transform, which we simply refer to as the Laplace transform.
5.5
PROPERTIES OF THE UNILATERAL
LAPLACE TRANSFORM
There are a number of useful properties of the unilateral Laplacc transform that will
allow some problems to be solved almost by inspection. In this scction we summarize
many of these properties, some of which may be more or less obvious to the reader. By
232
The Laplace
Translorm
Chaptor s
using these properties. it is possiblc to derive many of the transform pairs in Table 5.1.
In this section, we list several of these properties and provide outlines of their proof,
6.6.1 Linearity
If
x,(t)
<-r
X,(s)
rr(t) er Xr(s)
then
atr(t) + bxr(t) <+ axls) + bxz$)
(s.s.1)
where a and D are arbitrary constants. This property is the direct result of the linear
operation ofintegration. The linearity property can be easily extended to a linear combination of an arbitrary number of components and simply means that the Laplace
transform of a linear combination of an arbitrary number of signals is the same linear
combination of the transforms of the individual components. The ROC associated with
a linear combination of terms is the intersection of the ROG for the individual terms.
Example 6.5.1
Suppose we want to lind the laplace transform
of
(A + B expl-btllu(t)
From Table 5.1, we have the transform pair
u(r)
<-+
:rl and
exp[- Dr]z()
* ;;;
Thus, using linearity, we obtain the transform pair
Au(t) + B exp[-
btlu(tl*
The ROC is the intersection of Relsf
Re [s I > max ( -b, 0).
)
f . #;
-D
and Relsf
> 0, and, hence,
is given by
6.6.2 Time Shifting
If x(t)
<-+
X(s), then for any positive real number
r(t - t ;u1l -
,o) e+
lu,
exp[-ros]X(s)
(s.s.2)
The signal x(t t)u(t - ro) is a l,-second right shift of x(r)u(). Therefore, a shift in
time to the right corresponds to multiplication by exp[-los] in the L:place-transform
domain. The proof follows from Equation (5.3.1) with
- ro)u( - ro) substituted for
r(r), to obtain
-
r(
Sec.
5.5
Transform
Properties ol the Unilateral Laplace
9[.r(r
-
- ,,ll = J, -r(r -
,,t)r,(,
t,,)rr(r
-
r,,)
233
cxp[-.rr]rlr
= J,['..1, - r,,)cxp[-srlr/r
,,
Using the transformation of variahles, = ? + ,r. we have
illx(r
-
r',).r(,
-
,,,)l =
=
I:r(t
exp
[-
)
exp[-s(t + t,,)] r/t
t'
tur
| Jrr| x(r ) exP [-
sr] dt
= exp[-r,flX(s)
s in the ROC of x(t) are also in the ROC of .r(t
Note that all values
the ROC associated with .r(r
-
-
tr). Therefore.
ru) is the same as the ROC associated with r(r).
Example 6,63
- o)lza\. This signal can be wrilten as
rect((, - a)/2o) = u(t) - u(t - 2al
Consider lhe reclangular pulse -r(r) = rect((r
Using linearity and time shifting. we find that the l-aplace lransform
x(r)
=:-
?e].
exp[-2asll= t tcIpL
ofr(l)
is
Re(s]>0
It should be clear rhar lhe time shifting property holds for a right shift only. For examfor ) 0, cannot be expressed in terms of the
ple, the t aplace transform of x(t +
'0
'0),
laplace transform of xG). (WhY?)
6.6.3 Shifting in the s Domain
If
x(t)
++
X(s)
then
exp [sor]x(t) e+
X(s
-
so)
(s.5.3)
The proof follorvs directly from the definition of the Laplace transform. Since the new
tranjform is a shifted version of X(s), for any s that is in the RoC of x(t), the values
s + Re[s,,] are in the ROC of exp[sot]r(t).
Exampte 6.6,3
From entry 8 in Table 5.1 and Equation (5.5.3). the Laplace transform of
r(t) = A exP[-atl
cos((oo,
+ e)u(,)
2U
The Laplace
Transform
Chapter 5
ts
X(s) = 91a
exp
[-
atl(cosr,rol cos 0
-
sinront sin 0 )rr(t) ]
-
= 9lA expl- atl cosr,rot cos0 u(t)l
glA exp[-at]
sinr,rot sin0
u(t)l
_ ,4(s + a)cg-st _ __4gr_lI9
(s + a;2 + roo2 (s + a)2 + tofi
_
-
- r,losin0], Re[s] > _a
z4[(s + a) cos_O
(s + atz +.ni
6.6.4 Time Scdine
If
.r(t)
<+
X(s),
Re [sf
> or
then for any positive real number q.
x(cr)
<-r:r(;),
Rels| >
co,
(5.s.4)
The proof follows directly from the definition of the laplace lransform and the appropriate substitution of variables.
Aside from the amplitude factor of l/o, linear scaling in time by a factor of c corresponds to linear scaling in the s plane by a factor of 1/c. Also. for any value ofs in
the ROC of .r(t), the value s/a will be in the ROC of r(ot): that is, the ROC associated
with.r(cr) is a compressed (c > 1) or expanded (a < l) version of the ROC of x(t).
Ilxarnple 6.6.4
Consider the time-scaled unit-srep signal a(ct), where c is an arbirrary positive number.
The l:place transform of a(or ) is
elu(or)l
=**=+
Relsl
>0.
This result is anticipated, since u(cr) = u(t) for o > 0.
6.6.6 Differentiation in the fime Domain
If
.r(t)
<+
X(s)
then
+)
*sX(s) -.r(o-)
(55.s)
Sec.
5.5
Properties of the Unilateral Laplac€
Translorm
235
The proof of this property is obtained by computing the lransform of dt(t)/dt. This
transform is
4+l=
r+expr-srldt
Integrating by parts yields
4+\=
:
expr-srl,,,,l; -
[''1'11-'1
expr-'rr] dr
lim [exp[-stlr(t)l-.r(0-) + s X(s)
l')-
The assumption that X(s) exists implies that
lim [exp[-sr]x(t;l = g
for s in the ROC. Thus,
*{*:,'}=
sx(s)-x(o-)
Therefore, differentiation in the time domain is equivalent to multiplication bys in the
s domain. This permits us to replace operations of calculus by simple algebraic operations on transforms.
The differentiation property can be cxlendcd to yicld
(s.s.6)
Generally speaking, differentiation in the time domain is the most important ProPerty
(next to linearity) of the l-aplacc transform. It makes the l,aplacc transform useful in
applications such as solving differential equations. Specifically, wc can use the l:place
transform to convert any linear differential equation with constant coefficients into an
algebraic equation.
As mentioned. earlier, for rational L:place transforms, the ROCI does not contain
any poles. Now, if X(s) has a first-order pole at s : 0, multiplying by s. as in Equation
(5.5.5), may cancel that pole and result in a new ROC that contains the ROC of r(r).
Therefore, in general, the ROC associated with dt(t)/dt normally contains the ROC
associated with r(l) and can be larger if X(s) has a first-order polc at s = 0.
Erample 6.6.6
The unit step function.t(t) = r(r) has the transform X(s) a l/s, rvith an ROC defined by
Re ls) > 0. Thc derivative of n (r) is the unit-impulse function, rvhose f-aplace transform
is unity for all s with associated ROC extending over the entirc.r planc.
Example 6.6.6
Ler,r1r; = sin2(,)t u(l), for which r(0-) = 0. Note that
x'(t) = 2- sin r,rt cos tol u (r) = r" .;r
rr, ,,,,
236
The Laplace
Translorm
Ohapter 5
From Table 5.1,
9[sin
2r,r
2a
t a (l) ]
s2
+
4r,r
l
and therefore.
Stlsinrorr
u(r)l =
i1ri2i*i,
Example 6.6.7
one of the important applications of the Laplace transform is in solving differential equations with specified initial conditions. As an example, consider the differential equati;n
y"(t) + 3y'(rl + 2yQ) = a, /(0-) = 3. y,(o-) = I
Let f(s) = 9ly()l be the l:place transform of lhe (unknown) solution y(). Using the
differentiation-in-iime property, we have
:tly'(t)l = sY(s) - y(0-) = sY(s) - 3
9ly'()l =s2Y1s; - sy(o-) - )'(0-) = sryls;
-
3s
-
I
If we take the l:place transform of both sides of the differential equation and use the lasr
two expressions, we obtain
s2Y1s;+3sY(.s)+2Y(s1 =
3r*
16
Solving algebraically for Y(s), we ger
Y(s) =
3s+10
7
s+l
(s+2)(s+l)
4
s+
From Table 5.1, we see that
y(t) = 7 exp[-r]u(r)
-
4 expl-2tlu(t)
Ernrnple 6.68
consider the RC circuit shown in Figure 5.5.1(a). The input is the rectangular signal shown
in Figure 5.5.1(b). The circuir is assumed inirially relaxed (zero iniriat condirion).
Oo
(a)
(b)
Figure
55.1
Circuit for Example
5.5.8.
S€c.
5.5
Properiies of the Unilaleral Laplace Translorm
237
The differential equation governing the circuit is
arlrt +
|f
r1r
)h
= u(t)
The input o(r) can be represented in terms of unit-step functions as
u\t) = volu(t
- a)-u(t - b)l
Taking the laplace transform of both sides of the differential equaaion yields
rut";
*
/-fQ
=
!!
1exp1-arl
-
exp[-Ds]l
Solving for /(s), we obtain
\i
=
;:#Rc[exp[-asl -
exp[-sbl
I
By using the time shift propeny, we obtain the current
xi =*["*['#3]
u(t
- a) - *o[t*.')],,,- r,]
The solution is shown in Figure 5.5.2.
yo
R
Figure 55.2 The current waveform
in Example 5.5.8.
6.6.6 Intcgration in the Time Domain
Since differentiation in the time domain corresponds to mulriplication by s in the s
domain, one might conclude that integration in lhe time domain should involve division by s. This is always true if the integral of r(r) does not grow faster than an exponential of the form A exp [-al], that is, if
lim exp [- sr]
for all s such that Re lsl > a.
The integration property can be stated
[ x|'l dr = O
Jn'
as follows:
For any causal signal x(r),
if
The Laplace
23t)
y(i = |
J6
Translorm
Chapter 5
xg)dr
then
Y(s) =
1X(,)
(s.s.7)
s
To prove this result, we stan with
x(s)
:
/-
x()
exp
[-srl
dr
Dividing both sides by s yields
{iq:
I:,u,*54,,
lntegrating the right-hand side by parts, we have
+
:
y(,)
ry54
l- I-ru,
exp[-sr]dr
The first term on the right-hand side evaluates to zero at both limits (at the upper limit
by assumption and at lhe lower linrit because y(0-) = 0), so that
x(s)
s
=
su(t)l
Thus, integration in the time domain is equivalent to division bys in the s domain. Integration and differentiation in the time domain are two of the most commonly used
properties of the l:place transform. They can be used to convert the integration and
differentiation operations into division or multiplication by s, respectively, which are
algebraic operatioru; and, hence, much easier to perform.
6.6.7 Ilifferentiation in the c Domain
Differentiating both sides of Equation (5.3.1) with respect to s, we have
dX(sl t- (-t)r(t)exp[-sr]
=
,F J
dt
Consequently,
-rxgl
<->
ff
(s.s.8)
Since differentiating X(s) does not add new poles (it may increase the order of some
existing poles), the ROC associated with -tr(l) is the same as the the ROC associated
with r(t).
By repeated application of Equation (5.5.8), it follows that
Sec.
5.5
Propenies of the Unilateral Laplace Translorm
239
e d'xlr)
(s.-5.e)
(-r)".r(r)
Erample 6.EO
The lzplace transform of the unit ramp function
tion (5.5.8) as
n(s) =
-
r(t) = n,1r;
"on
nc obtained using Equa-
d
-;r{a(t)l
dr I
dss -s2
Applying Equation (5.5.9), we have, in general.
r'r1r;
- "f'li
(ss.lo)
65.E Modulation
If
r(r)
then for any real number
(-)
X(r)
too,
* | [x{" + ir,l.) + X(s - ior,,)]
x(l) sinr,r,p ,. l1IXt, + jt,ru) - X(s - iro,,)l
.r(l)
cos ronr
(5.5.11)
(5.5.12)
The proof follows from Euler's formula.
exp [lro,,t
]=
cos too,
* i sin to,,l
and the application of the shifting property in the s domain.
Exemple 65.10
The laplace transform of (cosroor)u(t) is obtained from the Laplace transform of u(r)
using the modulaiion property as follows:
y[(cos<,rrr)u(r)] =
tl I *, I \
_;.,u/
zl, + ,..uo
.t
- t'+
.ufi
Similarly, lhe Laplace lransform of exp[-ar] sin torl u(t) is obtained from the Laplace
transform of exp [-at I rr (l ) and the modulation property as
The Laplace
240
e[exp[-arl(sinr,,orla(r)l =
I(#..,
Transform
Chapter 5
#o*o)
-L__
=_
(s+a)2+roo2
6.6.9 Convolution
This property is one of the most widely used properties in the study and analysis of linear systems. Its use reduces the complexity of evaluating the convolution integral to
simple multiplication. The convolution property states that if
.r() er x(s)
h() <+ H(s)
then
r(r) n /r(r) er X(s)H(s)
where the convolution of
x(t)
and ft (r) is
x(t) * h(t) =
Since both ft (l) and
x(t)
(s.s.13)
[
xft)h(t
J_-
- r)dr
are causal signals, the convolution in this case can be reduced to
x(t) s h(t)
= Jg[' xk)h(t - t) dr
Taking the Laplace transform of both sides results in the rransform pair
r(r) x Ir(r) <+
f [f
,f"l
h(t
- t)dt]exp[-sr]dr
Interchanging the order of the integrals, we have
r14 * tO
* f-,t"1[f
Using the change of variables p
0forp<0yields
r(r) x h(r) <+
h(t
- iexp[-sr]ar]at
: t - t in the second integral and noting that ft (p) :
exp[-st ]
exp[-sp]dp ,r,
/- r(t )
[],'0,*,
]
or
x(r1x
11111e+
X(s)H(s)
The ROC associated with X(s)H(s) is the intersection of the ROCs of X(s) and
because of the multiplication process involved. a zero-pole cancellation can occur that results in a larger ROC than the intersection of the ROCs of X(s)
II(s). However,
Sec.
5.5
241
Properties ol tho Unilateral Laplace Transtorm
and H(s). In general, the ROC of X(s)H(s) includes the intersection of the ROCs of
X(s) and H(s) and can be larger if zero-pole calrcellation occurs in the process of multiplying the two transforms.
Eranple 65.11
The integration property can be proved using the convolution property. since
[
.r(r
)
Therefore. the transform of the integral
of rr(r). which is l/s.
dt = :(r) * r(r)
ofr(r)
is the product of
X(s) and the transform
Example 65.12
.
Let r(l) be the rectangular pulse rect ( (, - o)/?a) centered at , = / and with width 2. The
convolution of this pulse with itself can be obtained easily with the help of the convolution prop€rty.
From Example 5.5.2. the transform of .r(t) is
gIPL-lsl
x(s) = 1-The transform of the convolution is
Y(s)
-
1r1,; =
[l -elet' -2"]]'
I 2expl-2asl . exp[- 4asl
=s2---7
-'---,,
Taking the inverse l:place transform of both sides and recognizing that 1/s2 is the trans'
form of lu ( ) yields
v(') ='r(')
"
'(')
:
- l,!u -7r":' ,;:';(t
'i,,','r'
-
4atu(t
-
4a\
This signal is illustrated in Figure 5.53 and is a triangular pulse, as expected.
y(r)=r(r)..r(r)
flgure
SSJ
Convolulion of two
rectangular signals.
The Laplace
242
Translorm
Chapter 5
In Equation (5.5.13). II(s) is called the transfer function of the system whose
impulse response is &(t). This function is the s-domain representation of the LTI system and describes the "transfer" from the input in the s domain, X(s), to the output in
the s domain, Y(s), assuming no initial energy in the system at t 0-. Dividing both
sides of Equation (5.5.13) by X(s), provided that X(s) # 0. gives
:
H(,): ;I:]
(s.s.l4)
That is. the transfer function is equal to the ratio of the transform Y(s) of the output
to the transform X(s) of the input. Equation (5.5.14) allows us to determine the
impulse response of the system from a knowledge of the response y(t) to any nonzero
input r(t).
Example 65.18
Suppose that the input
LTI
x(l) = exp [-2tla(t)
is applied to a relaxed (zero
initial conditions)
system. The oulput of the syslem is
)
r0) = l(exn[-r] + exp[-21 - exp[-3rl)a(r)
Then
!
x(s)=s+2
and
3(s
+
l)
3(s +
2)
3(s +
-1)
Using Equation (5.5.14). we conclude that the transfer function H(s) of the system is
rr('):i-ffi
3G*r1ll!)
_ 2(s2+tu+7)
3(s+l)(s+3)
:3[,.#.*]
from which it follcws thal
+
alry =
Jalr;
Jtexnl-rl
+ exp[-3rtlu(r)
Erample 6.6.14
Consider the LTI system describcd by the differential equation
y-(t) + Zy"(t)
- y'ltl + 5.v(t) = 3r'1r; * r,r,
Sec.
5.5
Properties of the Unilaioral Laplace Translorm
243
Assuming thar the system was initially relaxed. and taking the Laplace transform of both
sides. we obtain
s3Y1s; + 2r2y(s)
Solving for H(s)
= y1t1111(s),
-
sY(s) + 5Y(s) = 3sx(s) + x(s)
we have
.
HIs)
3s+1
----.-------= s'*2s'-s+5
5.5.10 IDttiaI-VaIue Theorem
Let .rO be infinitely differentiable on an interval around
interval); then
r(0.)
(an intinitesimal
r(0. ) = J-+o
lim sX(s)
(s.s.1s)
Equation (5.5.15) implies that the behavior of r(t) for small I is determined by the
behavior of X(s ) for large s. This is another aspect of the inverse relationship between
time- and frequency-domain variables. To establish this result, we expand r(t) in a
Maclaurin series (a Taylor series about t = 0*) to obtain
.r() =
[x(o-)
+ x'(0*)t
*
...
where r(a)(O*) denores the n th derivative of
Laplace transform of both sides yields
x(s) = r(ol) +
* rt')10t)
ri*
],r,r
x(r) evaluated at I = 0*, Taking
#....
the
* ni(.tJ *...
=,i,''"'(0.)#
Multiplying both sides by s and taking the limit as s + @ proves the initial-value theorem. As a generalization, multiplying by s'* t and taking the limit as 5 -v co yields
,t')10*) = lim [s'*lx(s)
-
s'x(0*)
- s'-rr'(0*) -..' - r.rta-rt10+)]
(5.5.16)
This more general form of the initial-Value theorem is simplified ', rt')10*) = 0 for
n < N. In that case,
r(N)(g+
) = lim sN*lX(s)
(s.s.17)
This property is useful, since it allows us to compute the initial valuc of the signal r(t)
and its derivatives directly from the Laplace transform X(s) without having to find the
invene x(t). Note that the right-hand side of Equation (5.5.15) can exist without the
existence ofr(0'). Therefore, the initial-value theorem should be applied only when
.r(0*) eilsts. Note also that the initial-value theorem produces.r(0 '), not x(0-).
The Laplace
244
Translorm
Chapter 5
Example 65.15
Ttie initial value of the signal whose l:place transform is given by
xG)=G+ifu,
a+b
is
r(o*) = 1u,1,Jift1,
=.
The result can be verified by determining.r() ftrst and then substituting r =
example, the inverse Laplace trausform of X(s) is
,<,1
so that
:
;i
tla
r(0*) = c. Note
explatl-bexp[Dr]lu()
that
-f
,[exp[ar]
-
0'. For this
explD4lz(r)
r(0-) = 0.
6.6.11 Final-Value Theorem
r(l)
The final-value theorem allows us to compute the limit of signal
Laplace transform as follows:
lS r0l
= I'S
rxt"l
as,
+
@
from its
(s.s.18)
The final-value theorem is usiful in some applications, such as control theory, where
we may need to find the tinal value (steady-state value) of the output of the system
without solving for the time-domain function. Equation (5.5.18) can be proved using
the differentiation-in-time property. We have
I
x'(r) exp[
Jo
Taking the limit
as s
+
!,g
- stldt:sX(s) - x(0-)
(5.5.19)
0 of both sides of Equation (5.5.19) yields
f
r'Ol exp[
- sr]dr = h'u hxG) - r(0-)l
or
t"
io ''0)
d'= l'S [sx(s) - r(o-)]
Assuming that lirq x(t) exists. this becomes
lS ,01 - ,r(0-) = litu s xG) - r(0-)
.
which, after simplification, results in Equation (5.5.18). One must be careful in using
the final-value theorem. since lim s X(s) can exist even though r(r) does not have a
'l
Sec.
5.5
Propertiss o, the Unilateral Laplace Transform
limit as r ->
cc.
Hence. it is important to know rhat
final-value theorem. For example,
245
liq x(r)
exists betore applying the
if
xG): r,
r---l
,l
then
lirq
sx(s) = ln
u-
o
rL =
But r(t) = costor, which does not have a limit as 1 -e o (cosor oscillates between +l
and -1). Why do we have a discrepancy? To use the final-value theorem, we need the
point s = 0 to be in the ROC of sX(s). (Otherwise we cannot substitute s = 0 in
sX(s).) We liave seen earlier that for rational-function Laplace transforms, the ROC
does not contain any poles. Therefore, to use the frnal-value theorem, all the poles of
sX(s) must be in the left-hand side of the s plane. In our example, sX(s) has two poles
on the imaginary axis.
Ernrnple 6.6.16
The input.r(t) =
function is
eu()
is applied to an automatic position-control system whose transfer
H(s) =
s(s+b)+c
The final value of the output y(r) is obtained as
f11
r0) = l,$ , Yf"l = linl s x(s)H(s)
9
=h,[4
r-ro I s(s + b )+c
s
=A
assuming that the zeros ofs2 + Ds + c are in the left half planc Thus. after a sufficiently
long time, the output follows (tracks) the input r(r).
f,'.ra'nple 6.6.17
Suppose we are interested in the value of the integlal
Jo
t" exvl- atl at
Consider the integral
Y0) =
[
t'exP[-at] dt =
l,.x(t)
dr
Note that the final value of y(r) is the quantity of interest; that is.
246
The Laplace
Translorm Chapters
TABLE 5.2
Somo Solecled Propertlos o, tho Uplaco Translorm
l.
Linearity
X
(s.s.1)
""X,(s)
,.I
2. Time shift
.r(-rn)a(-ro)
3. Frequency shift
exp
4. Time scaling
r(ar). o >
5. Differentiation
tlx(t)/dt
s
X(s) - .r(0-)
(5.s5)
6. lntegration
t'
I r(t) dr
I
7. Multiplication
ll.
9.
10.
I
)
o,.r (r)
by I
Modulation
Convolution
I
nitial value
l. Finnl value
X(s) exp (-sro)
(sor)r(r)
X(s
-
(55.2)
(s.s3)
so)
t/a X(s/a)
0
(5.5.4)
Jn
xttl
(55.7)
"
,r(r)
_dx(s_)
(55.8)
ds
r(t)
cos
r(l)
sin to,,l
I
orr
+
+ioo)I
ztx(s - iroo) X(s
I
4lx(s loo)
-
X(s +ioo)l
(5.s.11)
(s.5.12)
r(r) {,rt(r)
x(s)rr(s)
(s.5.t3)
.r(0.)
lim s X(s)
(5.s.rs)
lH't'l
lia s X(s)
(sJ.t8)
lH Yt,l = l,g "lx(s) = 1ir 1'1",
From Table
.5. I .
xG) =
1"
irJy,i
.
Thercfore.
/=r"exp1-arl
at =
ol,!.,
Table 5.2 summarizes the properties of the laplace transform. These properties.
along with the transform pairs in Table 5.1. can be used to derive other transtorm pairs.
we saw in section 5.2 that with s = o + fro such that Relst is inside the Roc, the
l-aplace transform of .r(r) can be interpreted as the Fourier transform of the exponentially weighted signal .r(r) exp [-or]l thar is.
Sec.
5.6
247
The lnverse Laplaco Transform
X(o +
iot)
=
[_-r|rexp[-or]
expl-iottlttt
Using the inverse Fourier-transform relationship givcn in Equation 14.2.5). we can find
x (l ) exp [-ot] as
.r(t) exp [- ot ] =
* [_.r,
+ iro)exp[lo,r]
r/r,,
Multiplying by exp[ot]. we obtain
,<i : jn [_,rr"
Using the change ofvariables s = o
,o
--
*
7
r,r,
k[
+ iro) exp[(o
+ iot)tltt'',
we get the inverse Laplacc-transform equation
'*
(5.6.1)
*urexp[sr]ds
The integral in Equation (5.6.1) is evaluated along the straight line o + lro in_the comto o * /-, where o is any fixed real number for which Re[s]
plex plan-e from o = o ii a point in ttr" ROC of X(s). Thus, the integral is evaluated along a straight line
that is pirallel to the imaginary axis and at a distance o from it.
Evaiuation of the integral in Equation (5.6.1) requires the use of contour integration
in the complex plane, which is not only difficult, but also outside of the scope of this
text; hence, we will avoid using Equation (5.6.1) to compute the invcrse l-aplace transform. In many cases of interest, the l-aplace transform can be expressed in the form
l-
x(,) = ;[:]
(s.6.2)
where N(s) and D(s) are polynomials in s given by
N(s) = 6,nt- + br,-,s'-l + "'+ bls + D(l
D(s): ansn + an-rs'' l +"'+ ars + 40,
a,*o
The function X(s) given by Equation (5.6.2) is said to be a rational function of s, since
it is a ratio of two folynomiali. We assume thal m < r; that is, lhe degree of N(s) is
strictly less than thl digree of D(s). In this case, the rational function is proper in s.
lf. m = n i.e., when thi rational function is improper, we can use long division to
reduce it to a proper rational function. For proper rational transforms, the inverse
Laplace transfbrm can be determined by utilizing Partial-fraction expansion techniques. Actually, this is what we did in some simple cases, ad hoc and without diffi'
culiy. Appendii D is devoted to the subject of partial fractions. We recommend that
the ieadii not familiar with partial fractions review that appendix before studying the
following examples.
Eramplo 6.8.1
To find the inverse laplace transform of
xtr) =
tr.;tlr*J_ a;l
248.
The Laplace
we factor the polynomial D(s) = 5r + 3s2
-
4s and use the
Transtorm
Chapter 5
partial-fraitions form
'!-r- * 4t.
x(s)=As * s+4
s-1
Using Equation (D.2). we find that the coefficients ,l,,
Ar=
Az=
At=
i = 1,2,3, arc
_iI
7
zo
3
5
and the inverse [aplace transform is
,(,) = -1i,1r1 * fr
"rp1-
4tlu(t) +
J
exptrlu(r)
f,sornple 6.62
In this example, we consider the case where we have repeated factors.
l:place transform
Suppose the
is given by
xc)
=
F#;*_,
The denominator D(s) = s3 - 4s2 + 5s
-
2 can be factored as
D(s)=(s-2)(s-lf
Since we have a repeated factor of order 2, the corresponding partial-fraction form is
x(s)
The coefticient
I
BAzAI
*
*;-'i
=;-,
4" -'rF
can be found using Equation (D.2); we obtain
B=2
The coefficients
A,,i = l,2,
are found using Equations (D.3) and (D.4); we get
Az= |
and
Ar=
d lx2 - 3s\
as l, _ r_/1,",
I
"
=
1r_2$- r)_r4l-1'l
(" - 2)2
so that
2l
x(s)=;-*G-il-
Il"-, =,
Sec.5.6
The lnverse Laplace Translorm
249
The invene Laplace transform is thcrefore
x(tl = 2exp[2lrr(t) + I exp[tlz(t)
Erample 6.63
ln this example, rre treat the
case
of complex conjugate poles (irreducible second-degree
faclors). Lrt
xG) =
rr*,;1
ii
Since we cannol factor lhe denominator. we complete the square as follows:
D(s) = (s +
2\2
+32
Then
*'---t'
(s +212 +3:
---11?-" '
x1s1= (s+2)2+32
By using the shifting properly of the transform, or alternatively. by using entries l2 and
13 in Table 5.1. we find the inverse liplace lransform lo be
r(r) = exp[ - ?,](cos3r)rr() + | exp[- 2](sin
3r)rr(r)
Exanple 6.6.4
As an example of repeated complex conjugate poles, consider the rational function
,,r,,=5rr_3=rr+7.s_3
^\rr_
(sz + l;2
Writing X(s) in partial-fraction form, we have
x(,):t+P.s#;
and therefore,
5s3
-
3s2
+
7s
- 3=
(Ars + B,)(s2 +
l) + Ars + B,
Comparing the co€fficients of the different powers ofs, we oblain
/r
=
5,
Br= -3'
Az=2,
Bz=
0
Thus,
* -?
xlsr=-_$-----1
/-"2+l
s2+l'(s2+112
and the inverse Laplace transform can be determined from Tablc
x(r) =
(5 cosr
-
3
sinl + tsint)u(t)
-5.1
to be
, The Laplace Translorm Chapter 5
..
5,7
SIMULATION DIAGRAMS
FOR CONTI NUOUS-TI ME SYSTEMS
In Section 2.5.3,we introduced two canonical forms to simulate (realize) LTI systems
and showed that, since simulation is basically a synthesis problem. there are several
ways to simulate LTI systems, but all are equivalent. Now consider the Nth order system described by the differential equation
(r,.
P.
o,o,)t(t)= (5,r,r,),1,y
(s.7.r )
Assuming that the system is initially relaxed, and taking the l-aplace transform of both
sides. we obtain
(,, . t,,,,)r1,) = (#,rr,)ru,
(s.7.21
Solving for Y(s)/X(s), we get the transfer function of the system:
) b,,'
H(s) = *=isil
+)
(5.7.3)
a,s'
i-0
Assuming that N = M, we can express Equation (5.7.2)
sn[y(s)
-
bpX(s)] +sN-r[ap-rY(s)
-
as
Dr-,X(s)] +...+aoY(s)
-
boX(s)
=g
Dividing through by sN and solving for Y(s) yields
Y(s) = brx(s)
+ 1
"-,
* 11ar-,x1r; - aN-'v(s)l + "'
[D,X(s)
-
a,Y(s)] +
+
l; [DnX(s) - ay(s)]
(5.7.4)
Thus, Y(s) can be generated by adding all the components on the right-hand side of
Equation (5.7.4). Figure 5.7.1 demonstrates how H(s) is simulated using this technique.
Notice that the figure is similar to Figure 2.5.4, except that each integrator is replaced
by its.transfer function 1/s.
The transfer function in Equation (5.7.3) can also be realized in the second canonical form if we express Equation (5.7.2) as
M
Y(s) =
)
--J$_,
sN
=
b,,'
x(s)
+ ) a,si
i-0
(5',')
(s.7.s)
'1'1
l-
U
tr
E
o!
E
oo
o
a
t:
(,
bo
ll
251
The Laplac€
Translom
Chapter 5
where
I
v(s) =
x(s)
sil+ )
(s.7.6)
o,si
or
("*
!'r,r')
r'1ry = X(s)
(s.7.7)
Therefore. we can generate Y(s) in two steps: First. we generate V(s) from Equation
(5.7.7) and then use Equation (5.7.5) to generate Y(.s) from V(s). The result is shown
in Figure 5.7.2. Again, this figure is similar to Figure 2.5.5, except that each integrator
is replaced by its transfer function l/s.
Example 6.7.1
The two canonical realization forms for the system wilh the transfer function
H(s) =
s2-3s+2
sr+612+lls+5
are shown in Figures 5.7.3 and 5.7.4.
As we saw earlier, the Laplace transform is a useful tool for computing the system
transfer function if the system is described by its differential equation or if the output
is expressed explicitly in terms of the input. The situation changes considerably in cases
where a large number of components or elements are interconnected to form the complete system. In such cases, it is convenient to represent the system by suitably interconnected subsystems, each of which can be separately and easily analyzed. Three of
lhe most common such subsystems involve series (cascade), parallel, and feedback
interconnections.
In the case of cascade interconnections, as shown in Figure 5.7.5,
Y1(s) = H,(s)X(s)
and
Yr(s) = Hr(s)Y,(s)
= [H,(s)H,(s)lX(s)
which shows that the combined transfer function is given by
H(s) = H,(s)Hr(s)
(5.7.8)
We note that Equation (5.7.8) is valid only if there is no initial energy in either system. lt is also implied that connecting the second system to the first does not affect the
output of the latter. In short. the transfer function of first subsystem. Ht(s), is computed unrler thc assumption that the second subsystern with lransfer function H,(s) is
not connected. In other rvords. the inputioutput relationship of the first subsystem
c
+
c
a)
CJ
+
cl)
tr
oo
E
+
.E
at
|.--
EO
253
2il
Th€ Laplace
Translorm
Chaptor 5
.r(r)
v(t)
Flgure
5.73
Simulation diagram using first canonical form for Exam-
ple 5.7.1.
Egure 5.7.4 Simulation diagram using second canonical form for Erample'5.7.1.
remains unchanged, regardless of whether Hr(s) is connected to it. If this assumption
is not satisfied. H,(s) must be computed.under loading conditions. i.e., when Hr(s) is
connected to it.
If there are N systems connected in cascade. then their overall transfer function is
H(s) = 17,1r1}I2(s) ... Hn(s)
(5.7.e)
Sec.
5.7
Simulation Diagrams lor Contlnuous-Time Systems
t'1(r)
,/t(r)
ft
255
Flgure 5.7,5 Cascade
interconncction of two subsystems.
(s)
Y! (s)
,t/,(r)
Flgure 5.7.6 Parallel
interconnection of two subsystems.
Using the convolution property, the impulse response of the overall system is
h(t) = h,(t) {'fr2(r) n ... * ft/v(,)
If two subsystems are connected in parallel,
system has no initial energy, then the outPut
as shown
(5'7'10)
in Figure 5.7.6. and each sub-
Y(s)=Y'(s)+Yr(s)
= I/r(s)x(s) + Hr(s)X(s)
= [Hr(s) + H,(s)]X(s)
and the overall transfer function is
H(s)=f/,(s)+Hz(s)
(s.7.1 1 )
For N subsystems connected in parallel. the overall transfer function is
H(s) = H,(s) + H,(s) + ... + HN(.r)
(s.7.t2)
From the linearity of the L:place transform, the impulse response of the overall system is
(5.7.13)
h(t) = h,(t) + i,(,) + ..' + hNQ)
These two results are consistent with those obtained in Chapter 2 for the same
in tercon necl ion s,
Eranple 6.72
The transfer function of the system described in Example 5.7.1 also can be written as
s-1s-2
I
H(s)=;+i
iiz,+r
This system can be realized as a cascade of three subsystenrs. as shown in Figure 5.7.7'
Each iubsystem is composed of a pole-zero combination. The same system can be realized
in parallel, too. This can be done by expanding H(s) using the method of partial fractions
as follows:
256
The Laplace
,arrffiru,
Translorm
Chapter 5
Figure5.7.7 Cascade-form
simulation for Example 5.7.2.
t2
t +2
,
l0
Iigure 5.74 Parallel-form
simulation for Example 5.7.2.
s +3
H(s) =
A parallel interconnection
10
*
-'2-Js+ I s+2 s+3
is shown in Figure 5.7.8.
The connection in Figure 5.7.9 is called a positive feedback system. The output of
the first system Hr(s) is fed back to the summer through the system Hr(s); hence the
name "feedback connection." Note that if the feedback loop is disconnected, the transfer function from X(s) to Y(s) is H,(s), and hence H,(s) is called the open-loop transfer function. The system with transfer function Hr(s) is called a feedback system. The
rvhole system is called a closed-loop system.
/r2
(s)
Iigure 5.7.9 Feedback connection.
We assume that each system has no initial energy and that the feedback system does
not load the open-loop system. Let e(r) be the input signal to the svstem with transfer
function II,(s). Then
Y(s)
:
E(s)H,(.s)
E(s) = .Y1.'; + //r(.s)Y(.s)
so that
Y(.s) =
s,1r; [x(s) + H,(s)Y(s)]
Solving for the ratio Y(s)/,Y(.s) yields the transfer function of the closed-loop system:
Sec.
5.8
Applications ot lhe Laplace Translorm
257
(s.7.1.t)
Thus, the closed-loop transfer function is equal to the open-loop transfer function
divided by I minus the product of the transfer functions of the open-loop and feedback
systems. If the adder in Figure 5.7.9 is changed to a subtractor. the system is called a
negative feedback system. and the closed-loop transfer function changes to
H(s) =
5.8
,r illlL,,
(s.7. l s)
APPLICATIONS OF THE LAPLACE
TRANSFORM
The Laplace transform can bc applied to a number of problenrs in system analysis and
design. These applications depend on the properties of the l-aplace transform, especially those associated with differentiation. integration, and convolution..
In this section. we discuss three applications. beginning rvith the solution of differential equaiions.
6.E.1 Solution of Differential Equations
One of the most common uses of the Laplace transform is to solve linear. constantcoefficient differential equations. As we saw in Section 2.5, such equations are used to
model continuous-time LTI systems. Solving these equations rlepcnds on the differentiation property of the laplace transform. The procedurc is st raightfonvard and systematic, and we summarize it in the following stcps:
l.
For a given set of initial conditions, take the Laplace transfornr of both sides of thc
differential equation to ohtain an algebraic equation in Y(s ).
2. Solve the algebraic equalion for Y(s).
3. Take the inverse Laplace transform to obtain y(r,).
Erample 6.8.1
Consider the second-ordcr, linear. constanl-coefficient diffcrcr:tial cquation
.v"(r) + sr,',(r) +
6f(/) = exp[-r]u(r),
.v',(0
)- Ilnd.y(0-)=2
Taking the [-aplace transfornr of hoth sidcs resul]s in
[s:t'(s) Zs-ll+.s[sY(.r)-2] +6]'(s)-. f
Solving for Y(s ) yields
r(s)=.2r-+l3s1t2(.r+l)(s:+5s+6)
l6e
2(s+ l) s+2 2(s+3t
,
The Laplac€ Transform Chapl€r 5
258
Taking the inverse Laplace transform. we obtain
y(i
= ()exp[-r] + 6exp[-2r] - 2"*o,
-
srl),r(r)
Higher order differential equations can be solved using the same procedure.
6.8.2 Appltcatlon to RLC CircultAnalysls
In the analysis of circuits. the Laplace transform can be carried one step furrher by
transforming the circuit itself rather than the differential equation. The s-domain current-voltage equivalent relations for arbitrary R. L, and C are as follows:
Resietore.
The s domain current-voltage characterization of a resistor with
resistanae R is obtained by taking the l.aplace transforrn of the current-voltage relationship in the time domain, Ri^(t) = aa(t). This yields
7^(s) = RIr(s)
(s.8.r )
Induotors.
For an inductor with inductance L and time-domain current-voltage relationship Ldir(t)/dt = at.Ql. the s-domain characterization is
l/r.(t)
=sllr(s) - LiL(0-)
(5.8.2)
That is, an energized inductor (an inductor with nonzero initial conditions) at , ='0 is equivalent to an unenergized inductor al , = 0- in series with an impulsive voltagc
source with strength LiL@-). This impulsive source is called an initial-condition generator. Alternatively, Equation (5.8.2) can be written as
/r.(s) =
,l
,, trl +
(o-)
"
(s.8.3)
That is. an energized inductor at t = 0- is equivalent to an unenergized inductor at
r - 0- in parallel with a step-function current source. The height of the step function
is i,.(0-).
Capacitore.
For a capacitor with capacitance C and time-domain current-voltage relationship Ctlur.(t)/ th = ,({r). the.r-domain characterization is
(.(s) = .sC tz,.(s) -
Ca,
(0-)
(s.8.4)
That is. a charged capacitor (a capacitor with nonzero initial conditions) at r = 0- is
equivalent to an uncharged capacitor at I = 0- in parallel with an impulsive curent
source. The strength of the impulsive source is Co(O - ), and the source itself is called
an initial-condition generator. Equation (-5.tt.4) also can be written as
rz.{s) =
r!
r,
trl +
t1(:-)
(s.8.5)
S€c.
5.8
Applications ol th6 Laplace Translorm
Thus, a charged capacitor can be replaced by an uncharged capacitor in series with a
step-function voltage source. The height of the step function is u.(0-).
we can similarly write Kirchhoffs laws in the s domain. The equivalent statement
of the current law is that at any node of an equivalent circuit. the algebraic sum of the
currents in the s domain is zero; i.e..
)
rr(') = o
(s.8.6)
k
The voltage law states that around any loop in an equivalent circuit, the algebraic sum
of the voltages in the s domain is zero; i.e.,
)
v*1'1 = e
(5.8.7)
k
caution must
be exercised when assigring the polarity of the
initial-condition generators.
Eranfle 6AJ
Consider the circuit shown in Figure 5.E.t(a) with dr(o-) = -2. uc(o-) = 2, aod
u(t). The equivalent s-domain circuir is shown in Figu'e 5,8.1(b).
Writing the node equation at node 1, we obtain
2
- rQ-t/,--.? - sy(s) - y(s, = s
Rr- 2tt'
+
c.l F+
Rz= I
o
2
x(s) =
v
(t)
+
ls-
!
s
(b)
Flgure
53.1
Circuit for Example 5.8.2.
I/(s)
r(r) =
-^ The Laplace Translorm Chapter
260
Solving for Y(s)
5
lelds
2s2+6s+l
y(s)=dF+3s+,
I
5rl3 +
5
(s+3/2)'1+$/1142
3s
1 5
s+3/2
3s 3 (s + 3/2)2 + (11/4, '
s
t/ilz
V3 1, + 3/U, + O/alDz
Taking the inverse l:place uansform of both sides results in
r(r) = |arr).
i"*[j,]("o,f
,),r,r *
rl *,[r,,](.,^f ,),<o
The analysis of any circuit can be carried out using this procedure.
6.8.3 Application to Control
One of the major applications of the laplace transform is in the study of control systems. Many important and practical problems can be formulated as control problems.
Examples can be found in many areas, such as communications systems, radar systems,
and speed control.
Consider the control system shown in Figure 5.8.2. The system is composed of two
subsystems. The fust subsystem is called the plant and has a known transfer function
H(s). The second subsystem is called the controller and is designed to achieve a certain system performance. The input to the system is the reference signal r(l). The signal nr(t) is introduced to model any disturbance (noise) in the system. The difference
between the reference and the output signals is an error signal
e0)=r(t)-y(t)
The error signat is applied to the controller, whose function is to force e(l) to zero as
r -+ o; that is,
lim e(t; = 6
This condition implies that the system output follows the reference signal r(r). This
type of sJmtem performance is called tracking in the presence of the disturbance ar(t).
H.(s)
Figure
5.82
Block diagrarp of g control system.
Sec.
5.8
Applications ol the Laplace Transform
261
The following example demonstrates how to design the controllcr to achieve the tracking effect.
Example
'
6.tJ
Suppose that thc
LTI
system we have to control has the lransfcr function
rrrrl = {01
D (s)
(s.8.8)
Lrt the input be r(l) = Au(t) and the disturbance be ra0) = Bx(/), where A and B are
constants. Becausi: of linearity. we can divide the problem into trvu simpler problems, one
with input r(t) anrJ the olher wirh input ur(r). That is. the ourpul I(r) is expressed as the
sum of lwo components. The first component is due to r(t) when rrr(l) = 0 and is labeled
.v,(r). It can be easily verified that
y,(,) =
i{#:i,o,,fr, "u,
where R(s) is the l:place transform of r(r). The second component is due to zo(r) when
r(l) = 0 and has the laplace transform
y,6) =
G-#-lo.r(,
w(.)
where W(s) is thc Laplacc transform of the disturbance ?o(/).'l-hc complete output has
the Laplace transform
Y(s)=Y,(")+Yr(s)
=,
=
We havc to design
l/.(s)
h*5%,
R(s)
+, * #1;q|r1.i *('r
ir_('ll4QA
1_g,l
s[1 + H.(s)H(s)]
such that
(s.8.e)
r(r) tracks.v(r); that
lg1 Y(t)
:
is,
a
Lrt H.(s) = N.(s)/D.(s). Then we can write
rv(s)[N.(s)A + D"1s) 8]
ttD(t)D.ir) + N(rllv.trll
y(s'
'-
[,et us assume thal the rcal parts of all lhe zeros of D(s)D.(.s)
neBative. Then by using the final-value theorem, it follows thar
flg
rttl:
li4
+ N(s)N.(s) are stricrly
sY(s)
+_D.(s)Bl
_,,_
- i-d !Ell4L{94
+
o(')o.(') N(s)N.(s)
For this to be equal to A, one needs that
li$ A.t"l
stituting in the expressron for Y(s), we obtain
= 0 or
D,.(.s ) has a zero at s
(s.8.r0)
= 0. Sub-
.
,,262
The Laplac€
fsro=#ffi#
Eranple
Translorm Chapter 5
=,
6.8.4
Consider the control system shown in Figure 5.8.3. This system represents an automatic
position-control system that can be used in a tracking antenna or in an antiaircraft gun
mount. The input r(r) is the desired angular position of the object to be tracked, and the
outPut is the position of the anlenna.
.10|
Flgure 5.E3 Block diagram of a
tracking antenna.
..:,
.l;,.i
The first subsystem is an amptilier with transfer function Hr(s) = 8, and the second sub-
systemisamotorwithtransferfunctionH2(s):1/[s(s+o)],where0<c<V32.btus
investigate the step resPonse of the system as the parameter o changes. The output Y(c) is
Yc) =
:"t'r
=1-f*]#oA,rr
8
i1s'+"s+s)
s+o
I
rl
ss2
+aJ+8
The restriction 0 ( a ( \6j is chosen ro ensure that the roots of the polynomial s2 + as
* 8 are complex numbers and lie in the left half plane. The reason for this will become
clear in Section 5.10.
The step response of this system is obtained by taking the inverse l.aplace transform
of Y(s) to yield
,or = (r -
"*[+]{*'[\l[ --(T,]
,])..u,
The step response y(l) for two values of q, namely, c = 2 and a = 3, is shown in Figure
5.8.4. Note that the response is oscillatory with overshoots of 30% and l4olo, resPectively.
The time required for the response to rise from l0% to fr)o/o of its final value is called the
rise time. The first system hai a rise time of 0.48 s, and the second system has a rise time
of O.60 s. Systems with longer rise times are inferior (sluggish) to lhose with shorter rise
times. Reducing the rise time increases the ovenhoot. however. and high overshoots may
not be acceptable in some applications.
Soc.
5.9
Slate Equations and lhs Laplace Translorm
263
v(t)
l.:m
t.l7
I
Hgure 5J.4
5.9
Step response of an antenna tracking sysrem.
STATE EQUATIONS AND THE LAPLACE
TRANSFORM
We saw that the laplace transform is an efficient and convenient tool in solving differential equations. In Chapter 2, we introduced the concept of state variables and
demonstrated that any LTI system can be described by a set of first-order differential
equations called state equations.
Using the time-domain differentiation property of the laplace transform, we can
reduce this set of differential equations to a set of a.lgebraic equations. Consider the
LTI system described by
v'G)=Av(r)+br(r)
y(t)=cv0)+dx(t)
Taking the Laplace transform of Equation (5.9.1). we obtain
sY(s)
which can be written as
-
v(0-) = AY(s) + bX('t)
(s.e,1)
(s.e,2)
The Laplace
(sI
where I
is
-
Transtorm
A)v(s) = v(0-) + bX(s)
the unit matrix. l,eft multiplying borh sides by the invenie of (sI
V(s) = (sI
Chapter s
-
- A)-r v(0-) + (sI - A)-r bx(s)
A), we obtain
(s,e.3)
The [-aplace transform of the output eguation is
Y(s)=iY1t;+dX(s)
Substituting for V(s) from Equation (5.9.3), we obtain
Y(s) = .1r1
A)-r v(0-) + [c(sl - A)-'b + d]X(s)
-
(S.9.4)
The first term is the transform of the output when the input is set to zero and is identified as the Laplace transform of the zero-input component ofy(l), The second term
is the l-aplace transform of the output when the initial state vector is zero and is identified-as the Laplace transform of the zero-state component of .vG). In chapter 2, we
saw that the solution to Equation (5.9.1) is given by
vO = explArl v(0-) + [' exp[a(r
J6
- r)]bx(r)dt
(s.e.s)
(see Equation (2.6.13) with ro = 0-.) The integral on the right side of Equation (5.9.5)
represents the convolution of the signals exp [Al] and bx(r). Thus, the Iaplace transformation of Equation (5.9.5) yields
Y(s)
:
9lexp[Ar]] v(0-) + 9[exp[Ar]l bx(s)
(s.e.5)
A comparison of Equations (5.9.3) and (5.9.6) shows that
9{exp[Ar]l = (sI - A)-t =,D(s)
(s.e.7)
yhere o(s) represents the Laplace transform of the state-transition matrix erp[Ad.
O(s) is usually referred to as the resolvent matrix.
Equation (5.9.7) gives us a convenient alrernative method for determining exp[Al]:
___
we frrst form the matrix sI - A and then take the inverse Laplace transform of
(sI - a1-t
With zero initial conditions, Equation (5.9.4) becomes
y(s)
:
[c(sr
- A)-r u + d]X(s)
(s.e.8)
and hence, the transfer function of the system can be written as
II(s) = c[sl - A]-t b + d = cO (s)b + d
&ample
6.9.1
Consider the system described by
''(,)
=
y(r) =
[-, l]"u, . [l],u,
[-l -l]v(r) + 2r(r)
(s.e.e)
Sec.
5.9
26s
Stats Equations and the Laplace Translorm
with
=
"(o )
[;,]
The resolvent matrix of this system is
-a l-'
*,rr=['r*' ,-3-]
Using Appendix C. we obtain
Fs-3 4 I
l[--r-1
-_.-r -l
,,
t". t]! --,l u.,t,.,'r-
o(s) = 1=---'-t
(s+3)(s-3)+8
=
|
Ltr
I
-
,Xr
- tl t' * rlfr - rl.l
The transfer function is obtained usiog Equation (5.9.9):
[
lr(s) =
1-, -,l l
("
.
,-3
']t
4
- 1) (s -,'lt"-
"
L(r+tX'-1) G+txr-D ]trt.,
2s:-4s-lE
(s+l)(s-1)
Taking the inverse l:place tralsform. we obtain
,l0) = 2[6(t) + 3 exp[-r]uO
-
s
exp[t]z(t)l
The zero-input response of the system is
Yl(s) =
_
61t1
-
A)-rv(0-)
-2(s +
13)
(s+l)(s-1)
and the zero-state response is
- A)-'bx(s) + 2x(s)
_2'2-4s-lEyr.\
(s + 1)(s - l) "r"t
Yr(s) = c(sl
The overall response is
,r,r=ffiffi*ffi$*t'r
The step response of this system is obtained by substituting X(s)
=
1/s, eo that
The Laplace Translorm Chapter 5
EHd
yr,r =
=
1,*l,figO.,ti.-,ii,
_,i,
--:l$-:-t-!-
s(s+l)(s-l)
_18+ 6 _ 24
s s+l s-l
Taking the inverse laplace transform of both sides yields
y(r) = [18 + 6 exp[-r]
k -rr""*
kt
us
ftid
-
24
exp[r]lu(r)
the state-tntrsition matrix of lhe s,6tem in Example 5.9.1. T'he resolvent matrix is
rD(s )
l)(s
-2
- l)
(s
+ l)(s -
s+3
:r]
The various elements of O(r) are obtained by taking the inverse laplace transform of each
entry in the matrix O(s). Doing so. we obtain
.(,)
=
[r.:;ir-,i,_-".#i, _ffi L-,ir..ffiir],or
.10 STABILITY lN THE s DOMAIN
stability is an importut issue in system design. In chapter 2, we showed that the stability ofa system can be examined either through the impulse response ofthe system
or through the eigenvalues of the state-transition matrix. Specifically, we demonstrated
that for a stable system, the output, as well as all internal variables, should remain
bounded for any bounded input. A system that satisfies this condition is called a
bounded-input, bounded-output (BIBO) stable system.
Stability can also be examined in the s domain through the transfer function H(s).
The transfer function for any LTI system of the type we have been discussing always
has the form of a ratio of two polynomials in s, Since any polynomial can be factored
in terms of its roots, the rational transfer function can always be written in the following form (assuming that the degree of N(s) is less than the degree of D(s)):
H(s)' =
A' *... + A-4-'-'*
s-sl s-J2
J -s^r
(s.l0.r)
The zeros s* of the denominator are the poles of H(s ) and, in general. may be complex
numbers. If the coefficients of the goveming differential equation are real. then the
complex roots oocur in conjugate pairs. If all the poles are distinct. then they are sim-
Sec.
5.10
Stabiliry in the s Domain
267
ple poles. If one of the poles corresponds to a repeated facror of the fonir (s - sr)',
then it is a multiple-order pole with order rn. The impulse response of the system, i(t),
is obtained by taking the inverse Laplace transform of Equation (5.10.1). From entry
6 in Table 5.1. the &th pole contributes the term ho$) = Ao exp [.r*r] to i (t). Thus, the
behavior of the system depends on the location of the pole in the s plane. A pole can
be in the left half of the s plane, on the imaginary axis, or in the right half of the s plane.
Also, it may be a simple or multiple-order pole. The following is a discussion of the
effects of the location and order of the pole on the stability of [,TI systems,
L. Simple Poles in the Left Half Plane. In this case, the pole has the form
s*=ooljro*.
oo(0
and the impulse-response component of the system, ho(t), corresponding to this pole is
hoQ)
= Aoexp[(oo + jtro)r] + Af exp[(oo - lr*)rl
= l,4ol exp[oor](exp[i(toor + 9r)] + exp[-i(oror + 9r)])
= Zlerl exp[oot] cos(orot + p*).
or < 0
(s.10.2)
where
Ar= lA*l exp[rprl
As,
increases, this component of the impulse response dccays to zero and thus
results in a stable system. Thereforc, systcrns with only sinrplc poles in the left half
plane are stable.
2. Simple Poles on the Imaginary Axis. This case can be considcred a special case of
Equation (5.10.2) with oo = 0. The kth component in the impulse response is then
holt'1
:zlerl
cos(ur*t + B^)
Note that there is no exponcntial dampingl that is, the rcsponse does not decay as
time progresses. It may appear that lhe response to the bounded input is also
boundcd. This is not truc if the system is excited by a cosinc function with the same
frequency to^. In that case, a multiple-order pole of the fornr
__
1s2
B"
+ ol12
appears in the L:place transform of the output. This term gives rise to a time response
B
2ro '
stn
'ot
that increases without bound as I increases. Physically', o^ is the natural frequency
of the system. If the input frequency matches the natural lrcquency, the system resonates and the output grorvs without bound. An example is the lossless (nonresistive) LC circuit. A system rvith polcs on the imaginary axis is sometimes called a
marginally stable system.
3. Simple Poles in the Right Half Plane. If the system function has poles in the right
half plane, then the sys:em response is of the form
268
The Laplace
h*(t)
:2lArl
explootlcos(oor +
po),
qr
Translorm
)
Chapter s
0
Because of the increasing exponential term, the output of the system increases without bound, even for bounded input. Systems for which potes are in the right half
plane are unstable,
4.' Multiple-order Poles in the Lefi Half Plane. A pole of order rn in the left half plane
gives rise to a response of the form (see entry 7 in Table 5.1)
h*
= lA*l r- exp [oor] cos (trl*, + pr ),
oo
(
0
For negative values ofo1, the exponential function decreases faster than the polynomial t"'. Thus, the response decays as t fuicreaies, and a system with such poles is stable.
5. Multiple-order Poles on the Imaginary Axb, In this case, the response of the system
takes the form
hk= lAkl t-cos(root + p*)
This term increases with time, and therefore, the system is unstabte.
6. Multiple-order Poles in rhe Right Half Plane.Tlre system response is
hr
= lAol fl exp[ort]
cos(orr,
+
pr),
oo
)
0
Because o^ > 0. the response increases with time, and therefore, the system is unstable.
In sum. a LTI (causal) system is stable if all iS poles are in the open left half plane
(the region of the romplex plane consisting of all points to the left of, but not including. the lo-axis), A LTI system is marginally stable if it has simple poles on the jro-axis.
An LTI system is unstable if it has poles in the right half plane or multiple poles on
the ,1t'r-axis.
.11 SUMMARY
.
fhe bilateral laplace transform ofr(l)
Xa(s)
?
a
o
is defrned by
= [ ,(r) exp[ -
sr] dr
The values of s for which X(s) converges (X(s) exists) constitute the region of convergence (ROC).
The transformation r(r) e+ X(s) is not one to one unless the ROC is specifred.
The unilateral Laplace transform is defined as
X(s) =
i
Jl
,(r) exp[ - s,] d,
The bilateral and the unilateral Laplace transforms are related by
Xa(") = X, (s) + Y[.r_(-r)x(r)1.-_.
where X*(s) is the unilateral Laplace transform of the causal part of x(r) and.r_(t)
is the noncausal part of .r(t).
Sec.
r
5.11
Summary
269
Differentiation in the time domain is equivatent -ttFrnultiplication by s in the
s
domain: that is.
Ir/r(r]l
u1_a',-l = sx(s) -'r(o-)
r
Integration in the time domain
is equivalent to
division by s in rhe s domain; that is,
,l )=-x(s)
LlI'-t x(t\ar
.tJ_.
J s
-
r
Convolution in the time domain
is
equivalent to multiplication in the s domain: that is,
y(t) = .r(t) * &(t) er Y(s) = .11'11r,r,
.
The initial-value theorem allorvs us to compute the initial valuc of the signal
and is derivatives directly from X(s):
r(')(0') :1g1 [s'*tX(s) - s't(0*) - s"-rr'(0*)
r
.
o
...
-
sjr(,,-r)(0+)]
The final-value theorem enables us to find the final value of ,r(r) from X(s):
lg r(,):
.
-
r(l)
liru sX(s)
Partial-fraction expansion can be used to find the inverse laplace transform of signals whose Laplace transforms are rational functions of s.
There are many applications of the Laplace transform: among them are the solution
of differential equations, the analysis of electrical circuits, and the design and analysis of control systems.
If two subsystems with transfer functions H, (s) and H2(s) are connected in parallel,
then the overall transfer function H(s) is
H(s)=Hr(s)+l/2(.t')
o If two subsvstems rvith transfer functions 11,(s) and Hr(.s) are connected in series,
then the overall transfer function
II(s)
It(s):
o
e
H,(s)Hr(s)
The closed-loop transfer function of a negative-feedback system with openJoop
transfer function Il,(s) and feedback transfer function Hz(s) is
H(s) =
.
is
.r #ifiro
Simulation diagrams for LTI systems can be obtained in lhe frequency domain.
These diagrams can be used to obtain representations of state variables.
The solution to the state equation can be written in the s domain as
V(s)
:
o(s)v(0-) + o(.r)bx(s)
Y(s) = 3Y1t1 + dX(.s)
27O.
r
The Laplac€
Transform Chapter s
The matrix
rD(s)
= (sI
- A)-t= glexp[Ar]]
is called the resolvent matrix.
o
The transfer function of a system can be written as
H(s)=ctD(s)b+d
o
5.12
An LTI system is stable if and only if all is poles are in the open left half plane. An
LTI system is marginally stable if it has only simple poles on the jo-axis; otherwise
it is unstable.
CHECKLIST OF IMPORTANT TERMS
Bllateral Laplac€ translorm
Cascade lntorconnecfl on
Causal pan ol r(r)
Contsollel
Convolutlon proporty
Fecdback lnterconnecllon
Flnal-value theorem
lnhlal-condltlona generator
lnltlal-Yatus theorem
tnvenre laplace translorm
Klrchholfe cunent law
Klrchhotfa Yoltage taw
Left halt plane
Muluple-ordsr pole
Negatlve leodback
5.13
Noncaueat part ot r(r)
Parallel lnterconnectlon
Partlal-rracff on expanslon
Plant
Poles o, O(s)
Poeltlve lesdback
Ratlonai lunctlon
Reglon of convergence
Slmple pole
Slmulatlon dlagram
s plane
Transler functlon
Unllateral Laptace tranelom
Zero-lnput reaponee
Zereetate rcaponee
PROBLEMS
5.1. Find the bilateral laplace transform and the ROC of the following functions:
(e) exp [r + l]
(b) exp[bt]u(-r)
(cl lrl
(d) (l - lrl)
(e) exp [ -2lr l]
(f)
t" exp[-l]z(-r)
(g) (cosat)u ( -t)
(h) (sinhat)a(-t)
52
Use the definition in Equation (5.3.1) to determine the unilateral Laplace transforms of
the following signals:
(i)
.r,(t) = rrect[(r
-
l)/21
Sec.
5.13 Problems
271
(ii).rr(r) = r,(f) + i 60)
Ir]
(iii),r(,) = recr[r.l
53.
Use Equation (5.4.2) to evaluate thc bilateral Laplace transform of the signals in Problem 5.1.
5.4. The Laplace transform of a signal .r(t) that
.X(.t
)
is zero for I
<
0 is
s3+2s2+3s+2
=;.r+^j +1,.1'+b+2
Determine the Laplace lransform of the following signals:
(a) y0) =
"(i)
(b) y0) = tr(t)
(c) y(t) = tr( - l)
dr (t\
(dl
y() =
_i.
(e) y(r) = (r
-
l)x(r
-
D
* d';!)
rt
(f)y(t)=lx(r)dt
J6
55.
Derive entry 5 in Table 5.1.
5.6. Shorv that
y.tt"u(t)t =
Oljit,,,
o
where
r(u) =
5.7.
f
r'-'exp[-rldr
Use the property
f(a + l) = uf (u)
to show that the result in Problem 5.6 reduces to entry 5 in Tahle 5.1.
5.& Derive formulas 8 and 9 in Table 5.1 using integration by parts.
5.9, Use enrries E and 9 in Table 5.1 to find the Laplace transfornrs of sinh(roor)u(t) and
cosh (to6r) u (t).
5.10. Determine the initial and final values of each of the signals whose unilateral l-aPlace transforms are as follows without computing the inverse Laplace trartsform. If there is no final
value, stale why not,
(a)-, t
J+A
(b)
(e
I
i" *
o1;
6
fu.rzi
272
The Laplace
(d)
Transtorm
Chapter 5
sri;
s2+s+3
(e)F+4s,+zsE
o F:+-,
S.fL Find.r0)
for thc following laplace tranforms:
(a) , s*2 ^
s--s-z
(b)#+
(c) 2s3+3s2+6s+4
G,J;xs,+r+2)
(d)
(e)
.',
(8)
c2
2
",i+
s2-s+1
f
_ 2s7J;
f,#=
2s2-6s + 3
s, _ 3s;,
rorsffl
o)
:
7
(BE,
u,#6
5.11L Find the following convolutions using laplace transforms:
(a) exp[at]z(t) * explbtlu(t), a * b
@) exP [at]a(t) * exp [ar]z (r)
(c) rect (r/2) * a(t)
(d) ,l,(t) r exP[at]z(t)
(e) exp [-Dt] z(t) * z (t)
.
(I) sin (at)z (t) * cos(Dt)u(t)
(g) exP[-2r]z(r) rect [0 - 1)/2]
(!) [exp(-2.r)z(r) ' + 6(r)] . u(t - r)
5.13. (a) Use the convolution property to find the time signals corresponding to the following
I-aplace transforms:
r,l
#;r- (tr) ("+
@) can you infer the inverse l:place transform of l/(s - a)' from your answers in part (a)?
5.14 We have seen that the outpul of an LTI system can be determined as y(s) = ,rG)X(s),
where the system transfer function H(s) is the kplace rransform of the system'i;pulse
Sec.5.13
Problems
273
h(t). Ler H(s1 = N(s)/D (s), where N(s) and D(s) are polynomials in s. The
roos of N(s) are the zeros of l/(s), while the roors of D(s) are rhe poles.
(a) For the transfer function
response
s2+3s+2
H(r) = _
si sr-i s, _l
plot the locations of the polcs and zeros in the complex s plane.
(b) Whar is fi(r) for rhis sysrem? Is ft(r) real?
(c) Show that if /r(l) is rcal, H(s*) = H*(s). Hence show that if s = s6 is a pole (zero) of
l/(s), so is s = sot. That is poles and zeros occur in complex conjugate pairs.
(d) Verify that the given H(s) satisfies (c).
5.15. Find the system transfer functions for each of the systems in Figurc P5.15. (/rinr.'You may
have to move the pickoff, or summation, point.)
r(r)
v(,)
v
(t)
Figure P5.15
5.16. Draw ihe simulation diagrams in the first and second canonical fornrs for the LTI system
described by the transfer function
HG)
.G,
=,,
#|.ri-,
,rl
The LaplaceTranslom Chapter 5
5.17. Repeat Problem 5.16 for the system described by
H(s) =
sl+3s+l
s3+3s2+s
5.1& Find the transfer function of the syslem described by
2Y"(t) + 3Y'(t) + aYQ) =
u'(t) - x(t,
(Assume zero initial conditions.) Find the system impulse response.
5.19. Find lhe transfer function of the system shown in Figure P5.19.
,r4(s)
/15
/I, (s)
(s)
IJ2(s)
/Ir(s)
tigure P5.19
520. Solve the following differential equations:
y'(t) + 2y(t) = u(t). Y(0-) = I
y'(t) + 2y(t) = (cosr)z(t). y(0-) = I
y'(t) + 2y(t) = exp[-3r1il(). y(0-) = I
/(,) + 4y'(t, + 3/(,) = u(,), y(0-) = 2.y(0-) : I
y"(t) + 4y'(t) + 3y(t) = exp[-3t]a(t). y(0-) = 0. y'(0-) = I
(I) y-(r) + 3y'(t) + 2y'(t) -6r(r) = e*1-21uftr. y(0-) =y'(0-) =.v10-) = 0
s2L Find the impulse response i () for the systems described by the following differential equatiom;
(a) y'(t) + 5Y() = r(t) + 2t'(t)
(b) y"(r) + 4y'(t) + 3-v(r) = Zr(t) - 3x'(t)
(c) y-(t) + y'(t) - Zy(t) = x'(r) + ,r'(t) + 2r(t)
szL One major problem in systems theory is syslem identification. Obsewing the oulPut of an
(a)
(b)
(c)
(d)
(e)
LTI system in response to a known input can provide us with the impulse res?onse of the
system. Find the impulse response of the syslems whose inPut and outPuf are as follows:
#,_
Sec.
5.13
Problems
275
(s) -r(t) = 2exPl-2rlu(tl
y(r) = (1 -, + exp[-,1 + exp[-Z])z(r)
(b) .r(t) = 2x 11;
y(t) = n() - exp[-2r]rr(r)
(c) .r(t) = exP [-2rla(t)
y(r) = exp[-r] - 3 exp[-2r])u(r)
(d) r(r) = s11;
y(r) = 0, - 2 exp[-3r])z(r)
lel r(tl = 71111
y(r) = exp[-2r] cos(4r + 135")z(r)
(f) r(r; = 3u,1r,
y(t) = exp[-4r][cos(4t + 135') - 2sin(4r + 135")]r(,)
523. For the circuit shown in Figure P5.23, leto.(0-)=lvolt.ir(0-)=2anrperes,and.r(r)=
zO. Find y(t). (lncorporate the initial energy for the inductor and the capacitor in your
transformed model.)
l.lH
Flgure P523
For the circuit shown in Figure P5.24, let o.(0-) = I volt, ir(0-) = 2 amperes, and.r(r) =
u(t). Find y(). (Incorporate the initial energy for the inductor and rhe capacitor in your
transformed model.)
+
r'(, )
Figure P524
525. Repeat Problem 5.23 for r() = (cosr)u(l).
s26. Repeat Problem 5.24 forx(l) = (sinZ)z(r).
szt. Repeat Problem 5.23 for the circuit shown in Figure P5.27.
5r& Consider the control system shown in Figure P5.28. Forx() = u(r). //,(.s) = K,andlrr(s)
= l/(s(s + a)), find the following:
(a) Y(s)
(b) y() for( = 29,a = 5,a = 3, and a = I
Consider the control system shown in Figure P5.29.
Let
x(t) = Au(t)
The Laplace
276
Transform
Flgure P527
Iigure PSJS
H.k)
Ugure P52!l
H.(") =
s*
1
s
I
II(s) =
sf2
(a)
Show that
lim
y(;
= 4.
(b) Determine the error
(c)
signal
e(t).
Does the system track the input
if H.(s) =
(d) Does the system work if H.(s) =
ffi,
- ,1,
u*r,
U
If not,
why?
Find exp [At] using the Laplace transform for the following matrices:
(,)
A:[l N
61
Fr -rl
e=[z
o]
,",
n=[l
(,,)
A=ti
?]
I r ool
tel l=l-t l rl
L-r o ol
?]
fz 1ll
ro n=lo 3 rl
Lo -r rl
Chapter 5
Sec.
53L
5.13
Problems
Consider the circuit shown in Figure P5.31. Select the capacitor voltage and the inductor
current as state variables. Assume zero initial conditions.
(a) Write the state equations in the transform domain.
(b) Find l/(s) if the input r(r)
(c) What isy(t)?
is the unit step.
Elgure
P53l
Use the L:place-transform method to find the solution of the following state equations:
, [;it]l [-t -3][;;8] [t[s.]l [l]
, [;i[l]l [? -l][18] [;:[s-]l : t-?l
=
=
=
Check the stability of the systems shown in Figr,re P5.33.
.n
5
s
"+2
s
*?
?+r
;;t
?,
Flgure PS33
Chapter 6
Discrete-Time Systems
INTRODUCTION
In the preceding chapters, we discussed techniques for the analysis of analog or continuoui-time signals and systems. In this and subsequent chapters, we consider corresponding techniques for the analysis of discrete-time signals and systems.
Discrete-time signals, as the name implies, are signals that are defined only at discrete instants of time. Examples of such signals are the number of children born on a
specific <tay in a year, the population of the United States as obtained by a census, the
interest on a bank account, etc. A second type of discrete-time signal occurs when an
analog signal is converted into a discrete-time signal by the process of sampling. (We
will have more to say about sampling later.) An example is the digital recording of
audio signals. Aoother example is a telemetering system in which data from several
measurement sensors are transmitted over a single channel by time'shaing.
In either case, we represent the discrete-time signal as a sequence of values x(t,),
where the t, correspond to the instants at which the signal is defined. We can also write
the sequence as x(n), with a assuming only integer values.
As with continuous-time signals. we usually rePresent discrete-time signals in functional form-for example,
.r(n) =
].o.rn
(6.1.1)
only over a finite interval, we can list the values of
the signal as the elements of a sequence. Thus, the function shown in Figure 6.1.1 can
Alternatively, if
be written as
278
a signal is nonzero
Sec.6.1
lnlroduction
279
\'(,l)
Egure 6.1.1 Example of a discretetime sequcnce.
.r(r ) =
l'I I I
3
t+'z':'o'o'
(6.t.2)
;)
1
where the arrow indicates the value for n = 0. In this notatron. it is assumed that all
values not listed are zero. For causal sequences. in which the [irst entry represents the
value at n = 0, we omit the arrow.
The sequence shown in Equation (6.1.2) is an example of a .l'inite-lengtft sequence.
The length of the sequence is given by the number of lerms in the sequence. Thus,
Equation (6.1.2) represents a six-point sequence.
6.1.f Classification of Discrete-Time
Signals
As rvith continuous-timc sigrrals. discrctc-tinrc signals catt bc classified into different
categories. For examplc, we can define the encrgy of a discre tc-time signal r(n ) as
N
E = lrm
The average power of lhe signal
P
,I^
(6.1.3)
l..tn)l'
is
=,[i.
,,f
,i,l't'rl'
(6.1.4)
The signal x(n) is an energy signal if E is finite. It is a power signal if E is not finite, but
P is finite. Since P = 0 when E is finite, all energy signals arc also power signals. However, if P is finite, E may or may not be finite. Thus, not all power signals are energy
signals. If neither E nor P is tinite, the signal is neither an encrgy nor a power signal.
The signal:(n) is periodic if, for some integer N > 0,
x(il + N) = r(n)
for all
n
(6.1.s)
The smallest value of N that satisfies this relation is the fundamcntal period of the signal.
If there is no integer N that satisfies Equation (6,1.5), x(rr ) is an aperiodic signal.
Example 6.1.1
Considcr the signal
x(n) = I sin(2rrlon + $o)
Then
Discrete-Time
Systoms
Chapt€r 6
x(n + N) = A sin(2t fo(n + N) + 0o)
= A sin(2r fon + $o) cos(2nf6N) + A cos(Zr fnn + $o)sin(2tfiN)
Clearly, .r (n + N) rvill be equal to : (n ) if
N=im
where rn is some integer. The fundamental period is obtained by choosing ra as the smallest integer that yields an integer value for N. For example, ifro = 3/5, we can choose zl =
3togetN=5.
O-n rhe other hand, if
/o =
f4,
ry
*itt not be an integer,
and thus, .r(n ) is aperiodic.
Let x(n ) be the sum of two periodic sequences rr (n ) and xr(n ), with periods Nr and
N, respectively. Let p and q be two integers such that
PNr=qNr:N
Then
.r
(6.1.6)
(z ) is periodic with period N, since
.r(n + N) = xr(n
*pNr)
+
4(n + qNr) = rr(rr) + xr(n)
Because we can always find integers p and 4 to satisfy Equation (6.1.6), it follows that
the sum of two discrete-time periodic sequences is also periodic.
Erample 6.19
Let
'(') = ""'(T).
''(T.;)
It can be easily verified, as in Example 6.1.1, that the two terms in.r(z) are botb periodic
with perioG Nr = 18 and N2 = 14, respectively, so that x (n ) is periodic with period N = 126.
The signal r(n ) is even
if
x(n\
and is odd
: 11-n7
for all z
(6.1.7)
if
x(n) =
The even part of
x(r)
-r1-r)
for all n
(6.1.8)
can be determined as
1
x"(n):;b@) + r(-z)l
(6.1.e)
whereas its odd part is given by
xo@)
=l1rr- x(-a)l
(6.r.10)
Sec.6.1
281
lntroduclion
6.1.2 Tranformations of the Independent Variable
For integer values ofk. the sequence x(n - k) repfesents lhe sequence.r(n) shifted by
k samples. The shifr is ro rhe righr if k > 0 and ro the left if k < 0. Sinrilarly, the signal
_r( -n ) corresponds to reflecting -r(n ) around the timc origin n = 0. As in the conlinuous-time case. lhe operations of shifting and reflecting are not commutative.
While amplitude scaling is no different than in the continuous-timc case, time scaling must be interpreted with care in the discrete-time case, since thc signals are defined
only for integer values of the time variable- We illustrate this by a ferv examples'
Example 6,1.3
Lct
(n) = 'rz rn,
"- "
and suppose we w nI to find (i) 2r(5rrl3) and (ii) r(2rr).
With r'(n ) = 2t(5rrl3). we havc
r(0) = 2t(0) = 2.t'(l) = 2r(5/-3) = 0.r(2) = 2.t(l(1,/l) =
= 2r(5) = 2cxp(-5/2).v(a) = zt(10/3) = (, ctc'
t
0.
.v(3)
Here we have assumed that -r(n ) is zero i[ n is not an integer. lt is clcar ]hat the general
expression for y(n ) is
rr = 0, 3, 6, etc,
y(n ) =
otherwise
Similarly, with z (n ) = x(zn), wc have
z(o) = r(0) =
1.
The general expression for
z(l)
z
=
r(2) = exp[-11, z(3) = x(6) = cxp[-3],
(n ) is therefore
.(,,) =
,,'
{;:or
etc'
;:3
The preceding example shows that for discrete-time signals, time scaling does not yield
just i stretched or compressed version of the original signal, but may give a totally different waveform.
B3arnplg 6.1.4
[-t
..(r) =
ft.
n even
t_r,
nodd
Then
t'in) = x(V11 =
1
for all n
Discrete-Timo
282
Systems
Chapier 6
F rnrnple 6.1.6
Consider the wavelorm shown in Fig. (6.1.2a), and let
y(,)=..(-i.3)
.r
(
,(i )
rr)
(b)
^Fi * i)
.(-3)
Flgure
r(-n/3
6.12
Signals for Example 6.1.5 (a)
r(a), (b) r(n/3),
(c)
r(-al3),
and (d)
+ 2/31.
We determine y(a) by writing it as
,(r)=r[-?]
We first scale r(.) by a factor of 1,/3 to oblaih r(n/3) and then reflect this about the vertical axis to obtain r(-n/3). The result is shifted to the right bJ two samples to obtain
y(n ). These steps are illustrated in Fies. (6.1.2bF(6.1.2d). The resulting sequence is
y(r) = [-2, 0, 0, 0, 0,0, 1, 0, 0, 2, 0, 0, -11
t
6.2 ELEMENTARY
DISCRETE-TIME SIGNALS
Thus far, we have seen that continuous-time signals can be represented in terms of elementary signals such as the delta function, unit-step function, exponentials, and sine
and cosine waveforms. We now consider the discrete-time equivalents of these signals.
We will see that these discrete-time signals have characteristics similar to those of their
Sec.
6.2
283
Elementary Discrete-Time Signals
continuous-time counterparts, but with some significant differences. As with continuous-time systems, the analysis of the responses of discrete-time lincar systems to arbitrary inputs is considerably simplified by expressing the inputs in terms of elementary
time functions.
62.1 Discrete impulse and Step Functions
We define the unit-impulse function in discrete time as
6(,) =
{;: :;i
(6.2.1)
in Figure 6.2.1. We refer to E(z) as the unit sample occurring at n = 0 and
the shifted function 6(n - /<) as the unit sample occurring at n = lc. That is,
as shown
u("
- o) :
{l:
:;I
(6.2.2)
Whereas 6(n ) is somewhat similar to the continuous-time impulse function 6(t), we
note that the magnitude of the discrete impulse is always finite. Thus. there are no analytical difficulties in defining 6(n ).
The unit-step sequence shown in Figtre 6.2.2 is defined as
,,(r):{l: ;.3
(6.2.3)
The discrete-time delta and step functions have properties somewhat similar to their continuous-time counterparts. For example, the first dffirence of the unit-step function is
u(n)
-
u(n
If we compute the sum from -oo to n of the
we get the unit step function:
6(n
- l) = 6(r)
6
function,
(6.2.4)
as can be seen
6(,
)
from Figure 6.2.3,
- t)
kn
(b)
Ilgure 6.2.1 (a) The unit sample of E function. (b) The shifted 6 function.
u(n)
tlgure
6.22
function,
Thc unit step
2U
Discrete-Time
Systems
Chapier 6
ln
t.
I
I
(a)
Flgure
(b)
623
Summing the 6 tunction. (a) a < 0. (b) z > 0.
.i.,,: {?' ;:l
(6_2.s)
= u(n)
By replacing kby n
-
k, we can write Equation (6.2.5)
i
t-0
as
ut, - k) = u(n)
(6.2.6)
From Equations (6.2.4) and (6.2.5), we see that in discrete-time systems, the first difference, in a sense, takes the place of the first derivative in continuous-time systems,
and the sum operator replaces the integral.
Other analogous properties of the 6 function follow easily. For any arbitrary
sequenoe x (z ), we have
x(n ) E(r
-
k)
:
x(k) 6(n
-
k)
(6.2.7)
Since we can write .r(n) as
x(n) =
"' + x(-l)
6(n + 1) +.r(0) 6(n) +.r(1) 6(n
-
1) +
"'
it follows that
,(r)= ir(/<)s(n-l<)
l' -r
(6.2.8)
Thus. Equation (6.2.6) is a special case of Equation (6.2.8).
6.2.2 ExponontialShquencee
The exponential sequence in discrete time is given by
x(n) = grn
(6.2.e)
where, in general, C and o are complex numbers. The fact that this is a direct analog
of the exponential function in continuous time can be seen by writing c e9, so that
:
r(n)=g"w'
For C and a real,.r(n) increases with increasing n if
have a decreasing exponential.
(6.2.10)
lcl > l. Similarly. if lal < l, we
Sec.
6.2
n5
Elementary Discrete-Tlme Signals
Consider the complex exponential signal in continuous timc,
.r(t):
(6.2.1r)
Cexp[itoot]
Suppose we sample.r(t) at equally spaced intervals
xf
x(n) = gexp[loroln
to get the discrete-time signal
(6.2.12)
]
By replacing ool in this equation by f,ln, we obtain the complex exponential in discrete time,
x(n) = Cexp[i0oz]
(6.2.13)
Recall that oo is the frequency of the continuous-time signal x (t ). Correspondingly, we
will refer to flo as the frequency of the discrete-time signal x(z ). It can be seen, however, that whereas the continuous-time or analog frequency or0 has units of radians per
second, the discrete-time frequency (lo has units of radians.
Furthermore, while the signal x(r) is periodic with period 'l' = 2r/aofor any too, in
the discrete-time case, since the period is constrained to be an intcger, not all values of
On correspond to a periodic signal. To see this, suppose;(n) in Equation (6.2.13) is
periodic with period N. Then, since x(n) = x(n + N), we must have
ejltoN
= |
For this to hold, OoN must be an integer multiple of 2n, so that
OoN = m
2r,m = 0, +1, a2,
etc.
or
fb
2r
m
N
Ior m any integer. Thus, r(n) will be periodic only it {lr/2n is a rational number. The
period is given by N = 2rm/Oo, with the fundamental period corresponding to the
smallest possible value for rn.
&anple
6.2,1
Irt
x(n)
l7t
: exn[
,
1
zJ
so that
7
=
2n18N=^
Oo
Thus, the sequenc€ is periodic, and the fundamental period. ohtarned by choosing rn = 7'
is given by N = 18.
Discrete-Time
286
Systems
Chapter 6
Exarrple 6.2.2
For the sequence
,(")
*o[, ?]
=
we have
q=
2tr
7
l8rr
which is not rational. Thus, the sequence is not periodic.
lrt
x.(n) define the set of functions
+2, ...
(6.2.r4)
k = 0,
- gi*t\'n
=1,
with Oo = 2n/N, so that rr(r,) represents the kth harmonic of fundamental signal
.r, (n ). In the case of continuous-time signals, we saw that the set of harmonics
expljk(2t/T)tl, k : 0, +1, !2,... are all distinct, so that we have an infinite number
x1,(n)
of harmonics. However, in the discrete-time case, since
x*tN(n): si$+N,:a = ,i2zn 4kzin = x{n)
(6.2.1s)
there are only N distinct waveforms in the set given by Equation (6.2.14). These correspond to the frequencies f,)1 :Ztrk/Nfork =0, 1,...,N- l. Since dlp*y= dlo+. 2n,
waveforrns separated in frequency by 2n radians are identical. As we shall see later,
this has implications in the Fourier analysis of discrete-time, periodic signals.
Esample 6,2.3
Consider the continuous-time signal
2
r(l) = )
Lwhere co
= l, cr = (l + il) = cl,,
Let us sample:(t)
and c, =
uniformly at a rate
f
c*d^'i'
-2
cl, = 312.
= 4 to get the sampled
signal
2
r1z;
= ) c*dr'i to
k=-2
=
f
I--2
,re,ro*
where .f!o = aQn/3). Thus, x(n ) represents a sum of harmonic signals with fundamental
period N = 2tm/Ao. Choosing rn = 4 then yields N = 3. Il follows, therefore, that there
are only three distinct harmonics, and hence, the summation can be reduced to one consisting only of three terms.
To sEe this, we note that, from Equation (6.2.15), we have exp (i 2flra) = exp(-i0on )
and exp(l(-2()on)) = exp(ifha), so thal grouping like terms together gives
Sec.
6.3
Discrete-Time Systems
287
I
x(n) = )
t--l
duetL'i'
where
do=
6.3
co=
l.d, = c, )- t' ,= -1 * i),a ,= c-r + c, -
-, - i)=
al
DISCRETE-TIME SYSTEMS
A discrete-time system is a system in which all the signals are discrete-time
signals.
transforms
discrete-time
into
discrete-time
system
inputs
outThat is, a discrete-time
puts. Such concepts as linearity, time invariance, causality, etc.. which we defined for
continuous-time systems carry over to discrete-time systems. As in our discussion of
continuous-time systems, we consider only linear, time-invariant (or shifi-invariant)
systems in discrete time.
Again, as with continuous-tirne systems, we can use either a timc-domain or a frequency-domain characterization of a discrete-time system. In this scction, we examine
the time-domain characterization of discrete-time systems using (a) the impulseresponse and (b) the difference equation representations.
Consider a lincar, shift-invaria n t, discrele-tinrc system with input x(n). We saw in
Section 6.2.1 that any arbitrary signal .t(n) can be written as thc weighted sum of
shifted unit-sample functions:
-r(n)= ) x(t)E(n-l<)
t
=
(6.3.1)
-o
It follows, therefore, that we can use the linearity property of lhe system to determine
its response to.r(n) in terms of its response to a unit-sample input. kt i(n) denote
the response of the system measured at time n lo a unit impulsc applied at time zero.
If we apply a shifted impulse 6(n - k) occurring at time k, then. by the assumption of
shift invariance, the response of the system at lime n is given by /r (n - k). If the input
is amplitude scaled by a factor r(k), then, again, by linearity. so is the output. If we
now fix a, let k vary from -- to or, and take the sum, it follows from Equation (6.3.1)
that the output of the system at time n is given in terms of the input as
y(,)= i x(k)h(n-k)
t=-D
(6.3.2)
As in the case of continuous-time systems, the impulse responsc is determined assuming that the system.has no initial energ,y; otherwise the linearity property does not hold
(why?), so that y(z), as determined by using Equation (6.3.2), corresponds to only the
forced response of the system.
The right-hand side of Equation (6.3.2) is referred to as the convolution surl, of the
two sequences r(n) and h(n) and is represented symbolically as r(n) * &(z). By
replacing kby n - k in the equation, the output can also be writtcn as
Discr€t+Time Systems
Chapter 6
y(n)= ) x(n-k)h(k)
k= -a
= h(n) * x(n)
(6.3.3)
Thus, the convolution operation is commutative.
For causal systems, it is clear that
h(n):0,
n<0
(5.3.4)
so that Equation (6.3.2) can be written as
tl
y(n)= ) x(k)h(n-k)
i-
(53.s)
-o
or, in the equivalent form,
y(n)=)r(n-k)h(k)
(6.3.6)
l=0
For continuous-time systems, we saw that the impulse response is, in general, the sum
of several oomplex exponentials. Consequently, the impulse response is nonzero over
any finite interval of time (except, possibly, at isolated points) and is generally referred
to as aD infinile impube response (IIR). With discrete-time systems, on the other hand,
the impulse response can become identically zero after a few samples. Such sptems
are said to have a Jinite impulse response (FIR). Thus, discrete-time systems catr be
either IIR or FIR.
We can interpret Equation (6.3.2) in a manner similar to the continuous-time case.
For a fixed value of n, we consider the product of the two sequences .r(t) and
h(n - k),where h(n - k) is obtained from ft(&) by first reflectingh(k) about the ori-
ginandthenshiftingtotherightbynifnispositiveortotheleftby lnlifzisnegative. This is illustrated in Figure (6.3.1). The output y(x) for this value of z is
determined by summing the values of the sequence x(k)h(n - k).
n(t)
(b)
i(a - l)
(c)
6J.l
r(t<)lt(a
- *)
(d)
Figure
Convolution operation of Equation (6.3.2). (a)
e(t), (c) h(n - k), and (d) r(&)&(z - &).
r(k),
(b)
Sec.
6.3
289
Discrete-Time Systems
"
We note that the convolution of h(n) with 6(n) is, by definition, equal toi(n).That
is, the convolution of any function with the 6 function gives back the original function.
We now consider a few examples.
f,rqrnple 63.1
When an input.r(n ) = 36(r
output is found to be
-
v@
2) is applied lo a causal, lincar time-invariatrt system, the
=,li).'0"
n>2
Find the impulse response h(n ) of the system.
By definition, /r(a) is the response of the system to the input 6(n). Since the splem is
LTI. it follows that
h(nl
-\y(n
+ 2)
We note that the output can be wrilten as
,,,,
=
lr(l)'-'),a rt
*
;)"'"
[; (-
so that
,", = :[(-;)" . (])"1.,,,
Example 0-92
L.et
r(n) = o"1n',
h(n): $'u(n)
Then
y(n)
Since
=)
aru(k)p'-tu(n
u(k) = 0 for k < 0, and u(n - kl = 0 for
y(r) =
i
l -0
&
-
k)
> n, we can rewrite the summation as
olpa-r = p"
)
(op-,)^
t-0
ClearlY,Y(n)=0ifr<0.
Forn
z0,if o = g,wehave
y(n)=9'ittl=(n+l)P"
If c + p, the sum can be put in closed form by using the formula (see Problem 6.5)
Discrete-Time
Sysiems
Chapter 6
),,,r='l:--s::"-, a*t
Assuming that ag
(6.3.7)
-l + l. we can write
y(z) =iPa,
I - ("P-')1.' gil-{-'
l-oP-t = o-P
As a special case of this example, let
of this system obtained by setting c
c = l, so that r(fl) is the unit step. The step response
= I in the last expression for y(z ) is
I - p'tl
ytr,/=-l_p
In general, as can be seen by letting r(z) : u(nl in Equation (6.3.3), the step
response of a system whose impulse response is fr(z) is given by
st,r)=
j att)
k--r
(6.3.8)
For a causal system, this reduces to
stn)=jrt*)
I .(l
(6.3.e)
It follows that, given the step response s(a) of a system, we can find the impulse
response as
h(n) = s1r) - s(n
- 1)
(6.3.10)
xample 6.3.3
We want to find the step response of the system with impulse response
h@ = 2(:)
*,(?,),",
By writing &(n ) as
o(,) =
it follows from the
[(]",'')'* (l"-,'r)'],t"r
Iast equation in Example 6.3.2 that the step response is equal to
,,.,
=
which can be simplified
as
s(n)
f:$;{.'
l r-r""
-
(l:' ).'.1,,,,,
t-1e-t' I
=2. i;(l)'.',(T,). n>o
Sec.
6.3
Discrete-Time System.
291
We can use Equation (6.3.1
responsc as
pulse
ft(n) = 51r,; -.r(n
=
- l)
'.,,('l
,) (i)"'."(?
J,,
A(l)"
("
-
,))
which simplifies lo lhe expression for ft(n) in the problem srarement.
The following examples consider the convolution of two finite-length sequences.
Brerrrple 6.8.4
be a finite sequence that is nonzero for n e lN,,Nrl and h(n) be a finite
sequence that is nonzero for n e [N,, Nnl. Then for fixcd n, h(n - /<) is nonzero for
k e ln- Nn,rr - N.'1. whereas r (k ) is nonzero only for li e lN,.Nr],sothattheproductr(k)lr(n - k) iszero if rr - N, < N, or i[a - No > Nr.'l hus,y(n ) is nonzero only for
n e [N, + Nr, N, + N.rl.
Let M = N, - N, + I be the length of the sequence.r(n ) and N = No - Nr + I be the
length of the sequence /r(n ). The length of lhe sequence r,(rr). which is (N, + &) (Nr + Nr) + I isthusequal to M + N - l.Thar is, the convolurion of an M-point sequence
and an N-point sequence rcsults in an (M + N - l)-point scqucnce.
Let
r(n)
Example 63.5
Let h(n) : ll. 2,0. - l, I I and -r(z) = 11, 3, - l, -21 be trvo causal sequences. Since i(n)
is a five-point sequence and.r(z) is a four-point sequencc, from the results of Example
6.3.3, .y (n ) is an eight-point sequence that is zero for r ( 0 or a > 7.
Since both sequences are finite, we can perform the convolution easily by setting up a
table of values of h(k) and x(n - k ) for thc relevant valucs o[ n and using
y(r)=i h(k)x(n-k)
as shown in Table 6.1 Thc cntries for
r(n
- /i ) in lhe table are obtained by first reflecting
.r(k) about the origin to form r(-t) and successively shifting thc resulting sequence by I
to the right. All entries not explicitly shorvn are assumed to hc zero. The output y(z) is
determined by multiplying the entries in the rorvs corresponrling to & (& ) and .r(r - * ) and
summing the results. Thus, to find y(0), multiply the entries in rorvs 2 and 4; fory(l), multiply rows 2 and 5; and so on. The last two columns list n and r,(n), respectively.
From the last column in the table, we see that
_v(n
)
:
ll. s,5, -5, -6,4. l.
-21
Example 6.8.6
We can use an alternative tabular form lo dctermine y(a ) hy noting that
y(n) = h(o\x(n) + ,l(l)x(x - l) + h(2).r(n - 2) +--+ ft(-l)-t(n + l) + l,(-2).r( + 2) +...
292
Discrele-Time
Systems Chapter 6
ABLE 5.1
onYoluuon Table ,o, Erample 6.3.4.
-3
-2
-1
v(n,
72
13
h(k)
r(,t)
r(-k)
.t(l - k)
x(2 - k)
x(3 - k)
-t
-2
3
3r
-1 i
-2 -t
x(-k)
r(s - t)
r(6 - /<)
r(7
-
0l
l5
25
3-5
4-6
54
61
t7-2
1
-l
-2
0 -l
-l -2
-2
1
3l
-1 3 I
3
-2 -l
3
-2 -l
-2 -1
1
k)
I
3
We consider the convolution of sequences
l!
h(n)-- l-2,2,O, -1,
It
and .r(z) = l-1,3, -1,-21
The convolution table is shown in Table 6.2. Rows 2 through 5 lisr.r(n - /<) for lhe relevantvalucsof t, namely, & = -l,0, 1, 2, and 3. Values of ft (t ): (n - ft ) are shown in rows
7 through ll, and y(n ) for each n is obtained by summing these entries in each column.
TABLE 6.2
Conyolutlon Table ,or ErEmplo 6.3.5.
.t
(z + 1)
x(n)
.r(z
x(n
x(n
- l)
-
(0)r(n )
ft(l)r(z
h(2)x(n
h(3)x(n
v(n,
-1
-1
3-l
-t
2)
3)
h(-l)r(n + 1)
/l
-2
- 2)
- 3)
-6
-2
1)
3
-2
-1
-t
-1
3
-l
24
6-2
00
-l
-2
-l
3
-2
-1
-2
-4
I
-8
-2
3
0
0
0
-3
-1
I
2
3
-1
-8
4
I
-2
-2
Finally, we note that just as with the convolution integral, the convolution sum
defined in Equation (6.3.2) is additive, distributive, and commurative. This enables us
to determine the impulse response of series or paratlel combinations of systems in
terms of their individual impulse responses, as shown in Figure 6.3.2.
Sec.
6.3
293
Discrete-Time Systems
ffi-@*
(a)
i,(tl)
h1l,t)
h20tl
i,(n)
i,(l)
(c)
Iigure
6J.2
Impulse responses of series and parallel comhinations'
E-a'nple 63.7
Consider the system shown in Figure 6.3.3 with
ft'(n ) = E(z)
-
a6(n
- l)
o,ot = (l)""<"t
\(n)
= a"u1n)
ho@\=(n-l)u(n)
[o (n)
h10')
h2h'1
h3@)
h5(,')
Flgure
633
System for Example 6.3.7.
294
Discrete-Time
Syst€ms
Chapt€r 6
and
fts(n)
It
is clear
:
51r; + n u(n - I) + D(n
-
2)
from the figure that
To evaluate
h(n) = 1r,1n1 * h2(n) * hr(n) * lhr(n) - ho@)l
h(n\, we first form the convolution hr(tt) * fir1n1
^,
hr(n) * hr(n) = [6(n) - a6(n - 1)] * a' u(nl
= a" u(n) - a' u(n - l) = 6(n)
Also.
h'(n)
-
h'@)
=:[i]:,:'i;,'l;l
-2) -
(n
-
'|)u(n)
so that
n(n) = 6(n) * hr(n) t [6(n) + 6(zr - 2) + u(n)l
= h(n) + hr(n - 2) + sr(n)
where sr(n) represents the step response corresponding to hr(n). (See Equation (6.3.9).)
We have, therefore,
/l\",t /t\,-2u(n-2) -i\l/
+ /!\l+\2)
h(^)=\2)
which can be put in closed form, using Equation (6.3.7),
*@=(i)
6.4
as
,@-2)+2u(n)
PERIODIC CONVOLUTION
In certain applications, it is desirable to consider the convolution of two periodic
sequences r,(n ) and:r(n), with common period N. However, the convolution of two
periodic sequences in the sense of Equation (6.3.2) or (6.3.3) does not converge. This
rN + m in Equation (6.3.2) and rewriting the sum over k as
can be seen by letting k
a double sum over r and m;
:
a
o
N-t
y(n)= ) x,(k)xr(n-e)= > )r,(dv+m)xr(n-rN-m)
k=-,--@ m-o
Since both sequences on the right side are periodic with period N, we have
- N-l
y(n)= ) )x,(m)rr(n-m)
r= -a m=0
For a fixed value of /r, the inner sum is a constant; thus, the infinite sum on the right
does not converge.
Sec.
6.4
Periodic Convolution
295
ln order to get around this problem.
as in continuous_ timc, wc define a different
form of convolution for periodic signals, namely, periodic convolution:
N-t
y(n)= ) t,(k)xr(n-k)
(6.4.1)
'{l
Note that the sum on the right has only N terms. We denote thts operation
4
v(n) = x,(n)
By replacing k by n
-
€r -r2(n
as
)
(6.4.2)
k in Equation (6.4.1), we obtain the equivalent form,
/V-l
.y(,,): ) x,@-k)xr(k)
I
(6.4.3)
={l
We emphasize that periodic convolution is defined only for sequences with the same
period. Recall that, since the convolution of Equation (6.3.2) rcpresents the output of
a linear system. it is usual to call it a linear conyolution in order to distinguish it from
the convolution of Equation (6.4. I ).
It is clear that y(n) as defined in Equation (6.a.1) is periodic. since
y(n+ N)=
5'rr,, + N-t)rr(&)= v(n)
for0sr
(6.4.4)
< N l. It can also be easily verified
so that y(n ) hastobe evaluated only
that the sum can be laken over any one period. (See Problem 6.12). That is,
-
N,,+N-l
l=
,,
The convolulion operation of Equation (6.4.1) involves the shifrecl sequence rr(n - *),
which is obtained from.rr(n) by successive shifts to the right. ll()wevert we are interested only in values of n in the range 0 < n -: N - l. On each succcssive shift, the first
value in this range is replaced by the value at - l. Since the sequcnce is periodic, this
is the same as the value at N - l, as shown in the example in l.'igure 6.4.1. We can
assume, therefore, that on each successive shift, each entry in lhc sequence moves one
place to the right, and the last entry moves into the first place. Such a shift is known as
a periodic, or circular, sirifl.
From Equation (6.4.1), ylnl can be explicitly written as
y(rz) =,r,(0).rr(z) + r,(1).rr(n
- l) + ..' +r,(N - l).t.(n -
N +'1)
We can use the tabular form of Example 6.3.6 to calculate y(rr ). However, since the
sum is taken only over values of n from 0 to N - l, the tablc has to have only N
columns. We present an example to illustrate this.
f,gernple 6.4.1
Consider the convolution of the periodic exlezsioru of two sequcnccs:
r(n)=11.2.0,-ll
and
ft (n
)=
{1. 3.
-
I
.
-
2l
Discrele-Ti me
296
r (n)
Systems
Chapter 6
|
I
r(z - l)
Flgure
6.4.1 Shifting of periodic sequences.
It follows that y(a ) is periodic with period N = 4. The convolution table of Table 63 illustrates the steps involved in determining y (n ). For n = 0, 1, 2, 3, rows 2 through 5 list the
values ofr(z
&) obtained by circular shifts r(n). Rows 6 through 9list the values of
h(k)x(n k). The oputput y(n ) is determined by summing the entries in each column
corresponding lo these rows.
-
-
TABLE 6.3
Perlodlc Convoludon o, Erampte 6.4.1
)
x(n - l)
x(n-z)
x(n -3)
/l (o).r(n )
ft(l):(z - l)
h(z)t(n - 2)
ft(3)r(n - 3)
v(n)
.r(n
1
-1
0
2
I
-3
0
-4
-6
20
12
-1
-1
0
I
0-t
20
36
I -1
02
67
2
1
-1
0
-2
-2
-5
While we have defrned periodic convolution in terms of periodic sequences, giver
two finiteJength sequences, we can define a periodic convolution of the two sequences
in a similar manner. Thus, given two N-point sequences r,(n ) and rr(n), we defrne
their N-point periodic convolution as
N-t
l, r,(k)xr(n
4@) = l-0
where
rr(n
-
&) denotes that the shift is periodic.
- k)
(6.4.6)
Sec.
6.4
n7
Periodic Convolution
ln order to distinguish y(n ) discussed in the previous section from yr(n )'y(n ) is usually refcrred to as the llnear convolution of the sequences .r, (a ) and -r,(n ). since it corresponds to the output of a linear system driven by an input.
It is clear that yr(n ) in Equation (6.4.6) is the same as the pcriodic convolution of
the periodic extensions of the signals x;(n) and xr(n ), so that v/,(n) can also be considered periodic with period N. If the two sequences are not of the same length, we can
still define their convolution by augmenting the shorter sequence with zeros to make
the two sequences the same length. This is known as zero'Podding or zero-augmentation. Since zero-augmentation of a finiteJength sequence does not change the
sequence. given two sequences of length N, and Nr, we can define their periodic convolution. of arbitrary length M. denoted ll o@)l*, provided that M > Max [Nt, Nrl. We
illustrate this in the following example.
Example 6,42
Consider the periodic convolution of lhe sequences h(n) = ll.2' 0, -l' ll and.r(n) =
We can find the M-point periodic convolution of the two
I l. 3. - I , - 2l of Example 6.3.5.
>
the sequences appropriatcly and following the prozero-padding
sequences for M 5 by
cedure of Example 6.4.1. Thus, Ior M = 5, we form
x"(n ) = (1, 3.
so that both
h(n)
and
x.(n)
-
are five points long.
1.
-2,
01
It can then casily be veritied that
lv,(n )ls = ls' 6' 3'
-5'
-61
Comparing rhis result with y(n ) obtained in Example 6.3.4. we note that while the ftrst
three values ofylz) and lr,r(n )ls are different, the next two values are the same. In fact,
tv,(o)ls = .v(o) + v(6), t/e(l)L =
v(l)
+ v(7)' lJoQ\l -- v(2) + v(8)
Ir can similarly be verified lhat the eight-point circular convention of r(z) and ft(z)
obtained by considering the auBmenled sequences
x,(n ) =
ll, 3, -1, -2,0,
0, 0, 0l
and
h"(n) = 1t,2,0, -1, l,0,0,0l
is given by
1Yr(n)h =
which is exactly the same
as
ll'
s' s'
-s' -6'4't'
-2]1
y(n ) obtained in Example 6.3.5.
The preceding example shows that the periodic convolution lr@) ol two finitelength sequences is related to their linear convolution y,(n ). We will exPlore this relationship further in Section 9.4.
4E
J.5
Discrete-Time
Systems
Chapler 6
DIFFERENCE-EQUAT]ON REPRESENTATION
OF DISCRETE-TIME SYSTEM
Earlier, we saw that we can characterize a continuoui-time system in terms of a differential equation relating the output and its derivatives to the input and its derivatives.
The discrete+ime counterpart of this characterization is the difference equation, which,
for linear, time-invariant systems, is of the form
M
)
t-0
aoY@
- k): t)oox1n-t<'1, a>o
-o
(6.s.1)
where ao and bo are known constants.
By defining the operator
Dky(n): y(n
-
k)
(6.s.2)
we can write Equation (6.5.1) in operator notation as
NM
)-0 aoDk y(n) = l-0
) toDk x(n\
(6.s.3)
&
Note that an alternative form of Equation (6.5.1) is sometimes given
NM
2ooy(.n +
t-0
k): > box(n+ k),
&-0
n>0
as
a
(6.5.4)
In this form, if the system is causal, we must have M s N.
The solution to either Equation (6.5.1) or Equation (6.5.4) can be determined, by
analogy with the differential equation, as the sum of two components: the homogene()us solution, which depends on the initial conditions that are assumed to be known,
and tlre particular solution, which depends on the input.
Before we explore this approach to finding the solution to Equation (6.5.1), let us
consider an alternative approach by rewriting that equation as
=;l2u"a-
o,- -t ap@ - q]
(6.s.s)
In this equation, x(n - &) are known. lf y(n - /<) are also known, then y(n) can be
v@
determined. Setting z = 0 in Equation (6.5.5) yields
v(o)
:
; [r4 r",-o, - -i., +rt-tl]
(6.s.6)
The quantities y(-&), for k = 1,2,..., N, represent the initial condirions for the difference equation and are therefore assumed to be known. Thus, since all the terms on
the right-hand side are known, we can determine y(0).
We now let n = I in Equation (6.5.5) to get
yrr) =
* [_tr-,,, - k) -j,,rrr - &)]
S€c.
6.5
Difference-Equation Repr€sentation ol Discrete-Time
Systems
299
and use the value of y(0) determined earlier to solve for y(l ). This process can be
repeated for successive values of n to determine y(n) by iteration.
Using an argument similar to the previous one, we can see that the initial conditions
1). Starting with these inineeded to solve Equation (6.5.4) are y(0), y(1), ..., y(N
tial conditions, Equation (6.5.4) can be solved iteratively in a similar manner.
-
Example 65.1
Consider lhe difference equation
y@ -ly(n -
1)
*
|rt,
-,
=
(i)"'
n
)0
with
y(-2) = o
Y(-l) = I
Then
t{d =tot{n- u - irt, - r,. ())"
so that
7
'ott' r)-lvt -2)+1- 4
y(l) = 31t<ol lr,- , -:=12
r'(o) =
y(2) =
lrtrr -
rr83*
8)(o)
a=
u
etc.
Whereas we can use the iterative procedure described before to obtain y(n) for several values of n, the procedure does not, in general, yield an analytical expression for
evaluating y(a ) for any arbitrary n. The procedure, however, is easily implemented on
a digital computer. We now consider the analytical solution of the difference equation
by determining the homogeneous and Particular solutions of Equation (6.5'l).
6.6.1 Eomogeneoue Solution
of the Difference Equation
The homogeneous equation corresponding to Equation (6.5.1)
2orY@-t)=o
=0
is
(6.s.7)
&
By analogy with our discussion of the continuous-time case, we assume that the solution to this equation is given by the exponential function
300
Dlscrete-Time
yo(n) =
Systems
Chapter 6
Aa'
Substituting into the difference equation yields
II =0 a2Aa"-& :0
Thrs, any homogeneous solution musl satisfy the algebraic equation
)aoa-k=O
(6.5.8)
t=0
Equation (6.5.8) is the characteristic equation for the difference equation, and the values of a that satis$ this equation are the characteristic values. It is clear that there are
N characteristic roots.rl, c2, ..., ax, and these roots may or may not be distinct. If they
are distinct, the corresponding characteristic solutions are independent, and we can
obtain the homogeneous solution yr(n) as a linear combination of terms of the type
ci, so that
ytb)
:
Aro'i + A2ai + ..- +
Ana,|,
(6.5.9)
If any of the roots are repeated, then we generate N independent solutions by
multi-
plying the corresponding characteristic solution by the appropriate power of n. For
example, if c, has a multiplicity of P,, while the other N - P, roots are distinct, we
assume a homogeneous solution of the form
+ Arna'l + "' * Ap,nP,-ts'i
+ Ar,*ta[,*r + ... + Anai,
yn@) = z{raf
trranple 6.5.2
Consider the equation
y@)
-Ey@-
r1 +
fr(z - 4 - *cy@- 3) = o
with.
y(-l)=6,
y(-2)= 6
y(-3) = -2
The characteristic equation is
r-|1"-'+f"-' -Loa=o
or
o,-Eo,*i"-*=o
which can be factored as
("-)("-i)("-i)=.
(6.5.10)
|
Sec.
6.5
Diflerence-Equation Representation ol Discrete-Time
Systems
gOi
so that the characteristic roots are
tlt or=i,
"r=),
ar=4
Since {hese roots are distinct, the homogeneous solulion is of the form
nt
t = e,(l).
n,(1)'. r,(l)"
Substitution of the initial conditions then gives the following equalions for the unknown
constants
.r4
r, ./42, and z4r:
zAt+3A2+4A3=6
4At+9A2+164=6
8At + nAz + 64A, =
-)
The simultaneous solution of these equations yields
Ar=7, n,= -l:,
A.,=:
The hornogeneous solution, therefore, is equal to
vd)
E:varnple
-?G) T(]I i(ii
653
Consider the equation
.5
y(n)
-
ll
+
iy@ -.1) S@
-
z)
-
-iey@
-
3) = 0
with the same initial conditions as in the previous example. The characteristic equation is
5lr.
l-4o-'*ro-'-rUo-'=0
with roots
llt
9t=i'
(lz=i,
Therefore, we write the homogeneous solution
v^(") = A,(:)
ar=4
as
. n*(:)^. r,(i)'
Substituting the initial conditions and solving the resulting equarions gives
951 At=4,
At=2.
/r=-g
Discrete-Time
302
Systems
Chapter 6
so that the homogeneous solulioo is
y^@
=z(r)"
. ?0'- ;(i)'
6.62' The Particular Solution
We now consider the determination of the particular solution for the difference equation
NM
2ooy@-k):>box(n-k)
*-0
(5.s.11)
t=0
We note that the right side of this equation is the weighted sum of the input x(n ) and
is delayed versions. Therefore, we can obtain lr@), the particular solution to Equation (65.11), by first determining y(n ), the particular solution to the equatirn
2
oot@
- k) = x(n)
(6.s.12)
l=0
Use of the principle of superposition then enables us to write
M
4@)=luoyln-*7
,(-t)
(6.s.13)
To find y(n), we assume that it is a linear combination of x(n) and its delayed versions
1), x(n
2), etc. For example, if .r(z) is a constant, so is x(n k) for any k.
Therefore, !(n) is also a constant. Similarly, if .r(z) is an exponential function of the
form p", y(z) is an exponential of the same form. If
.r(z
-
-
-
x(z) = sin(f,n
then
x(n
-
k) = sinf,h(n - *) = cos(hk
sinf,lon
-
sin0ok cos0on
Correspondingly, we have
y@)=esinOon+Bcosf,)oz
We get the same form for y (z ) when
x(n)
:
"ot
61o'
We can determine the unknown constants in the assumed solution by substituting into
the difference equation and equating like terms.
As in the solution of differential equations, the assumed form for the particular solution has to be modified by multiplying by an appropriate power of n if the forcing function is of the same form as one of the characteristic solutions.
Erarnple 6.6.4
Consider the difference equation
yol
-llo - r) + |r(z - 2) = 2sinff
Sec.
6.5
Ditlerence-Equalion Bepresentation ol Discrete-Time Systems
303
with initial conditions
.y(-1)=2 and y(-Z)=4
We assume the partrcular solutlon to be
trb)=Asin!+
acosnl
Then
l,@
- l):
e
(1 -21)',
sin
-r
*
' "o""
'
'',
By using trigonometric identities, ii can easily be verified lhat
nt
. (n-- - l)zr -cos,
2
ano
sln
cos
- -, l)rltn'sln
2
(n
so that
to@' l) = - Acosn;
Similarly. yr(n
-
+ o sin"l
2) can be shown to be
),p@
-
2) =
-21)n
-/.o.('
= -esinll
-
* Brin(' ,')'
ecosll
Substitution into the difference equation yields
?-ir - |a).i,t-. (,. i^ - lr)co''f - 2.inf
Equating like terms gives the lollowing equations for the unknorvn constants A anl.i B:
n-3r-lo=,
48
B+
3l
iA -*B=0
Solving these equations srmultaneously, rve obtain
^=iT
and ,=-31
so that the particular solution is
t,,(z; =
lD'in"i' -
il
*'t'I
To tind the homogeneous solution. we wriic thc characteristic equiltion for the difference
equation as
304
Discrete.Tim€
Syslems
Chapter 6
3ll-a"-'+rc-z=0
Since the characteristic roots are
ar=41l ano ,r=i
the homogeneous solution is
n<a= e,(!)
so that lhe tolal solution is
,(,.)
=,l,(i) .
",(;)'.
.
"(i)
H''T - H*T
We can now substitute the given initial conditions to solve for the coNtaDts dr and z{, as
n,=
-fi
and
Ar=+
so that
v(n) =
-r" fi)' . 1r(;)' . 1?,',T -
H
*,?
Example 6.6.6
Consider the difference equation
y@
-lyrr -
rt +
lrtn -
2) -- x(n) +
jrla -
r)
sith
x(n) = 2"in\
From our earlier discussicn, we can determine the particular sotution for this equation in
terms of the particular solution yr(z ) of Example 6.5.4 as
I
y(n)=yp(n)+)t,@-r)
=
=
ll2 stnt
nt 96cos-rnr +. 56srn. (z - l)zr- 48 (n - lln
85EJ
E
2 85 "*--l74stn nr
__
85
-
152
n7t
cos
'E5-
2
Sec.
6.5
Systenrs
Ditlerence-Equation Representation ol Discrele-Time
305
6.6.3 Determination of the Impulse Responee
We conclude this section bv considering the determination o[ the irnpulse response ot
systems described by the differencc equation. Equation (6.5.1 ). Rccall rhar the impulse
response is the response of the svstem to a unit sample input rvith zero initial conditions, so that the impulse response is just the particular solution to thc dift'erence equation when the input .r(n ) is a 6 function. We thus consider thc cquation
j
n^,r1,
(=(t
- /.) = li=t)
i h^6(n - k)
(6.5.14)
withy(-1), y(-2),
etc., set equal to zero.
Clearly, f.or n > M, the right side of Equation (6.5.14) is zcro. so that we have a
homogeneous equation. The N initial conditions required to solvc this equation are
y(M),y(M - l), ..., y(M - N + l). Sinceiy'> M for a causal systcnr. we have roderermine only y(0), ) ( l ), ..., y(M). By successively letting n take on lhe values 0, 1. 2 ....
M in Equation (6.5.14) and using the fact that y(k) is zero if ll < 0. we get the set of
M
i
I
equations;
j
2
orYb
t=t)
- k\ = br i = 0, 1, 2. "' . M
(6.s.15)
or equivalently, in matrix form.
ao 0
0
hn
atoo0
(6.s.16)
d
ttt ou_t
oo_
[:]
_b .t
The initial conditions obtained by solving these equations are now used to determine
the impulse response as the solution to the homogeneous equltion
i"o''('-e)=0'
l=o
n>M
(6'5'17)
Eromple 6.6.6
Consider the syslem
y(a)
-
sy(n
N = 3 and M
tion to ihe cquarion
so that
.v(n)
-
- l) * l.rt, - 4 - t)oy|- 3) =.r(,rr . l.r(n -
= l. It follorvs that the impulse
5r,(rr
- l)* jrfr,- lt -
and is therefore o[ the form (scc Example 6.5..\ )
r)
responsc is dctcrmined as thc solu-
,lnr(,,
- 3t rr.
,r
z:
Discrete-Time
306
.
h@ = A,(;)
^"(:).,.(i)'
Sysiems
Chapter 6
n>2
The initial conditions needed to determine the constants ,/4 | . Ar, and A, arc y( - I ). y(0).
and y(1). By assumption. y(- t) = 0. We can determine y(0) and y(l) by using Equation
(6.5.16) to get
r,tr
,]
[-,
til
sothaly(0) = I y(l) =19/12.
Us€ of lhese
initial conditions gives the impulse response as
,r, = _i0).. ,,r,(jI.
This is an infinite impulse response
as
i(l)..
n
=
o
defned in Section 6.3.
Example 6.6.7
Consider the following special case of Equation (6.5.1) in which all the coefficients on the
lefthand side are.zero except for ao, which is assumed to be unity:
M
y(n)=)brx(n-kl
(6.s.18)
I "0
We let x (n ) = 6(n ) and solve for y(z ) iteratively to get
y(o) =
bo
y(l) = Dr
y(M) = bu
Clearly. y(rt ) = 0 for n > M, so that
h(nl =
lbo. bt, b2,
.... bgl
(6.s.le)
This result can be confirmed by comparing Equation (6.5.18) with Equation (6.3.3). which
yields ft(ft) = b1. The impulse response becomes identically zero afler M values. so thal
the system is a finite-impulse-reponse system as defined in Section 6.3.
6.6
SIMULATIONDIAGRAMS
FOR DISCRETE-TIME SYSTEMS
We can obtain simulation diagrams for discrete+ime systems by developing such diagrams in a manner similar to that for continuous-time systems. The simulation diagram
in this case is obtained by using summers. coefficient multipliers. and unit delays. The
Sec.
6.6
Simulation Diagrams lor Oiscrete-Time Systems
307
first two are the same as in the continuous-time case, and the unit delay takes the place
of the integrator. As in the case of continuous-time systems, we can obtain several different simulation diagrams for the same system. We illustrate this by considering two
approaches to obtaining the diagrams, similar to the two approaches we used for continuous-time systems in Chapter 2. In Chapter E, we explore other methods for deriving simulation diagrams.
Erample 6.6.1
We obtain a simulation diagram for the system described by the difference equation
y(n)
If
- l) - 0.25y(a - 2) + 0.0625.v(n - 3)
= r(n) + 0.5 x(n - 1) - .r(n - 2) + 0.25.t(r - 3)
-
0.25y(n
(6.6.1)
we now solve for y(n ) and group like terms together. we can write
y(n) =
r(z) + D[0.-5x(n) + 0.25y(n)l + Dzl- x(n) + 0.25y(n)]
+
D3[0.25
r(r) -
o.062sy(n)]
where D represents the unit-delay operator defined in Equation (6.5.2). To obtain the simulation diagram for this system, we assume that y(n) is available and first form the signal
-
0.0625
y(n)
We pass this signal through a unit delay and add
-r(a)
+ 0.25 -vQr) to form
ua@) = 0.25
ur(n)
:
p19.25
*(n)
-
x(n)
0.M25
y(n)l + [-.r(n) +
0.25,v(n )]
We now delay this signal and add 0.5 .r(n ) + 0.25 y(n) to it to get
a2(/u)= D2l0:Er(n)-0.0625y(n)l +
If
D[-r(n) +0.25y(n)] + [0.5x(n)
we now pass or(rr ) through a unit delay and add
r(n),
+ 0.25y(z)]
we get
u,(n) = 2r1g.2tr(n)-0.06?5y(n)l + D'z[-.r(z) + 0.25y(n)]
+
D [0.5
.r(z) +
0.25 y(n
)]
+
r(r
)
Clearly, o,(z) is the same as y(n ), so that we can complete the simulation diagram by
equating the two expressions. The simulation diagram is shown in Figure 6.6.1.
Consider the general Nth-order difference equation
y(n) + ary(n
-
1)
+ ... + ary(n
-
N)
= bax(n) + brx(n
- l) + ... + b^,.r(n - N)
(6.6.2)
By following the approach given in the last example, we can construct the simulation
diagram shown in Figure 6.6.2,
To derive an alternative simulation diagram for the system of Equation (6.6.2), we
rewrite the equation in terms of a new variable u(n) as
Discrele-Time
308
Systems
Chapter 6
.r(n)
,r(z)
Flgure
l-]
+
u; (a)
6.6.1 Simulation diagram for Example
=y (z)
6.6.1.
x(n)
-r'( z
Figure
6.6,2 Simulation diagram for discrete-time
)
svstem of order N.
N
u(r)+)a,u(n-j):r(n)
(6.6.3a)
i=t
,v
y(n):lbSt(n-m)
(6.6.3b)
m-0
Note that the left side of Equation (6.6.3a) is of the same form as the left side of Equation (6.6-2), and the right side of Equation (6.6.3b) is of the form of the right side of
Equation (6.6.2).
Sec.
6.6
Simulation Diagrams for Discrote-Time Systems
To verify rhal these two equations are equivalent to Equation (6.6.2). rvc substitute
Equation (6.6.3b) into the left side of Equation (6.6.2) to ohtain
)(r) * j=l
t
aty(n
-r, :
=
Z,
b,,,u(n
ttt=
-,r) *
i
o,lf,,,o,,,u{u
- ^ - r,]
m\ +
.,- n]
,i),u^I,, _r,"r, -
=lb_x(n-m\
where the last step follows from Equation (6.6.3a).
To generate the simulation diagram, we first determine the diagram for Equation
(6.6.3a). If we have o(n ) available, we can generare o(n - 1). u(n - 2), etc., by passing u(z) through successive unit delays. To generate o(n ), we note from Equation
(6.6.3a) that
N
u(n)
-
y1n7
- )a,u(n - j)
i=r
(6.6.4)
To complete the simulation diagram. we generate _v(n ) as in Equation (6.6.3b) by suitably combining a(n).fln - l), etc.. The complete diagram is shorvn in Figure 6.6.3.
r(z)-+
Figure
6.63
Alternative simulation diagram for Nlh-order svstem.
Note that both simulation diagrams can be obtained in a straightforward manner
from the corresponding difference equations.
Erample 6.62
The alternative simulalion diagram for the system of Equation (6.6.I ) is obtained by writing the equation as
t(nl -
O.Zltt(n
' l) -
0.251(r
-
2) + 0.0625r,(rr
l) - t(l)
Discrets-Time
310
Systems
Chapter 6
and
y(nl = aln) + 0.5x,(r -
1)
-
o(n
-
2) + 0.?5a(n
-
3)
Figure 6.6.4 gives lhe simulation diagram using these two equations.
Ilgure 6.6.4 Alternative simulation diagram for Example
6.7
6.6.2.
STATE-VARIABLE REPRESENTATION
OF DISCRETE-TI ME SYSTEMS
As with continuous-time systems, the use of state variables permits a more comPlete
description of a system in discrete time. We define the state of a discrete-time system
as the minimum amount of information needed to determine the future output states
of the system. If we denote the state by the N-dimensional vector
v(n)= [u,(a) oz@). ' ,r:x@)l'
(6.7.1)
then the state-space description of a single-input, singe-output, time-invariant, discretetime system with input x(z) and output y(n ) is given by the vector-matrix equations
(6.7.2a)
v(n+1)=Av(n)+bx(a)
(6.7.2b)
y(n)=cv(n)+dx(n)
rvhere A is an N x N matrix, b is an N X l column vector, cis a I x Nrow vector, and
d is a scalar.
As with conlinuous-time systems, in deriving a set of state equations for a system,
we can start with a simulation diagram of the system and use the outputs of the delays
as the states. We illustrate this in the following example.
Erample 6.7.1
Consider the problem of Example 6.6.1, and use the simulation diagrams that we obtained
(Figures 6.6.1 and 6.6.4) to derive two state descriptions. For convenience, the two dia-
Sec.
6.7
31
State-Variable Representation ol Oiscret€-Time Systems
1
grams are repealed in Figures 6.7.1(a) and 6.7.1(b). For our first dcscription. we use lhr.'
outputs of the delays in Figure 6.7.1(a) as states lo get
) = ur(n) +.r(,r)
rr,(n + l) = az@) + 0.25.r,(n) + 0.5r(r)
= 0.25ur@) + u,(l) + 0.75r(rt)
(ti.7.3a )
.l'(n
u,(n +
l) = u.(n) + 0.25i(r,) -
(6 7
x(,?)
= Q.fJ1,(n) + u.(n) - 0.75.r(rt
o.,(z + l) = -0.625Y(n) + 0'25r(n)
= -O.M2Sut@) + 0.1875.t(n )
(6.7.3c)
)
(6.7.3d)
.r(n)
r,,
.'J(a)
r
(rr)
(r)----r
+(br
Figure
thr
6.7.1 Simulation diagrams for Exampls ().7
1 .
Discret€-Time
ln vsstor-matri.\ format.
vr,r+
Systems
Chapter 5
lhese equations can be lvritlen as
fo.zs 1ol
[o.zs
I
l)=l.,.zs 0 t lv(zt+ l-o.zs lxtnt
L o.oozs o o-l [o.rszs-]
l,(n)
: [l 0
0l v(n) +
g.7.4)
r(n)
so that
r ol
[o.zs
[o.zs I
5=lo.zs 0 ll 5=l-0.7sl, "=U 0 01. d=r
Lo.rsTs]
[-o.r.rozs o o.l
(6.7.s)
As in continuous time, rve refer to this form as ihe first canonical form. For our second
representation, we have. from Figure 6.7.1(b),
itrln'1
02,r' + t) = i{n)
itrln + t1 =
(6.7'6a)
(6'7.b)
l) = -o.o625ir(n ) +o.25i2b) + 0.25ir(n ) +r(n)
y(n) = o.25iln) - ir(n) + 0.5ir(n) + i3(n + l)
6.(n +
= -0.1875i,(n) -0.7562@) +0.75ir(n)
+.r(n)
(6.7.6c)
(6.1.a)
rvhere the last step follows by substituting Equation (6.7.6c). In vector-matrix format, we
can rvrite
1 ol
[o-l
o r lltnt+lo l..tnt
I o
i(n +r)=l 0
o.2sl
[-o.oozs o.2s
y(r) = [-0.1875 -0.75
so that
0.7s]
i(n)
o r ol
t
^
n=l o o I l.
o.zs)
Lrl
+ .r(n )
lol
b=lol.
L,l
c=[-0.187s -0.75 0.7s], d=r
L-o.oozs o.2s
G.7.7)
(6.7.8)
This is thc second canonical form of the state equalions.
By generalizing the results of the last example to the system of Equation (6.6.2), we
can show that the first form of the state equations yields
-at I "'
O-
tl'
-:'0...:
, c=
,
--o" O ": ;
[:
::
:l
;]
d=bo
g.,.e)
Sec.
6.7
itl
State-Vanable Representalion orDiscreie-Time Systems
J
whereas for the second form, we get
0
I
n
0
0
I
0
t:ll
'Ll
A=
I
0
b
n
- llt
-a:,t_l
--aN
-
T
a,rh,,
bx-r -
c=
.aru-rhu
b,
-
d=b'
(6.7. r0)
arb,,
These two forms can be directly obtained by inspection of the diffcrence equation
Figures 6.6.2 and 6.6.3. I.€t
v(z+l)=Av(z)+bx(n)
t-rr
(6.7.11)
Y(n)=cv(n)+dx(n)
and
(6.7.t2)
i(n+1)=Ai'(n)+br(rr)
.v(n)=6i(n) +2x@)
be two alternative state-space descriptions of a system. Then thcrc cxists a nonsingular matrix P of dimension N x N such that
(6.7.13)
v(a) -- Pi(n)
It can easily be verified that the lollowing relations hold:
A=pAp-r. u=p-'t, i=cP, i=a
(6,7.14)
6.7.1 Solution of State-Space Equatione
We can find the solution of the equation
v(r + 1) = Av(n) + br(n),
n=0:
v(0) =
by iteration. Thus, setting n = 0 in Equation (6.7.15) gives
v(l)=Av(0)+bx(o)
For n
:
l, we have
v(2):Av(l)+bx(l)
:
A[Av(o) + br(0)l + bx(l)
= A'?v(0) + Ab.r(0) + bx(l)
vu
(6.7.1s)
314
Dlscrete-Time
Systems
Chapter 6
which can be written as
v(2)
:
azr1s,
*
j
j-o
o'-'-'*1r'1
By continuing this procedure, it is clear that the solution for general z is
v(n
):
a'"16;
+!l'-i-'u,g1
j'o
$'7'16)
The first term in the solution corresponds to the initial-condition response, and the second term, which is the convolution sum of An-r and bx(z), corresPonds to the forced
response of the system. The quantity An, which defines how the state changes as time
progresses, represents the state-transition matrix for the discrete-time system O(n). In
terms of tD(n), Equation (6.7.16) can be written as
a-l
v(n):q1r;v(o) + )a(z
i'o
-l- l)bx(7)
$.7.1t)
Clearty, the frrst step in obtaining the solution of the state equations is the determination of A". We can use the Cayley-Hamilton theorem for this PurPose.
Example 6.79
Consider the system
ur(n + 7) =
ar(n
+
ll
or(n\
ll
= Sr,(r)
(6.7.18)
-
Oar(n)
+ x(n)
y(z) = u'(n )
By using.the Cayley-Hamilton theorem as in Chapter 2, we can write
4" = ao(z)I + c'(n)A
Replacing A in Equation (6.?.19) by its eigenvalues,
co(n)
(6.7.19)
-l *a I, leads to the equations
- ,l",t,= (-l)'
and
c1(z)+1","r=(1)'
so thal
oo(n)=3(i)'.i(-il
or(z)=;(i)'-;l;l
Sec,
6.7
State-Variable Bepresentation ol Dlscrete-Time
Syslems
Substituting into Equation (6.7.16) gives
315
l
il
rlrl*r(-zl.1
;l rl
^ [ilil::.rl
-a\-zl
r\a/
Lo\a/
G72.,
Example 6.7.3
t us determine the unit-step response of the system of Example 6.7.2 for the
v(0) = [l -llr. Substituting into Equation (6.7.16) gives
Le
v(z) =
4'
[-',]
case whcn
. i n'-' '[f]t'r
-i(-])"-'l
:ril.:(-)l
[
*[i(i)"'
=
.3(-l)"'l
L :o'-:(-)l.*Li(i)'.'
Putting the second term in closed form yields
'' [.i[;] .i[il].[i.i[i -lli]
The first term corresponds to rhe initial-condition response and thc second term to the
forced response. Combining the two terms yields the total response:
[s z! r r\n zz ttl'1
",,,=[;j[]l=l;_X)_;{ _i)l(
Ls ra\ z/ rs\+/J
I
n>0
(672,,
The output is given by
y(a)=u,(n)=
;.?(-1)'-it(l),', r,=0
(6'?.22)
We conclude this section with a brief summary of the propertics of the state-transition matrix. These properties, which are easily verified. are sontewhat similar to the
corresponding ones in continuous time:
l.
rD(n
+
1) =
Ao(n)
(6.7.23a)
2.
o(0) = 1
(6.7.23b)
]
iiie:
g-,i"[ ;1.i
-
-
i]
li$f'ft'rrigeJ,11ir r:iii
i.j*-5i6-f+.---'-:--"--'--'
A ,.u, tFI I
.1.
-t
Dlscrele-Time
Systems
Chapter 6
Transition property:
o(n
4. Inversion
'
-
/<) --
o(tt - i)@(i
-
k)
(6.7.21c)
property:
o-'(n ) = o(-z) if the inverse exists.
(6J.23d)
Note that unlike the situation in continuous time, the transition matrix car be singular and hence noninvertible. Clearly, Q-t(r) is nonsingular (invertible) only if A is
nonsingular.
6.72
Impulse Rosponse of Systems Describod
by State Equations
We can frnd the impulse response of the system described by Equation (6.7.2) by setting
v,, = 0 and x(z )
D(n ) in the solution to the state equation, Equation (6.7.16), to get
:
v(n)=a'-t6
The impulse response is then obtained from Equation (6.7,2b)
(6.7.24)
as
ft(n):3a'-tb+dE1n;
(6.1.?s)
Example 6.7.4
The impulse response of the system of Example 6.7.2 easily follows from our previous results as
[z
/r\'-' - 1/-1\'-' I /rY-' - 1/-1Y-'l
'iff
ii. tI
.ti
:i
;i
L:iij
]
=i(i)"-i(-l)'-" 'r=o
h(n"'\,
(6'7'261
STABILITY
As with continuous-time systems, an important property associated with discrete-time
systems is system stability. We can extend our defrnition of stability to the discrete-time
case by saying that a discrete-time system is inpuVoutput stable if a bounded input produces a bounded output. That is, if
lx@)lsu<thcn
ly(n)l
<r<-
(6.8.1)
S€c.
6.8
Stability ol
Discrele-Time
7
r1
By using a similar procedure as in
vcrr
. c.,s'.-,,, ",..'l
a condition for stability in terms ot tne system lmputse rcsl,(rr'.( . '
,
r,,
.:rl.,Lt
impulse rcspurlsL: l(rr), let.r(l) bc such that l.r(rr)i '-,!/. Thurr
.v(rr) isgrrut
by the convolution sum:
-v(n)= k') hk)x(tr-k\
((r.tt.2 t
-'.
so that
lv(n)l = | >,1(k)t(rr
k=-*
-k)l
s ) ltttll lr(n - k)l
t= -t
< /vr
> l/r(A.)l
l=-z
Thus, a sufficient condition for the system to bc stable is tltaL ilr
must be absolutely summable: that is,
)
la1tll
,
impulse response
" '"
(6.rj.3
)
That it is also a neccssary condilii,n can lre seen hy considcrir'r ,' itr[rui thc boundecl
signal x(/<) : sgnUr(n - t)1. or equivalcntly, .t(n - k) '- 'r,til(A')1, with corresponding outpul
i'(n): )
,'
l(/<)sgn[r(/<)1=
)
k=-r
lr,1r
r
^
Clearly, if &(n) is not ahsolutely summable,v(a) will be unhotrn.i
For causal systems, the condition for stability bccomes
i
lrroll
.-
',1.
(6.{i...1)
We can obtain equivalent conditions in terms of tlte locatiorrs trl r r elt:ttitt jclistic val'
ues of the syslem. Recall that for a causal systern described hr ;, ,l,i';ct-cnce equati(rn,
the solution consists of termsof tltelbrnrnro",* - 0. 1,.... ful . \', lr(.r\' ct Licnotcs I chiracteristic value of multiplicity I/. lt isclearthat if l<rl == l.thc r!11,,)i,ir'is not l-rotrndcd
for all inputs. Thus, for a systcnl to be stahle, irll the charirctl. lr\rrr' ,':llues rnusl ltavc
magnitude less than l. That is. thcy ntust all lie inside a circlc ol Ltr, r. ' r'utlitts ir the cornplex plane.
Forthe statc-r'ariable represcntilt i()n, lvL'sawthatthr-solutit'r: itl't'ttdsonthestarr'transition matrix A". The fornt ol A" is detcrmined b1/ thc cigr:rr' ;tlucs or charr.clct:\tic values oI the nlatrix A. Suppose wc obtain the dill'crcircr' !(lLli]tii)!l r:c'lating t tc
outputy(r)totheinput.t'(r,)byclirninatingthcstatevariahlL:,tIrrtL:.quirtiorrs(6.7.Ja)
and (6.7.2b). It can hc verified that the characterislic valttcs ol ttri.; r.,:uatirlF rrre cxacr.ly
OlscretFTlme
- 818
Systems
ChaPter 6
the same as those of the matrix A. (We leave the proof of this relation as an exercise
for the reader; see Problem 6.31.) It follows, therefore, that a system described by state
equations is stable if the eigenvalues of A lie inside the unit cfucle in the complex plane.
Example 68.1
Determine if the follos'ing causal, time-invariant systems are slable:
(i)
Sptem with imPulse response
[,(-])'. z(])'],t,r
,(,) =
(ii)
System described by the difference equation
v@
-*
v@
- D - lvrl, - q + !t@- 3) = :(n) + ?t(n - 2)
(itt)System described by the state equations
[r
1l
-ll,r,.
,r,*,r=ll,
La
a.l
[i]",
rr'r=[r -3]"'
For the frnt syttem. we have
,i_ lrr,ll
=,;.,O'* r(l)" = u *;
so that the systems is stable.
For the second system, the characteristic equation is
., -
!r1o,
-
l"
*I
=,
and the characteristic roots are qr E: 2,a, = -112 and c, = l/3. Since l"r | > t, ttris sys'
tem is unstable.
It can easily be verified that the eigetrvalues of the A matrix in the last syBtem are
equal to 3/2 t i 1/2. Since both have a magnitude Sleater than l, it follows that the sys'
tem is unstable.
a
a
a
A discrete-time (DT) sipal is delined only at discrete instants of time.
A DT sigral is usually represented as a sequence of values .r (z ) for integer values of z.
A DT signal .r(n) is periodic with period N if x(n + N) = x(n ) for some integer N.
S€c.6.9
r
Summary
319
The DT unit-step and impulse functions are related
,1n1
as
= j altt
k- -o
6(n) = a1r;
o
.
r
o
o
r
u(n
- l)
Any DT signal .t(n ) can be cxpressed in tcrms of shifted impulse functions
,@)=
.
-
The complex exponential
nal number.
as
L r(k)s(r-k)
t'-a
r(n) = exp [l0,,n ] is periodic
only
if Au/Zr
is a ratio-
The set of harmonic signals r* (n ) = exp [/.O,,n ] consists of only N distinct waveforms.
Time scaling of DT signals may yield a signal that is completely different from the
original signal.
Concepts such as linearity, memory, time invariance, and causality ir DT systems
are similar to those in continuous-time (CT) systems.
A DT LTI system is completely characterized by its impulse response.
The output y(n) of an LTI DT system is obtained as the convolution ofthe input
x(z) and the s)'stem impulse response h(n );
y(n)=h(n)*x(n):
i
n61r1r-
n,7
,ttd_,
o
o
The convotution sum gives only the forced rcsponse of the system.
The periodic convolution of two periodic sequences x,(n ) and rr(r)
N-l
.r,0, &)rr(k)
.r,(n) el xz@) =
-
)
l'0
r
An altemative representation of a DT sptem
l.l
N
is
in terms of the difference equation (DE)
> bPh2ooY@-t)= l-0
A-0
is
k)'
n:-0
o The DE can be solved either analytically or by iterating from known initial condi-
o
r
.
tions. The analytical solution consists of trvo parts: the homogeneous (zero-input)
solution and the particular (zero-state) solution. The homogeneous solution is
determined by the roots of the characteristic equation. Thc particular solution is of
the same form as the input r(rr) and its delayed versions.
The impulse response is obtained by solving the system DE rvith input.r(a) = E(r)
and all initial conditions zero.
The simulation diagram for a DT system can be obtained f rom the DE using summers, coefficient multipliers, and delays as building blocks.
The state equations for an LTI DT system can be obtained fronr the simulation diagram by assigning a state to the output of each delay. The equations are of the form
32q
L,r:scrare-Time
Systems
Chapter 6
v(rr + 1) = Av(rr) + b-r(n)
.v(rt) - cv(rr) )- :(n)
As in the CT case, for a given DT s1,stem, we can obtain several equlvalent simulatioo diagrams and, hence; several equivalent statc re presentations.
The solution of the state equation is
v(n) = 61r;v(n) +
y(n) =
cv(n
rr..l
),=(, ibin - i -
l)b.r(7)
) + dr(n)
rvhere
O(n) =
4'
state-transition matrix and can be cvaluated using the Cayley-Hamilton theorem.
The following conditions for tlre BIBO stability of a DT LTI system are equivalent:
is the
o
(a)
)
k=-t
la(*)l
<-
(b) The roots of the characteristic cquation are inside the unit circle.
(c) Thc eigenvalues of A are inside the unit circle.
6
10 CHECKLIST OF IMPORTANT TERMS
Cayley-Hamllton theorem
Characterlstc equailon
Coefllclent multlpller
Complex oxponenilal
Convolutlon sum
Delay
Dlflerenco equallon
Dlecrete-tlme slgnal
Homogeneous solutlon
lmpulse responae
6.1
1
Partlcular solutlon
Perlodlc convolutlon
Perlodlc slgnal
Slmulatlon dlagram
State equallons
State varlables
Summer
Trsnsltlon matrlx
Unlt-lmpulse luncllon
Unlt-step functlon
PROBLEMS
6.1. F'or thc discrstc-time signal shorvn in Figurc
(a) .r(2 - rt)
(tr).r(3rr * 4)
(c) .r(i rr + l)
/ ,, I tl\
/
(e) .r (a t)
lo),r(- I
P6.1. sketch each of the following:
Sec.
6.11
321
Problems
Flgure
-.1
(I)
P6.l
r(n ) for Problem
6.1.
x.(n )
(g) .ro(n )
(h) .r(2 - n) + x(3,t
Repeat Problem 6.1
-
4)
if
,r,l = {- r, i,., -;, - , }
',
t
Determine whether each of the following signals is periodic, and if it is, find its period.
=.,"(T.;)
o) r(n) = ''.(1i') -'t(1,)
(s) x(n)
(c) r(n ) = .'"
(lX.) .', (l ")
*o[?,]
(e) .r(z) = *r[rT,]
(d)
r(r)
=
pt,
- 2-) 26(n ,i.
(e) r(n) =.*(li') . *'(1,)
(f) :(n) =
-r
3m)l
The srgnal x(t) = 5 cos (120r - r/3) is sampled ro yield uniforntlv spaced samples 7 sec
onds apart. What values of '/" cause the resulting discrele-timc scquence to be periodic?
What is the period?
6.5. Repeat Problem 6.4 if .t (t) = 3 sin l(trrrr + 4 cos 120r.
6.6. The following equalities are used several places in the tex!. Provc their validity.
( | - on
(a)
i-;
5'o.=,1
n'o
[,v
rt
a=0
| - q
..+I
q-l
322
Discrete-Tlme
(") i o'= oo',: o'l-',
I -(l
c
*
Syst€ms
Chapter 6
I
6.7. Such conccpts from continuous-time systems
as nrcmory. time invariance, lineariry, and
causality carry over to discrete-time systems. tn the following, x(a) refers to the input to
a syslem and.v(a) refers to the output. Derermine whether the systems are (i) linear.
(ii)memoryless, (iii) shifi invarianr, and (iv) causat. Justify your answer in each case.
(o) y(r) = log[.r(n)l
O) y(n) = x(nlx(n - 2)
(c) y(n) = 3nt(n)
(d) y(z) = nx(n) + 3
(e) Y(n ) =:(n - l)
(f) y(a) = r(r) + 7:(n -
l)
(0y(,1)=irttl
l-0
(h)y(n)=irtrl
t'0
_ o,
(r) y(n) =
i,;,,
0) y(z) =
it,\;rntrrt, - oy
(k) y(a) = median lr(n
- l)..r(a).r(n + t)l
o) y(a) =
[:,xi;,, ; :3
(rn)y(n) =
[:,;l;,,
6.8. (a) Find
ili:;
the convolution
.y
ln) = h(n) r r (n ) of the following
[_r _5sn< _l
(t):(a)=t,'
o=n=4
h(n\ = 2u1nS
0r) x(n) =
h(n)
=
(])",r,t
51n1+ 5(n
- rr - (f)',t,1
(Ul) r(a ) = x12;
h(a)=1 0s"s9
(rv)'r(n)=(i)"'t'l
(v)
/r\a
* (iJrtrl
ft(a,y=
5,111
x(n ) =
(])',.,.
[(r)
= 51r;
*
)
0)'",,
signals:
Sec.
6.11
Problems
323
(vl) -r(n) = nu(n)
h(n)=
(b)
(c,
41n1
-
u(n
-
l0)
Use any mathematical software package to verify your resuhs.
Plot )(n ) vs n.
6.9' Find the c<rnvolution y (n ) = h(n) * x(n)
(a) r(a) =
{,
-] I -i :},
for each of the foUowing pairs of finite sequenc€s:
h(n) =
11,
-r. rr,-r}
f
(b).r('t)= 11,2..1.o.-1.1, hln) = 12,-1,3.1.-21
(c) r(,') =
nat =
, |,-}}
{,
j I ,.r}
{2.-
1
(d).r(n) =
{-,,;,;,-i,,},
(e) Verity your
(f)
h@)
= tt.l,1,r,rl
results using any mathematical software package.
Plot the resulting y(n ).
6.10. (a) Find the impulse response of ihe system shown in Figure
h,(n)=h2@)=(i)"1,)
h{n) = u(n)
n,ot =
G)"at
(b) Find the response of the system to a unit-step input.
h
tl l
h2Ot)
Flgure P6,10 System for Problem 6.10.
6.1L (o) Repeat Problem
6.10
if
,,t,,r =
(l)',t,r
h2(a) = 6@)
h,1n'1= ho@)=
(b) Find the
(i)"r,r
response of the system to a unit-step input.
P6.10. Assume that
324
Discrste-Time
Systems
Chapter 6
6.12. Let rq (a ) and r.(n ) be two periodic sequences with period N. Show that
) r,(/<)r.(n -
&)
l=0
= )'r,(k).r3(rr
I ',,u
k)
sequences of Problem 6.9 bv
lo the y(a) that you deterrelated
17(n)
6.f3. (a) Find the penodic convolurion .vr(r ) of the flntte-length
zero-padding lhe shorter sequence. How
mined in Problem 6.9.
(b) Veri$ your results using any mathematical software package.
6.1d. (e) In solving differential equations on a computer, we can approximate the derivatives
of successive order by the corresponding differences at discrete time increments ?.
That is, we replace
is
d'(t)
y(,) =
dt
with
x(nT)
y(nT) _
-
x((n
- t)r)
T
and
z0)
dv(t)
d2x(t)
dt
dt2
with
z(nr) =
tWP
-
x@r)
-
?:((n
- \r)
+ x((n
-
2)T)
,
Use thls approximation to derive the equation you would emPloy to solve the differential equation
,4#*y(t)=x(r)
(b) Repeat part (a) using the forward-difference approximation
q9:'t((z+1)I)-r(nt)
dtT
6.15. We can use a similar procedure as in Problems 6.13 and 6.14 to evaluate the integral of
coDtinuous-time functions. That is, if we want to find
y()=
f,"r(t1dt
+r(o)
we can write
aP =,r,
If we
use the backward-difference approximation for
y(nT) = Tx(nT) + y((n
- l\r),
whereas the fon*'ard-difference approximation gives
y(l)'
we get
y(0) = ,t(to)
Sec. 6.1
1
Problerns
325
'i.\U'f
r{(,rr Iill
)+
r(,r7t.
r.(()}
.r(r,l
(a) Lisr thcsc irppr()\rntalt()rr l, dcternlinc thc rnlcgrill ()f thc,:i.r:!inuous-time function
shown tn Figurc Ptr l5 l r r rn lhe rarrgt, [t). .il. r\ssunrc th;,r ,l . u.02 s. What is the
.1. r,i
-.rr,r
.1..'/t., \rl.r -) 5.
(b) Rcpcat part (irl k-,r 7' = (l(ll s. conllncnt ()n vour fesults.
ot:r
(sccunJs
Figure P6.15 lnput for Problem
I
6.1-5-
6.16. A better approximation lt-l thc intL.gral in Problem
zoidal rule
.r,(nf) =
71..1rf)
2
+ -r(,l
6..l-5 can hu rrlrtained
- I)f ) +.v((, - t)i
by the trape-
)
Determine the inteeral of the function in Problem 6.15 using this rulc.
6.17. (a) Solvc the following diffcrcnce equations bl,iteration:
(i) l(n) +),(x - I) + lolln - 2) =.r(n). n>0
.r'(- l) - 0. -v(-3) - 1. .r(n) = 111,,
1
(ii) l(,r)
](-
1t
-'o!@- I){;r'(z -2)=-r(r). ,r>0
rr =
r,
y(-2)
(iil) t,(r ) + .vtn
f
.v( - l)
(iv) .y(n +
](0) =
(b) Using
t). ,t,t -- (l)',,r,1
- rt + Jltr - 2)=t(n),
=0, y l-2 I = tt.
l)
l,
+
I
,I' (n.r Qr
(v) vQt) = r(n)
.r (rr
=
+
,t,= (l[
I) = x (n)
rl
x
=(l
(r, )
1
- ,r(n - l),
,r=()
)= [) u(n)
l..r n- l) + Zr(fl.- 2). r=0
J
) = &(r)
any rnathematical software package,
20. Obtarn a plot r)l l,(n ) vs. r.
verii lour
resulls
lor'l in lhe range 0 to
g26
Discr€te-Time
Systoms
Chapter 6
6.18. Determine tha characteristic roots and the homogeneous solutions of the following difference equations:
(l) r(n) -.y(n - g+)t(n -2) = x(n). n=0
Y(- t) =
0'
Y(-2) =
1
- |r,, - rl - l.rt, -2l= x(n)*trrln- t'1. z=0
.Y(- l) = I' Y(-2) = s
(lil)y(n) -.v(n - t7*'Or<, -2)= x(n), n >0
.Y(- 1) = l. 1'(-2) =
(lt) y(r)
1
1t
-it@-D+ S@-2)=r(n).
(iv)y(n)
v(-
1) =
2.
tl
Y(-2\ = o
- att" - 1)-ry(, Y(- r) = r' Y(-2): -
(v) y(a)
n>o
2) =
x(n),
n=0
1
6.19. Fiod the total sotution to the following difference equations:
(a)
y(n)*|rt, - r)-*yt, - 2)= r(n)*!,6- r'1.
if
y(- l) = I'
(b) y(a) +
(c) y(z)
(d)
y(-Z)
: O, and
r
;t@ - l) = r(z)
* ;lt<n -
r) = r(n)
x(n)
:
2
n
=0
cos3-fi
if
y(- l)
:0
/ly a(a)
and r(z) = (-2l
if
y(- t)
:I
and
r(z) = (i);Ol
y(n)*flrt, - 11 +frt, -2)=r(n), r,=o
ify(-l) = y(-2) = | and rrr= (i)"
(e) y(n + 2)-|iy@+r)-fir(z)=:(n)+|r1a+
if Y(1) = Y(0) =
o arrd ,r,
=
r),
z>0
(l)"
62t1. Find the impulse response of the systems in Problem 6.18.
621. Find lhe impulse respons:s of the systems in Problem 6'19.
6,til
can find the difference-equation characterization of a system wu=ith a given impulse
respnse ft(n ) by assuming a difference equation of apPropriate order with unknown coeffit-ients. We can substituie the given A (n ) in the difference equation and solve for the coeffir:ienrs. Use this procedure to finrl rhe difference-equation representation of the system
with impulse r".pln* h(n) = ('til"u(n) + (l)'u(n) by assuming that
we
-v(n) + ay(n
and find a, D, c, and
/.
- l) + by(n - 2) = u(n) + dr(n -
1)
Sec.
6.11
Problems
327
6.ani. Repeat Problem 6.22 if
or,r= (-l)',,t,r 6J4
We can frnd the impulse response
(-j)',r, -
rr
/r(z) of the system of Equation (6.5.11)
by first linding
the impulse response fto(a ) of the system
2
t.0
ory(n
-
(P5.1)
k) = x(n)
and then using superposition to find
M
/l(n)=)b,ho@-k)
t -0
(a) Verify that the inilial condition for solving Equation (P6.1)
is
!
v(o) = al
(b) Use this method to find the impulse response of the system of Example 5'5.6.
(c) Find the impulse responscs of the systems of Problem 6.17 by using this merhod.
6li.
Find the two canonical simulation diagrams for the systems of Prohlem 6.17.
615. Find the corresponding forms of the state equations for the systems of Problem 6.17 by
using the simulation diagrams that you determined in Problem 6,2.5.
627. Repeat Problems 6.25 and 6.76 for the syslems of Problem 5.1E.
6r& (a) Find an appropriate sei of state equations for the systems describcd by the following
difference equations:
- fitr" - rl * ]ot<" - z) - iqy@- 3) =.t(n)
*
(ll) y(n) + 0.70't y(n - l) + y(n - 2) = x(n') *
lrO - r1 j.r1n (|ff) y(n) - 3y(n - t) + zt(n - 2) = x(n)
(r)
y(n )
z1
Find A'for the preceding systems.
Verify your results using any mathematical soltware packagc.
Find the unit-step response of the systems in Problem 6.2E if v(0) = n.
Verify your results using any mathematical software pa:kage
Consider the state-space systems with
(b)
(c)
6g). (a)
(b)
A=
r=[-"].
c=il rt. r/=o
(s) Veri$ that the eigenvalues of A are I and - | .
o) Let i(n ) = Pv(n). Find P such that the state representation in terms of
[i
i=li ,l
L'-ll
o.l
i(z)
has
B2B
63L
631
Discret€-Time
Systems Chapter 6
(This.is the diagonal form o[ the srarc cquations.) Find the corresponding values for
b. i. d. and i(0).
(c) Find the unit-step response ofrhe system representation thal you obtained in part (b).
(d) Find the unit-step response of rhe original system.
(e) Verify your results using any mathematical software package.
By using the second canonical form of the state equations, show thar the characteristic valuesof the difference-equation representation of a system are the same as the eigenvalues
of the A matrix in the state-space characterization.
Determine which of the following sysrems are stable:
(a)
"'= {[ll ;-,
(b)
nr,,=[3', o<a<t(x)
otherwise
10.
(c)
,,,,
_
[(l)""..'.
n>o
Iz"cosrz. n<o
(d) y(z)
=:(n) + zx(n -
(e) y(z)
-2y(n
(r)
.v(a + z)
-
l) +y(z
*
-
)16 -
2) =
z\
:(n)
+
x(r
-
l)
- D - lyb - 2) = x(n)
t rl
-1y@
I
rer
1)
'r, + rr = l ? ] 1,",. [_
L-o')
i] ,,,,, y(n) = rr olv(z)
(h)v(,,+,)=[-l l],r,1.[f] .r,r
y@)=tz rlv(a)
Chapter 7
Fourier Analysis
of Discrete-Time Systems
7.1
INTRODUCTION
In the prcvious chapter. wc considcred techniques for the tintr:-dornain analysis oldiscrete-time systems. Recall that. as in the case of continuous-tinre systems. the primary
characterization of a linear. time-invariant. discrete-time system that we used was in
terms of the response of lhe system to the unit impulse. In this lrrd subsequcnt chapteni, we consider frequency-domain techniques for analyzing discrete-time systems, We
start our discussion of these techniques with an examination of thc Fourier analysis of
discrete-time signals. As we might suspecl, the results that we ohtain closely parallel
those for continuous-time systems.
To motivate our discussion of frequency-domain techniqucs, Iet us consider the
response of a linear, time-invariant, discrete-time system to a complex exponential
input of the form
r(n) = ."
(7. r.r )
where e is a complex number. lf the impulse response of the system is ft(rl), the output of the system is determined by the convolution sum as
y(r)= X lr(/<)r(n-k)
*= --
=Z
k=--
h?\2,-k
z')
hG\z-k
=
(7.t.2)
k=-329
ggo
Fouri€r Analysis ot Discrete-Time
For a fixed 4, the summation
is
Systems
Chapter 7
just a constant, which we denote by H(z): that
H(z)=
is,
i ofo',r-o
(7.1.3)
H(z\x(n)
(7.1.4)
l- --
so that
y(n) =
As can be seen from Equation (7.1.4), the outputy(n) is just the input.r(n) multiplied
by a scaling factor I/(e).
We can extend this result to the case where the input to the system consists of a linear
combination of complex exponentials of the form of Equation (7.1.1). Specifically. let
,v
x@)=)alzi
(7.1s)
It= I
It then follows from the superposition property and Equation (7.1.3) that the ouput is
N
y(a) =
arH(zt)zi
)
l-l
iV
=
) b,zi
e.t.6)
&-l
That is, the output is also a linear combination of the complex exponentials in the
input. The coefficient bo associated with the function z[ in the output is just the corresponding coeffrcient a* multiplied by the scaling factor H (21).
trrample 7.1.1
Suppose we want to find the output of the system with impulse response
ar,r =
(i).,r,r
when the input is
x(n) =
2*'2a
n
To frnd the outpul, we fint find
/{(z) =
.i (:)..
=.4
(i. ,l =
;_ ,_,
lj
.
where we have used Equation (5.3.7). The input can be expressed as
*<,t =
so
that we have
.
"*eliz],] *, [-, T,]
<1
,
.
Sec.
7.2
Fourler-Series Represontation of Discrete-Time periodic Signals
',
=
*o[,?]'
=
', "-o[-;:u],
331
at=i]'=l
and
H(;,) =
I
t - ;exp[-j(2r/3)l
H\22)
= - --
1-
--
I
---
=
{?'*or-;*t'
= :2z
"xpUil,
t- rexplj(2n/3)l vt
It follorvs from Equation (7.1.5) that the output
,
O
\/t
= tan-,
5
is
v@=
Acxpllol*rl,?,]
=:""'(f , * o)
* ?l
14 '*n
[
-,, ':
"]
A special case occurs when the input is of the form exp [jO^ ], where Oa is a real, continuous variable. This corresponds to the case lz1 | = 1. For this input, the output is
y(n) - H(eitt) exp[7oon]
(7.t.7)
where, from Equation (7.1.3),
H(eia1= H(O)
7.2
= ) a(r)exp[-j{)n]
(7.1.8)
FOURIER-SERIES REPRESENTATION
OF DISCRETE-TIME PERIODIC SIGNALS
Often. as wc have seen in our considerations of continuous-time systemsr we are interested in the response of linear systems to periodic inputs. Recall that a discrete time
signal x(a) is periodic with period N if
r(n)=r1r*r,
(7.2.1)
for some positive integer N. lt follows from our discussion in the previous section that
if x(n) can be expressed as the sum of several complex exponcntials. the response of
the system is easily determined. By analogy with our representation of periodic signals
in continuous time, we can expect that we can obtain such a representation in terms of
the harmonics corresponding to the fundamental frequency 2r/N.That is, we seek a
representation for x(r) of the form
,(r)
=
)tAa1,exp[j0rnl= )
a*xo(rr)
(7.2.2)
Fourier Analysis ot Discrete-Time
Systems
Chapter 7
where ()o = 2rk/N.It is clear that the xo(n) arc periodic, since Oo/2tr is a rational
number. Also, from our discussions in Chapter 6, there are only N distinct waveforms
in this set, corresponding to k = 0, l. 2. .... N l. since
-
xaQr) = rr*,r(n
),
for all k
(7.2.3)
Therefore, we have to include only N terms in the summation on the right side of
Equation (7.2.2). This sum can be taken over any N consecutive values of &. We indicate
this by expressing the range of summation as i! = (M. However, for the most part, we
consider the range 0 s & < N - l. The representation forx(n) can now be written as
,(r) =
Equation
(7
o)-,,
,r*oli'#o^f
(7.2.4)
,2.0 is the discrete-time Fourier-series representation of the periodic
sequence x(n) with coefficients a*.
To determine the coefficients al, we replace the summation variable & by nr on the
right side of Equation (7 .?.4) and multiply both sides by expl- j2rkn /Nl to get
,1,r;
e*p[-;';
*]=
Then we sum over values of n in [0, N
^\*o^"-rfi'ff
-
<^
-
o,t"]
1] to get
,e,","*p[-r'i*]=,r*-2.*,- *rli'#o By interchanging the order of summation in Equation
(7
)o':l-s
ilo
For a
= l.
r
onf
ovf
- o*1
I-a
(7.2.8)
we have
)c':N
can let
a
(7.2.e)
m
-
- *
is not an integer multiple of N (i.e., m
k rN for
= expljZr.(m t)/Nl in Equation (7.2.8) to get
-
*o[r*
F*
lf.
(7.2.7)
A,
,-0
lf m - k
e.2.6)
.2.6), we can write
5',r,1*p[-i'#*)=,,e,e o^"*ol1!o From Equation (6.3.7), we note that
/V-l
(7.2.s)
{m
r = 0, +1, t2,
- on)= !-+r-Ie(?C,4)@l
k is an integer multiple of N, we can
.>*
"*[r'#
r)4t
=
o
etc.), we
(,.z.to)
use Equation (7.2.9), so ihat we have
1^
- *1^)= it
(7.2.11)
sec.
7.2
Fourier-series Representation of Discrete-Time periodic
srgnals
agil
Combining Equations (7.2.10) and (7.2.11), we wrire
,^ -ol,]
= ru1,,
(7.2.t2)
- k - rN)
where E(rz - k - rN) is the unit sample occurrinl ar m = t + rN. substitution into
,r*
"0[r'I
Equation (7.2.7) then yields
,P.,,,,,
"*p[-i'i *] =,,},
r,,,0{,zr
- A-
rN)
(7.2.13\
since the summation on the right is carried out over N consecutive values of m for a
fixed value ol k, it is clear thar the only value that r can take in the range of summation is r = 0. Thus, the only nonzero value in the sum corresponrJs to
k,and the
right hand side of Equarion (7.2.13) evaluates to Nar, so thar
i=
,^ =
* ,Z
r(r) .-p
[-,
,f kr]
(7.2.14)
Because each of the terms in the summation in Equation (7.2.14) is periodic with
period N, the summation can be taken over any N sucessive values of r. we thus have
the pair of equations
r(n)
= ) ,r*r[i';- orf
r =(M
(7.2.15)
and
,o
:
i,,?r..(r1.xp[
',T *]
(7.2.16)
which together form the discrcte-time Fourier-series pair.
Since r^ , r(n) = .rl(a), it is clear that
o**N = a*
(7.2.17)
Because the Fourier series for discrete-time periodic signals is a finite sum defined
entirely b'; the values of the signal over one period, the series always converges. The
Fourier series provides an eract alternative representalion ol thc time signal, and issues
such as convergence or the Gibbs phenomenon do not arise.
Example 7.2.1
Let.r(z) =
exp
[lKf[nl
N. By writing.r(r) as
for some K with O0 = 2rr/N. so rhat.r(1) is periodic wirh period
,t,l=.*o[7'f r,]o=nSn -
r.
ii follows lrom Equation (7.2.1-5) that in rhe range 0 s k < N l. only a^ = l, with all
other a, bcing zero. Since d1 *,y = (,6, thc spcclrum of .r(n) is a linc spectrum consisting of
discrcte irnpulses of magnitude I repeated ar intcrvals N(),,. as shorvn in Figure 7.2.1 .
3U
Fourler Analysis of DlscreleTlme
(N
Ilgure
Example
721
-
tr)Oo
0 /(oo
Systems
(N + K)Oo
K)Oo
Chapter 7
I
Spectrum of complex exponential for Example 7.2.1.
72J
L-t r(z)
be the signal
'(,): "*(T) ..t(T -;)
As we saw in Example 6.1.2, this signal is periodic with period N = 726
frequency {ln = 2n /126, so rhat n/9 ana
Since -&f!o corresponds ro (N
-
}
correspond to l40o and
*)f!0, it follows thal
-;
ana
l(ts0, and ll2.tlr. We csn therefore write
,@)
=:[dr
+ e-r11 +
]Vta't
+
-
|
arrd fundamental
l8fh
respectively.
can be replaced by
e-ite-i';j
=la,*" *4,uon, *|r,*, +1a,,**
so that
,* = I =
anz.arE
:
17
--
"l^,all
other a. = 0,0
= k = t?s
Frarnple 7.23
Consider the discrete-time periodic square wave shown in Figure 7.2.2. From Equation
(72.16). the Fourier coefficients can be evaluated as
,. = f
ftgure
7J2
.i "*o[-rr" *]
O
lL'
Periodic square wave of Example 7.2.3.
Sec.
7.2
Fourier-Series Representation ol Discrete-Time Periodic
Signals
3il5
Fork = 0.
|,t,',trr =?4-P
o, =
t + 0, \ve can use Equation (6.3.7) to Bet
t exp[j(2t/N)kMl - expl- j(zr/N)k(M
at=
- r --.*pl-]tz"tnkl
it
For
+ t))
"y-- a" / *L lzl[:: Il':':y @ : tn--- *o [ - i t' " /Y ('
expl-j(2r/N)(k/2)llexplj(2r/N)(k/2)l - expl- j(2r/N)(k/2)ll
N
=L
t
jl]
,.,|,#("lj)] k=r'2"N-I
=''qt';l:l'
_
,
We can, thcrefore, write an expession for the coefficients ar in tcrnrs ()f the sample values
of the function
t,r,,.'
\'"
'
=
sin[(]Msin
1
lXq4l
(o/2)
as
1 l2rk\
'-=n{ru/
The function /( . ) is similar to the sampling tunction (sin x)/-r that u'c have encountered
in the continuous-timc case. Whereas the sinc funclion is not periodic. the function/(O),
being the ratio of two sinusoidal signals with commensurate frcqucncies, is periodic with
period 2zr. Figure 7.2.3 shorvs a plot of the Fourier-series coefficicnts for M = 3, for values of N corresponding to 10. 20. and 30.
Figure 7.2.4 shows the partial sums xr(z) of the Fourier-series cspansion for this example for N = Il and M = 3 and for values ofp = 1, 2,3,4,and 5. rvhcre
,n(n)=
ofoo**rli'#o^l
As can be seen from the figure, the partial sum is exactly the origrnal sequence forp = 5.
Erample 72.4
l.-et.r(n) be the periodic extcnsion of the sequence
[2.
The period is N = 4, so that exp
[I
ih
/
-1.
1.2]
Nl = -1. The coefficients
a,,=.r(2-l+l+2)=l
rra
are therefore given by
f.r:i, Ir^'s
of Dlscrote.Time sysrems
Chapter 7
N=20
.ry
tlgue
723
=J0
Fourier series coefficiens for the periodic square wave of Example 7.23.
o,=Irr+i-t-zn=l-il
)
or=le*r*r-zl=l
o,=Ie-i - r + ,,t =I* il=
"l
In general, if .r(a) is a real periodic sequence. then
at,
= ai-*
(7.LtE)
ll)
(!
Q)
(E
a
cr
(J
!,
.9,
L
{,,
:(!
uo
c
o
tr
lll
o.
x(l)
(u
tq,
c)
=
o
E.
!u
o
:U:
(,
A.
9
Ft
F
oa
ba
It
337
Fourier Analysis ot Discrete-Time
Systems
Chapter 7
Example 72.6
Consider the periodic sequence with the following Fourier-series cocfficienls:
,^ =
j.inT .,'r.orki.
Tlre signal .t (r ) can be determined
.*1n; =
j
a^ ex
eli'zri
o<ft s tl
as
rrl
=*[".r,r,*"/6)l-rypl-j(k"/6\l +9xpli(ktr/J)l_!rexpl-i(kr/2lll*r[i];*1
=
,? [,L {"-o['1;
ot' *
. )[*r[,i;or,
ot" 'rl "-p[r ]1
'r]]
* rr] * *pfi2ri,,(, - 3)]]]
Using Equation (7.2.12),we can write this equation
.r,
=
;6(n
+ r)
- lr,, -
rl *
as
l11,
*rr * ]s(n -3)
The valucs of the sequcnce.r(rr) in onc pcriod arc thcrefore given by
{. - ;.r.j.o o.r.o.o.l.o.1}
rvhere we have used the fact that.r(N +
*;
=
.11;a;.
.2.16) that. given two sequences x, (n)
and -r2 (n). both of period N. with Fourier-series coefficients a,^ and aro. the coefficients
for the sequence A,r, (l) + Bxr(n) are equal to Aor* I Ba.,r.
For a periodic sequence with coefficients at. we can find the coefficients D. corresponCing to the shifted sequence.r(rr - m) as
It follows from the definition of Equation
b^
= i,,E=,r,,,
-
(7
^l*el-izl *,f
(7.2.te)
-
By replacing n m by a and noting that the summalion is taken over any N successive values of n. we can write
,^ =
(i,,)=,,.,(,,.*p[-r ? o,])"-r[-,'J
r,f= .*p[-i'; o^f"r
(7.2.20)
Let the periodic sequence x(n). with Fourier coefficients a*. be the input to a linear
system with impulse response ft (a). where /r(n) is not periodic. [Note that if fi(z) is also
periodic, the linear convolution ofr(r) and ft(n) is not defined.] Since. from Equation
(7.1.7). the response y^(n ) to input a* explj(Zr/N)knlis
Sec.7.2
Fourier-Series Representation of Discrete-Time Periodic
v*(n\ =
,r,(+t)
.-n
l;
it follows that the response to r(n) can be written
v@):
oZ-vr(n)
=
*P--
Signals
f; t,l
339
\7.2.21\
as
"rr('*'o).-o[;2f ",]
(7.2.22)
H(Zrk/N)is obtained by evaluating H(O) in Equation (7.1.8) at A = ZtklN.
If x,(z) and xr(n) are both periodic sequences with the same period and with
where
Fourier coefficients aroand a21,, it can be shown that the Fouricr-series coefficients of
their periodic convolution is (see Problem 7.7b) Nar2ar1. That is.
.r,(n) @ x2(n)
<-+
Naroay
Also, the Fourier-series coefficicnts for their product is (see Prohlcrm 7.7a1 aro @ a*
.r'
(r ).rr(n )
o
atk t*) au
These properties are summarized in Table 7-1.
TABLE 7-1
Proportlea ot Dlscr€teTlme Fourler S€deB
l.
Fourier coefficients
.t,(n) periodic with period N
2. Linearity
Axr(n)
3. Time shift
x(n
4. Convolution
x(n\ *
+
,,-
:,1;..,1,,y",pI t?i,o]
Aar*
Bxr(n)
*
Bau
f
-
(7.2.t4)
.-
m)
| ,Ltt
expl't-i
fi1n1' ft(n ) not periodic
a,.It\;
H(o) =
1
,
Kn)nt
I
(7.2.20)
k)
,i_r'(n)
(7.2.2t)
cxp[-jo,l
5. Periodic convolution
r, (n) O .rr(z)
Nar*a*
(7.2.23)
6. Modulation
tr(n)xr(n)
atl
(7.2.24)
@)
ozt
Example 7.2.6
Consider the system with impulse responsc h(n\ = (1/3)'u(r). Suppose tvant to find thc
Fourier-series representation for the output y(a) when the input.t(n) is the periodic
extension of the sequence z. - 1,1,21. From Equation (7.2.22),it follows thal we can writc
y(n ) in a Fourier series as
y(,) =
with
*r^"-o[i'Jo,]
Fourior Analysis ol Discrete-Time
340
b' =
"'H(T
Systems
Chapter 7
k)
From Example 7 .2.4, we have
ao
=
r. ', = ,l = )-ii.
,, ='i
Using Equation (7.1.8), we can wrile
H(o)
=,2Eo\e'
(l)'*r,-,*, =
.+-l-iexp[-jo]
so that with N = 4, we have
'('; r) = , - i*o[-i]t]
tt follows that
\=
n@a--3;
t,
,(t),, =tLt:i?)
=
br= HQr)ar=f,
, Dt
,r =
Dt=
7.3
3(1
+j2)
-'nn
THE DISCRETE-TIME FOURIER TRANSFORM
We now consider the frequency-domain representation of discrete-time signals that are
not necessarily periodic. For continuous-time signals, we obtained such a representation by defining the Fourier transform of a signal .r(l) as
r"
x(u,)=g[x(t)l=lr(t)exp[-jtot]dr
J--
(7.3.1)
with respect to the transform (frequency) variable r'r. For discrete-time signals, we consider an analogous definition. To motivate this definition, let us sample x(t) uoiformly
every Iseconds to obtain the samples:(nT). Recall from Equations (4.4.1) and (4.4.2)
that the sampled signal can be written as
r,(r)
: x(r) j
n1-b
s1,
- ,r1
(7.3.2)
Sec.
7.3
The Discrete-Time Fourier Translorm
341
so that its Fourier transform is giren by
x,(,)
=
=
=
[' -r,1t7e-i-' dt
I
J-,
x(r)
>
n=-e
6(t
-
nT)e-,''dt
(7.3.3)
,i_*x(nT)e-i'r'
where the last step follows from the sifting property of the 6 funclion.
If we replace ro7 in the previous equation by the discrete-timc :requency variable
O, we get the discrete-time Fourier transform, X(O), of the discrctc-time signal r(r),
obtained by sampling.r(t), as
x(o)
: *lx(n)l: i
,(nl
exp[-iorr]
(7.3.4\
Equation (7.3.4), in fact, defines ,h. di.";;-;*e Fourier transforrn of any discretetime signal x(z). The transform exists if x(n ) satisfies a relation of the type
)
l.r(n)l
<-
or ) lx(n)1'z<-
(7.3.s)
*.
These condition. ur" ,ir,.,"nt to guarantee ,nr,
sequence has a discrete-time
Fourier transform. As in the case of continuous-time signals, there are signals that neither are absolutely summable nor have finite energy, but still have discrete-time
Fourier transforms.
We reiterate that although ro has units of radians/second. O has units of radians.
Since exp [loz ] is periodic with period 2n, it follows that X(O) is also pcriodic with
the same period, since
X(O+2z-)= i
r(nlexp[-l(O +2n)nl
"=:-
:
,I_r(r)
exp[-ion] = x(o)
(7.3.6)
As a consequence, while in the continuous-time case we have to consider values of o
over the entire real axis, in the discrete-time case we have to considcr values ofO only
over the range [0,2r].
To find the inverse relation betrveen X(O) and x(rr). we replac,: the variable n in
Equation (7.3.4) by p to get
x(o): i ,fplexp[-lop]
('1.3.7)
P=-6
Next, we multiply both sides of Equation (7.3.7) by exp
range [0,2n] to get
[j0n]
and intcgrele over thc
g4?
Fourier Analysis ot Discrete-Time
exp[ion]r/o =
,[=,,r,n,
I,i*"=,,
exp[io(n
r}*_rrp)
Systems
-
Chapter 7
p)ldo
(7.38)
Interchanging the order of summation and integration on the right side of Equation
(7.3.8) then gives
f" ,tnl exp[ion]rlo = ,|-,rorf"
exp[;o(n
- dlda
(7'3e)
It can be verifred (see Problem 7.10) that
f
so
expt;o(n
- p\tdo=
{3: :;i
(7.3.10)
that the right-hand side of Equation (7.3.9) evaluates to 2fr(z). We can therefore write
,fO
=
xtol
l,f"
exp[ion]do
(7'3'11)
Again, since the integrand in Equation (7.3.11) is periodic with period 2r, the integratiJn can be carried out ovel any interval of length 2zr. Thus, the discrete-time Fouriertransform relations can be written as
x(o)= ir{'texP[-ion]
,Ul
=
**
1,r,,
X(o)exp[ion]do
(7'3'12\
(7.3.13)
Example 7.3.1
Consider the sequence
x(n) =
o"u1n'' l"l t
'
For this sequence,
x(o) =
i
n-0
c'exp[-ion] =
G+i-Fj
The magnitude is given bY
lxtoll
=
\4 +;,
_ 2a-cosO
and the phase by
Argx(o):
-tan-rd*k
Figure 7.3.1 shows the magnitude and phase spectra of this signal for
these functions are periodic with period 2t.
c > 0. Note that
Sec. 7.3
The Discrete-Time Fourier Transtorm
|
.lrflr
343
Ar!
i
.\ ll,1
)
-2r'r0nlnO
tigure
73.1
Fourier spectra of signal for Examplc 7.3.1.
E=ernple 73.2
Lrt
r(n) = sl"l,
l"l .
t
We obtain the Fourier tranform of :(n ) as
x(o)
:
,I"
-t
=)
ol'lexp[-ionl
-
o-'exp[-lr)z] + ) o"expl-i{)nl
a
-0
which can be put in closed [orm, by using Equation (6.3.7), as
x(o)
=
___l
I - c-rexp[-70] I - oexp[-lO]
l-o'=--l-2ocosf,)+s2
In this case, X(O) is rehl, so that the phase is identically zero. Thc magnitude is plotted in
Figure 7 .3.2.
Eranple 73.8
Consider the sequence .r(n) = exp [i Oon l, with f]o arbitrary. 1'hus. .r(n) is not necessarily
a periodic signal. Then
u4
Fourier Analysis ol Discreie-Time
I .Y(r2)
Systems
Chapter 7
I
2nO
x(o)= i
Figure 732 Magnitude spectrum
of signal for Example 73.2.
2116(o- ttn-2nm)
(7.3.14)
ln the range [0, 2rrl, X(O) consists of a 6 function of strength 2n, occurring at O = fh. As
can be expected, and as indicated by Equation (7.3.14), X(O) is a periodic extension, with
period 2rr, of this 6 function. (See Figure 7.3.3.) To establish Equarion (7.3.14), we use the
inverse Fourier relation of Equation (7.3.13) as
.r(n) =
e-r1,,n, =
*"
f "x(o) exp[jon]do
=
i; L"[,,i_*u,n - n" - z,,rz)]exp[ioz]do
=
exp
[jOon]
where the last step follows because the dnly pemissible value for rz in the range of integration is ,n = 0.
We can modify the results of this example to determine the Fourier transform of an
exPonential signal that is periodic. Thus, let.r(n ) = exp[Tkfloalbe such that Oo = 2tlN.
We can write the Fourier transform from Equation (7.3.72) as
x(o)
f,2o- 4n
Oo-2n
Figure
Oo
Qo+Zt
733 Spectrum of exp [0rz].
Os*4r
Sec.
7.4
g5
Properties o, the Discrete-Time Fouder Transtorm
x(o)
Replacing
=,,i,
2z16(o
-
kttn
-
Ztrm)
2r by NOn yields
x(O)
=>
2t6(()
- tQ,-
N{),,nr)
Thar is. the specrrum consists of an Lnn,,. .., of ,rpulses o[ strength 2rr centered at tOn.
(t I N)q. (k '$ 2N)(h.etc. This can be compared to ihe result rvc obtained in Example
7.2.1. where we considered the Fourier-series representalion for 'r(rr). The difference, as
in continuous time. is lhat in the Fourier-series represenlation thc frequency variable
takes on only discrele values, whereas in the Fourier lransform the flequency variable is
continuous.
7.4
PROPERTIES OF THE DISCRETE-TIME
FOURIER TRANSFORM
The properties of the discrete-time Fourier transform closely parallel those of the continuous-time transform. These properties can prove useful in the analysis of signals and
systems and in simplifying our manipulations of both the forward and inverse trans'
forms. In this scction, rve considcr some of the more uscful propcrties.
7.4.1 Periodicity
We saw that the discrete-time Fourier uansfolm is periodic in O with Period 2T, so that
X(o+2r)=x(o)
(7.4.1)
7.42 Linearity
Let x,(a) and.rr(n) be two sequences with Fourier transforms X,(O) and Xr(O)'
resp€ctively. Then
Tlarxr(n) + arx2(n)l = a,X, (O) + arXr(A)
for any constants al and
Q.42)
a2.
7.4.8 fime and Frequency Shifting
By direct substitution into the defining equations of the Fourier transfonn, it can easily be shown that
?lx(n
- no)l = exP[-i Ono]x(O)
(7.4s)
- oJ
(7.4.4)
and
9[exp[j0or]x(n)l = x(O
346
Fourier Anatysis o, Discr€te-Time
Systems
Chapter 7
7.4.4 Difrerentiation in Frequcncy
Since
x(o) =,j_ x(n)exp[-ion]
it follows that if we differentiate both sides with respect to O, we get
-a
4X(a\
=
d; '?-?in)x(n) exP[-ion]
from which we can write
e[nt(n)l=
i
*(nlexp[-loz]
=i4{lop
(7.4.s)
BTarrrple 2.4.1
Letr(z) = no'u(a),rvith lcl < 1. Then, by using the resulrs of Example 7.3.1,
x
@
=i
_
-he
@"u(n)t = i
we can wrire
rt l-+r_,n;
o exp[- jO]
(1 - a exp[-i0])z
7.4.6 Convolution
l*t
y(n) represent the convolution of two discrete-time signals r(n) and ft(n ); thar is,
y(n\ = h(n) ," r(n)
(7.4.6)
Then
y(o) = H(o)x(o)
(7.4.7)
This result can easily be established by using the definition of the convolution operation given in Equation (6.3.2) and the definition of the Fourier transform:
y(o)
=
i
ytrlexp[-ion]
,i_ Li_
=
h(k)x(n- *)]expt-lonl
_i,r,o,[,i
_,(n
- r)expt-ion]]
Here, the last step follows by interchanging the order of summation. Now we replace
- k by n.in the inner sum to get
n
Sec.
7.4
Properlies of the Discrete-Time Fourier Transform
v(o)
=,I.,
(o)1,,i,
-r(n)exp
[-ion]]
347
exp t -
rrrrl
so that
r(o) =
i
n61xp1exp[-lok]
= H(o)x(o)
As in the case of continuous-time systems, this property rs extrcmely useful in the
analysis of discrete-time linear sysrems. The function I1(o) is rcf"rrej to as the
/re.
quency rcsponse of the system.
Example 7.4.2
A pure delay
is described by rhe input/output relation
y(n)=x(n-no)
Taking the Fourier transformarion of both sides, using Equation (7.4.j), yields
Y(O) = s,(r1-;onolX(O)
The frequency response of a pure delay is thercfore
H(O) = exP[-l0no]
Since
H(o)
has unity gain for
all frequencies and a linear phase, it
Example 7.43
Lrt
nat =
,r,r
0).,at
= (])",,r"r
Their respective Fourier transforms are given by
H(O) =
---.
r - jexpl-io1
x(o):- --l--
1- lexpt-lrll
so that
rs
distortionless.
w
Fouri€r Analysis of Discrete-Time
y(o) = H(o)x(o) =
Systems
r
- jexpt-;ol r - ]expt-7ol
r
- j expt-lol
r
Chapter 7
- lexpt-iol
By comparing the two tenns in the previous equation with X(O) in Example 7.3.1, we see
that y(n) can be writlen down as
v(n) =
,(;)' u(n) -
,(i)'u(n)
E-e'nple 7.4.4
As a modification of the problem in Example 7.3.2,\ea
&(n) =
ql'-'"|'
-@ < n <
@
represent the impulse response ofa discrete-time system. It is clear that this is a noncausal
IIR system. By following the same procedure as in Example 7.3.2,i1can easily be verified
that the frequency response of the system is
H(o) =
I
--*;;$
The magnitude function lA1Oll is the same
*
as
",
"*nt-ro,,]
X(O) in Example 7.3.2 and
is plotted in
Figure 7.3.2. The phase is given by
Arg H(O)
= - zog
Thus, H(O) represents a linear-phase system, with the associated delay equal to no. It can
be shown that, in general, a system will have a linear phase, if ft(a) satisfres
h(n) =
7t17ro
- nr,
-co < n <
co
the syslem is an IIR system, this condition implies that the system is noncausal. Since a
continuous-time system is always IIR, we cannot have a linear phase in a continuous-time
causal system. For an FIR discrete-time system, for which the impulse response is an Npoint sequence, we can find a causal i(n) lo satisfy the linear-phase condition by letting
l)/2.It can easily be verified that ll(z) then satisfies
delay zo be equal to (N
If
-
h(n)=
111Y
- I - n)' 0<n sN-
I
E=e,nple 7.4.6
Irt
"(o)={;:
l.1tl*
That is, H(O) represents the transfer function of an ideal low-pass discrete-time lilter
with a cutoff of O. radians. We can find the impulse respoDse of this ftlter by using Equation (7.3.11):
Sec.
7.4
Properties ol the Discrete.Time Fourier Translorm
1 ro.
-:- |
Ll J_{t,
h1al =
sin
_
exp [i
349
Oa]dQ
O.n
7tn
Exanple 7.4.6
We will find the output y(z) of the system with
- /rrz\
t(n) = 5'1';
- "'lT
7tn
/
when the input is
lrr.n\* lrn *
,inl7
r(n) = cos(
I\
,/
e-l
From Example 7.2.2 and Equation (7.4.12), it follows that, with {),, = n /OA,
0<O<2n
x(o)
=2,[:s(o-
t4oo)
*
f
utn-
n'i,t
tsoo)
to-
rosn,,) +
n
]srn - u2o,)].
Now
H(o) = I
so that in the range 0
< A < 2r
-
*.,(#), -r s lol ... r
we have
,,n,=l' i=o'llrr
Io
otherwise
y(o) = s161y,r'rol = z'[f, oro
-
rach)
*
?]
and
.tl
y
(n) = "- ' 4ta+ +
.- tl
'--
4tffn'
which can be simplified as
,t,l =,,"("i * l)
u,,,
-
the range
ro&rh)]
350
Fouder Analysis of Discrete'Time
Systoms
Chapter 7
7.4.A Modulation
Let y(a) be the product of the two sequences x,(n ) and xr(n) with transforms Xt(O)
and Xr(O), respectively. Then
'
Y(o): i
x,(n)x2@)exp[-j0n]
If we use the inverse pou.i.r-tron.rnln ,.t"tion to write r,
(n
) in terms of
is Fourier
transform, we have
v(o) =
,i.
[r't [,.,*,(r)exp[jon
]de]x,(n) exp[-lon]
Interchanging the order of summation and integration yields
v(()) =
j,
[,,,x,te) {,i_,,(,r)
exp[-l(o
- elnt]ao
so that
v(o) =
l[,r,,*,rrr*,(o
- e)do
(7.4.8)
7.4.7 Fourier Tlansform of Digcrete-fime
Periodic Sequences
Let .r(n ) be a periodic sequence with period N, so that we can express .r(z) in
a
Fourier-series expansion as
,(r) =
eaoexpfik0on]
(7.4.e)
where
n =
rh
-2n
N
(7.4.10)
Then
x(o) = hlx(n)l:
=
!'
*F- a* exPfuk'nnnl]
o*r1exp[fi{\n]l
We saw in Example 7.3.3. that. in the range [0, 2rr],
a*lexpljkdl"nll :2rr6(O
so that
- &q)
S€c.
7.5
351
Fourier Translorm ol Sampled Continuous-Time Signals
1nu1,
lzu1 2ta, 1ra,, ?ta1 2r.a1 2luu 7ta, \u'
- 3rl0 -2sl(r
-!lrr
0
Qu 2Qo .lQn 4Qu
Figure 7.4.1 Spectrum of a periodic signal N =
NI
X(O) =
)
2ra*6(O
5Q(,
3.
-k0o), 0sO<2n
(7.4.11)
Since the discrete-time Fourier transform is periodic with period 2n, it follows that
X(O) consists of a set of N impulses of strength 2ra*, k = 0, l. 2, ..., N - 1, repeated
at intervals of NOo = 2r. Thus, X(O) can be compactly written as
NI
(7.4.12)
2na*6(O k0o), forallO
X(O) =
)
I={)
-
This is illustrated in Figure 7.4-l for the case N = 3.
Table 7 -2 summarizes the properties of the discrete-time Fourier transform, while
Table 7-3 lists the transforms o[ some common sequences.
TABLE7.2
Proporllas of the Dlgcreto-Tlme Fourlet Translotm
1. Linearity
Axt(n) + Bx2h)
2. Time shift
x(n
3. Frequency shitt
.r(n) exp IjO,,n]
4. Convolution
.r, (n
5. Modulation
.r, (n ).x,
6. Periodic signals
.r(n) periodic with period N
)
nol
u .t2 (rt )
n :2'
7.5
(n)
AX.(A) + BXr(Q)
exp
[
(7.4.2)
(7.4.3)
-jO nolX (O )
x(() - oo)
x,(o)&(o)
(7.4.4)
(7.4.7)
lt
zn Jr,x'{r)x'ttt
!
2rrars(o
,, = ,l )..t,i
-
-
(7.4.8)
P)dP
(7.4.11)
A{)o)
cxp
[-lfton
I
FOURIER TRANSFORM OF SAMPLED
CONTINUOUS-TIME SIGNAL
We conclude this chapter with a discussion of the Fourier transform of sampled continuous-time (analog) signals. Recall that we can obtain a discretc-time signal by samp/ing a continuous-time signal. Le t.r,(t) be the analog signal that is sampled at equally
spaced intervals 7:
Fourler Analysis ot Discrete-Time
Syst€ms
Chapier 7
TABLE 7.3
Sorne Common Dlscrete-Tlme Fourler Tranelorm Paltg
Fourler Tranalorm (pododlc ln (}, perlod 2a)
Slgnal
6(a)
I
I
2n6(O)
exp[iOrz],
()oarbitrary
2zt6
N-l
) ao exp[j&fton],
t-0
a.nu(n), l"l
.
ot,l, l"l .
t
no."u(a), l"l
(O
-
()o)
/V-l
Nfh = 2rr
)
l-0
t
1
-
2taoD(O
-
&f!o)
exp[-lO]
a
l-o2
I-
.
2q cos
c
t
f) + d2
exp[-iol
(l - c exp[-jol)'?
rect(n/Nr)
sin(O/2)
sinO"n
recr(o/2o.)
tn
x(n) = x,(nT)
(7.5.r)
In some applications, the signals in parts of a system can be discrete-time
signals,
whereas other signals in the system are analog signals. An example is a system in which
a microprocessor is used to process the signals in the system or to provide on-line control. In such hybrid. or sampled-data systems, it is advantageous to consider the sampled sigral as feing a continuous-time signal so that all the signals in the system can be
treated in the same manner. when we consider the sampled signal in this manner, we
denote it by x,(r).
We can write the analog signal x,(r) in terms of its Fourier transform X, (o) as
,,(i = * f
exp[jor]dto
_X,(ro)
Sample values.r(n ) can be determined by setting t
x(n)
:
x,(nl)
:
:
nT in Equation (7.5.2), yielding
* f"X,(o) explj otnTldut
However, since x(n) is a discrete-time signal, we can write
rrne Fourier transform X(O) as
:(n) =
(7.s.2)
lr" X(O)
exp[lOn]do
," l_"
it in terms of
(7.s.3)
its discrete-
(7.s.4)
S€c.
7.5
Fourier Transform ol Sampled Continuous'Timo
Signals
353
Both Equations (7.5.3) and (7.5.4) rePresent the same sequence.r(n). Hence, the transforms must also be related. In order to find this relation, let us divide the range
of
-cp ( ro ( co inro equal intervals of length 2tt/T and express the right-hand side
Equation (7.5.3) as a sum of integrals over these intervals:
,ot = ];
,2.1,i-,'"i)*.(o)
If we now replace o by a + 2rr/T, we can write Equation (7.5.5)
,<o =
);
(7.s.s)
exp[rronrld,o
-T )*p[,('
,>-ll,',,*"(,
as
*';')"fa'
(7'5'6)
Interchanging the orders of summation and integration, and noting that
expli2t rnT /T] = 1, we can write Equation (7'5.6) as
, <o =
If
* I 1,"1,>*-*.(, * +r)]
we make the change of variable
,@=
exp
t;., rl a,
o = AlT, Equation (7.5.7)
(7.s.7)
becomes
* I _,li .Z--"(+ . 7,)] *o 1in,1,o
(7.s.8)
A comparison of Equations (7.5.8) and (7.5.4) then yields
x(o) = )r,>_*.(*.+,)
We can express this relation in terms of the frequency variable
(7.s.e)
or by
setting
,=7
(7.s.10)
With this change of variable, the left-hand side of Equation (7.5.9) can be identilied as
the continuous--time Fourier transform of the sampled signal and is therefore equal to
X,(o), the Fourier transform of the signal .r,(t). That is'
x,(r): r(n)ln_",,
Also, since the sampling interval is I, the sampling frequency o, is equal
We can therefore wfite Equation (7.5.9) as
x,(.)=!,,i.at,*'..1
(7.s.11)
to}rlT
ndls.
(7s.t2l
This is the result that we obtained in Chapter 4 when we were discussing the Fourier
transform of sampled signals. It is clear from Equation (7.5.12) that X,(o) is_ the perithe
odic extension, with peiod to,, of the continuous-time Fourier transform Xr(r,r) of
low'pas
analog signal x,(r), amplitudi scaled by a factor l/L Suppose that.ro(r) is a
signafruih thri its rpeitru. is zero for to ) to,,. Figure 7.5.1 shows the spectra of a typ'
in
ical band-limited analog signal and the conesponding sampled signal. As discussed
354
Fourier Analysis of Discrete-Tim"e
I Xs(cr)
-
(.r0
Systems
Chapter 7
I
o-o
(a)
I Xr{or)
I
tlr
-or T
-oo
0
oro
-ou - @r o0-ur
or,
- aJo
oo * cr,
(b)
rx(olr
(-asT
-.o,)T (a6- a,\T
llo"- alT
(.o.+.D,IT
(c)
Iigure 75.1 Spectra of sampled signals. (a) Analog specrrum. (b) Spectrum of x, (r). (c) Spectrum of x(z).
chapter 4, and as can be seen from the figure. there is no overlap of the spectrat compon€nts in x-(.) if to, - rrr1, ) to,,. we can then recover x,,(r) from the sampred
signat
xJt) by passing.r,(r) rhrough an ideal low-pass filter with i cutoff at rrr,, radls'and a
of 7. Thus, there is no aliasing distortion if the sampring frequency is such that lain
qrr-0ro>(r)rl
or
o, )
2t'ro
(7.s.13)
This is a restatement of the Nyquist sampling theorem that we encountered in
chap
ler 4 ard specifies the minimum sampling frequency that must be used to recover a
continuous-time signal from ils samples. cleariy. if i,1r; is not band limited.
there is
always an overlap (aliasing).
Sec.
7.5
Fourier Translorm of Sampted Continuous-Time Signats
Equation (7.5.10) describes the mapping berween the analog frequency ro and the
digital frequency O. It follows from this equation that, whereas the ur,its oiro are rad/s.
those for O are just rad.
From Equation (7.5.11) and rhe definition of X(o), it follorvs that the Fourier rransform of the signal .r,(r) is
x,(.)
=
,fi_rfurrlexp[-lr,rn
T]
(7.s.14)
we can use Equation (7.5.14) to justify the impulse-modularion model for sanpled
signals that we employed in Chapter 4. From the sifting property of the 6 function, we
can write Equation (7.5.14) as
x,(.)
=
il_-..Ur,i,u,,
-
nr'yexpl-jutldt
o1r
- zr)
from which it follows that
.r"(r)
=.r,(r)
)
(7.s.rs)
That is, the sampled signal x,(r) can be considered to be the product of the analog signal .r,(l) and the impulse
train
)
6(r
-
nI).
To summarize our discussi#iTo far, when an analog signal .r,(r) is sampted, the
samplcd signal may be considcrcd to be either a discrete-limc signal r(n) or a continuous-time signal r,(r), as given by Equations (7.5.1) and (7,5.15), respecrively. When
the sampled signal is considered to be the discrete-time signal .r(n), we can find its discrete-time Fourier transform
x(o)
= n=-a
i .r(n)e-in,
(7.s.16)
If we consider the sampled signal to be the continuous-time signal x, (t), we can find
its continuous-time Fourier transform by using either Equation (7.5.12) or (7.5.14).
However, Equation (7.5.12),being in the form of an infinite sum, is not useful in determining X,(o) in closed form. Still, it enables us to derive the Nyquist sampling theorem, which specifies the minimum sampling frequency or, that must be useo so that
there is no aliasing distortion. From Equation (7.5.11), it follorvs that, to obtain X(O)
from X, (to), we must scale the frequency axis. Therefore, wilh reference to Figure 2.5.1
(b), to find X(O), we replace to in Figure 7.5.1 (c) by oT.
If there is no aliasing, X,(o) is just the periodic repetition of X,(o) at intervals of
<o,, amplitude scaled by the factor l/I, so that
x,(r) = Lr*,,r,
Since
-(,,s
<
ar
=
=
i"(?)
(7.s.17)
- orl it follows that
(7.s.r8)
-zsOsn
X(O) is a frequenry-scaled version of X,(ro) with 0
x(o)
t,lr
356
Fourier Analysis of Discrete.Time
Systems
Chapter 7
n=arnple 7.6.1
We consider the analog signal .r, () with spectrum as shown in Figure 7.5 2 (a). The signal has a one-sided bandwidth /o = 50fi) Ha or equivalently, roo = zrfo = 10,( In rad/sec.
The minimum sampling frequency that can be used without introducing aliasing is
[ro,I,n = 2r,ro = 29,6*, rad/sec. Thus, the maximum sampling rate that can be used is
l/(2fs): l1p
',.ec.
I X"(a) |
T*=
I
-tr x ld
X" (otl
I
4xld
-zx Id rx ld
Ezx
ld
(b)
rx(o)r
v-tf (c)OJL4
Flgure
75,
Spectra for Example 7.5.1.
I
ld
Suppose we sample the sigral at a rate = 25 psec. Then (Dr = 8rT x
rad/sec. Figure 75.2 (b) shows the spectrum X, (o) of the sampled signal. The spectrurn is periodic with
perird rrr,. To get X(O), we simply scale the frequency axis, replacing ro by O = r,rT, as
shown in Figure 75.2(c). The resutting spectrum is, as expected, periodic with period 2n.
7.6.1 Reconstnrction of Sampted Sigtrals
If there is no aliasing, we can re@ver the analog sigpal .r"(t) from the samples x.(nI)
by using a reconstnntion fikenThe input to the reconstruction filter is a discrete-time
sigral, whereas its output is a continuous-time signal. As discussed in Chapter 4, the
reconstruction 6lter is an ideal low-pass frlter. Referring to Figure 7.5.1. we see that if
we pass.r"(t) through a filter with frequency response function
,.-t:[r,
t0,
I'l
",
otherwise
(7.s.te)
Sec.
7.5
Fourier Translorm of Sampled Continuous-Time Signats
357
to lie between o0 and rltr - o0, the spectrum of thu filter output will be
identical to X,(o), so that the output is equal to.r,(l). For a signal sampled at the Nyquist
rate, orj = 2roo, so that the bandwidth of the reconstruction filter must be equat to
a" = a,/2 = r /7.ln this case, the reconstruction filter is said to be matched to the sampler. The reconstructed output can be determined by using Equation (4.4.9) to obtain
with
<o, chosen
*,(0 -
17.s.20)
"2*,,@o"+is#;)
Since the ideal low-pass filter is not causal and hence not physically realizable, in practice we cannot exactly recover x,(r) from its sample values. Thus, any practical reconstruction filter can only give an approximation to the analog signal. Indeed, as can be
seen from Equation. (7.5.20), in order to reconstruct ra(l) exactly. we need all sample
values r,(nI) for r in the range
-). However, any realizable filter can use only
past samples to reconstruct r,(r). Among such realizable filters are the hold circuiu,
which are based on approximating .r,(l) in the range nT t < (n +
a series as
(--,
l)Iin
=
i,(t)
= x.(nT) + .r'"(nT)(t
-
nT) +
I
,.xi@T)(t
-
nT)2
+...
(75.21)
The derivatives are approximated in terms of past sampled values: for example
xi@7-l = l.r.(nT) - x,((n - t)r)l/ r.
The most widely used of these filters is the zero-order hold. rvhich can be easily
implemented. The zero-order hold corresponds to retaining only the first term on the
right-hand sidc in Eq. (7.5.21). That is, the output of the hold is given by
i,(t): x,(nT\ nT=t<(n + l)T
(7.s.22)
ln other words, the zero-order hold provides a staircase approximation to the analog
signal, as shown in Fig. 7.5.3.
Let goo(t) denote the impulse response of the zero-order hold, obtained by applying a unit impulse 6(n) to the circuit Since all values of the input are zero except at
n - 0, it follows that
i"(r)
----
Analog signal
Output of zero-orrler hold
-
OI2T3T4T5T6T'17
Figure
753
Recontruction of sampled signal using zero-order hold.
Fourier Analysis ot Discret+Time
o,,,trl =
Systems
0<r<T
otherwise
{1.
Chapter 7
(7.s.23)
with corresponding transfer function
G,o(S) =
!--'
(7.s.24)
J
In order to compare the zero-order hold with the ideal reconstruction filter, let
replace s by
us
7'r,r in Eq. (7.5.22) to get
G,o(,)=
_
t#
a
-i{-r/21
-e
='#lei@rt2) 2i
sin (trr,J/ro,)
T@/.n,
s-itru/a,l
(7.s.?s)
where we have used T = 2t/a,.
Figure 7.5.4 shows the magnitude and phase spectra of the zero-order hold as a function of <o. The figure also shows the magnitude and phase spectra of the ideal reconstruction filter matched to o.. The presence of the srle lobes ia Goo(to) introduces
distortion in the reconstructed signal, even rvhen there is no aliasing distortion during
sampling. Since the energy in the side lobes is much less in the case of higher order
hold circuits, the rcconstructed signal obtained rvith thcse filters is much closer to the
original analog signal.
I G1,6(ar) |
19a@>
Figure 75.4 (a) Magnitude
spectrum and (b) phase spectrum of
zero-order hold.
Sec.
7.5
Fourier Transtorm ol Sampled Continuous-Time Signals
An alternative scheme that is also easy to implement obtains the reconstructed signal x̂_a(t) in the interval [(n − 1)T, nT] as the straight line joining the values x_a((n − 1)T) and x_a(nT). This interpolator is called a linear interpolator and is described by the input-output relation

$$\hat{x}_a(t) = x_a(nT)\left[1 + \frac{t - nT}{T}\right] - x_a((n-1)T)\,\frac{t - nT}{T}, \qquad (n-1)T \le t < nT \qquad (7.5.26)$$

It can be easily verified that the impulse response of the linear interpolator is

$$g_1(t) = \begin{cases} 1 - \dfrac{|t|}{T}, & |t| \le T \\ 0, & \text{otherwise} \end{cases} \qquad (7.5.27)$$

with corresponding frequency function

$$G_1(\omega) = T\left[\frac{\sin(\pi\omega/\omega_s)}{\pi\omega/\omega_s}\right]^2 \qquad (7.5.28)$$

Note that this interpolator is noncausal. Nonetheless, it applies in areas such as the processing of still image frames, in which interpolation is done in the spatial domain.
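A short sketch (not from the text) of the linear interpolator of Equation (7.5.26), using an assumed sample sequence; it simply draws the straight line between the two samples that bracket t.

```python
import numpy as np

T = 1.0
samples = np.array([0.0, 0.8, 1.0, 0.3, -0.5])   # assumed values x_a(nT), n = 0..4

def linear_interp(t):
    """Equation (7.5.26): straight line joining x_a((n-1)T) and x_a(nT)."""
    n = max(min(int(np.ceil(t / T)), len(samples) - 1), 1)   # (n-1)T <= t < nT
    return samples[n] * (1 + (t - n * T) / T) - samples[n - 1] * (t - n * T) / T

print(linear_interp(1.5))   # halfway between samples 1 and 2 -> 0.9
```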
7.5.2 Sampling-Rate Conversion

We conclude our consideration of sampled analog signals with a brief discussion of changing the sampling rate of a sampled signal. In many applications, we may have to change the sampling rate of a signal as the signal undergoes successive stages of processing by digital filters. For example, if we process the sampled signal by a low-pass digital filter, the bandwidth of the filter output will be less than that of the input. Thus, it is not necessary to retain the same sampling rate at the output. As another example, in many telecommunications systems, the signals involved are of different types with different bandwidths. These signals will therefore have to be processed at different rates.

One method of changing the sampling rate is to pass the sampled signal through a reconstruction filter and resample the resulting signal. Here, we will explore an alternative, which is to change the effective sampling rate in the digital domain. The two basic operations necessary for accomplishing such a conversion are decimation (or downsampling) and interpolation (or upsampling).

Suppose we have an analog signal band limited to a frequency ω₀ which has been sampled at a rate T to get the discrete-time signal x(n), with x(n) = x_a(nT). Decimation involves reducing the sampling frequency so that the new sampling rate is T′ = MT. The new sampling frequency will thus be ω_s′ = ω_s/M. We will restrict ourselves to integer values of M, so that decimation is equivalent to retaining only one of every M samples of the sampled signal x(n). The decimated signal is given by

$$x_d(n) = x(Mn) = x_a(nMT) \qquad (7.5.29)$$

Since the effective sampling rate is now T′ = MT, for no aliasing in the sampled signal, we must have
$$T' \le \frac{\pi}{\omega_0}$$

or, equivalently,

$$MT \le \frac{\pi}{\omega_0} \qquad (7.5.30)$$

For a fixed T, Equation (7.5.30) provides an upper limit on the maximum value that M can take.
If there is no aliasing in the decimated signal, we can use Equation (7.5.18) to write

$$X_d(\Omega) = \frac{1}{T'}\,X_a\!\left(\frac{\Omega}{T'}\right) = \frac{1}{MT}\,X_a\!\left(\frac{\Omega}{MT}\right), \qquad -\pi < \Omega \le \pi$$

Since X(Ω), the discrete-time Fourier transform of the analog signal sampled at the rate T, is equal to

$$X(\Omega) = \frac{1}{T}\,X_a\!\left(\frac{\Omega}{T}\right), \qquad -\pi < \Omega \le \pi$$

it follows that

$$X_d(\Omega) = \frac{1}{M}\,X\!\left(\frac{\Omega}{M}\right) \qquad (7.5.31)$$

That is, X_d(Ω) is equal to X(Ω) amplitude scaled by a factor 1/M and frequency scaled by the same factor. This is illustrated in Figure 7.5.5 for the case where T = 0.4π/ω₀ and T′ = 2T.
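A small numerical sketch of decimation (assumed signal and rates, not from the text); it keeps every M-th sample, and when Equation (7.5.30) is satisfied, the decimated sequence still represents the analog signal without aliasing.

```python
import numpy as np

f0 = 50.0                  # assumed band limit of the analog signal, Hz (w0 = 2*pi*f0)
T = 0.002                  # original sampling period (500 samples/s)
M = 3                      # decimation factor; M*T = 0.006 <= pi/w0 = 0.01, Eq. (7.5.30)
n = np.arange(0, 300)
x = np.cos(2 * np.pi * f0 * n * T)        # x(n) = x_a(nT)

x_d = x[::M]                              # Equation (7.5.29): x_d(n) = x(Mn)
# check that x_d(n) equals x_a(n*M*T), i.e. the analog signal resampled at T' = MT
print(np.allclose(x_d, np.cos(2 * np.pi * f0 * np.arange(len(x_d)) * M * T)))
```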
Increasing the effective sampling rate of an analog signal implies that, given a signal x(n) obtained by sampling an analog signal x_a(t) at a rate T, we want to determine a signal x_i(n) that corresponds to sampling x_a(t) at a rate T″ = T/L, where L > 1. That is,

$$x_i(n) = x\!\left(\frac{n}{L}\right) = x_a\!\left(\frac{nT}{L}\right) \qquad (7.5.32)$$

This process is known as interpolation, since, for each n, it involves reconstructing the L − 1 missing samples between x(n) and x(n + 1). We will again restrict ourselves to integer values of L.

The spectrum of the interpolated signal can be found by replacing T with T/L in Equation (7.5.18). Thus, in the range −π < Ω ≤ π,

$$X_i(\Omega) = \frac{L}{T}\,X_a\!\left(\frac{\Omega L}{T}\right) = L\,X(L\Omega), \qquad |\Omega| \le \frac{\pi}{L} \qquad (7.5.33)$$
[Figure 7.5.5: Illustration of decimation. (a) Spectrum of the analog signal. (b) Spectrum of x(n) with sampling rate T. (c) Spectrum of the decimated signal corresponding to rate T′ = MT. Figures correspond to T = 0.4π/ω₀ and M = 2.]
As a first step in determining x_i(n) from x(n), let us replace the missing samples by zeros to form the signal

$$x_z(n) = \begin{cases} x(n/L), & n = 0, \pm L, \pm 2L, \ldots \\ 0, & \text{otherwise} \end{cases} \qquad (7.5.34)$$

Then

$$X_z(\Omega) = \sum_{n=-\infty}^{\infty} x_z(n)\,e^{-j\Omega n} = \sum_{k=-\infty}^{\infty} x(k)\,e^{-j\Omega kL} = X(L\Omega) \qquad (7.5.35)$$

so that X_z(Ω) is a frequency-scaled version of X(Ω). The relation between these various spectra is shown in Figure 7.5.6, for the case when T = 0.4π/ω₀ and L = 2. From the figure, it is clear that if we pass x_z(n) through a low-pass digital filter with gain L and cutoff frequency ω₀T/L, the output will correspond to x_i(n). Interpolation by a factor L therefore consists of interspersing L − 1 zeros between samples and then low-pass filtering the resulting signal.
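The zero-insertion step of Equation (7.5.34) followed by low-pass filtering can be sketched as below (assumed signal; the windowed-sinc low-pass filter with cutoff π/L and passband gain roughly L is an illustrative choice made here, not the book's design).

```python
import numpy as np

L = 2
x = np.cos(2 * np.pi * 0.05 * np.arange(0, 100))    # assumed sampled signal x(n)

# Equation (7.5.34): intersperse L-1 zeros between samples
x_z = np.zeros(L * len(x))
x_z[::L] = x

# low-pass filter, cutoff pi/L, passband gain approximately L (windowed sinc)
m = np.arange(-40, 41)
h = np.sinc(m / L) * np.hamming(len(m))
x_i = np.convolve(x_z, h, mode='same')               # approximates x_i(n) = x_a(nT/L)

print(np.round(x_i[40:44], 3))   # roughly the cosine sampled at twice the original rate
```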
[Figure 7.5.6: Illustration of interpolation. (a) Spectrum of the analog signal. (b) Spectrum of x(n) with sampling rate T. (c) Spectrum of the zero-filled signal x_z(n). (d) Spectrum of the interpolated signal x_i(n) corresponding to rate T″ = T/L. Figures correspond to T = 0.4π/ω₀ and L = 2.]
Example 7.5.2

Consider the signal of Example 7.5.1, which was band limited to ω₀ = 10,000π rad/sec, so that π/ω₀ = 100 μs. Suppose x_a(t) is sampled at T = 25 μs to obtain the signal x(n) with spectrum X(Ω) as shown in Figure 7.5.2(b). If we want to decimate x(n) by a factor M without introducing aliasing, it is clear from Equation (7.5.30) that M ≤ π/(ω₀T) = 4. Suppose we decimate x(n) by M = 3, so that the effective sampling interval is T′ = 75 μs. It follows from Equation (7.5.31) and Figure 7.5.5(c) that the spectrum of the decimated signal, X_d(Ω), is found by amplitude and frequency scaling X(Ω) by the factor 1/3. The resulting spectrum is shown in Figure 7.5.7(c).

[Figure 7.5.7: Spectra for Example 7.5.2. (a) Analog spectrum. (b) Spectrum of sampled signal. (c) Spectrum of decimated signal. (d) Spectrum of interpolated signal after decimation.]

Let us now interpolate the decimated signal x_d(n) by L = 2 to form the interpolated signal x_i(n). It follows from Equation (7.5.33) that

$$X_i(\Omega) = 2\,X_d(2\Omega), \qquad |\Omega| \le \frac{\omega_0 T'}{L} = \frac{3\pi}{8}$$

Figure 7.5.7(d) shows the spectrum of the interpolated signal. From our earlier discussion, it follows that interpolation is achieved by interspersing a zero between each two samples of x_d(n) and low-pass filtering the result with a filter with gain 2 and cutoff frequency 3π/8.

Note that the combination of decimation and interpolation gives us an effective sampling rate of T″ = MT/L = 37.5 μs. In general, by suitably choosing M and L, we can change the sampling rate by any rational multiple of it.
7.5.3 A/D and D/A Conversion

The application of digital signal processing techniques in areas such as communication, speech, and image processing, to cite only a few, has been significantly facilitated by the rapid development of new technologies and some important theoretical contributions. In such applications, the underlying analog signals must be converted into a sequence of samples before they can be processed. After processing, the discrete-time signals must be converted back into analog form. The term "digital signal processing" usually implies that, after time sampling, the amplitude of the signal is quantized into a finite number of levels and converted into binary form suitable for processing using, for example, a digital computer. The process of converting an analog signal to a binary representation is referred to as analog-to-digital (A/D) conversion, and the process of converting the binary representation back into an analog signal is called digital-to-analog (D/A) conversion. Figure 7.5.8 shows a functional block diagram for processing an analog signal using digital signal-processing techniques.

As we have seen, the sampling operation converts the analog signal into a discrete-time signal whose amplitude can take on a continuum of values; that is, the amplitude is represented by an infinite-precision number. The quantization operation converts the amplitude into a finite-precision number. The binary coder converts this finite-precision number into a string of ones and zeros. While each of the operations depicted in the figure can introduce errors in the representation of the analog signal in digital form, in most applications the encoding and decoding processes can be carried out without any significant error. We will, therefore, not consider these processes any further. We have already discussed the sampling operation, associated errors, and methods for reducing these errors. We have also considered various schemes for the reconstruction of sampled signals. In this section, we will briefly look at the quantization of the amplitude of a discrete-time signal.
The quantizer is essentially a device that assigns one of a finite set of values to a signal which can take on a continuum of values over its range. Let [x_l, x_u] denote the range of values, D, of the signal x_a(t). We divide the range into intervals [x_{i-1}, x_i], i = 1, 2, ..., N, with x_0 = x_l and x_N = x_u. We then assign a value y_i, i = 1, 2, ..., N, to the signal whenever x_{i-1} ≤ x_a(t) < x_i. Thus, N represents the number of levels of the quantizer. The x_i are known as the decision levels, and the y_i are known as the reconstruction levels. Even though the dynamic range of the input signal is usually not known exactly, and the values x_l and x_u are educated guesses, it may be expected that values of x_a(t) outside this range occur not too frequently. All values x_a(t) < x_l are assigned to x_l, while all values x_a(t) > x_u are assigned to x_u.

[Figure 7.5.8: Functional block diagram of the A/D and D/A processes.]

For best performance, the decision and reconstruction levels must be chosen to match the characteristics of the input signals. This is, in general, a fairly complex procedure. However, optimal quantizers have been designed for certain classes of signals. Uniform quantizers are often used in practice because they are easy to implement. In such quantizers, the differences x_i − x_{i-1} and y_i − y_{i-1} are chosen to be the same value, say Δ, which is referred to as the step size. The step size is related to the dynamic range D and the number of levels N as

$$\Delta = \frac{D}{N} \qquad (7.5.36)$$
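A minimal midriser-style uniform quantizer following Equation (7.5.36); the dynamic range and number of levels below are assumed values chosen only for illustration.

```python
import numpy as np

x_l, x_u = -1.0, 1.0      # assumed dynamic range [x_l, x_u], so D = 2
N = 8                     # number of levels (a 3-bit quantizer)
delta = (x_u - x_l) / N   # Equation (7.5.36)

def quantize(x):
    """Clip to the range, pick the decision interval, return its midpoint y_i."""
    x = np.clip(x, x_l, x_u - 1e-12)
    i = np.floor((x - x_l) / delta)        # interval index 0 .. N-1
    return x_l + (i + 0.5) * delta         # reconstruction level

samples = np.array([-0.93, -0.2, 0.01, 0.49, 0.97])
print(quantize(samples))
```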
Figures 7.5.9(a) and (b) show two variations of the uniform quantizer, namely, the midriser and the midtread. The difference between the two is that the output in the midriser quantizer is not assigned a value of zero. The midtread quantizer is useful in situations where the signal level is very close to zero for significant lengths of time, for example, the level of the error signal in a control system.

Since there are eight and seven output levels, respectively, for the quantizers shown in Figures 7.5.9(a) and (b), if we use a fixed-length code word, each output value can be represented by a three-bit code word, with one code word left over for the midtread quantizer. In what follows, we will restrict our discussion to the midriser quantizer. In that case, for a quantizer with N levels, each output level can be represented by a code word of length

[Figure 7.5.9: Input/output relation for the uniform quantizer. (a) Midriser quantizer. (b) Midtread quantizer.]
$$B = \log_2 N = \log_2\frac{D}{\Delta} \qquad (7.5.37)$$

[Figure 7.5.10: Quantization error for a linearly increasing input. (a) Quantizer input over the interval [iΔ, (i+1)Δ] during [T₁, T₂]. (b) Error e(t), which ramps from −Δ/2 to Δ/2.]
The proper analysis of the errors introduced by quantization requires the use of techniques that are outside the scope of this book. However, we can get a fairly good understanding of these errors by assuming that the input to the quantizer is a signal which increases linearly with time at a rate S units/s. Then the input assumes values in any specific range of the quantizer, say [iΔ, (i + 1)Δ], for a duration [T₁, T₂], with T₂ − T₁ = Δ/S, as shown in Figure 7.5.10. The quantizer input over this time can be easily verified to be

$$x_a(t) = \frac{\Delta}{T_2 - T_1}(t - T_1) + i\Delta$$

while the output is

$$x_q(t) = i\Delta + \frac{\Delta}{2}$$

The quantization error, e(t), is defined as the difference between the input and the output. We have

$$e(t) = x_a(t) - x_q(t) = \frac{\Delta}{T_2 - T_1}(t - T_1) - \frac{\Delta}{2} \qquad (7.5.38)$$

It is clear that e(t) also increases linearly, from −Δ/2 to Δ/2, during the interval [T₁, T₂]. The mean-square value of the error signal is therefore given by (see Problem 7.27)
$$E = \frac{1}{T_2 - T_1}\int_{T_1}^{T_2} e^2(t)\,dt = \frac{\Delta^2}{12} = \frac{D^2}{12}\,2^{-2B} \qquad (7.5.39)$$

where the last step follows from Equation (7.5.37). E is usually referred to as the quantization noise power.

It can be shown that, if the number of quantizer levels, N, is very large, Equation (7.5.39) still provides a very good approximation to the mean-square value of the quantization error for a wide variety of input signals.

In conclusion, we note that a quantitative measure of the quality of a quantizer is the signal-to-noise ratio (SNR), which is defined as the ratio of the quantizer input signal power P_s to the quantizer noise power E. From Equation (7.5.39), we can write

$$\text{SNR} = \frac{P_s}{E} = \frac{12\,P_s\,2^{2B}}{D^2} \qquad (7.5.40)$$

In decibels,

$$(\text{SNR})_{\text{dB}} = 10\log_{10}\text{SNR} = 10\log_{10}(12) + 10\log_{10}P_s - 20\log_{10}D + 20B\log_{10}(2) \qquad (7.5.41)$$

That is,

$$(\text{SNR})_{\text{dB}} = 10.79 + 10\log_{10}P_s + 6.02B - 20\log_{10}D \qquad (7.5.42)$$

As can be seen from the last equation, increasing the code-word length by one bit results in an approximately 6-dB improvement in the quantizer SNR. The equation also shows that the assumed dynamic range of the quantizer must be matched to the input signal. The choice of a very large value for D reduces the SNR.
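The 6-dB-per-bit rule of Equation (7.5.42) can be checked numerically. The sketch below (not from the text) quantizes an assumed full-scale sine wave with a midriser quantizer and measures the SNR directly; the measured values roughly follow 1.76 + 6.02B.

```python
import numpy as np

def measure_snr_db(B, n_samples=100000):
    """Quantize a full-scale sine (D equal to its peak-to-peak range) with B bits."""
    A = 1.0
    x = A * np.sin(2 * np.pi * 0.0123456 * np.arange(n_samples))
    N = 2 ** B
    delta = 2 * A / N                                        # Equation (7.5.36)
    levels = np.clip(np.floor(x / delta) + 0.5, -N / 2 + 0.5, N / 2 - 0.5)
    noise = x - levels * delta
    return 10 * np.log10(np.mean(x ** 2) / np.mean(noise ** 2))

for B in (4, 8, 12):
    print(B, round(measure_snr_db(B), 2), round(1.76 + 6.02 * B, 2))
```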
Example 7.5.3

Let the input to the quantizer be the signal

$$x_a(t) = A\sin\omega_0 t$$

The dynamic range of this signal is 2A, and the signal power is P_s = A²/2. The use of Equation (7.5.41) gives the SNR for this input as

$$(\text{SNR})_{\text{dB}} = 10\log_{10}(1.5) + 6.02B = 1.76 + 6.02B$$

Note that in this case D was exactly equal to the dynamic range of the input signal. The SNR is independent of the amplitude A of the signal.
7.6 SUMMARY

• A periodic discrete-time signal x(n) with period N can be represented by the discrete-time Fourier series (DTFS)

$$x(n) = \sum_{k=0}^{N-1} a_k \exp\!\left[j\frac{2\pi}{N}kn\right]$$

• The DTFS coefficients a_k are given by

$$a_k = \frac{1}{N}\sum_{n=0}^{N-1} x(n)\exp\!\left[-j\frac{2\pi}{N}kn\right]$$

(a numerical check of this pair is sketched in the code following this summary).

• The coefficients a_k are periodic with period N, so that a_k = a_{k+N}.

• The DTFS is a finite sum over only N terms. It provides an exact alternative representation of the time signal, and issues such as convergence or the Gibbs phenomenon do not arise.

• If a_k are the DTFS coefficients of the signal x(n), then the coefficients of x(n − m) are equal to a_k exp[−j(2π/N)km].

• If the periodic sequence x(n) with DTFS coefficients a_k is input into an LTI system with impulse response h(n), the DTFS coefficients b_k of the output y(n) are given by

$$b_k = a_k\,H\!\left(\frac{2\pi}{N}k\right), \qquad \text{where} \quad H(\Omega) = \sum_{n=-\infty}^{\infty} h(n)\exp[-j\Omega n]$$

• The discrete-time Fourier transform (DTFT) of an aperiodic sequence x(n) is given by

$$X(\Omega) = \sum_{n=-\infty}^{\infty} x(n)\exp[-j\Omega n]$$

• The inverse relationship is given by

$$x(n) = \frac{1}{2\pi}\int_{2\pi} X(\Omega)\exp[j\Omega n]\,d\Omega$$

• The DTFT variable Ω has units of radians.

• The DTFT is periodic in Ω with period 2π, so that X(Ω) = X(Ω + 2π).

• Other properties of the DTFT are similar to those of the continuous-time Fourier transform. In particular, if y(n) is the convolution of x(n) and h(n), then Y(Ω) = H(Ω)X(Ω).

• When the analog signal x_a(t) is sampled, the resulting signal can be considered to be either a CT signal x_s(t) with Fourier transform X_s(ω) or a DT sequence x(n) with DTFT X(Ω). The relation between the two is X_s(ω) = X(Ω)|_{Ω = ωT}.

• The preceding equation can be used to derive the impulse-modulation model for sampling:

$$x_s(t) = x_a(t)\sum_{n=-\infty}^{\infty}\delta(t - nT)$$

• The transform X_s(ω) is related to X_a(ω) by

$$X_s(\omega) = \frac{1}{T}\sum_{k=-\infty}^{\infty} X_a(\omega - k\omega_s)$$

• We can use the last equation to derive the sampling theorem, which gives the minimum rate at which an analog signal must be sampled to permit error-free reconstruction. If the signal is band limited to ω₀, then T must satisfy T ≤ π/ω₀.

• The ideal reconstruction filter is an ideal low-pass filter. Hold circuits are practical reconstruction filters that approximate the analog signal.

• The zero-order hold provides a staircase approximation to the analog signal.

• Changing the sampling rate involves decimation and interpolation. Decimation by a factor M implies retaining only one of every M samples. Interpolation by a factor L requires reconstruction of L − 1 missing samples between every two samples of the original sampled signal.

• By using a suitable combination of M and L, the sampling rate can be changed by any rational factor.

• The process of representing an analog signal by a string of binary numbers is known as analog-to-digital (A/D) conversion. Conceptually, the process consists of sampling, amplitude quantization, and conversion to a binary code.

• The process of digital-to-analog (D/A) conversion consists of decoding the digital sequence and passing it through a reconstruction filter.

• A quantizer outputs one of a finite set of values corresponding to the amplitude of the input signal.

• Uniform quantizers are widely used in practice. In such quantizers, the output values differ by a constant value. The SNR of a uniform quantizer increases by approximately 6 dB per bit.
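Not from the text: a quick numerical check of the DTFS pair in the first two bullets. The FFT computes Σ x(n)e^{−j2πkn/N}, so a_k is simply the FFT divided by N, and the finite synthesis sum reproduces x(n) exactly.

```python
import numpy as np

N = 8
n = np.arange(N)
x = np.cos(2 * np.pi * n / N) + 0.5 * np.sin(4 * np.pi * n / N)   # assumed periodic signal

a = np.fft.fft(x) / N                      # DTFS coefficients a_k
x_rebuilt = np.array([np.sum(a * np.exp(1j * 2 * np.pi * np.arange(N) * m / N)) for m in n])

print(np.allclose(x, x_rebuilt.real))      # True: the finite sum is exact
```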
7.7 CHECKLIST OF IMPORTANT TERMS

Aliasing
Convergence of DTFS
Decimation
Discrete-time Fourier series
Discrete-time Fourier transform
DTFS coefficients
Impulse-modulation model
Interpolation
Inverse DTFT
Periodicity of DTFS coefficients
Periodicity of DTFT
Sampling of analog signals
Sampling theorem
Zero-order hold
7.8 PROBLEMS

7.1. Determine the Fourier-series representation for each of the following discrete-time signals. Plot the magnitude and phase of the Fourier coefficients a_k.
(a) x(n) = cos(3πn/4)
S7O.
Fourier Analysis ol Dlscrete.Time
(b) x(n) = acos
(c) .r(z)
(d) .r(z)
,in
f
Syslems
Chapter 7
fi
is the periodic extension of the sequence (1.
-l.0, l. - ll.
is periodic with period 8, and
rt4,
(e) x(n) is periodic with period
: [t. o=n=3
10, 4s n=7
6, and
ln, 0sn<3
r(r,r=lO,
4=as5
(0
72
.r(n)
=,i
t-1)t6(r - *y + co.z+
t- -o
Given a periodic sequence r(n ) with the following Fourier-series coefficients, determine
the sequence:
(a) at = r *
]'*! *'"""!,
osksB
2=I:1
"={;:
"'}
(c) ar = exPl-jrk/al' o= k<7
(d) ar = [,0, -1,0, 11
73. l-et ar represenl the Fourier series coefficients
of the periodic sequence x(a) with period
N. Find the Fourier-series coefficients of each of the following signals in terms of a.:
(a) x(a - no)
(b) r(-n )
(c) (- I )"r(n)
(d)
rtzt =
(Hinx y(n)can be wriuen u.
x@),
[0,
I
+ (] Irtn)
zt
even
zodd
l),.r(n)1.)
(e)
y(n, =
1ll y(n):
[t(n), n odd
10, r even
x,(n)
@l y(n) = r"(n)
7.4. Show thal for a real periodic sequence r(n).a^ = a[-n.
75. Find the Fourier-series representation of the output y (a) when each of the periodic signals in Problem 7.1 is input into a system with impulse response
7.6. Repeat Problem
7.5
if
,,(r) = (l)r"l
nO = (l) r!l.
sec'
7'8 Problems
7.7.
l*t
x(n),hln
), and
371
v(n) be periodic sequences rvith the same period
,V. and let ar.
br. and
c^ be the respective Fourier-serics coefficients.
(a) Let y(n) = .r(n)h(n).Show that
co=)a-b*-,=2ar-^br.9
(M
= a"@
(b) Lrt y(n) = x(r)
@
b*
/r(n). Shorv that
co:
Narbn
7.& Consider the periodic extensions of the following pain of signals:
(a) x(n) = 11.0, 1.0,0, ll
&(n) = [], - I, l.0, - I, ll
nn
(b) r(n) = cos j
ft(n) =
{1,
- I, I, l, - I. ll
1n
(c) r(n) = 2cos -2
n{n) =
t- -l I -ll
lt,-{,q,-s I
I' O=n=7
+ l.
0<n=3
It(z/= [n
4=n=7
l-n+8,
(d) .r(n) =
Let y(a) = ft(a)r(n ). Use the results of Problem 7.7 to find the Fourier-series coefficients for y(z).
7.9. Repeat Problem
7.10. Show that
7.8
if y(n) = ,r (n) 6, :(n)
.1
['" exp[;o( n - k)ldo = E(n Z7t Jo
k)
7.11. By successively differentiating Equation (7.3.2) rvith respect to {}. show that
elnPx(n)l =
7.112. Use
j'4:fJ9
the properties of the discrete-time transform to determinc X(O) for the follow-
ing sequences:
(a) .t(r) = [t'
[l'
o=nsxo
orherwise
rr r l,l
ttl tt") =,(3J
(c) .t(n) = a'cos0/ru(n)'
(d) r(n ) = exP[l3al
l,l 't
Fourier Analysis ol Discrete-Time
372
(e) r(n) =
Systems
Chapter 7
"-o[r;,]
'(O r(z) =
lsinrz + 4cos In
(g) :(n) = a'[u(n) - u(n - n)l
'
sin
(h).r(n)
(rrnl3)
-
sin{mnl3):r!(rn/2)
0) x(n) -
sinhrn/3!sln(rn/2)
(l)
.r(n)
.
(t)
t
x(n ) = (n + 7)a'u(n), lrl
(n)
with transforms in the range 0
7.13. Find the discrete-time sequence .r
(a) x(o) = -r,o(o ]) . 'o(o
(b) X(O) = 4sin5o + 2coso
(c) x(o) =
(d)
x(o)
=
= A < 2r
as
follows:
- +) . 'o(o - T) .i"t(o - 11
4
1r"o1..6r_r,
r - expt-iol
f
I + jexpt-;ol- jexpt-izol
7.14. Show that
$$\sum_{n=-\infty}^{\infty} |x(n)|^2 = \frac{1}{2\pi}\int_{2\pi} |X(\Omega)|^2\,d\Omega$$
7.15. The following signals are input into
a system
with impulse response
Fourier transforms to find the output y(z) in each case.
(e) r(n) =
(il[- T)"r,
(b) :(z) = (|)'.r"(f),r,r
(") ,(,)
= (l)r"l
(d) .r(z) =
"(l)''',r,
7.16. Repeat Problem
T.tstth(n)= 5(n
-
,1
. (|)'rtrl
7.17. For the LTI system with impulse response
h@)=Y#2
6nd the output if the input .r(n) is as follows:
& (n
)=
(r' z(n
). Use
Sec.
7.8
Problems
373
(a) A periodic square wave with pcriod
6 such that
[t. o s z < 3
.r(n)=10.
4=n<S
(b).r(n)=
) ta(n-2k)-6(rr -1-2k)l
k- --.
7.1& Repeat koblem
7.17
if
o(^) =
2"!#9
7.19. (a) Use the time-shift property of the Fourier transform to find l/(O) for the systems in
Problem 6.18.
(b) Find fi (n) for these systems by inverse tranforming H(O).
720. The frequenry response ofa discrete-time system is given bv
H@)= --'si
I+
| * 1- *P1-;o;
[-exp[-lo]+ iexp[-l2o]
(a) Find the impulse response of the system.
O) Find the difference equation representation of the system.
(c) Find lhe responsc of thc syslem if
72L A discrete-time
rhe input is the signal
system has a frequency response
d(o) =
rs*Ep"l-pht#1. lol
1ZL
(j)'r,,,,
.r
Assume that p is fixed. Find u such that H(O) is an all-pass funcrion-that is,
a constant for all O. (Do not assume that p is real.)
(al Consider the causal system with frequency response
,,n,,
\",, _
lH(iO)l
is
I + aexp[_ iO.].+ bexp[-l1Q]
b + aexp[-j0] + exp[-j20l
Show that this is an all-pass function if. a and b are real.
O) t€t H(O) = N(O)/D(O),
where N(O) and D(O) are polynonrials in exp [-lO]. Can
you generalize your result in part (a) to find the relation betrvccn N(O) and D(O) so
that H(O) is an all-pass function?
7J3,. An analog signal .r,(r) = 5cos(2@nt - 30") is sampled at a frequcncy f,intlz
(a) Plot the Fourier spectrum of the sampled signal if f is (i) 150 llz(ii)250H2.
(b) Explain whether.r,(t) can be recovered from the samples, and il so, how.
72A. Deive Equation (7.5.27) tot the impulse response of the linear intcrpolator of Equation
(7.5.26), and show that the corresponding frequency function is as Eiivcn in Equation (7.5.2E).
7.25. A low-pass signal with a bandwidth of 1.5 kHz is sampled at a rate of 10,000 samples/s.
(a) We want to decimate the sampled signal by a factor M. How large can M be without introducing aliasing distortion in the decimated signal?
(b) Explain how you can change the sampling rate from 10,000 samples/s to 4000 samples/s.
374
726. An
Fourier Analysis ol Discrete-Time
Systems
Chapter 7
analog signal rvith spectrum
is sampled at a frequency ro,= 10,000 radls.
(a) Sketch the spectrum of the sampled signa[.
(b) If it is desired to decimate the signal by a factor M,what
(c)
is the largest value of
Mthat
can be used without introducing aliasing distortion?
Sketch the spectrum of the decimated signal if M = 4.
(d) The decimated signal in (c)
frequency of
75C10
is to be processed by an interpolator to obtain a sampling
rad/s. Sketch the spectrum of the interpolated signal.
7.27. Verify for the uniform quantizer that the mean-square value of the error, E, is equal to Δ²/12, where Δ is the step size.
7.28. A uniform quantizer is to be designed for a signal with amplitude assumed to lie in the range ±20.
(a) Find the number of quantizer levels needed if the quantizer SNR must be at least 5 dB. Assume that the signal power is 10.
(b) If the dynamic range of the signal is [−10, 10], what is the resulting SNR?
7.29. Repeat Problem 7.28 if the quantizer SNR is to be at least 10 dB.
Chapter 8

The Z-Transform

8.1 INTRODUCTION
In this chapter, we study the Z-transform, which is the discrete-time counterpart of the Laplace transform that we studied in Chapter 5. Just as the Laplace transform provides a frequency-domain technique for analyzing signals for which the Fourier transform does not exist, the Z-transform enables us to analyze certain discrete-time signals that do not have a discrete-time Fourier transform. As might be expected, the properties of the Z-transform closely resemble those of the Laplace transform, so that the results are similar to those of Chapter 5. However, as with Fourier transforms of continuous- and discrete-time signals, there are certain differences.

The relationship between the Laplace transform and the Z-transform can be established by considering the sequence of samples obtained by sampling an analog signal x_a(t). In our discussion of sampled signals in Chapter 7, we saw that the output of the sampler could be considered to be either the continuous-time signal

$$x_s(t) = \sum_{n} x_a(nT)\,\delta(t - nT) \qquad (8.1.1)$$
or the discrete-time signal

$$x(n) = x_a(nT) \qquad (8.1.2)$$

Thus, the Laplace transform of x_s(t) is

$$X_s(s) = \int_{-\infty}^{\infty}\sum_{n} x_a(nT)\,\delta(t - nT)\,e^{-st}\,dt = \sum_{n} x_a(nT)\,e^{-nTs} \qquad (8.1.3)$$
where the last step follows from the sifting property of the δ-function. If we make the substitution z = exp[Ts], then

$$X_s(s)\Big|_{z = \exp[Ts]} = \sum_{n} x_a(nT)\,z^{-n} \qquad (8.1.4)$$

The summation on the right side of Equation (8.1.4) is usually written as X(z) and defines the Z-transform of the discrete-time signal x(n).

We have, in fact, already encountered the Z-transform in Section 7.1, where we discussed the response of a linear, discrete-time, time-invariant system to exponential inputs. There we saw that if the input to the system was x(n) = z^n, the output was

$$y(n) = H(z)\,z^n \qquad (8.1.5)$$

where H(z) was defined in terms of the impulse response h(n) of the system as

$$H(z) = \sum_{n=-\infty}^{\infty} h(n)\,z^{-n} \qquad (8.1.6)$$

Equation (8.1.6) thus defines the Z-transform of the sequence h(n). We will formalize this definition in the next section and subsequently investigate the properties and look at the applications of the Z-transform.
8.2 THE Z-TRANSFORM

The Z-transform of a discrete-time sequence x(n) is defined as

$$X(z) = \sum_{n=-\infty}^{\infty} x(n)\,z^{-n} \qquad (8.2.1)$$

where z is a complex variable. For convenience, we sometimes denote the Z-transform as Z[x(n)]. For causal sequences, the Z-transform becomes

$$X(z) = \sum_{n=0}^{\infty} x(n)\,z^{-n} \qquad (8.2.2)$$

To distinguish between the two definitions, as with the Laplace transform, the transform in Equation (8.2.1) is usually referred to as the bilateral transform, and the transform in Equation (8.2.2) is referred to as the unilateral transform.
Example 8.2.1

Consider the unit-sample sequence

$$x(n) = \begin{cases} 1, & n = 0 \\ 0, & n \ne 0 \end{cases} \qquad (8.2.3)$$

The use of Equation (8.2.1) yields

$$X(z) = 1\cdot z^0 = 1 \qquad (8.2.4)$$

Example 8.2.2

Let x(n) be the sequence obtained by sampling the continuous-time function

$$x(t) = \exp[-at]\,u(t) \qquad (8.2.5)$$

every T seconds. Then

$$x(n) = \exp[-anT]\,u(n) \qquad (8.2.6)$$

so that, from Equation (8.2.2), we have

$$X(z) = \sum_{n=0}^{\infty}\exp[-anT]\,z^{-n} = \sum_{n=0}^{\infty}\left(\exp[-aT]\,z^{-1}\right)^n$$

Using Equation (6.3.7), we can write this in closed form as

$$X(z) = \frac{1}{1 - \exp[-aT]\,z^{-1}} = \frac{z}{z - \exp[-aT]} \qquad (8.2.7)$$
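Equation (8.2.7) is just a geometric series. As a sanity check (not in the text), a long partial sum of Σ e^{−anT} z^{−n} can be compared against the closed form z/(z − e^{−aT}) at an assumed point inside the region of convergence.

```python
import numpy as np

a, T = 0.5, 0.1
z = 1.2 + 0.3j                          # assumed evaluation point, |z| > exp(-aT)
n = np.arange(0, 2000)

partial_sum = np.sum(np.exp(-a * n * T) * z ** (-n))
closed_form = z / (z - np.exp(-a * T))
print(partial_sum, closed_form)          # the two values agree closely
```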
Example 8.2.3

Consider the two sequences

$$x(n) = \begin{cases} \left(\tfrac{1}{2}\right)^n, & n \ge 0 \\ 0, & n < 0 \end{cases} \qquad (8.2.8)$$

and

$$y(n) = \begin{cases} -\left(\tfrac{1}{2}\right)^n, & n < 0 \\ 0, & n \ge 0 \end{cases} \qquad (8.2.9)$$

Using the definition of the Z-transform, we can write

$$X(z) = \sum_{n=0}^{\infty}\left(\frac{1}{2}\right)^n z^{-n} = \sum_{n=0}^{\infty}\left(\frac{1}{2}z^{-1}\right)^n \qquad (8.2.10)$$

We can obtain a closed-form expression for X(z) by again using Equation (6.3.7), so that

$$X(z) = \frac{1}{1 - \frac{1}{2}z^{-1}} = \frac{z}{z - \frac{1}{2}} \qquad (8.2.11)$$

Similarly, we get

$$Y(z) = \sum_{n=-\infty}^{-1}-\left(\frac{1}{2}\right)^n z^{-n} = -\sum_{m=1}^{\infty}(2z)^m \qquad (8.2.12)$$

which yields the closed form

$$Y(z) = -\frac{2z}{1 - 2z} = \frac{z}{z - \frac{1}{2}} \qquad (8.2.13)$$

As can be seen, the expressions for the two transforms, X(z) and Y(z), are identical. Seemingly, the two totally different sequences x(n) and y(n) have the same Z-transform. The difference, of course, as with the Laplace transform, is in the two different regions of convergence for X(z) and Y(z), where the region of convergence is those values of z for which the power series in Equation (8.2.1) or (8.2.2) exists, that is, has a finite value. Since Equation (8.2.10) is a geometric series, the sum can be put in closed form only when the summand has a magnitude less than unity. Thus, the expression for X(z) given in Equation (8.2.11) is valid (that is, X(z) exists) only if

$$\left|\frac{1}{2}z^{-1}\right| < 1 \qquad \text{or} \qquad |z| > \frac{1}{2} \qquad (8.2.14)$$

Similarly, from Equation (8.2.13), Y(z) exists if

$$|2z| < 1 \qquad \text{or} \qquad |z| < \frac{1}{2} \qquad (8.2.15)$$

Equations (8.2.14) and (8.2.15) define the regions of convergence for X(z) and Y(z), respectively. These regions are plotted in the complex z plane in Figure 8.2.1.

[Figure 8.2.1: Regions of convergence (ROCs) of the Z-transforms for Example 8.2.3. (a) ROC for X(z): |z| > 1/2. (b) ROC for Y(z): |z| < 1/2.]

From Example 8.2.3, it follows that, in order to uniquely relate a Z-transform to a time function, we must specify the region of convergence of the Z-transform.
8.3 CONVERGENCE OF THE Z-TRANSFORM

Consider a sequence x(n) with Z-transform

$$X(z) = \sum_{n=-\infty}^{\infty} x(n)\,z^{-n} \qquad (8.3.1)$$

We want to determine the values of z for which X(z) exists. In order to do so, we represent z in polar form as z = r exp[jθ] and write
$$X(z) = \sum_{n=-\infty}^{\infty} x(n)\left(r\,e^{j\theta}\right)^{-n} = \sum_{n=-\infty}^{\infty} x(n)\,r^{-n}\,e^{-jn\theta} \qquad (8.3.2)$$

Let x₊(n) and x₋(n) denote the causal and anticausal parts of x(n), respectively. That is,

$$x_+(n) = x(n)\,u(n), \qquad x_-(n) = x(n)\,u(-n-1) \qquad (8.3.3)$$

We substitute Equation (8.3.3) into Equation (8.3.2) to get

$$X(z) = \sum_{n=-\infty}^{-1} x_-(n)\,r^{-n}\,e^{-jn\theta} + \sum_{n=0}^{\infty} x_+(n)\,r^{-n}\,e^{-jn\theta} = \sum_{m=1}^{\infty} x_-(-m)\,r^{m}\,e^{jm\theta} + \sum_{n=0}^{\infty} x_+(n)\,r^{-n}\,e^{-jn\theta}$$

so that

$$|X(z)| \le \sum_{m=1}^{\infty} |x_-(-m)|\,r^{m} + \sum_{n=0}^{\infty} |x_+(n)|\,r^{-n} \qquad (8.3.4)$$

For X(z) to exist, each of the two terms on the right-hand side of Equation (8.3.4) must be finite. Suppose there exist constants M, N, R₋, and R₊ such that

$$|x_-(n)| \le M R_-^{\,n} \ \text{ for } n < 0, \qquad |x_+(n)| \le N R_+^{\,n} \ \text{ for } n \ge 0 \qquad (8.3.5)$$

We can substitute these bounds in Equation (8.3.4) to obtain

$$|X(z)| \le M\sum_{m=1}^{\infty} R_-^{\,-m}\,r^{m} + N\sum_{n=0}^{\infty} R_+^{\,n}\,r^{-n} \qquad (8.3.6)$$

Clearly, the first sum in Equation (8.3.6) is finite if r/R₋ < 1, and the second sum is finite if R₊/r < 1. We can combine the two relations to determine the region of convergence for X(z) as

$$R_+ < r < R_-$$

Figure 8.3.1 shows the region of convergence in the z plane as the annular region between the circles of radii R₋ and R₊. The part of the transform corresponding to the causal sequence x₊(n) converges in the region for which r > R₊ or, equivalently, |z| > R₊. That is, the region of convergence is outside the circle with radius R₊. Similarly, the transform corresponding to the anticausal sequence x₋(n) converges for r < R₋ or, equivalently, |z| < R₋, so that the region of convergence is inside the circle of radius R₋. X(z) does not exist if R₋ < R₊.

[Figure 8.3.1: Region of convergence for a general noncausal sequence: the annular region R₊ < |z| < R₋.]

We recall from our discussion of the Fourier transform of discrete-time signals in Chapter 7 that the frequency variable Ω takes on values in [−π, π]. For a fixed value of r, it follows from a comparison of Equation (8.3.2) with Equation (7.3.2) that X(z) can be interpreted as the discrete-time Fourier transform of the signal x(n)r^{−n}. This corresponds to evaluating X(z) along the circle of radius r in the z plane. If we set r = 1, that is, for values of z along the circle with unit radius, X(z) reduces to the discrete-time Fourier transform of x(n), assuming that the transform exists. That is, for sequences which possess both a discrete-time Fourier transform and a Z-transform, we have

$$X(\Omega) = X(z)\Big|_{z = \exp[j\Omega]} \qquad (8.3.7)$$

The circle with radius unity is referred to as the unit circle.
In general, if x(n) is the sum of several sequences, X(z) exists only if there is a set
of values of z for which the transforms of each of the sequences forming the sum converge. The region of convergence is thus the intersection of the individual regions of
convergence. If there is no common region of convergence, then X(z) does not exist.
Example 8.3.1

Consider the function

$$x(n) = \left(\frac{1}{3}\right)^n u(n)$$

Clearly, R₊ = 1/3 and R₋ = ∞, so that the region of convergence is |z| > 1/3. The Z-transform of x(n) is

$$X(z) = \frac{1}{1 - \frac{1}{3}z^{-1}} = \frac{z}{z - \frac{1}{3}}$$

which has a pole at z = 1/3. The region of convergence is thus outside the circle enclosing the pole of X(z). Now let us consider the function

$$x(n) = \left(\frac{1}{3}\right)^n u(n) + \left(\frac{1}{2}\right)^n u(n)$$

which is the sum of the two functions

$$x_1(n) = \left(\frac{1}{3}\right)^n u(n) \qquad \text{and} \qquad x_2(n) = \left(\frac{1}{2}\right)^n u(n)$$

From the preceding example, the region of convergence for X₁(z) is |z| > 1/3, whereas for X₂(z) it is |z| > 1/2. Thus, the region of convergence for X(z) is the intersection of these two regions and is |z| > 1/2. It can easily be verified that

$$X(z) = \frac{z}{z - \frac{1}{3}} + \frac{z}{z - \frac{1}{2}} = \frac{2z^2 - \frac{5}{6}z}{\left(z - \frac{1}{3}\right)\left(z - \frac{1}{2}\right)}$$

Hence, the region of convergence is outside the circle that includes both poles of X(z).
The foregoing example shows that for a causal sequence, the region of convergence is outside of a circle which is such that all the poles of the transform X(z) are within this circle. We may similarly conclude that for an anticausal function, the region of convergence is inside a circle such that all the poles are external to the circle.

If the region of convergence is an annular region, then the poles of X(z) outside this annulus correspond to the anticausal part of the function, while the poles inside the annulus correspond to the causal part.
Example 8.3.2

The function

$$x(n) = \begin{cases} 3^n, & n < 0 \\ \left(\tfrac{1}{3}\right)^n, & n = 0, 2, 4, \ldots \\ \left(\tfrac{1}{2}\right)^n, & n = 1, 3, 5, \ldots \end{cases}$$

has the Z-transform

$$X(z) = \sum_{n=-\infty}^{-1} 3^n z^{-n} + \sum_{\substack{n \ge 0 \\ n\ \text{even}}}\left(\frac{1}{3}\right)^n z^{-n} + \sum_{\substack{n \ge 1 \\ n\ \text{odd}}}\left(\frac{1}{2}\right)^n z^{-n}$$

Let n = −m in the first sum, n = 2m in the second, and n = 2m + 1 in the third sum. Then

$$X(z) = \sum_{m=1}^{\infty}\left(\frac{z}{3}\right)^m + \sum_{m=0}^{\infty}\left(\frac{1}{9}z^{-2}\right)^m + \frac{1}{2}z^{-1}\sum_{m=0}^{\infty}\left(\frac{1}{4}z^{-2}\right)^m = \frac{\frac{z}{3}}{1 - \frac{z}{3}} + \frac{1}{1 - \frac{1}{9}z^{-2}} + \frac{\frac{1}{2}z^{-1}}{1 - \frac{1}{4}z^{-2}}$$

As can be seen, X(z) has poles at z = 3, 1/3, −1/3, 1/2, and −1/2. The pole at 3 corresponds to the anticausal part, and the others are causal poles. Figure 8.3.2 shows the locations of these poles in the z plane, as well as the region of convergence, which, according to our previous discussion, is 1/2 < |z| < 3.

[Figure 8.3.2: Pole locations and region of convergence for Example 8.3.2; the pole at z = 3 is the anticausal pole.]
Example 8.3.3

Let x(n) be a finite sequence that is zero for n < n₀ and n > n₁. Then

$$X(z) = x(n_0)z^{-n_0} + x(n_0+1)z^{-(n_0+1)} + \cdots + x(n_1)z^{-n_1}$$

Since X(z) is a polynomial in z (or z⁻¹), X(z) converges for all finite values of z, except z = 0 for n₁ > 0. The poles of X(z) are at infinity if n₀ < 0 and at the origin if n₁ > 0.

From the previous example, it can be seen that if we form a sequence y(n) by adding a finite-length sequence to a sequence x(n), the region of convergence of Y(z) is the same as that of X(z), except possibly for z = 0.
Consider the righr-sided sequence
,(") = ,(;)'
rr(n + 5)
Sec.
8.4
383
Properties ot the Z-Transtom
-
as the sum of the finite sequence 3(l/2\"lt(n + 5) u(z)] and the
= 7112rr,r)"r(n), it becomes clear that the R()('ot f(r) is the same as
thar of X(:). namely, l: I > l/2.
Sinilcr!y, :he sequence
)(r)
By rvriting
sequcnce t (n'1
v(n)=-(j)',r-,*,,
to he the sum of the s!'quence x(n1 = -)2(l/2)'z(-n - l) and
the finite sequence - 32(l/2)'lu(n) - rr(l - 6)1. lt follows rhat Y(z) converges for
can be considered
0< lzl <
trz.
In the rest of this chapter, we restrict ourselves to causal signals and systems' for
which we will be concemed only with the unilateral transform. In the next section, we
discuss some of the relevant properties of the unilateral Z-transform. Many of these
properties carry over to the bilateral transform.
8.4 PROPERTIES OF THE Z-TRANSFORM

Recall the definition of the Z-transform of a causal sequence x(n):

$$X(z) = \sum_{n=0}^{\infty} x(n)\,z^{-n} \qquad (8.4.1)$$

We can directly use Equation (8.4.1) to derive the Z-transforms of common discrete-time signals, as the following example shows.
Example 8.4.1

(a) For the δ function, we saw that the Z-transform is

$$Z[\delta(n)] = 1\cdot z^0 = 1 \qquad (8.4.2)$$

(b) Let

$$x(n) = a^n u(n)$$

Then

$$X(z) = \sum_{n=0}^{\infty} a^n z^{-n} = \frac{1}{1 - az^{-1}} = \frac{z}{z - a}, \qquad |z| > |a| \qquad (8.4.3)$$

By letting a = 1, we obtain the transform of the unit-step function:

$$Z[u(n)] = \frac{z}{z - 1}, \qquad |z| > 1 \qquad (8.4.4)$$

(c) Let

$$x(n) = \cos\Omega_0 n\; u(n) \qquad (8.4.5)$$

By writing x(n) as

$$x(n) = \frac{1}{2}\bigl[\exp[j\Omega_0 n] + \exp[-j\Omega_0 n]\bigr]u(n)$$

and using the result of (b), it follows that

$$X(z) = \frac{1}{2}\,\frac{z}{z - \exp[j\Omega_0]} + \frac{1}{2}\,\frac{z}{z - \exp[-j\Omega_0]} = \frac{z(z - \cos\Omega_0)}{z^2 - 2z\cos\Omega_0 + 1} \qquad (8.4.6)$$

Similarly, the Z-transform of the sequence

$$x(n) = \sin\Omega_0 n\; u(n) \qquad (8.4.7)$$

is
$$X(z) = \frac{z\sin\Omega_0}{z^2 - 2z\cos\Omega_0 + 1} \qquad (8.4.8)$$

Let x(n) be noncausal, with x₊(n) and x₋(n) denoting its causal and anticausal parts, respectively, as in Equation (8.3.3). Then

$$X(z) = X_+(z) + X_-(z) \qquad (8.4.9)$$

Now,

$$X_-(z) = \sum_{n=-\infty}^{-1} x_-(n)\,z^{-n} \qquad (8.4.10)$$

By making the change of variable m = −n and noting that x₋(0) = 0, we can write

$$X_-(z) = \sum_{m=0}^{\infty} x_-(-m)\,z^{m} \qquad (8.4.11)$$

Let us denote x₋(−m) by x₁(m). It then follows that

$$X_-(z) = X_1(z^{-1}) \qquad (8.4.12)$$

where X₁(z) denotes the Z-transform of the causal sequence x₋(−n).

Example 8.4.2

Let

$$x(n) = \left(\frac{1}{2}\right)^{|n|}$$

Then

$$x_+(n) = \left(\frac{1}{2}\right)^n,\ n \ge 0, \qquad x_-(n) = \left(\frac{1}{2}\right)^{-n},\ n < 0$$
and

$$x_1(n) = x_-(-n) = \left(\frac{1}{2}\right)^n u(n-1)$$

From Example 8.4.1, we can write

$$X_+(z) = \frac{z}{z - \frac{1}{2}}, \qquad |z| > \frac{1}{2}$$

and

$$X_1(z) = \sum_{n=1}^{\infty}\left(\frac{1}{2}\right)^n z^{-n} = \frac{\frac{1}{2}z^{-1}}{1 - \frac{1}{2}z^{-1}}, \qquad |z| > \frac{1}{2}$$

so that

$$X_-(z) = X_1(z^{-1}) = \frac{\frac{1}{2}z}{1 - \frac{1}{2}z}, \qquad |z| < 2$$

and

$$X(z) = X_+(z) + X_-(z) = \frac{z}{z - \frac{1}{2}} + \frac{\frac{1}{2}z}{1 - \frac{1}{2}z}, \qquad \frac{1}{2} < |z| < 2$$

Thus, a table of transforms of causal time functions can be used to find the Z-transforms of noncausal functions. Table 8-2 lists the transform pairs derived in Example 8.4.1, as well as a few others. The additional pairs can be derived directly by using Equation (8.4.1) or by using the properties of the Z-transform. We discuss a few of these properties next. Since they are similar to those of the other transforms we have discussed so far, we state many of the more familiar properties and do not derive them in detail.

8.4.1 Linearity

If x₁(n) and x₂(n) are two sequences with transforms X₁(z) and X₂(z), respectively, then

$$Z[a_1 x_1(n) + a_2 x_2(n)] = a_1 X_1(z) + a_2 X_2(z) \qquad (8.4.13)$$

where a₁ and a₂ are arbitrary constants.
8.4.2 Time Shifting

Let x(n) be a causal sequence and let X(z) denote its transform. Then, for any integer n₀ > 0,

$$Z[x(n+n_0)] = \sum_{n=0}^{\infty} x(n+n_0)\,z^{-n} = \sum_{m=n_0}^{\infty} x(m)\,z^{-(m-n_0)} = z^{n_0}\left[X(z) - \sum_{m=0}^{n_0-1} x(m)\,z^{-m}\right] \qquad (8.4.14)$$

Similarly,

$$Z[x(n-n_0)] = \sum_{n=0}^{\infty} x(n-n_0)\,z^{-n} = \sum_{m=-n_0}^{\infty} x(m)\,z^{-(m+n_0)} = z^{-n_0}\left[X(z) + \sum_{m=-n_0}^{-1} x(m)\,z^{-m}\right] \qquad (8.4.15)$$
Example 8.4.3

Consider the difference equation

$$y(n) - \frac{1}{2}y(n-1) = \delta(n)$$

with the initial condition

$$y(-1) = 3$$

In order to find y(n) for n ≥ 0, we use Equation (8.4.15) and take transforms on both sides of the difference equation, getting

$$Y(z) - \frac{1}{2}z^{-1}\bigl[Y(z) + y(-1)z\bigr] = 1$$

We now substitute the initial condition and rearrange terms, so that

$$Y(z) = \frac{\frac{5}{2}}{1 - \frac{1}{2}z^{-1}} = \frac{\frac{5}{2}z}{z - \frac{1}{2}}$$

It follows from Example 8.4.1(b) that

$$y(n) = \frac{5}{2}\left(\frac{1}{2}\right)^n, \qquad n \ge 0$$
Example 8.4.4

Solve the difference equation

$$y(n+2) - y(n+1) + \frac{1}{4}y(n) = x(n)$$

for y(n), n ≥ 0, if x(n) = u(n), y(1) = 1, and y(0) = 1.

Using Equation (8.4.14), we have

$$z^2\bigl[Y(z) - y(0) - y(1)z^{-1}\bigr] - z\bigl[Y(z) - y(0)\bigr] + \frac{1}{4}Y(z) = X(z)$$

Substituting X(z) = z/(z − 1) and using the given initial conditions, we get

$$\left(z^2 - z + \frac{1}{4}\right)Y(z) = \frac{z}{z-1} + z^2$$

Writing Y(z) as

$$Y(z) = \frac{z}{(z-1)\left(z - \frac{1}{2}\right)^2} + \frac{z^2}{\left(z - \frac{1}{2}\right)^2}$$

and expanding the fractional terms in partial fractions yields

$$Y(z) = \frac{4z}{z-1} - \frac{3z}{z - \frac{1}{2}} - \frac{\frac{3}{2}z}{\left(z - \frac{1}{2}\right)^2}$$

From Example 8.4.1(b), it follows that

$$y(n) = 4u(n) - 3\left(\frac{1}{2}\right)^n u(n) - 3n\left(\frac{1}{2}\right)^n u(n)$$
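The closed-form solution of Example 8.4.4 can be checked (a sketch, not from the text) by simply iterating the difference equation and comparing the two sequences.

```python
# Iterate y(n+2) = y(n+1) - y(n)/4 + x(n) with y(0) = y(1) = 1 and x(n) = u(n),
# then compare with the closed form y(n) = 4 - 3*(1/2)**n - 3*n*(1/2)**n.
y = [1.0, 1.0]
for n in range(20):
    y.append(y[n + 1] - 0.25 * y[n] + 1.0)

closed = [4 - 3 * 0.5 ** n - 3 * n * 0.5 ** n for n in range(len(y))]
print(all(abs(a - b) < 1e-9 for a, b in zip(y, closed)))   # True
```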
8.4.3 Frequency Scaling

The Z-transform of a sequence aⁿx(n) is

$$Z[a^n x(n)] = \sum_{n=0}^{\infty} a^n x(n)\,z^{-n} = \sum_{n=0}^{\infty} x(n)\,(a^{-1}z)^{-n} = X(a^{-1}z) \qquad (8.4.16)$$

Example 8.4.5

We can use the scaling property to derive the transform of the signal

$$y(n) = (a^n\cos\Omega_0 n)\,u(n)$$
from the transform of

$$x(n) = (\cos\Omega_0 n)\,u(n)$$

which, from Equation (8.4.6), is

$$X(z) = \frac{z(z - \cos\Omega_0)}{z^2 - 2z\cos\Omega_0 + 1}$$

Thus,

$$Y(z) = X(a^{-1}z) = \frac{\frac{z}{a}\left(\frac{z}{a} - \cos\Omega_0\right)}{\left(\frac{z}{a}\right)^2 - 2\frac{z}{a}\cos\Omega_0 + 1} = \frac{z(z - a\cos\Omega_0)}{z^2 - 2az\cos\Omega_0 + a^2}$$

Similarly, the transform of

$$y(n) = a^n(\sin\Omega_0 n)\,u(n)$$

is, from Equation (8.4.8),

$$Y(z) = \frac{za\sin\Omega_0}{z^2 - 2az\cos\Omega_0 + a^2}$$
8.4.4 Differentiation with Respect to z

If we differentiate both sides of Equation (8.4.1) with respect to z, we obtain

$$\frac{dX(z)}{dz} = \sum_{n=0}^{\infty}(-n)\,x(n)\,z^{-n-1} = -z^{-1}\sum_{n=0}^{\infty} n\,x(n)\,z^{-n}$$

from which it follows that

$$Z[n\,x(n)] = -z\,\frac{dX(z)}{dz} \qquad (8.4.17)$$

By successively differentiating with respect to z, this result can be generalized as

$$Z[n^k x(n)] = \left(-z\,\frac{d}{dz}\right)^k X(z) \qquad (8.4.18)$$
Example 8.4.6

Let us find the transform of the function

$$y(n) = n(n+1)\,u(n)$$

From Equation (8.4.17), we have

$$Z[n\,u(n)] = -z\,\frac{d}{dz}Z[u(n)] = -z\,\frac{d}{dz}\frac{z}{z-1} = \frac{z}{(z-1)^2}$$

and

$$Z[n^2 u(n)] = \left(-z\,\frac{d}{dz}\right)^2 Z[u(n)] = -z\,\frac{d}{dz}\frac{z}{(z-1)^2} = \frac{z(z+1)}{(z-1)^3}$$

so that

$$Y(z) = Z[n^2u(n)] + Z[n\,u(n)] = \frac{z(z+1)}{(z-1)^3} + \frac{z}{(z-1)^2} = \frac{2z^2}{(z-1)^3}$$
8.4.5 Initial Value

For a causal sequence x(n), we can write Equation (8.4.1) explicitly as

$$X(z) = x(0) + x(1)z^{-1} + x(2)z^{-2} + \cdots + x(n)z^{-n} + \cdots \qquad (8.4.19)$$

It can be seen that, as z → ∞, the term z⁻ⁿ → 0 for each n > 0, so that

$$\lim_{z\to\infty} X(z) = x(0) \qquad (8.4.20)$$
Example 8.4.7
We will determine rhe initial value r(0) for thc signal with transform
x(z\
__. _1
: _: zr_1zr+22_5.
(z-1)(z-!)Q'z-rz+t)
Use the initial value theorem gives
r(0) = 1;' x(z) = t
tJc
The initial value theorem is a convenient tool for checking if the Z-transform of a given
signal is in error. Partial fraction expansion of X(z) gives
x(z) =
J-+ -j. - ---:)
z-l z-ti z2-t1z+ I
so that
x(n) = u(n)
The initial value is.r(0) =
l
* (|)',t"r _ (i)'*,(l ,)
which agrees with ihe result above.
t.4.6 FinalValue
From the time-shift theorem, we have
Zfx(n) -.r(n
- l)l = (t - z-t)xk)
The left-hand side of Equation (8.a.21) can be written as
(8.4.21)
The Z-Transform Chapter
390
)
tt ll
[.r1r1
-.\'(r - l)1.: "'- ]int )
'''
a'(t
lf we now let : -+ I. Eqtrati,-'n
{N
[.r(rr)
I 2l) c;rn lre !r71i11s.
-.r(rr
I
- l)]r""
^*
l$ tl - r-')x(;) = I,* ,I, [.r(r)'- r(n = lim .r(N) = x(:c)
l)]
18.4.22)
assuming.r(cc) exists.
f,sernple E.4.8
By applying the final value theorem, we can find the final value of the signal of Examplc 8.4.7 as
r(a) = 1*
so
,
,t
*r,r=
g [.,]'_-,f;t'i,.1i)
th-.
r(cr; =
1
which again agrees with the final value ofx(n) given in the previous example.
Example 8.4.9

Let us consider the signal x(n) = 2ⁿu(n), with Z-transform given by

$$X(z) = \frac{z}{z - 2}$$

Application of the final value theorem yields

$$x(\infty) = \lim_{z\to 1}\frac{z-1}{z}\cdot\frac{z}{z-2} = 0$$

Clearly, this result is incorrect, since x(n) grows without bound and hence has no final value. This example shows that the final value theorem must be used with care. As noted earlier, it gives the correct result only if the final value exists.
8.4.7 Convolution

If y(n) is the convolution of two sequences x(n) and h(n), then, in a manner analogous to our derivation of the convolution property for the discrete-time Fourier transform, we can show that

$$Y(z) = H(z)\,X(z) \qquad (8.4.23)$$

Recall that

$$Y(z) = \sum_{n=0}^{\infty} y(n)\,z^{-n}$$

so that y(n) is the coefficient of the z⁻ⁿ term in the power-series expansion of Y(z). It follows that when we multiply two power series or polynomials X(z) and H(z), the coefficients of the resulting polynomial are the convolutions of the coefficients in x(n) and h(n).
Example 8.4.10

We want to use the Z-transform to find the convolution of the following two sequences, which were considered in Example 6.3.4:

$$h(n) = \{1, 2, 0, -1, 1\} \qquad \text{and} \qquad x(n) = \{1, 3, -1, -2\}$$

The respective transforms are

$$H(z) = 1 + 2z^{-1} - z^{-3} + z^{-4}$$

and

$$X(z) = 1 + 3z^{-1} - z^{-2} - 2z^{-3}$$

so that

$$Y(z) = 1 + 5z^{-1} + 5z^{-2} - 5z^{-3} - 6z^{-4} + 4z^{-5} + z^{-6} - 2z^{-7}$$

It follows that the resulting sequence is

$$y(n) = \{1, 5, 5, -5, -6, 4, 1, -2\}$$

This is the same answer that was obtained in Example 6.3.4.
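Since multiplying Z-transform polynomials convolves their coefficient sequences, Example 8.4.10 can be reproduced in one line (a sketch, not from the text):

```python
import numpy as np

h = [1, 2, 0, -1, 1]
x = [1, 3, -1, -2]
print(np.convolve(h, x))   # [ 1  5  5 -5 -6  4  1 -2], the coefficients of Y(z)
```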
The Z-transform properties discussed in this section are summarized in Table 8-1.
Table 8-2, which is a table of Z-transform pairs of causal time functions, gives. in addition to the transforms of discrete-time sequences, the transforms of several sampledtime functions. These transforms can be obtained by fairly obvious modifications of the
derivations discussed in this section and are left as exercises for lhe reader.
TABLE 8-1  Z-Transform Properties

1. Linearity:             a₁x₁(n) + a₂x₂(n)   ⟷   a₁X₁(z) + a₂X₂(z)                              (8.4.13)
2. Time shift:            x(n + n₀)           ⟷   z^{n₀}[X(z) − Σ_{m=0}^{n₀−1} x(m)z^{−m}]       (8.4.14)
                          x(n − n₀)           ⟷   z^{−n₀}[X(z) + Σ_{m=−n₀}^{−1} x(m)z^{−m}]      (8.4.15)
3. Frequency scaling:     aⁿx(n)              ⟷   X(a⁻¹z)                                        (8.4.16)
4. Multiplication by n:   n x(n)              ⟷   −z dX(z)/dz                                    (8.4.17)
                          nᵏ x(n)             ⟷   (−z d/dz)ᵏ X(z)                                (8.4.18)
5. Convolution:           x₁(n) * x₂(n)       ⟷   X₁(z)X₂(z)                                     (8.4.23)

8.5 THE INVERSE Z-TRANSFORM
There are several methods for finding a sequence x(n), given its Z-transform X(z). The most direct is by using the inversion integral

$$x(n) = \frac{1}{2\pi j}\oint_{\Gamma} X(z)\,z^{n-1}\,dz \qquad (8.5.1)$$

where ∮_Γ represents integration along the closed contour Γ in the counterclockwise direction in the z plane. The contour must be chosen to lie in the region of convergence of X(z).

Equation (8.5.1) can be derived from Equation (8.4.1) by multiplying both sides by z^{k−1} and integrating over Γ, so that

$$\frac{1}{2\pi j}\oint_{\Gamma} X(z)\,z^{k-1}\,dz = \frac{1}{2\pi j}\oint_{\Gamma}\sum_{n=0}^{\infty} x(n)\,z^{k-n-1}\,dz$$

By the Cauchy integral theorem,

$$\frac{1}{2\pi j}\oint_{\Gamma} z^{k-n-1}\,dz = \begin{cases} 1, & n = k \\ 0, & n \ne k \end{cases}$$

so that

$$\frac{1}{2\pi j}\oint_{\Gamma} X(z)\,z^{k-1}\,dz = x(k)$$

from which it follows that

$$x(k) = \frac{1}{2\pi j}\oint_{\Gamma} X(z)\,z^{k-1}\,dz$$
TABLE 8-2  Z-Transform Pairs

x(n) for n ≥ 0                 X(z)                                                              Radius of convergence
1.  δ(n)                       1                                                                  |z| ≥ 0
2.  δ(n − m)                   z^{−m}                                                             |z| > 0
3.  u(n)                       z/(z − 1)                                                          |z| > 1
4.  n                          z/(z − 1)²                                                         |z| > 1
5.  n²                         z(z + 1)/(z − 1)³                                                  |z| > 1
6.  aⁿ                         z/(z − a)                                                          |z| > |a|
7.  naⁿ                        az/(z − a)²                                                        |z| > |a|
8.  (n + 1)aⁿ                  z²/(z − a)²                                                        |z| > |a|
10. cos Ω₀n                    z(z − cos Ω₀)/(z² − 2z cos Ω₀ + 1)                                 |z| > 1
11. sin Ω₀n                    z sin Ω₀/(z² − 2z cos Ω₀ + 1)                                      |z| > 1
12. aⁿ cos Ω₀n                 z(z − a cos Ω₀)/(z² − 2za cos Ω₀ + a²)                             |z| > |a|
13. aⁿ sin Ω₀n                 za sin Ω₀/(z² − 2za cos Ω₀ + a²)                                   |z| > |a|
14. exp[−anT]                  z/(z − exp[−aT])                                                   |z| > exp[−aT]
15. nT                         Tz/(z − 1)²                                                        |z| > 1
16. nT exp[−anT]               Tz exp[−aT]/(z − exp[−aT])²                                        |z| > exp[−aT]
17. cos nω₀T                   z(z − cos ω₀T)/(z² − 2z cos ω₀T + 1)                               |z| > 1
18. sin nω₀T                   z sin ω₀T/(z² − 2z cos ω₀T + 1)                                    |z| > 1
19. exp[−anT] cos nω₀T         z(z − exp[−aT] cos ω₀T)/(z² − 2z exp[−aT] cos ω₀T + exp[−2aT])     |z| > exp[−aT]
20. exp[−anT] sin nω₀T         z exp[−aT] sin ω₀T/(z² − 2z exp[−aT] cos ω₀T + exp[−2aT])          |z| > exp[−aT]
We can evaluate the integral on the right side of Equation (8.5.1) by using the residue theorem. However, in many cases, this is not necessary, and we can obtain the inverse transform by using other methods.

We assume that X(z) is a rational function in z of the form

$$X(z) = \frac{b_M z^M + b_{M-1}z^{M-1} + \cdots + b_1 z + b_0}{a_N z^N + a_{N-1}z^{N-1} + \cdots + a_1 z + a_0}, \qquad M \le N \qquad (8.5.2)$$

with region of convergence outside all the poles of X(z).

8.5.1 Inversion by a Power-Series Expansion

If we express X(z) in a power series in z⁻¹, x(n) can easily be determined by identifying it with the coefficient of z⁻ⁿ in the power-series expansion. The power series can be obtained by arranging the numerator and denominator of X(z) in descending powers of z and then dividing the numerator by the denominator using long division.

Example 8.5.1

Determine the inverse Z-transform of the function

$$X(z) = \frac{z}{z - 0.1}, \qquad |z| > 0.1$$

Since we want a power-series expansion in powers of z⁻¹, we divide the numerator by the denominator (long division) to obtain

$$X(z) = 1 + 0.1z^{-1} + (0.1)^2 z^{-2} + (0.1)^3 z^{-3} + \cdots$$

so that

$$x(0) = 1,\quad x(1) = 0.1,\quad x(2) = (0.1)^2,\quad x(3) = (0.1)^3,\ \text{etc.}$$

It can easily be seen that this corresponds to the sequence

$$x(n) = (0.1)^n u(n)$$

Although we were able to identify the general expression for x(n) in the last example, in most cases it is not easy to identify the general term from the first few sample values. However, in those cases where we are interested in only a few sample values of x(n), this technique can readily be applied. For example, if x(n) in the last example represented a system impulse response, then, since x(n) decreases very rapidly to zero, we can for all practical purposes evaluate just the first few values of x(n) and assume that the rest are zero. The resulting error in our analysis of the system should prove to be negligible in most cases.

It is clear from our definition of the Z-transform that the series expansion of the transform of a causal sequence can have only negative powers of z. A consequence of this result is that, if x(n) is causal, the degree of the denominator polynomial in the expression for X(z) in Equation (8.5.2) must be greater than or equal to the degree of the numerator polynomial. That is, N ≥ M.
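The long-division inversion can be mechanized. The sketch below (not from the text) recovers the first few samples of Example 8.5.1 by repeatedly dividing by the denominator polynomial, with coefficients listed in descending powers of z.

```python
import numpy as np

num = np.array([1.0, 0.0])        # z          (numerator of X(z) = z/(z - 0.1))
den = np.array([1.0, -0.1])       # z - 0.1

samples = []
remainder = num.copy()
for _ in range(6):
    q, remainder = np.polydiv(remainder, den)     # one long-division step
    samples.append(q[0])                          # next coefficient x(n)
    remainder = np.append(remainder, 0.0)         # bring down the next power of z^-1
print(samples)                                    # [1.0, 0.1, 0.01, 0.001, ...]
```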
Example 8.5.2

We want to find the inverse transform of

$$X(z) = \frac{z^3 - z^2 + z - \frac{1}{8}}{z^3 - \frac{5}{4}z^2 + \frac{1}{2}z - \frac{1}{16}}, \qquad |z| > \frac{1}{2}$$

Carrying out the long division yields the series expansion

$$X(z) = 1 + \frac{1}{4}z^{-1} + \frac{13}{16}z^{-2} + \frac{53}{64}z^{-3} + \cdots$$

from which it follows that

$$x(0) = 1,\quad x(1) = \frac{1}{4},\quad x(2) = \frac{13}{16},\quad x(3) = \frac{53}{64},\ \text{etc.}$$

In this example, it is not easy to determine the general expression for x(n), which, as we see in the next section, is

$$x(n) = 2\delta(n) + 5\left(\frac{1}{4}\right)^n u(n) - 6\left(\frac{1}{2}\right)^n u(n) + 4n\left(\frac{1}{2}\right)^n u(n)$$
8.5.2 Inversion by Partial-Fraction Expansion

For rational functions, we can obtain a partial-fraction expansion of X(z) over its poles just as in the case of the Laplace transform. We can then, in view of the uniqueness of the Z-transform, use the table of Z-transform pairs to identify the sequences corresponding to the terms in the partial-fraction expansion.

Example 8.5.3

Consider X(z) of Example 8.5.2:

$$X(z) = \frac{z^3 - z^2 + z - \frac{1}{8}}{z^3 - \frac{5}{4}z^2 + \frac{1}{2}z - \frac{1}{16}}, \qquad |z| > \frac{1}{2}$$

In order to obtain the partial-fraction expansion, we first write X(z) as the sum of a constant and a term in which the degree of the numerator is less than that of the denominator:

$$X(z) = 1 + \frac{\frac{1}{4}z^2 + \frac{1}{2}z - \frac{1}{16}}{z^3 - \frac{5}{4}z^2 + \frac{1}{2}z - \frac{1}{16}}$$

In factored form, this becomes

$$X(z) = 1 + \frac{\frac{1}{4}z^2 + \frac{1}{2}z - \frac{1}{16}}{\left(z - \frac{1}{4}\right)\left(z - \frac{1}{2}\right)^2}$$

We can make a partial-fraction expansion of the second term and try to identify terms from Table 8-2. However, the entries in the table have a factor z in the numerator. We therefore write X(z) as

$$X(z) = 1 + z\,\frac{\frac{1}{4}z^2 + \frac{1}{2}z - \frac{1}{16}}{z\left(z - \frac{1}{4}\right)\left(z - \frac{1}{2}\right)^2}$$

If we now make a partial-fraction expansion of the fractional term, we obtain

$$X(z) = 1 + 1 - \frac{6z}{z - \frac{1}{2}} + \frac{2z}{\left(z - \frac{1}{2}\right)^2} + \frac{5z}{z - \frac{1}{4}}$$

From Table 8-2, we can now write

$$x(n) = 2\delta(n) - 6\left(\frac{1}{2}\right)^n u(n) + 4n\left(\frac{1}{2}\right)^n u(n) + 5\left(\frac{1}{4}\right)^n u(n)$$
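As a numerical check on Example 8.5.3 (not in the text), the sequence corresponding to X(z) can be generated by treating X(z) as a transfer function in z⁻¹ and driving it with a unit sample via scipy.signal.lfilter; the result matches the closed-form x(n).

```python
import numpy as np
from scipy.signal import lfilter

# X(z) = (z^3 - z^2 + z - 1/8) / (z^3 - 5/4 z^2 + 1/2 z - 1/16), written in powers of z^-1
b = [1.0, -1.0, 1.0, -1.0 / 8.0]
a = [1.0, -5.0 / 4.0, 1.0 / 2.0, -1.0 / 16.0]

n = np.arange(10)
impulse = (n == 0).astype(float)
x_num = lfilter(b, a, impulse)

x_closed = 2.0 * (n == 0) + 5 * 0.25 ** n - 6 * 0.5 ** n + 4 * n * 0.5 ** n
print(np.allclose(x_num, x_closed))     # True
```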
Example 8.5.4

Solve the difference equation

$$y(n) - \frac{3}{4}y(n-1) + \frac{1}{8}y(n-2) = 2\sin\frac{n\pi}{2}$$

with initial conditions

$$y(-1) = 2 \qquad \text{and} \qquad y(-2) = 4$$

This is the problem considered in Example 6.5.4. To solve it, we first find Y(z) by transforming the difference equation, with the use of Equation (8.4.15):

$$Y(z) - \frac{3}{4}z^{-1}\bigl[Y(z) + 2z\bigr] + \frac{1}{8}z^{-2}\bigl[Y(z) + 2z + 4z^2\bigr] = \frac{2z}{z^2+1}$$

Collecting terms in Y(z) gives

$$\left(1 - \frac{3}{4}z^{-1} + \frac{1}{8}z^{-2}\right)Y(z) = 1 - \frac{1}{4}z^{-1} + \frac{2z}{z^2+1}$$

from which it follows that

$$Y(z) = \frac{z\left(z - \frac{1}{4}\right)}{\left(z - \frac{1}{2}\right)\left(z - \frac{1}{4}\right)} + \frac{2z^3}{(z^2+1)\left(z - \frac{1}{2}\right)\left(z - \frac{1}{4}\right)}, \qquad |z| > 1$$

Carrying out a partial-fraction expansion of the terms on the right side along the lines of the previous example yields

$$Y(z) = \frac{13}{5}\,\frac{z}{z - \frac{1}{2}} - \frac{8}{17}\,\frac{z}{z - \frac{1}{4}} + \frac{112}{85}\,\frac{z}{z^2+1} - \frac{96}{85}\,\frac{z^2}{z^2+1}$$

The first two terms on the right side correspond to the homogeneous solution, and the last two terms correspond to the particular solution. From Table 8-2, it follows that

$$y(n) = \frac{13}{5}\left(\frac{1}{2}\right)^n - \frac{8}{17}\left(\frac{1}{4}\right)^n + \frac{112}{85}\sin\frac{n\pi}{2} - \frac{96}{85}\cos\frac{n\pi}{2}, \qquad n \ge 0$$

which agrees with our earlier solution.
Example 8.5.5

Let us find the inverse transform of the function

$$X(z) = \frac{1}{\left(z - \frac{1}{2}\right)\left(z - \frac{1}{4}\right)}, \qquad |z| > \frac{1}{2}$$

Direct partial-fraction expansion yields

$$X(z) = \frac{4}{z - \frac{1}{2}} - \frac{4}{z - \frac{1}{4}}$$

which can be written as

$$X(z) = z^{-1}\,\frac{4z}{z - \frac{1}{2}} - z^{-1}\,\frac{4z}{z - \frac{1}{4}}$$

We can now use the table of transforms and the time-shift theorem, Equation (8.4.15), to write the inverse transform as

$$x(n) = 4\left(\frac{1}{2}\right)^{n-1}u(n-1) - 4\left(\frac{1}{4}\right)^{n-1}u(n-1)$$

Alternatively, we can write

$$X(z) = z\,\frac{1}{z\left(z - \frac{1}{2}\right)\left(z - \frac{1}{4}\right)}$$

and expand in partial fractions along the lines of the previous example to get

$$X(z) = z\left(\frac{8}{z} + \frac{8}{z - \frac{1}{2}} - \frac{16}{z - \frac{1}{4}}\right) = 8 + \frac{8z}{z - \frac{1}{2}} - \frac{16z}{z - \frac{1}{4}}$$

We can directly use Table 8-2 to write x(n) as

$$x(n) = 8\delta(n) + 8\left(\frac{1}{2}\right)^n u(n) - 16\left(\frac{1}{4}\right)^n u(n)$$

To verify that this is the same solution as obtained previously, we note that for n = 0, we have

$$x(0) = 8 + 8 - 16 = 0$$

For n = 1, we get

$$x(1) = 8\left(\frac{1}{2}\right) - 16\left(\frac{1}{4}\right) = 4\left(\frac{1}{2}\right)^0 - 4\left(\frac{1}{4}\right)^0 = 0$$

Thus, either method gives the same answer.
8.6 Z-TRANSFER FUNCTIONS OF CAUSAL DISCRETE-TIME SYSTEMS

We saw that, given a causal system with impulse response h(n), the output corresponding to any input x(n) is given by the convolution sum

$$y(n) = \sum_{k=0}^{\infty} h(k)\,x(n-k) \qquad (8.6.1)$$

In terms of the respective Z-transforms, the output can be written as

$$Y(z) = H(z)\,X(z) \qquad (8.6.2)$$

where

$$H(z) = Z[h(n)] = \frac{Y(z)}{X(z)} \qquad (8.6.3)$$

represents the system transfer function.

As can be seen from Equation (8.6.2), the Z-transform of a causal function contains only negative powers of z. Consequently, when the transfer function H(z) of a causal system is expressed as the ratio of two polynomials in z, the degree of the denominator polynomial must be at least as large as the degree of the numerator polynomial. That is, if

$$H(z) = \frac{b_M z^M + b_{M-1}z^{M-1} + \cdots + b_1 z + b_0}{a_N z^N + a_{N-1}z^{N-1} + \cdots + a_1 z + a_0} \qquad (8.6.4)$$
then N ≥ M if the system is causal. On the other hand, if we write H(z) as the ratio of two polynomials in z⁻¹, i.e.,

$$H(z) = \frac{b_0 + b_1 z^{-1} + \cdots + b_M z^{-M}}{a_0 + a_1 z^{-1} + \cdots + a_{N-1}z^{-N+1} + a_N z^{-N}} \qquad (8.6.5)$$

then no such restriction on M and N is required for causality.

Given a system described by the difference equation

$$\sum_{k=0}^{N} a_k\,y(n-k) = \sum_{k=0}^{M} b_k\,x(n-k) \qquad (8.6.6)$$

we can find the transfer function of the system by taking the Z-transform on both sides of the equation. We note that in finding the impulse response of a system, and conse-
quently, in finding the transfer function, the system must be initially relaxed. Thus, if we assume zero initial conditions, we can use the shift theorem to get

$$\left[\sum_{k=0}^{N} a_k z^{-k}\right]Y(z) = \left[\sum_{k=0}^{M} b_k z^{-k}\right]X(z) \qquad (8.6.7)$$

so that

$$H(z) = \frac{\displaystyle\sum_{k=0}^{M} b_k z^{-k}}{\displaystyle\sum_{k=0}^{N} a_k z^{-k}} \qquad (8.6.8)$$

The corresponding impulse response can be found as

$$h(n) = Z^{-1}[H(z)] \qquad (8.6.9)$$

It is clear that the poles of the system transfer function are the same as the characteristic values of the corresponding difference equation. From our discussion of stability in Chapter 6, it follows that for the system to be stable, the poles must lie within the unit circle in the z plane. Consequently, for a stable, causal function, the ROC includes the unit circle.

We illustrate these results by the following examples.
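A small sketch (not from the text) of Equation (8.6.8) in code: given assumed difference-equation coefficients, the poles are the roots of the denominator polynomial, and stability requires all of them to lie inside the unit circle.

```python
import numpy as np

# Assumed system: y(n) - 0.9*y(n-1) + 0.2*y(n-2) = x(n) + 0.5*x(n-1)
a = [1.0, -0.9, 0.2]     # denominator coefficients (powers of z^-1)
b = [1.0, 0.5]           # numerator coefficients

# For this monic coefficient list, np.roots gives the poles of H(z).
poles = np.roots(a)
print(poles, np.all(np.abs(poles) < 1))   # poles 0.5 and 0.4, so the system is stable
```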
n-rnple t.0.1
Let the step response of a linear. time-invariant, causal systcm bc
y@
:
l,t,r - f (j)',t,r * fr (- j)',t,r
To find the transfer funclion H(z) of this system, rye note that
z- * ? -- ,
--1 -l3e-))'
'\" s(z-r)
ls(r*l)
y(z\ =9
-3 _ la-2
<
.1.
(z-r)(z-jlt.+|l
Since
x(z\ = -f '
z- L
it follows that
H@=#=#j
, *!-l
=23z+l 3:-l
Thus, the impulse response of the system is
(8.6.10)
The Z-Transform Chapier
400
h@
I
=l(- l)',,,, . l(l)",r,
Since both poles of the system are within the unit circle, the system is stable.
We can find the difference-equation representation of the system by rewriting Equation (8.6.10) as
y@.
I - |z-t
r(z) (t - jz-')(1 + lz-')
=
=,-*rrl
,
4.
8.,
Cross multiplying yields
[' - 1'-'-
]'-']'t" = [r - ]'-']xr'r
Taking the inverse transformation of both sides of this equation, we obtain
v@)
- f,Y@- r) - lY(, - 2) = x(n) - 1,6 -
11
Example 8.6.2

Consider the system described by the difference equation

$$y(n) - 2y(n-1) + 2y(n-2) = x(n) + \frac{1}{2}x(n-1)$$

We can find the transfer function of the system by Z-transforming both sides of this equation. With all initial conditions assumed to be zero, the use of Equation (8.4.15) gives

$$Y(z) - 2z^{-1}Y(z) + 2z^{-2}Y(z) = X(z) + \frac{1}{2}z^{-1}X(z)$$

so that

$$H(z) = \frac{Y(z)}{X(z)} = \frac{1 + \frac{1}{2}z^{-1}}{1 - 2z^{-1} + 2z^{-2}} = \frac{z\left(z + \frac{1}{2}\right)}{z^2 - 2z + 2}$$

The zeros of this system are at z = 0 and z = −1/2, while the poles are at z = 1 ± j. Since the poles are outside the unit circle, the system is unstable. Figure 8.6.1 shows the location of the poles and zeros of H(z) in the z plane. The graph is called a pole-zero plot.

The impulse response of the system, found by writing H(z) as

$$H(z) = \frac{z(z-1)}{z^2 - 2z + 2} + \frac{3}{2}\,\frac{z}{z^2 - 2z + 2}$$

and using Table 8-2, is

$$h(n) = (\sqrt{2})^n\cos\frac{n\pi}{4}\,u(n) + \frac{3}{2}(\sqrt{2})^n\sin\frac{n\pi}{4}\,u(n)$$

[Figure 8.6.1: Pole-zero plot for Example 8.6.2.]
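Not from the text: the pole locations and the instability conclusion of Example 8.6.2 can be confirmed numerically from the denominator coefficients.

```python
import numpy as np

poles = np.roots([1.0, -2.0, 2.0])         # roots of z^2 - 2z + 2
print(poles)                                # [1+1j, 1-1j]
print(np.abs(poles), np.all(np.abs(poles) < 1))   # magnitudes sqrt(2) > 1 -> unstable
```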
Example 8.6.3

Consider the feedback system shown in Figure 8.6.2, in which the forward-path transfer function is

$$H(z) = \frac{0.8Kz}{z^2 - 1.3z + 0.04}$$

where K is a constant gain.

The transfer function of the system can be derived by noting that the output of the summer can be written as

$$E(z) = X(z) - Y(z)$$

so that the system output is

$$Y(z) = E(z)H(z) = \bigl[X(z) - Y(z)\bigr]H(z)$$

Substituting for H(z) and simplifying yields

$$Y(z) = \frac{H(z)}{1 + H(z)}\,X(z) = \frac{0.8Kz}{z^2 + (0.8K - 1.3)z + 0.04}\,X(z)$$

The transfer function for the feedback system is therefore

$$T(z) = \frac{Y(z)}{X(z)} = \frac{0.8Kz}{z^2 + (0.8K - 1.3)z + 0.04}$$

The poles of the system can be determined as the roots of the equation

$$z^2 + (0.8K - 1.3)z + 0.04 = 0$$

[Figure 8.6.2: Block diagram of the feedback control system of Example 8.6.3.]

For K = 1, the two roots are

$$z_1 = 0.1 \qquad \text{and} \qquad z_2 = 0.4$$

Since both roots are inside the unit circle, the system is stable. With K = 4, however, the roots are

$$z_1 \approx -0.0213 \qquad \text{and} \qquad z_2 \approx -1.879$$

Since one of the roots is now outside the unit circle, the system is unstable.
8.7
Z-TRANSFORM ANALYSIS
OF STATE-VARIABLE S
MS
As we have seen in many of our discussions, the use of frequency-domain techniques
considerably simplifies the analysis of linear. time-invariant systems. In this section. we
consider thi Z-tiansform analysis of discrete-time systems that are represented by a
set of state equations of the form
v(n + 1) = Av(n) +
bx(n),
v(0) =
v,,
(8.7.1)
y(n)=cv(n)+dx(nl
As we will see, the use o[ Z-transforms is useful both in deriving state-variable representations from the transfer function of the system and in obtaining the solution to the
slate equations.
In Chapter 6. starting from the difference-equation rePresentation. we-derived two
alternativi state-space rePresentations. Here, we start with the transfer'function rep'
resentation and dirive two more rePresentations. namely. the parallel and cascade
forms. In order to show how this can be done. let us consider a simple first-order sys'
tem described by the state-variable equations
u(n +
l) :
aa(nl + bx(n
)
(8.7.2)
Y(n'1: a(n) + dr(n)
From these equations it follows that
v(zl =
-L
z-a x(z)
Thus, the system can be represented by the block diagram of Figure 8.7.1. Note that as
far as rhe relation between Y(z) and X(e) is concerned. the gains D and c at the input
and output can be arbitrary as long as their product is equal to bc.
we use this block diagram and the corresponding equaiion. Equation (8.7'2). to
obtain the state-variable representation for a general system by writing ll(e) as a combination of such blocks and associating a state variable with the output of each block.
As in continuous-time systems, if we use a Partial-fraction exPansion over the poles of
H (z), we get the parallel form of the state equations. whereas if we represent H(z) as
a cascadebf such blocks. we get the cascade representation. To obtain the two forms
Sec.
8.7
Z-Translorm Analysis ol State-Variable Systems
r,(
Flgure
&7.1
rr
*
403
l)
Block diagram of a first-order state-spacc system.
discussed in Chapter 6, we represent the system as a cascade of trvo blocks, with one
block consisting of all the poles and the other block all the zeros. II the poles are in the
first block and the zeros in the second block, we get the second canonical form. The
first canonical form also can be derived, by putting the zeros in thc first block and the
poles in the second block. However, this derivation is not very straightforward, since it
involves manipulating the first block to eliminate terms involving positive powers of z.
Esarrple 8.7.1
Consider the system with transfer function
H (z\
=
_j:*-L : 3z+
z, +loz- I (.*lltz-ll
Expanding H(z) by partial fractions, we can write
]--
z+i-+ z-i
H(z)=---t
with the corresponding block-diagram representalion shown in Figure 8.7.2(a). By using
the state variables identified in the figure, we obtain the following set of equstions:
(,.l)n,ut = x(z)
k -i)'un = zx(z)
Y(z)=V,(z)+2Vzk)
The corresponding equations in the time domain are
o,(n+l)=-1t,(n)+r(n)
ur(n
+ t) =lurrn, + 2r(r)
ThEZ-Tlanslorm ChapterS
404
Vz(:l
v'2(:l
X,
V
(:)
Vtlzt. Ylzl
t(zl
Y(z)
(c)
Figure
8.72
BIock-diagram representations for Example 8.7.1.
y0r)=o,(n)+zaz(n)
lI
we use the block-diagram representation ofFig. 8.7.2(b), with the states as shown, we have
(. -'ot)r,,., = xr(z)
x,1zy = (rz
.ta)n^u
(, *l)v,at =
Y(z) =
x1..1
vrk)
which. in the time domain, are cquivalent to
u'(n+l)=l''(')**'(n)
.r,(n)
1rr(z
=
3a2@+ r1 +
+ l)
ar1n1
11
=
-lor1n1
* r1n\
.v(n) = u, (n)
Eliminating r,
(n
) and rearranging the equations yields
Sec.
8.7
Systems
Z-Translorm Analysis ol State-Variable
l-3
l) = A,',(") -;u,(x) + -l-r(,r)
u,(n +
r'r(n
405
I
+ l) = - lt,z@\ * x(n)
y(n) = u1(z)
To get the second canonical form, we use the block diagram of Figure 8.7.2(c) to get
(u
.1,- l)r,,.,: r,.,
vp1 =
(tz. J)n,r.l
By defining
zV,(z\ = V2k)
we can write
zv,(z)
+
Ir^U -l r,(.) = *,.,
1
Y(z)=-ovt(z)+3Vr(z)
Thus, the corresponding siate equations are
?rr(n
+ l) = ?r'(n)
u2tu
+ t)=
ir,,r, -
f,u,@'1
+
t1,t1
1
y(n)=iur\n)+3o2@)
As in the continuous-time case, in order lo avoid working with complex numbers, for slstems with complex conjugate poles or zeros, we can combine conjugate pairs. The representation for the resulting second-order term can then be obtained in either the first or the
second canonical form. As an example, for the second-order system described by
Y@ =
b++++i!l
t+atz'+azz'
xo
(8.7.3)
we can obtain the simulation diagram by writing
Y(z) = (bo t brz-t +
bzz-z)V(z\
(8.7.4a)
where
v(zr) =
l+
I
qJ-'i
1-o ;
X(z)
or equivalently,
V(z) =
-o,r-tnk) -
arz-zV(z) +
X(t\
(E.7.4b)
The
406
Z-Translorm Chapter
I
Y(:)
.Y(il
Flgure
8.73
Simulation diagram for a second-order discrete-time system.
we gencrate Y(i) as the sum ot X(z\, - arz-tV(z),and - a2z-2V (z) and form Y(z)
sum of bolz(z) snd bzz-zv(z) to get the simulation diagram shown in Figure 8.7.3.
as the
Example 8.72
Consider the system with transfer function
H(2) =
l+2.52-t+z-2
(t + 0.52-r + 0.Ez-2)(1 + 0.32-r)
By treating this as the cascade combination of the two systems
H,(2) =
I
+ 0.52-r
1+0.52-r+0.82-2'
H,(z)=i:#
we can draw the simulation diagram using Figure 8.7.3, as shown in Figure 8.7.4'
Using the outpus of the delays as state variables, we get the following equations:
i (z) = zv,(z) = -O'lvlz) + xt(zl
X,(z)=V(z)+O5V2Q)
zVr(z) = v(z) = -g'5v,12)-o'8v3?) + X(z)
zVlz) = lt'171
y(z) = i(z) + ZV,(z)
Eliminating t/(z) and 7(z) and writing the equivalent time-domain equations yields
(:,
\
..i
t:
no
C)
c
(E
IJ.I
o
E
EI)
((,
E
c
i!
E
a^
!F
d
c,
b!
lL
\
407
408
The Z-Transform Chapter
I
t,(n + I) = -0.3r'r(n) - 0.9u.(n) +.r(r)
ur(n + 1) = -0.5u:(n) - 0.8u.(n) + x(n)
uj@ + 11 = It.(nl
y(n):
-
1.lu,(n)
0.8u.(n) +.r(n)
Clearly, by using different combinations of first- and second-order sections, we can obtain
several different realizations of a given transfer function.
We now consider the frequency-domain solution of the state equations of Equation
(8.7.1), which we repeat for convenience.
v(n + 1)
:
y(n):
Av(n ) +
b.r(a), v(0) = vo
dx(n)
cv(n) +
(8.7.5a)
(8.7.5b)
Z-transforming both sides of Equation (8.7.5a) yields
:[v(z) -
vo] =
AV(z) +
bX(z)
(8.7.6)
Solving for V(3), we get
v(z) = z(zl
-
A)-rv,, + (eI
- A)-rbx(z)
(E.7.7)
It follows from Equation (8.7.5b) that
Y(z) = cz(zt
-
A)-,'o + c(zl
-
A)-tbx(z) + dX(z)
(8.7.8)
We can determine the transfer function of this sysrem by setting v(0) = g 1o t",
Y(z) = [c(zl
- A)-'t + dlx(z)
(8.7.9)
It follows that
H(z)=ii:l=c(zl-A)-'|b+d
(8'7'10)
Recall from Equation (6.7.16) that the time-domain solution of the state equations is
v(n):oQr)vo*
i*1r-
l-o
I
-i)b:g)
(8.7.11)
Z-transforming both sides of Equation (8.7.11) yields
V(z) = O(z)vo +
z-'O(z)bx(z)
(8.7.t2)
Comparing Equations (8.7.12) and (8.7.7). we obtain
.D(z) = z(al
- A)-r
(8.7.13)
or cquivalently,
o(n) = A" - Z"tlz(zl
- A)-'l
Equation (8.7.14) gives us an alternative method of determining A,,.
(8.7.14)
Sec.
8.7
Z-Transform Analysis of State-Variable Systems
4rl9
Example 8.7.3
Consider the system
zr,(n+l)=u,(n)
+ l) =
2,2(n
v
(n) =
l r,,r, - la.(l;
a,
+ rrrrl
(n)
which was discussed in Examples 6.7.2,6.7.3, and 6.7.4. We find tltc unit-step rcsPonse (,i
this system for the case when v(0) =
- l]r. Since
[l
A=
lo rl
L; -rl
it follows that
(zl
so we can write
O(z) = .1.1
ir :l
T
z,
- n';-'= |L-s
-2
-
-_l
.-a
A)-r =
z
+
z+')
_! _ _
- _1
14
t'l
.
I
!
.
.r-l
rl
!
r
6
,*l
l_l
We therefore have
A.=
oor)
i(-])' 1{i)'- l( )'l
L:u-:(-l)' l(i)'.i( )L
[3{ll.
which is the same result that was obtained in Example 6.7.2. From Eciuation (8.7.7), for
the given initial condition,
v(3) = (zr
-
,, '[-l] . ,.r - ,,-'[l]
-
-l
Multiplying terms and expanding in partial fractions yields
1-
L't- n-1
c-
l=.;;i-;-rl
'r..,'-' 'rr-o
v(r)=lI rca
t8<
tR4
so that
I
|
L;'-;.;-.-ll
[s
23t
',,,=[;;[;]]=l:.;)
[e
r
\'
z)
rs\- )^
_ 22rl\"1
e
l+/
|
lt(t)"1
TheZ-Transfom ChapterB
410
To find the output, we note that
y(z) = [l
, [l:[i]
= vtz)
and
y@i = ar(n)
These are exactly the same alt the results in Example 6,7.3. Finally, we. have, from Equation (8.7.10),
r{(z) =
r,
,r*j;_5[. ir :][l]
_1
(z+lxz_l)
!1
=3-3
z-l
z+l
so that
n>t
fr(,)=1/1)'-'
3\ 2l
3\4/ -1I'.-1)'-'.
Since lr(0) = 0, we can write the last equation as
,(,,=+(i) -1(-r", r>0
which is lhe result obtahed in Example 6.7.4:
8.8
RELATION BETWEEN THE Z-TRANSFORM
E TRAN
The relation between the Laplace transform and the Z-transform of the sequence of
samples obtained by sampling analog sigial x,(t) can easily be developed ftom our discussion of sampled sigrals in Chapter 7. There we saw that the outPut of the sampler
could be considered to be either the continuous-time signal
,,(r)= i x"(nT)6(t-nT)
(r.8.1)
x(n) = x.(nT)
(8.8.2)
,tE-@
or the discrete-time siggal
The [-aplace transformation of Equation (8.8.1) yields
X"(s)
=,i
(E.8.3)
-r"@r)exp[-nls]
Sec.8.9
411
Summary
If we make the substitution z =
exp
X,(s) l.=
IIs], then
*ptnt=
)
x,(nT)z-'
(8.8.4)
,le-,
We recognize that the righrhand side of Equation (E.8.4) is the Z{ransform, X(z), of
the sequence x(n). Thus, the Z-transform can be viewed as lhe LaPlace transform of
the sampled function x,(t) with the change of variable
(8.8.s)
z = exp [Tsl
Equation (8.8.5) defines a mapping of the s plane to the z plane. To determine the
nature of this mapping, Iet s = o + i(o, so that
:
exp[oI]exp[iorI]
lzl = exp[oT], it is clear that if o < 0, lzl < l.
e
Thus, any point in the left half
Since
of the s plane is mapped into a point inside the unit circle in the z plane. Similarly'
since, foi o ) 0, we have lz | > 1, a point in the right half of the s plane is mapped into
l, so that the loaxis of
a point ouside the unit circte in thez plane. For o = O, l.l
the s plane is mapped into the unit circle in the z plane. The origin in the s plane cor'
responds to the point z = 1.
Finally, let s* denote a set of points that are spaced vertically apart from any point
so by multiples of the sampling frequency ro, = 2r lT. That is,
:
so:so*jkto,,
k = 0,-+7,
!2,"'
Then we have
ik-'' = exp [Isn] : 2,,
since exp[7*or,Tl = explj}kn]. That is, the points s1 all map into the same point
z6 = exp IIso] in the z plane. We can thus divide the s plane into horizontal strips, each
of width r,r,. Each of these strips is then mapped onto the entire z plane. For convenience, we choose the strips to be symmetric about the horizontal axis. This is summarized in Figure 8.8.1, which shows the mapping of the s plane into the z plane.
We have atready seen that X,(or) is periodic with period to,. Equivalently, X(O) is
periodic with period 2zr. This is easily seen to be a consequence of the result that the
process of sampling essentially divides the s plane into a set of identical horizontal
d*
=
explTs1,1
:
er(s"+
strips of width r,r,. The fact that the mapping from this plane to the z plane is not unique
(the same point in the z plane corresPonds to several points in the s plane) is a conse'
quence of the fact that we can associate any one of several analog signals with a given
set of sample values.
.
r
The Z-transform is the discrete-time counterPart of the Laplace transform.
The bitateral Z-transform of the discrete-time sequence x(n ) is defined as
x(z)
= nd'r)
x@)z-'
.1
,1
.., )
I
i
,\
l::.
+
o.
(l.,
t
o
(il
E
o
6
t:
o
o.
0
!
q,)
(!
o.
a)
o
E
o
o
tr
tt,
CL
3
3
I
al
al
3
3
I
I
q,
o
EO
o.
o
at,
.{
aa
c,
EA
k
412
Sec.
r
8.9
413
Summary
The unilateral Z-tr
,r\.t _
ftn^r,,r"
The region of convergence (ROC) of the Z-transform consists o[ those values of i
for which the sum converges.
For causal sequences, the ROC in the z plane lies outside a circlc containing all the
poles of x(z). For anticausal signals, the Roc is inside the circle such that all poles
lrx(z) are external to this circle. If r(n) consists ofboth a causal and an anticausal
part,ihen the ROC is an annular region, such that the poles outside this region cori"spond to the anticausal part of:(n), and the poles inside the annulus correspond
to the causal part.
The Z-transform of an anticausal sequence.r-(n) can be d(.'te Inlined tiom a table of
unilateral transforms as
X-(z) = Zlx-(-n)l
a
O
Expanding X(z) in partial fractions and identifying the inversc of each term from a
table of Zltransforms is the most convenient method for determining x(n). If only
the first few terms of the sequence are of interest, x(n) can he obtained by expanding X(z) in a power series in r-r by a Proccss of long division'
The propcrttes of the Z-transform are similar to those of the Laplace transform.
Among ihc applications of the Z-transform are the solulion ot difference equations
and the evaluation of the convolution of trvo discrete sequcnces.
The time-shift property of the Z-transform can be used to solve difference equations.
Ify(n) represents the convolution of two discrete sequenccs r(n) and lz(n)' then
Y(z) = It(z)Xlz)
The transfer firnction H(z) of a systenl with input r(n). impulse response &(z). and
output y(n) is
It(z) =
zlh(n)l:
IE\
Simulation diagrams for discrete-time systems in the z domairl can be used to obtain
state-variablc iepresentati<,ns. 'fhe solutions to the state equations in the Z domain
are given by
v(z) = :(:I - A)-'rn + Ql - A)-rbx(z)
)'(:):cv(z)+dX(z)
The transfer function is givcn
hY
H(z)
: c(zl - A) 'b + d
The state- transition matrix can he obtained
Q(rr)
:
as
A" = Z-t [:.(;I
- ^l;-rl
414
"
The
Z-Transform Chapter
I
The relation between the Laplace transform and the Z-transform of the sampled
analog signal
x,(t)
is
X(z) l.=".ptrd = X"(s)
"
8.10
The transformation z = exp Ifs] represents a mapping from the s plane to the z
plane in which the left half of the s plane is mapped inside the unit circle in the z
plane, the y'o-axis is mapped into the unit circle, and the right half of the s plane is
mapped outside the unit circle. The mapping efffectively divides the s plane into
horizontal strips of width to,, each of which is mapped into the entire z plane.
CHECKLIST OF IMPORTANT TERMS
Bllateral Z-tanetorm
Mapplng o, tho s plene lnto the z plane
Partlal-tracUon erpanslon
Power*erleo expanslon
Reglon of conyergenco
Slmulaflon dlagrams
8.11
Solutlon ol dlflerence equatons
Stsls-translflon matrll
State-varlable representadone
Transter functlon
Unllateral Z-translorm
PROBLEMS
&L
Determine the Z-transforms and the regions of convergence for the fo[owing sequences:
(a) x(a) =
(-3)'z(-n - l)
ror,t,r={i, ilft=s
(c)
.r(r)
z>
o
I(JI
a<o
l:',
\
(d) -r(a) :26(n)
82
-
2;u(n)
The Z-transform of a sequence x(a) is
x(z) =
z3+4zt-u,
z'+lr'-1r*l
(a) Plot the l0cations of the poles and zeros of X(z).
(b) Identify the causal pole(s) if the ROC is (i) lzl < tiil lzl >
I,
(c) Find.r(n) in both cases.
8J.
z
Use the definition and the properties of the Z-transform to find X(z) for the following
causal sequences:
(a).r(z)=zansinOon
(b) r(z) = n2cos(htt
(c)
:(n)
="(:)" +("-r)(1)'
Sec.
8.11
Problems
415
(d) r(n) = 6(n - 2) + nu(n)
(e) .t(z) = 2expl-nlr'"(;"r)
&4. Determine thc Z-transform of the sequenccs that result whcn thc following
tinuous-time signals are sampled uniformly every
(e) .r(t) = ,cos 1000',,
tJ.
(b) .t(r) = texP[-3(t -
I
1)]
Find the inverse of each of the following Z-transforms hy means of (i) power series expansion and (ii) partial-fraction expansion. Use a mathematical software packagc to verify
your partial-fraclion expansions. Assume that all sequences are causal.
(a)
x(z):,
i - l--l
,
_;l_t-j
44 '
g4
(b)
x(z) =
(z+l)(z+l)
iz-- 5(z -
(c)
x(z) =
)
-l(z - j)'
(d)
_41+
?t_
z2+42+3
&5. Flnd the inverse
transform ot
t.-',
X(z) = 16t11 by the following methods:
(a) Use the series expansion
rog(1
-
a)
= }i
lol
(b) Differentiate X(z) and use the properties of
&7.
causal con-
seconds:
.
r
the Z-transform.
Use Z-transforms to find the convolution of the following causal scquences:
(a)
,,(,)=
(;)., ,r,= {;: nffiJ,
,(,)
(l)
(b)
=
, ,r,= {?: :=; : :
(c)
l(n) = [], -1,2, -1,
l],
.r(n) = 11.0..'1.31
The Z-Translorm Chaptsr
416
t.&
I
Find the step response of the system with transfer function
nat=:;i_;-!_,
'
&9. (a)
' 6{
6
Solve the following difference equations ushg Z-transforms:
(l)
- j(n - l) + y(n - 2) = x(n)
y(-l) = l, y(-2) = o, x(nl = (11"u(n)
(ll) y(r) - lyb - r) - luyb - 2) = x(n) - j.r(1 y(z)
y(- l) = o, y(-2)
(b) Verify your result
&I0.
&f
L
=
o,
1)
.t(z) = (l)"u(n )
using any mathematical software package.
Solve the difference equations of Problem 6.17 using Z-transforms.
(a) Find the transfer functions of the systems in Prohlem 6.17, and plot the pole-zero locations in the e plane.
(b) What is the cor..:.ponding impulse response?
&12. (a) When input x(n) = u(n) + 1- f)'a(a) is applied to a linear, causal, time-invariant
system, the output is
y(,) = 6(-
&[t.
i)',t,-
o(-
i)',t,r
Find the transfer function of the system.
O) What is the difference-equation representation of lhe system?
Find tlre transfer function of the system
i,
r(z)
h
(rr )
r'(z )
3(z)
h2ul
::,',::,
=':,;Ifl ,)
n,@ =
+ s(z _ 2)
/l\'
\r)u(n)
E.14. (a) Show that a simplified criterion for the polynomial F(z) = z2 + arz +
its poles within the unit circle in the : plane is
lrto)l
(b)
. l,
F(- r) >
o,
a2
to have all
r(r) > o
Use this criterion to find the values of K for which the system of Example 8.6.3 is stable.
E.Ui. The transfer function of a linear, causal, time-invariant system
H(:) =
.-r. ,* - f1';i."1
1,
is
- *r;
Sec. 8.1
1
Probl€ms
417
where K and o are constant. Find the range of values of K and a tor which the system is
stable, and plot this region in the K-o plane.
&16. Obtain a reali'zation of the tollowing transfer function as a combination of firsG an<t sec.
ond-order sections in (a) cascade and (b) parallel.
H(z) =
o.oo. '){r + r.z:-, + o.7J: 2)
i(i + 0.42-r + 0.82-:)(, - g)5;i_ g 12-5- i;
19!._lJ
&17. (a) Find the state-transition matrix for the sysrcms of Problem
6.28, using the Z-transform.
(b) Use the frequency-domain technique to find the unirstep response of these systems.
assuming lhat v(0) = 0.
(c) Find the transfer function from the state represenration.
(d) Verify your result using any mathemathical software package.
&l& Repeat Problem 8.17 for the state equations obtained in Problcm 6.26.
&19. A low-pass analog signal.r,(r ) with a bandwidth of 3 kHz is sampled ar an appropriate rate
to leld the discrete-time sequence.r(n I).
(e) What sampling rate should be chosen to avoid aliasing?
(b) For the sampling rate chosen in part (a), determine the primary and secondary strips
in the s plane.
&20. The signal .r,(l) = l0 cos 600zrt sin 24002r, is sampled al rates of (i) 800 tlz, (ii) l@ Ha
and (iii) 3200 Hz.
(e) Plot the frequencies present in r,(l ) as polcs at the appropriatr: krcations in the s plane.
(b) Determine the frequencies present in the sampled signal lor cach sampling rate. On
the s plane, indicate the primary and secondary strips for each case, and plot the frequencies in the sampled signal as poles at appropriate locations.
(c) From your plots in part (b),
can you determine which sampling rate
error-free reconstruclion of the analog signal?
(d) Verify your
E2L
yill
enable an
answer in part (c) by plotting the spectra of the anakrg and sampled signals.
In the text, we saw that the zcro-order represents an easily implcmcntable approximation
to the ideal reconstruction filter. However, the zero-order hold givcs only a staircase
approximation to the analog signal. In the first-order hold, rhe output in the interval
nT= t< (n+ l)Iisgivenby
y(t) = x"(nr)
+t--{p"1,r) - +(n7'-
r-\l
Find the transfer function Gr,(s) of the first order hold, and compare its frequbncy
t2a
resporuie wirh that of the ideal reconstruction filter matched to rhe ratc 7"
As we saw in Chapter 4. filters are used to modify the frequencv content of signals in an
appropriate manner. A technique for designing digital lilters is hased on transforming an
analog filter into an equivalent digital lllter. In order to do so, rvc have to obtain a relalion between the Laplace and Z-transform variables. In Section 8.S. $,e discussed one such
relation based on equaling the sample values of an analog signal rvith a discrete-time signal. The relation obtained was
r
=
exp
[f.rl
4lg
The Z_Transtorm Chapter g
We can obtain other such relations by using different equivalences. For example, by equating the s-domain transfer function of the derivative operaior and the Z-domain transfer
function of its backward-difference approximation, we can write
s=-I -rz
-l
or equivalently,
I
' '- iir
Similarly, equating the integral operator with the trapezoidal approximation (see problem
6.15) yields
2l - z-l
' TL*r-'
or
z=
+ (T/2ls
7=@121s
L
(a) Derive the two alternative relations between the s and e planes just given.
(b) Discuss the mapping of the s plane into the z plane using the two relations.
Chapter 9
The Discrete
Fourier Transform
DUCTION
From our discussions so far, we see that transform techniques play a very useful role
in the analysis of linear. time-invariant systems. Among the many applications of these
techniques are the spectral analysis of signals, the solution of differential or difference
equations, and the analysis of systems in terms of a frequency response or transfer
function. With the tremendous increase in the use of digital hardware in recent years,
interest has centered upon transforms that are especially suited for machine computation, In this chapter we study one such transform, namely, the discrete Fourier transform (DFT), which can be viewed as a logical extension of the Fourier transforms
discussed earlier.
In order to motivate our definition of the DFT, let us assume that we are interested
in frnding the Fourier transform of an analog sigral r,(t) using a digital computer. Since
such a computer can store and manipulate only a finite set of numbers, it is necessary
to represent r, (t) by a finite set of values. The first step in doing so is to sample the signal to obtain a discrete sequence.t,(n ). Because the analog signal may not be time limited, the next step is to obtain a finite set of samples of the discrete sequence by means
of truncation. Without toss of generality, we can assume that these samples are deEned
for n in the range [0, N - U. [.et us denote this finite sequence hy r(n), which we can
consider to be the product of the infrnite sequence x, (n ) and the window function
't')
=
(t.
to,
0<nsN-l
otherwise
(e.l.l)
so that
x(tt)
:
x"(n)w(n)
(e.1.2)
419
42O.
The Discrete Fou.ier
Transtorm Chapler g
Srnce we now have a discrete sequence, wb can take the discrete-trme Fourier transform of the sequence as
x(o)
N-l
r(n) exp[-jfla]
(e.r.3)
n-O
This is still not in a form suitable for machine computation, since O is a continuous variable taking values in [0, 2t]. The final step, therefore, is to evaluate X(O) at only a frnite
number of values Oo by a process of sampling uniformly in the range [0,22r]. We obtain
/V-l
:
X(or) ) r(r)exp[-lorz],
, -0
k=0,1, ...,M
-
|
(e.1.4)
where
ao=2ik
(e.l.s)
The number of frequency samples. M, cao be any value, However, we choose it to be
the same as the number of time samples, N. With this modification, and writing X(f,!* )
as X(&), we finally have
x(k\ =
5'",*rl-i'#^o)
(e.1.6)
An assumption that is implicit in our derivations is thatr(n) can take any value in the
range (-to, co)-that is, that.r(n) can be repre.sented to infinite precision. However,
the computer can use only a finite rvord-length representation. Thus, we quantize lhe
dynamic range of the signal to a finite number of levels. In many applications, the enor
that arises in representing an infinite-precision number by a finite word can be made
small, in comparison to the errors introduced by sampling, by a suitable choice of quantization levels. We therefore assume that.r(n ) can assume any value in (-co, co).
Although Equation (9.1.6) can be considered to be an dpproximation to the continuous-time Fourier transform of the signal x,(t), it defines the discrete Fourier transform of the N-point sequence r (n). We will investigate the nature of this
approximation in Section 9.6, where we consider the spectral estimation of analog signals using the DFT. However, as we will see in subsequent sections, although the DFT
is similar to the discrete-time Fourier transform that we studied in Chapter 7, some of
its properties are quite different.
One of the reasons for the widespread use of the DFT and other discrete transforms
is the existence of algorithms for their fast and efficient computation on a computer.
For the DFT, these algorithms collectively go under the name of fast Fourier transform
(FFT) algorithms. We discuss two popular versions of the FFT in Section 9.5.
Sec.
9,2
9.2
The Discrete Fourier Translorm and lls lnvers€
421
THE DISCRETE FOURIER TRANSFORM
AND ITS INVERSE
Letr(n),n = 0, 1.2,....N- l,
transform of
.r
(n)
be an N-point sequence. we define the discrete Fourier
as
x(r)=b,r,l*rl-t',J,0]
The inversc discrcte
o.z.t)
riur-lransform (IDFT) relation is given by
Fou
.(,)
=;5'rul.-o[,?#"0]
o.2.2)
To derive this relation, we replace nby p in the right side of Equation (9.2.1) and multiply by exp [l2nrr*/N] to gct
xo
If we now sum over
/<
"-pL'?#
,r] : 5''or *o[iTor" -
in the range [0, N
-
l],
or]
we obtair
5 ,ror "* L, T ,-] = Fj 5' ,rpr *, [, ? A(,, - p)]
In Equation (7.2.12) we
sarv
P*
p.2.3)
e.2.4)
that
*t[,,# /'(' - P)] =
{},
::i
so that the right-hand side of Equation (9.2.4) evaluates to Nx (n), and Equation
(9.2.2) follows.
We saw that .t (0) is periodic in O with period 2rr; thus, X(O* ) = X(Oo + 2c). This
can be written as
.Y(/i) = x(ok)
=x(o* + z"\
=
x(z;(k
+ rv)) = x(A
- N)
(e.2.s)
That is, .Y(k) is periodic with period N.
We norv show that.r(n), as detcrmined from Equation (9.2.2). is also periodic with
period N. From that equation, rve have
r(,r + N)
: 5' ,,or *, ,f
[,
(n +
N)r]
=
=
^r-",0,"'o[r?*]
.r(n)
(9.2.6)
The Discrete Fourier
Translorm
Chapter 9
That is, the IDFT operation yields a periodic sequence, of which only the first N values, coresponding to one period, are evaluated. Hence, in all operations involving the
DFT and the IDFT, we are effectively replacing the finite sequence x(n) by its periodic extension. We can therefore expect that there is a connection between the
Fourier-series expansion of periodic discrete+ime sequences that we discussed in
Chapter 7 and the DFT. In fact. a comparison of Equations (9.2.1) and (9.2.2) with
Equations (7.2.15) and (7.2.16) shows that the DFT X(t) of finite sequence .r(n ) can
be interpreled as the coefficient ao in the Fourier series representation of its periodic
extension ro(n ). multiplied by the period N. (The two can be made identical by including the factor l/N with the DFT rather than with the IDFI.)
9.3 PROPERTIES OF THE DFT
We now consider some of the more important properties of the DFT. As might be
expected, they closely parallel properties of the discrete-time Fourier transform. In
considering the properties of the DFT, it is helpful to remember that we are in essence
replacing an N-point sequence by its periodic extension. Thus, operations such as time
shifting must be considered to be operations on a periodic sequence. However, we are
interested only in the range [0, N - 1], so that the shift can be interpreted ari a cLcular shift, as explained in Section 6.4.
Since the DFT is evaluated at frequencies in lhe range [0,2n], which are spaced
aparl by 2t / N, in considering the DFT of two signals simultaneously, the ftequencies
corresponding to the DFT must be the same for any operation to be meaningful. This
means that the length of the sequences considered must be the same. If this is not the
case, it is usual to augment the signals by an appropriate number of zeros, so that all
the signals considered are of the same length. (Since it is assumed that the signals are
of finite length, adding zeros does not change the essential nature of the sipal.)
9.8.1 Ltnearity
Let Xr(k) and Xr(k) be the DFTs of the two sequences .r, (n) and rr(n ). Then
(e.3.1)
DFT[a,rr(n) + anxr(n)l= arXrlk) + arXr(k)
for any constants a, and ar.
9.82 Time Shifting
For any real integer zo,
DFT[x(z + ro)]
: 5'rtr,+
=
rrl*n[-lf;u]
),r-).*n[
-i2fl
=:*[,l]*"]'ttl
where, as explained before. the shift is a circular shift.
rw -
^1f
(e.3.2)
S€o.
9.3
Properlios ot the
DFT
425
0.8.3 Alter:nativelnversionFormula
By writing the IDFT formula, Equation (9.2.2\,
'(')
as
i [; x-o'-n[-if; ,,t]].
-- f torrtx.t*)ll=
(e.3.3)
we can interpret x(n) as the complex conjugate of the DFT of X* (k ) multiplied by 1/N.
Thus, the same algorithm used to calculate the DFT can be used to evaluate the IDFT.
0.9.4 Time Convolutiou
We saw in our earlier discussions of different transforms that the inverse transform
of the product of two transforms corresponds to a convolution of the corresponding
time functions. With this in view, let us determine IDFT of the function Y(k) =
H(k)X(k). We have
y(n) = IDFrlr(k)l
=ieY(k)*P1-,2i;'e]
=
* P* H(k)x(k)'-o [, ?'o]
Using the definition of II(k), we get
v(,)
: 5' (5
*
ot.r
*rl-
izfi *ol)xttr.-nfi
2$,r]
Interchanging the order of summation and using Equation (9.2.2), we obtain
yf"l = 5' h@)r(n moO
m)
(e.3.4)
A comparison with Equation (6,4.1) shows that the right-hand side of Equation (93.a)
corresponds to the periodic convolution of the two sequences.r(n) and i(n).
Example 03.1
Consider the periodic convolution y(n) of the two sequences
ft(r) = ll,3, -1,
Here N = 4, so that exp [i(zr
DFTs of the two sequences as
fl(o) =
1,161
-2J' and
/N)l
= i. By
+ ft(l) + h(2\ + h(3) =
|
.r(n) = (1,2,0,
-
I
I
using Equation (9.2.1), we can calculate the
Tho Discrete Fourler
H(r1 = 1,p1
* ng
H(2) =
111q
+
II(3) =
1 161
+t
"*pl-iil
* rlzt exp[-lrr] * ,1rr.*of-r
fi(l)exp[-izrl+ h(2)expl- j2rl
rr I
*p -i
[
Transform Chapier 9
3f)=, - tt
+ ](3)exp[-i3zr] =
-l
T) * r rrtexp [-l3nl + nplexpl-i \f
= z * is
and
x(o) = r(o) +.r(1) + x(2) + x(3) = 2
x(1) = x(0) + rlrl exp[-7
X(2) =
1161
]]
* ,,r, exp[-jrr] +.1111"*ef-1f] = , - ,,
+ .r(l) exp[-jzr] + x(2) exp[-i2rr] + x(3)
x(3) = 316y +,1r1exp[-;]] + r(z)expt-1r,,1+
exp[-i3r] = g
r1r;exp[-i\f
=r
*
t1
so that
v(0)=H(0)x(0)=2
Y(t) = x11717(-1) : -13 Y(2)= H(2)x(2)=0
ill
v(3) = H(3)x(3) = -13 +111
We can now use Equation (9.2.2) to frnd.v(a) as
y(o) = l[Y(0) + v111 + Y(2) + y(3)] =
ylry = l[r1oy +
r(r)exe[i;]
-6
+ rlzpxplirr
- ror-nli]]]=
y(2) = 1(l'(0) + v(l) explirrl + Y(z)expU?d + Y(3)exp[i3r]) =
rrrl
=
|(rtol + r(r)expljT]*
",rn*o,i3,,,t+
o
7
y(3)exp[rT] = -t
which is the same answer as was obtt ined in Example 6.4.1.
9.8.6 Relation to the Diecrete-Time Fourier
and Z-Ilansfortre
From Equation (9.2.1), we note that the DFT of an N-point sequence x(n ) can be written as
x(e) = 5 r<rl
z-0
exp[-ion]1,1-,p1
(9.3.s)
= x(O)lo=ilr
That is, the DFT of the sequence r(n) is its discrete-time Fourier transform X(O) evaluated at N equally spaced points in the range [0, 2n).
Sec.
9.3
Properties ol lhe
DFT
425
For a sequence for which both the discrete-time Fourier transform and the Z-trans
form exist. it follows from Equation (8.3.7) that
x(k) = x(z)l:-erptirzorrrrt
(9.3.6)
so that the DFT is the Z-transform evaluated at Nequally spaced points along the unit
circle in the e plane.
04.8 Matrix Interpretation
of the Df"T
We can express the DFT relation of Equation (9.2.1) compactly as a matrix oPeration
on the data vector = [:(0)x(l)...r(N l)]r. For convenience, let us denote
expl-ilr/Nl by W,y. We can then write
t
-
x(k)=)x(n)wtfi
ft:0,1,...,N-l
-ww w
W w'n w'r
(e3.7)
Let W be the matrix whose (t, n)th element [W]0, is equal to Wf . That is,
W=l
then follows
wN-l
(e.3.8)
.
yg-'irn 'rthat the transform vector X = [X(0)X(1). .X(N-WN
It
w
W#-'
W2(N-r)
1)]7 can be
obtained as
X=YYx
(e.3.e)
The matrix W is usually referred to as the DFT matrix. Clearly. [W]u = [W]*, so that
W is symmetric (W = Wr).
From Equation (9.2.2),we can write
,(r) = i.>*, @)W"
(e.3.10)
Since W;r = Wi, where * represents the complex conjugate, it follows that the IDFT
relation can be written in matrix form as
' = fiw-x
(e.3.11)
Solving for x from Equation (9.3.9) gives
x --
W-rX
(e.3.t2)
It therefore follows that
tn-'=
**-
(e.3.r3)
426
The Discr€te Fourier
Transrorm
Chapter g
or equivalently,
WXW = NIN
rvhere I," is the identity matrix of dimension N
can write Equation (9.3.14) as
w'$rw
:
x
(e.3.14)
N. Since W is a symmetric matrix, we
,
NI/v
(9.3.1si
In general, a matrix A that satisfies A*rA = Iiscalled a unitary matrix. A reat matrix
A that satisfies ArA = I is said to be an orthogonal marrfu. The matrix lY, as defined
in Equation (9.3.8), is not strictly a unitary matrix, as can be seen from Equation
(9.3.15), However, it was pointed out in Section 9.2 that the factor l/N could bp used
with either the DFT or the IDFT relation. Thus, if we associare a factor l/VN with
both the DFT and IDFT relations and let Wn = I /VN exp [- j2r / Nl in the definition
of the matrix W in Equation (9.3.8), it follows that
X=Wx and x=W*X
(e.3.16)
with W being a unitary matrix, The DFT is therefore a unilary trawlormi often. however, it is simply referred to as an orthogonal transform.
Other useful orthogonal transforms can be defined by replacing the DFT matrix in
Equation (9.3.8) by other unitary or orthogonal matrices. Examples are the WalshHadamard transform and the discrete cosine transform, which have applications in
areas such as speech and image processing. As with the DFT, the utility of these transforms arises from the existence of fast and efficient algorithms for their computation.
9.4
LINEAR CONVOLUTION USING THE DFT
We have seen that one the primary uses of transforms is in the analysis of linear timeinvariant systems. For a linear, discrete-time-system with impulse response t(z), the
output due to any input -r(n) is given by the linear convolution of the two sequences.
However, the product H (k) X (k) of the two DFTs corresponds to a periodb convolution of &(r) and r(n ). A question that uaturally arises is whether the DFT can be used
to perform a linear convolution. In order to answer this question, let us assume that
h(z) and r(z) are of length M and N, respectively, so that /r(n ) is zero outside the range
11, and -r(n) is zero outside the range [0, N
10, M
U. We atso assume that M < N.
Recall that in Chapter 6 we defined the periodic convolution of two finiteJengh
sequences of equal length as the convolution of their periodic extensions. For the two
sequences &(n) and x(n) considered here, we can zero-pad both to the length
K > Max(M, N), to form the augmented sequences /r,(n) and r,(n), respectively. We
can now define the K-point periodic convolution, lr@), of the sequences as the convolution of their periodic extensions. We note that while )p(r) is a K-point sequence,
the linear convolution of h(z) and.r(n), yln), has length L = M + N
l.
As Example 6.4.2 shows, for K s L, yo@) corresponds to the sequence obtained by
adding in, or'time-aliasing, the last L K values of y,(n) to the first L K poins. Thus,
the first L - .K points of yr(n) will not correspond to.v1(a), while the remaining 2K L
-
-
-
-
-
-
Sec.
9.4
Linear Convotution Using the
DFT
427
points witl be the same in both sequences. Clearly, if we choose K = L, yo@) and y,(n)
rvill be identical.
Most available routines for the efficient comPutation of thc DFf assume that the
length ot the sequence is a power of 2. In that case, K is chosen as the smallest power
of 2 that is larger than L. When K > L, the first L points of .r'r(rr) will be identical to
y1(n ), while the remaining K - L values will be zero.
We will now show that the K-point periodic convolution of h (n) and x(n) is identical to the linear convolution of the two functions it K = L. We note that
y,@)= )
,nE
Now, ft(m) is zero for m el0,M
so that we have the following.
-
1],
h(m)x(n-m)
(s.4.1)
-!
and.r(n
-
rn) is zero for (n
-m)
e [0,N
- l]'
O<n=M-l:
n
) h(m)x(n-m)
= Ia,r,r, + h(t)x(n-
yr(n)=
1) +
"'+
/r(n).t(0)
M=n=N-l:
n
m=n-M+l
= h(n
-
M + 7)x(M
- l\ + h(n -
M + Z)x(M
-
2)
+ '..+ h(n)x(0)
N=n=M+N-2:
-n-M+l
=h(n-M+t)x(M-t)
+.'.+/,(N+ l).t(n-N+ l)
(9.4.2)
On the other hand,
&-l
to?r\: m=O
) h,(nt)x,(n- m)
Since the sequence xo(n
-
(9.4'3)
m) is obtained by circularly shifting.t,(n), it follows that
[r,(r-m+K),
n+7=m=K-1
so that
lo@) = h"(o)x.(n) + "' + h"(n)x,(o) + h,(n + I ) r,,(K - l)
+ h,(n + 2)x,(K - 2) + "'+ h,(K - l)t,,(n + 7)
(9'4'5)
425
The Discrete Fouri€r Transform. Chapter 9
Now, if we use the fact that
we can easily verify
iszerofor N + M
h"(n) =
Ino),
O=n<lvl-l
lo.
othenvise
xo(n) =
(xtu\,
io.
0=n<N-l
thatyr(n)
-
(e.4.6)
otherwise
is exactly the same
I = n< K - l.
asy,(n) for 0 < z
<N+M
-
Z and
In sum, in order to use the DFT to perform the linear convotution of the M-point
sequence h(n) and the N-point sequence r(z), we augment both sequences with zeros
to form the K-point sequences ho(n) and x,(n), with K > M + IV - 1. We determine
the product of the corresponding DFTs, H,(k) and X,(t). Then
yr(n) = IDF'I IH,(k) X,(k)l
9.5
(e.4.7)
FAST FOUR]ER TRANSFORMS
As indicated earlier, one of the main reasons for the popularity of the DFT is the existence of effrcient algorithms for its computation. Recall that the DFT of the sequence
x(z) is given by
x (k) =
5',t,)*p [ -,'# *],
For convenience, let us denote
x(k) =
/Y-
)
I
x@)wff
,-0
which can be explicitly written
x(k)
exp[-l2r/Nlby
,
k=
k=
0,r,2,...,N-
I
(9.5.1)
[7,", so that
0,r,2,...,N- I
(e.s.2)
as
= r(0) wfl + x(t)wf, + x(z)wff
+
"'+ r(N -
1)W$-tr*
(e.s.3)
It follows from Equation (9.5.3) that the determination of each X(k) requires N complex multiplications and iv complex additions, where we have also included trivial multiplications by +1 or
io the count, in order to have a simple method for comparing
the computational complexity of different algorithms. Since we have to evaluate X(&)
for k = 0, 1, ..., N 1, it follows that the direct determination of the DFI requires N2
complex multiplications and N2 complex additions, which can become prohibitive for
large values of N. Thus, procedures that reduce the computational burden are of considerable interest. These procedures are known as fast Fourier-transform (FFT) algorithms. The basic idea in all of the approaches is to divide the given sequence into
subsequences of smaller length. we then combine these smaller DFfs suitably to
obtain the DFT of the original sequence.
In this section, we derive two versions of the FFT algorithm, assuming that the data
length N is a power of2, so that N is of the form N = 2P, where Pisa positive integer.
tj
-
Sec.
9.5
Fast Fourier Transrorms
That is, N = 2,4,8. 16,32, etc. Accordingly, the algorithms are referred to
as radix-2
algorithms.
9.6.1 The Decimation.in.TineAlgorithm
In the decimation-in-timd (DIT) algorithm, we divide x(n) into two subsequenoes, each
of length N/2, b,l grouping the even-indexed samples and the odd-indexed samples
together. We can then rvrite X (kl in Equation (9.5.3) as
x(t)= 2r@)W +)r(n)ffi
ncdd
Letting
r
(e.s.4)
= 2r in thc first sum and n : 2r + | in lhe second sum, we can write
,v/2- I
N/2-l
,t'(t)= ) xQr1w'z;k+ ) xQr+l)lYf'tttt
Nlz-l
: N/2-l
g(')wff*+yyi
)
> helw2ik
where
g(r) = x(2r)
n1r1"=x(2r
^rA
Note that for an N/2-point
+
1).
y(n), the DFI is given by
sequence
N/2-
Y(t)
(e.5.s)
= ,,-0
>
|
:r'f'
n=ll
yln)wiirz
,rnr*'r*
(e.s.6)
where the last step follows from the relation
w,,,,
:.*p -' ?in) =(*o[-i'#])'
f
=
**
(e.s.7)
Thus, Equation (9.5.5) can be written as
x(k) = G(k) + wfrH(k), t = 0,1,...,N - I
(e5.E)
where G(& ) and H (k) denote the N /Z-point DFTs of the sequences g(r) and h(r),
respectively. Since G(k) and H(ft) are periodlc with period N/2,we can write Equation (9.5.8) as
x(k)=G(t)+ wfrH(k), k=0,1,....]-,
.r(t * *)
\ 2l
: c&) + wfi+Ni211171
(e.s.e)
The steps involved in determining X(k) can be illustrated by drarving a signal-flow
graph conesponding to Equation (9.5.9). In the graph, we associate a zode with each
signal. The interrelalions bctween the nodes are indicated by drawing appropriate lines
430
The Dlscreto FouderTranslorm Chaptor g
(branches) with arrows pointing in the direction of the signal flow. Hence, each brranch
has an input sigral and an output signal. We associate a weight with each branch that
determines the transmittance between the input and output signals. When not indicated on the graph, the transmittance of any branch is ass,'med to be 1. The sigpal at
any node is the sum of the outPuts of all the branches entering the node. These concepts are illustrated in Figure 9.5.1, which shows the sigpal-flow graph for the computations involved in Equation (9.5.9) for a particular value of &. Figure 9.5.2 shows the
signal-flow graph for computing X(,t) for an eight-point sequence. As can be seen from
the graph, to determine X(k), we first compute the two four-point DFTs G(&) and
H(lc)of thesequencess(r) = [(0),x(2),.r(a),x(6)land/l(r) = [.r(1),r(3),r(5),x(7)]
and combine them appropriately.
We can determine the number of computations required to find X(/c) using this procedure. Each of the two DFTs requires (N/2)2 complex multiplications and (N/2)?
x(kl
G(kl
H(k)
hrfr+Nt2
'(*
* N\
1)
ngure 9S.f Sigpal-flov graph for
Equation (9.5.9)
x(0)
x(l)
x(2)
r(4)
x(3)
x(6)
I
x@\
x(5)
4
-ooint
2'
DFT
: (7)
Flgne
x(6)
x(7t
952
Flow graph for first stage of DIT algorithm for N =
E.
Sec.
9.5
Fasl Fourler Transforms
431
complex additions. Combining the two DFTs requires N complex multiplications and
N complex additions. Thus, the computation ot X(k) using Equation (9.5.8) requires
N + N2/2 complex additions and multiplications, compared to A'r complex multiplications and additions for direct computation.
Since N/2 is also even, we can consider using the same proceclure for determining
the N/2-point DFTs G(k) and H(k) by first determining the N/4-point DFTs of
appropriately chosen sequences and combining them. For N = 8. this involves dividing the sequence g(r) into the two sequences lr(0), r(a)) and {r(2). r(6)l and the
sequence /r (r) into lr(1), x(5)| and (.r(3),.r(7)|. The resulting computarions for finding G(/<) and H(&) are illustrated in Figure 9.5.3.
Clearly, this procedure can be continued by further subdividing the subsequences
until we get a set of two-point sequences. Figure 9.5.4 illustrates the computation of
the DFT of a two-point sequence y(n ) (y(O), y(t)1. The complere flow graph for the
computation of an eight-point DFT is shown in Figure 9.5.5.
A careful examination of the flow graph in the latter figure leads to several observations. First, the number of stages in the graph is 3, which equals logr8. In general,
:
r(0)
c (0)
(4)
c(l)
.r
l,
x(2)
QQI
r (6)
6(3)
(a)
x(l)
H (O)
lr"9r:
r (3)
H(l')
I
.r(5)
H
(2)
r(7)
H
(3t
(b)
Figure
953
Florv graph for compuration of four-poinr DFT.
432
The Dlscrete
FouriErffanslom
Chapter g
v(0)
Wz=
Y(l)
-l
ngrre9S.4 Flowgraphfor
computation of two-poitrt DFT.
.r(0)
x(o)
r(4)
.r0)
x(2)
x(21
r(6)
x(31
x(l)
:(5)
x(5)
:(3)
x(6)
x(7t
.r(7)
wfi
Flgure
955
wfr
wlt
C.omplete flow graph for computation of the DFT tor N = E,
the number of stages ii equal to log2N. Second, each stage of the computatiou requires
eight complex multiplications and additions. For general N, we require N complex multiplications and additions, leading to a total of N logrN operations.
The ordering of the input to the flow graph, which is 0,4, 2, 6, 1, 5, 3,7, is deter-
mined by bit revershg the natural numberc 0, 1,2,3,4,5,6,7. To obtain the bitreyersed order, we revenie the bits in the binary representation of the numbers in their
natural order and obtain their decimal equivalents, as illustrated in Table 9-1.
Finally, the procedure permits in-place computation; that is, the results of the computations at any stage can be stored in the same locations hs those of the hput to that
stage. To illustrate this, let us consider the computation of X(0) and X(4). Both of
these computations require the quantities G(0) and II(0) as inputs. Since G(0) and
I1(0) are nbt required for determining any other value of X(&), once X(0) and X( )
have been determined, they can be stored in the same locatinns as G(0) and II(0). Sim-
Sec.
9.5
Fast Fourier Transforms
43ri]
TABLE 91
Blt-rove?sed ordsr tor lY = 8
Iroclmal
Number
Elnary
BltRever8od
Reprosentatlon
Reprosentatlon
Dealmal
Equlvalent
000
m0
o
001
lm
4
2
6
010
010
0ll
ll0
100
001
I
101
101
5
il0
011
3
7
lll
111
ilarly, the locations of G(1) and H(1) can be used to stsre X(l) and X(5), and so on.
Thus, only 2N storage locations are needed to complete the computatioos.
0.6.2 The Decination-in-Frequenoy Algorithn
The decimation-in-frequency (DIF) algortthtt, is obtelned e;sentially by dividing the
outPut sequence X(&), rather than input sequence.r(lc); lnto smaller subsequences. To
derive this algorithm, we group the first N/2 points and the last N/2 poins of the
sequence r(n) together and write
x(k)=
'X | ,(n)rv# + \
,t-.1
lNlzl -
E0
*(fiwff
n- N/2
=r2' *rrr* + wy'o''['
,(,.I)*f
(e.s.1o)
A comparison with Equation (9.5.6) shows that evetr though the two sums in the right
side of Equation (9.5.10) are taken over N/2 values of n, they do no.t represent DFTs.
WecancombinethetwotermsinEqristlun(9.5.10)bynotingthatWff/z a (-1)r'toget
x(k)
=',j*' [,.,
+ (- r)tx(z
- l)]*,f
(e.s.1l)
Let
g(n) =
x(n)- r(r
. #)
and
h(n) =
where0<n<(N/2)-1.
[,r,
-,(^
. #)]*u
(e.5.12)
4U
The Discrete Fourier
For k even. we can set k = 2r and write'Equation (9.5.11)
(,\/2,-t
Transrorm
Chapter 9
as
(N/2)-l
(e.s.13)
Similarly, setting k = 2r
+I
x(Zr + r)
gives the expression for odd values of &:
='"3-'
hlnywr;,
='"3-'
h(n)wi{,,
(9.5.14)
Equations (9.5.13) and (9.5.14) represent the (N/2)-point DFTs of the sequences G(t)
and H(k). respectively. Thus, the computation of X(&) involves first forming the
sequences g(n) and lr(n) and then computing their DFTs to obtain the even and odd
values of X(t ). This is illustrated in Figure 9.5.6 for the case where N
8. From the
figure, we see that c(0) = x(0), c(1) = x(2), G(2) = x(4), G(3) = x(6), H(0) =
x(1), H(r) = X(3), H(2) = x(5), and H(3) = v171.
We can proceed to determine the two (N/2)-point DFTs G(t) and H(&) by computing the even and odd values separately using a similar procedure. That is, we form
the sequences
:
sln)
=r(r). s(, . #)
x(0)
x t2l
x(4r
.t(3)
x(6)
,\'(4)
x(r)
r(5)
x(3\
(6)
x(s)
x(7)
x(7\
-r
Flgure
95.6
Firsi stage of DIF graph.
Sec.
9.5
Fasl Fourier Transforms
Bz@)
=
435
[et"r
-
s(^
* l)fwp,,
(e._s.
rs)
and
h,(n) = n@) + h(n +
hzb)
N\
4)
N
=lnat - n(,.t!)fwn,,
4
(e.s.t6)
Then the (N/4)-point DFTs, G, (/<), G2(k) and Ht(k), H2&\ correspond to the even
and odd values of G(t) and H(k), respectively, as shown in Figure 9.5.? for N = 8.
We can continue this procedure until we have a set of two-point sequenoes. which,
as can be seen from Figure 9.5.4, are implemented by adding and subtracting the input
values. Figure 9.5.8 shows the complete flow graph for the computation of an eight-
.
8(0)
Glo)
8(l )
Gl2l = X14,
sQ)
;r
6(
wfrrz= wfl
,r(0)
l).
X(l)
-
116,
8(3)
Gt-t1
n(0)
ttit= xrrl
,r
,'(l)
)
,l(l)
h(2't
;l
v,!rt=
llll)-
Xt3l
ll(31=
X (71
w.tl
/r'(l)
h(3)
(b)
Flgure 9J.7
N=8.
= .tr|5,
Flow graph for the A//4-point DFTs of G(&
)
and
H(k),
The Discrete Fourler
436
Translorm Chapier 9
.r(0)
x(0)
.t(l)
x(41
r (1,
x(2)
.t(3)
x(61
,r(4)
x0)
r(5)
x(5)
r(0)
x(3)
r(71
-l
-l
Flgure
95.t
-I
x (7)
Complele flow graph for DIF algorithm, N = 8.
point DFT. As can be seen from the figure, the input in this case is in its natural order,
and the output is in bit-reversed order. However, the other observations made in reference to the DIT algorithm, such as the number of comPutations, and the in-place
nature of lhe computations apply to the DIF algorithm also. We can modify the sigpalflow graph of Figure 9.5.7 to get a DIF algorithm in which the input is in scrambled
(bit-reversed) order and the output is in the natural order. We can also obtain a DIT
algorithm for which the input is in the natural order. In both cases, we can modify the
graphs to give an algorithm in which both the input and the ouput are in their natural
order. However, in this case, the in-place ProPerty of the algorithm will no longer hold.
Finally, as noted earlier (see Equation (9.3.3)), the FFT algorithm can be used to find
the IDFT in an efficient manner.
.$ SPECTHAL ESTIMATION OF ANALOG
SIGNALS USING THE DFT
The DFT represents a transformation of a finite-length discrete+ime signal r(n) into
the frequency domain and is similar to the other frequency-domain transforms that we
have discussed. with some significant differences. As we have seen, however, for analog signals r,(r), we can consider the DFT as an approximation to the continuous-time
Fourier transform X"(r,r). It is therefore of interest to study how closely the DFT
approximates the true spectrum of the signal.
Sec.
9.6
Spectral Estimation of Analog Signals Using the DFT
As noted earlier, the first step in obtaining the DFT of signal .r,(r) is to convert it
into a discrete-time signal r"(r) by sampling at a uniform rate. The process of sampling,
as we saw, can be modeled by multiplying the signal .r"(l) by the impulse train
pr(t)=
j
r1r-nr1
nE -c
so that we have
r"O = x,(t)pr(r)
The corresponding Fourier transform is obtained from Equation
x,(.) =
ts
X o(a
+ mott)
(e.6.1)
(1
.5.12):
(e.6.2)
These steps and the others involved in obtaining the DFT of the signal r,(r) are illustrated in Figure 9.6.1. The figures on the left correspond to the time functions, and the
figures on the right correspond to their Fourier transforms. Figure 9.6.1(a) shows a typical analog signal that is multiplied by the impulse sequence shown in Figure 9.6.1(b)
to leld the sampled signal of Figure 9.6.1(c). The Fourier transform of the impulse
sequeucepr(t), also shown in Figure 9.6.1(b), is a sequence of impulses ofstrength
in the frequency domain, with spacing o,. The spectrum of the sampled signal is the
convolution of the transform-domain functions in Figures 9.6.1(a) and 9.6.1(b) and is
thus an aliased version of the spectrum of the analog signal, as shorvn in Figure 9.6,1(c).
Thus, the spectrum of the sampled signal is a periodic repetition, with period o", of the
spectrum of the analog signal .r,(l).
If the signal x,(t) is band-limited, we can avoid aliasing errors by sampling at a rate
that is above the Nyquist rate. If the signal is not band limited, aliasing effects cannot
be avoided. They can, however, be minimized by choosing the sampling rate to be the
maximum feasible. In many applications, it is usual to low-pass filter the analog signal
prior to sampling in order to minimize aliasing errors.
The second step in the procedure is to truncate the sampled signal by multiplying
by the window function o(l). The length of the data window Io is related to the number of data points N and sampling interval by
1/I
I
To: NT
(e.6.3)
Figure 9.5.1(d) shows the rectangular window function
*-(,)
=f''
[0,
-I='"'- !'
(e.6.4)
otherwise
The shift of T/2 from the origin is introduced in order to avoid having 61n12 samples at
points of discontinuity of the window function. The Fourier transform is
wx(o) =
,o'E{*ur[-,,rrPl
(e.6.s)
@
(a,
wp
(l)
I wRk,J)
^Tt
2
I
(d)
I
X,(o) o Ws(ul
@
(e)
@
(f)
al
z,
@
(s)
Figure 9.6.1 Discrete Fourier transform of an analog signal. (Adapted
witl: permission from E. Oran Brigham, The Fouriet Transform, PrenticeHall. 1987.)
'r38
I
S€c.
9.6
Sp€ctral Estimation ol Analog Signals Using ths
DFT
/t39
and Figure 9.6.1(e) shows the truncated sampled function. The corresponding Fourier
transform is obtained as the convolution of the two transforms X,(o) and Xr(to)- The
effect of this convolution is to introduce a ripple into the sPectrum.
The finat step is to sample the spectrum at equally spaced points in the frequency
domain, Since the number of frequency points in the range 0 < (l, to, is equal to the
number of data points N, the spacing between frequency samples is to,/N, or equivalenlly,2n/Tn, as cdn be seen by using Equation (9.6.3). Just as we assumed that the
sampted sigral in the time domain could be modeled as the modulation (multiplication) of the analog signal x,(t) by the impulse train pr(t), the sampling operation in the
(
frequency domain can be modeled as the multiplication
X"(.) * Wr(r) by the impulse train in the frequency domain:
Pr"ki
=';
"2_r(,
-,,?)
of the transform
(e.6.6)
Note that the inverse transform of p7 (r,r) is also an impulse train, as shown in Figure 9.5.1(f):
pr"(t)=
i
m- -r
uA -mTo)
(e.6.7)
Since multiplication in the frequency domain corresponds to convolution in the time
domain, the sampling operation in the frequency domain yields thc convolution of the
signal .r,(t)arr(t) and the impulse train p, (t). The result, as shown in Figure 9.6.1(9)'
isltre pitoaG exrension of the signal x,(r)roro), with period il,. This result also follows from the symmetry between time-domain and frequency-domain operations, from
which it can be expected that sampling in the frequency domain causes aliasing io the
time domain. This is a restatement of our earlier conclusion that in operations involving the DFT, the original sequence is replaced by its periodic extension.
As can be seen from Figure 9.6.1, for a general analog signal .r,,(t)' the sPectrum as
obtained by the DFT is somewhat different from the true spectrum X"(ro). There are
two principal sources of error introduced in the process of determining the DFT of
-r, (). The frrst, of course, is the aliasing error introduced by sampling. As discussed
earlier, we can reduce aliasing errors either by increasing the sampling rate or by pre-
filteriog the signal to eliminate its high-frequency comPonents.
The second source of error is the windowing operation, which is equivalent to convolving the spectrum of the sampled sigral with the Fourier transform of the window
signat. Unfortunately, this introduces ripples into the spectrum, due to the convolution
operation causing the signat component in r,(r) at any frequency to be spread over, or
to laa& into, othei frequencies. For there to be no leakage, the Fourier transform of the
window function must be a delta function. This corresponds to a window function that
is constant for all time, which implies no windowing. Thus, rvindowing necessarily
causes leakage. We can seek to minimize leakage by choosing a window function whose
Fourier tranaform is as close to a delta function as possible. The rectangular window
function is not generally used, since it does not approximate a delta function very well.
For the rectangular window defined as
MO
The Discrete Fourier
_2r
NN
2r
Figure
9.62
Transfonn
Chapter 9
i
Magnitude spectrum of a rectangular window.
,_(,)={l hffiJ-,
(e.6.8)
the frequency response is
r,,n(o) = .*o
-,n
[
(ry;!]
#rU
(e.6.e)
Figure 9.6.2 shows I W* (O) I , which consists of a main lobe extending from
O : -2n/N to2r./N and a set of side lobes. The area under the side lobes, which is
a significant percentage of the area under the main lobe, contributes to the smearing
of the DFT spectrum.
It can be shown that window functions which taper smoothly to zero at both ends
give much better results. For these windows, the area under the side lobes is a much
smaller percentage of the area under the main lobes. An example is the Hamming win-
dow, defined
as
wnb) = 0.54 -
0.46cosn3,
o
<n<N- I
(e.5.10)
Figure 9.6.3(a) compares the rectangular and Hamming windows. Figures 9.6.3(b) and
9.6.3(c) show the magnitude spectra of the rectangular and Hamming windows,
respectively. These are conventionally plotted in units of decibels (dB). As can be seen
from the figure, whereas the rectangular window has a narrower main lobe than the
Hamming window, the attenuation of the side lobes is much higher with the Hamming
window.
A factor that has to be considered is the frequency resohttion, which refers to the
spacing between samples in the frequency domain. If the frequency resolution is too
low, the frequency samples may be too far apart, and we may miss critical information in the spectrum. For example, we may assume that a single peak exists at a frequency where there actually are two closely spaced peaks in the spectrum. The
frequenry resolution is
Sec.
9.6
Spectral Estimation of Analog Signals Using the DFT
rrlrl
I
(,t
0tt
I
l.rrr rrrrng
0(,
0.4
0.:
l
(a)
0
-t0
=
-40
-60
o
o
-80
-
100
0
-20
=
G
_40
s
-@
o
o
-80
-
100
(c)
Figure 9.6.3 Comparison of rectangular and Hamming windows. (a)
Time functions. (b) Spcctrum of rectangular windorv. (c) Spectrum of
Hamming windorv.
441
42
The Dlscrete Fourlor
Aro =
Translorm Chapter I
o,
_2n _ Ztt
NNTTO
(e.6.11)
where Iu refers to the length of the data window. It is clear ftom Equation (9.6.11) that,
to improve the frequency resolution, ive have to use a longer data record, If the record
length is fixed and we need a higher resolution in the spectrum, we can consider
padding the data sequence with zeros, thereby increasing the number of sanples from
N to some new value No > N. This is equivalent to using a window of longer duration
T, > To on the modified sigral now defined as
,,,, =
0<r<fo
(e.6.t2)
To<t=Tl
{i,,','
f,'ra'nple 0.6.1
Suppose we want to use the DFT to find the spectrum of an analog siEnal that has been
prefiltered by passing it through a low-pass filter with a cutoff of 10 kHz The desired frequency resolution is less than 0.1 flz.
The sampling theorem gives the minimum sampling frequency for this signal as/, = 20
kHz, so that
I<
0.05 ms
The duration of the data window can be determined from the desired frequency resolution A/as
1
To
=
,=
10s
from which it follows that
x
=!>zx
rG
Assuming that we want to use a radix-2 FFT routine, we chooe N to b Xi\l$
which is the smallest power of 2 satisfyitrg the constraint on N. If we chmse /" =
fo must be chosen to be 13.10?2 s.
(= 2t\.
n
kHz.
Ere'nple 0.6.2
In tbis example, we illustrate the use of the DFT in frnding the Fourier spoctruo of analog signals. Let us consider the sigral
r"(t) =
go5400i I
Since the signal consists of a single frequenc?, is continuous-time Fourier transfora is a
pair of 6 functions occurring at !2N HzFigure 9.6.4 shows the magnitude of the DFT spectrum X(/<) of the signal for data
lengths of 32, 64, and 128 samples obtained by using a reaangular window. The sigtal was
sampled at a rate of 2klfz. which is considerably higher than the Nyquist rate of 4fl) [Iz
As can be seen from the figure, the DFT spectrum erhibits two peaks in each case. If we
let &, denote the location of the first peak, the gecond peak
at N
&e itr all cas€&
This is to be expected, since
= X(N *). The aualog frequeacies conrsponding
to the two peaks can be determined to be /o = lkpTlN.
X(-t)
-
eun
-
S€c.
9.6
lx(r)
Spectral Estimaiion of Analog Slgnals Using the DFT
443
r
l5
(a)
t2
I
X(&)
t0
|
E
6
4
2
0
m
tx(r)
|
l5
l0
5
0
60
80
t00
(c)
Ilgure 9.6.4 DFT spectrum of analog signal ,r,(t) using rectangular window. (a)N = 32. (b)N = 6a. (c) N = 128.
444
The Discrete Fourier
.
Transform Chaptet I
Figure 9.6.5 shows lhe results of using a Hamming window on the sampled signal for
data lengths of 32,64,and 128 samples. The DFT spectrum again exhibits two peaks at the
same locations as before.
4
3.5
rx(&)r
3
2.5
,,
1.5
I
0.5
0
t0
l5
20
25
(a)
7
6
r
x(r)
r
5
4
3
2
I
0E
o
(b)
t2
r
x(r)
|
t0
8
6
4
2
or0
(c)
9.65 DFT spectrum of analog signal .r,(t) using Hamming win(a)
N = 32. (b)N = el. (c) N = 128.
dow.
Iigore
30
Sec.
9.7
Summary
445
With both lhe rectangular and Hamming windorvs. the first pe lk occurs at k, = 3. (r.
and l3 for N = 32. 64, and 128 sarnples, respcctivelv. Thesc ctr.rcspond to analog trequencies of 187'5 Hz. 187.5 Hz. and 190.0625 Hz. Thus. as rhe number of data samples
increases. the peak moves closer to the actual analog frequencr. Note thal the peaks
become sharper as N (and hence the resolution in th.: digital [rcquencv domain) increases.
The figures also show thal the spectrum obtained using the Hamming window is some-
what smoother than that resulting from the rectangular window.
Suppose we add anolher frequency to our analog signal. so lhilt the signal is norv
.t,,(r) = cosz[00n, + cos440Tr
To resolve the two sinusoids in the signal, the frequency resolurion .[/must be less than
20 Hz. The duration of the data window, r0, must therelore be chosen to be greater rhan
1/20 s. If the sampling rale is 2 kHz, the number N o[ discrere-tirne samples needed to
resolve the two frequencies must be chosen to be larger than l(X).
Figurc 9.6.6 shows the DFT spectrum of the signal for <Iata lenr:rhs of 61, l2g, and 2s6
samples using a rectangular window, while Figure 9.6.7 shorvs rhc corresponding results
obtained using a Hamming rvindorv. with both rvindorvs, lhc 6-l-point DFT is unable to
resolve thc two peaks. For a rvindorv lengrh of l2ti, there are two Iaree values of
lx(k)
at values of k = 13 and 14. The corresponding frequencies in thc anllog domain are equal
lo 203-125 Hz and 218.75 Hz, respectively. Thus, even though thc rrvo lrequencies do not
appear as peaks in the DFf spectrum. it is neverthelcss possibL: t(, identify ihem.
For N = 256, there are lrvo clearly identifiable peaks in the spcclrum at k = 26 and 28.
These again corrcspond to 20-3.125 Hz and 218.75 Hz irr the :rnalor Ircqucncy domain.
|
MMARY
o
The discrete Fourier transform (DFT) of lhe finite-lenglh scc;uence .r(n) of lengrh
N is defined as
x(k)
x(n)wi!
where
,, =.'o[-i?-]
r
The inverse discrete Fourier transform
1
(IDFI)
lv_
is defined irv
|
.'tr) =
xtr)w;'a
,,r,],
The DFT oI an N-point sequcnce is related to its Z-transfornt
:rs
X(k) = X(r)1.-u.l
The sequence X(k), k = 0, l, 2. ..., N - l. is pcriodic wilh pcrrotl N. The sequence
x(n ) obtained by determining the IDFT of X(k) is also periodic wirh period N.
46
The Discrete Fourler
Translorm
Chapter 9
20
IE
X(&)
I
t6
|
l4
t2
l0
E
6
4
2
0
30
?5
r
x(e)
r
20
l5
--fr
t0
5
0
II
. l\,
80
50
45
r
x(*)
q
|
35
30
25
zo
t5
l0
5
0
0
(c)
9.66
DFT spectrum of analog signal
(a)
N = 6a. (b)N = 128. (c) N: 256.
dow.
Iigure
.rD
(,
) using reciangular win-
tn
Sec.
9.7
47
Summary
20
I8
I
X(r)
|
t6
l4
t2
t0
8
6
4
30
25
I
X(l)
r
20
t5
lo
5
0
50
45
rx(r)
r
.10
3s
30
25
20
t5
t0
5
0
lm
t50
(c)
Iigure 9.6.7 DFT spectrum of analog signal:r(l) using Hamming win'
dow. (a) N = 6a. (b)N = 128. (c) N = 256.
250
k
444
The Discrete Founer
Translorm
Chapter 9
In all operations involving the DFT and the IDFT, the sequence.r(n ) is effectively
replaced by its periodic extension rp(n ).
X(k) is equal to Nao, where a* is the coefficient of the discrete-time Fourier-series
representation of. x r(n),
o The properties of the DFT are similar to those of the other Fourier transforms, with
some significant differences. In particular, the DFT performs cyclic or periodic convolution instead of the linear convolution needed for the analysis of LTI systems.
. To perform a linear convolution of an N-point sequenc€ with an M-point sequence,
the sequences must be padded with zeros so that both are of length N + M - L.
. Algorithms for efficient and fast machine computation of the DFT are known as fast
Fourier-transform (FFT) algorithms.
o For sequences whose length is an integer power of 2, the most commonly used FFT
algorithms are the decimation-in-time (DIT) and decimation-in-frequency (DIF)
algorithms.
. For in-Place computation using either the DI'l' or the DIF algorithm, either the
input or the output must be in bit-reversed order.
o The DFT provides a convenient method for the approximate determination of the
sPectra of analog signals. Care must be taken, however, to minimize errors caused
by sampling and windowing the analog signal to obtain a finite-length discretetime sequence.
r Aliasing errors can be reduced by choosing a higher sampling rate or by prefiltering the analog signal. Windowing errors can be reduced by choosing a window function that tapers smoothly to zero at both ends.
r The sPectral resolution in the analog domain is directly proportional to the data length.
'
.
9.8
CHECKLIST OF IMPORTANT TERMS
Allaslng
Analog specirum
Blt-rcversed ordor
I)eclmaton-ln-trequency algorlthm
Declmadon-ln-tme algorlthm
Dlscrete Fourler translorm (DFf)
Eror reducflon
Fast Fouder transform (FFT)
ln-placa computadon
9.1. Compute the DFT of the following N-point
tet.trzl = Il'
t0,
(b) r(a) = (- lf
lnveree dlscrete Fouder tranetom (lDFf)
Llnear conyolutlon
Perlodlc conYolutlon
Perlodlclty ol DFT and IDFT
Prefllterlng
Spoctral reaoluuon
Wlndowlng
Zero paddlng
sequences:
n= no' 0<nocrv-l
otherwise
Sec.
9.9
449
Problems
(c) x(n) =
t.
{o
I
n even
orherwise
92. Show that if .r(n) is a real sequence, X(/V - t) = X*(t).
93. Let .r(a) be an N-point sequence with DFf X(k). Find the DFI of the followiog
sequences in term
(B) y,(z)
=
of X(& ):
I,(;),
lo,
\
reven
n odd
(b) vz(n) =.r(N-n - l).
(c) )r(/,) = r(zn), 0=x
0<n<rv<N- I
(r(nl.
' 0<nsN-l
(d)yo(a) =
N-n=2N_l
tr,
I
9.4. Letr(n) be a real eight-point-sequence, and let
rtr)
=
(r(n\,
O<n<7
[..1n - 8), 8=z<15
Find Y(k ), given that
x(0) = 1' x(t) =
1 + 2j;
x(2)= I -
jl;
x(3) =
1
+ jt:
andX(4\1 =2-
9.5. (e) Use the DFT to find the periodic convolution of the following sequences:
(l) r(a) = ll, -1, -1, l, -1, 1l and&(n) = 11,2,3,3,2,t|
(ll) :(n) = lr, -2, -1,1l and ft(n) = (1, 0, 0, 1[
(b) Verify your results using any mathematical software package.
9.5. Repeat Problem 9.5 for the linear convolution of the sequences in the Problem.
9.7. Le,l X(O) denote the Fourier transform of the sequence r(n) = (1/3)nu(n), atrd lety(n)
denote an eight-point sequence such that its DFT, (k), corresponds to eight equally spaced
samples of X(O). That is,
Y@=x(+k) o=0,1,"?
9.&
What is y(a)?
Derive Parseval's relation for the DFT:
/v-l
,), lrtr)l'
I N-l
=
.)*
lxttl l'
^,
9.9. Suppose we want to evaluate the discrete-time Fourier transform of an N-point sequence
r(n) at M equally spaced points in the range [0,2n]. Explain how we can use the DFT to
do this if (a) M > N aad (b) M < N.
9.10. Let.r(a) be an N-point sequence. It is desired to find 12E equally spaced samPles of the
spectrum X(O) in the range 7r/16< O= 15n/16, using a radix-2 FFT algorithm'
Describe a procedure for doing so if (i) N = 1000, (ii) N = 120.
9,11. Suppose we want to evaluate the DFT of an N-point sequence .r(n) using a hardware
processor that can only do M-point FFTs, where M is an integer multiple of N. Assuming
that additional facilities for storage, addition, or multiplication are available, show how
this can be done.
4il
Tho Discrote Fourler Translorm Chapbr g
9.tL
Given a six-point sequence r(z), we can seek to lind its DFT by suMividing it into three
two-point DFTs that can then be combined to give X(&). Draw a signal-flow graph lo evaluate X(k) using this procedure.
9.13. Draw a signal-flow graph for computing a nine-point DFT as the sum of three threePoint DFTs.
9.14 Analog data that has been prefiltered to 20 kHz must be spectrum analyzed to a resolution
of les than 0.25 Hz using a radix.2 algorithm. Determine the necessary data length Io.
9.15. For the analog signal in Problem 9.14, what is the frequency resolution if the sigral is sampled at 40 kHz to obtain l()96 samples?
9.16. The analog signal ro(l) of duration 24 s is sampled 8t the rate of 421E2and the DFTof
the resulting samples taken.
(a) What is the frequency resolution in the analog domain?
(b) What is the digital frequency spacing for the DFT taken?
(c) What is the highest analog frequency that does not ca"se aliasing?
9.17. The following represent the DFT values X(k) of an analog sigral r,(r) that has been sanpled to yield 16 samples:
x(o) = 2, 1113' = a - ia, x$) = -2, x(8) = - I, x1r r1 = -2. x(t3) = 4 + i4
All other values are ?sto.
(a) Find the corresponding r(a).
O)
(c)
9.I&
What is the digital freguency resolution?
Assuming lhat the sampling intewal is 0.25 s. find the analog frequency resolution.
What is the duration Io of the analog signal?
(d) For the sampling rate in part (c), what is the highest analog frequensl that can be present in ra(r) without causing aliasing?
(e) Find f0 to give an analog frequenca resolution that is twice that in part (c).
Given two real N.point sequences /(n) and g(a), we can find their DFTs simultaneously
by computing a single N-point DFf of the complex sequence
x(nl=l(n)+js(n)
We show how to do lhis in the following:
(a) kt ,,(n) be any real N-point sequence. Show that
Relr(k)l = H,(k) =
U@:
!12
tt--A
ImlH(&)l = H"(k) = U$l----!'U
G) frt t(n)
be purely imaginary. Show that
Relfr(t)l = H.(k)
ImUr(*)l = H,(kl
(c)
Use your resulrs in Parts (a) and (b) to show that
r(e)=1o111+ixb$)
G(kl=Xp(k)-ixp,(kl
:!)
S@.9.9
Problems
451
X".(&) and X""(k) represent the even and odd parts of Xr(k), the real part of
X(&), and X,"(&) and X,o(t) represent the even and odd parts of Xr(*) tl1s irnnginary
parr of x(t ).
9.M. (a) The signal x,(r) = 4cos(2nt/31is sampled at discrete insrants I to generate 32 points
where
of the sequence :(r). Find the DFT of the sequence if. T = 15t16, and plot the magpiftde
and phase of the sequence. Use a rectangular window in trnding the DFT.
@) Determine lhe Fourier transform of r,(t), and compare its magnitude atrd phase with
the results of Part (a).
(c) Repeat Parts (a) and (b) if = 0.1 s.
9.a). Repeat problem 9.19 with a Hamming window. Comment on your results.
I
92L We want to
ro(l) =
19 cos
determine the Fourier transform of the amplitude-modulated signal
12mnr) cos(100?rr) using the DFT. Choose an appropriate duration Io
over which the signal must be observed in order to clearly distinguish all the frequencies
in r,(t). Asume a sampling interval of = 0.4 ms.
I
(r)
Use a rectangular window, and lind the DFT of the sampled signal for N = 128,
256, and N = 512 samples.
@) Determine the Fourier transform of ro(l), and compare its magnitude and phase vith
the results of Part (a).
9.2L Repeat Problem 9.21 with a Hamming window. Comment on your resulb.
N=
Chapter '1 0
Design of Analog
and Digital Filters
10.1 JNTRODUCTION
Earlier we saw that when we apply an input to a system, it is modified or transformed
at the output. Typically, we would like to design the system such that it modities the
input in a specified manner. When the system is designed to remove certain unwanted
components of the input signal, it is usually referred to as a filter. When the unwanted
components are described in terms of their frequency content, the filters, as discussed
in Chapter 4, are said to be frequency selective. Although many applications require
only simple filters that can be designed using a brute-force method, the desigr of more
complicated filters requires the use of sophisticated techniques. In this chapter, we consider some techniques for the design of both continuous-time and discrete-time frequency-selective fi lters.
As noted in Chapter 4, an ideal frequency-selective filter passes certain frequencies
without any change and completely stops the other frequencies. The range of frequencies that are passed without attenuation is the passband ofthe filter, and the range
of frequencies that are not passed constitutes the stop band. Thus, for ideal continuous-time filters, the magnitude transfer function of the filter is given by lH(ro) | = 1 ;n
the passband ana la1<r)l = 0 in the stop band. Frequency-selective filters are classified as low-pass, high-pass, band-pass, or band-stop filters, depending on the band of
frequencies that either are passed through without attenuation or are completely
stopped. Figure 10.1.1 shows the characteristics of these filters.
Similar definitions carry over to discrete-time filters, with the distinction that the
frequenry range of interest in this case is 0 O < 2n, since If(O) is now a periodic
=
function with period 2zr. Figure 10.1.2 shows the discrete-time counterparts of the filters shown in Fig. 10.1.1.
452
Sec. 10.1
lntroduclion
I
453
fl(or)
I
I
,/(o)
I
0
(a)
I
lr(o)
I
I
lr(o)
I
(c)
Iigure
I
-2t
l0.l.l
fl(O)
(d)
Ideal conlinuous-time frequcncy-sclccrivc filters.
I
0a
-1
trl('})
-aOi
(a)
I
(
,,(O)
b)
lll(sl)
I
Ott
|
-?0t
(d)
(c)
Figure
|
l0.lJ
ldeal discrete+ime frequency-sclective filrers.
In practicc, we cannot obtain filter characteristics with abrupt transitions between
passbands and stop bands. as shown in Figures l0.l.l and 10.1.2. This can easily be s€en
by considering the impulse response of the ideal low-pass filter. which is noncausal and
hence not physically realizable. To obtain practical filters, we rherefore have to relax
4il
Dosign ol Analog and Dlgttal
Flltem
Chapter 10
IH(tt)I
I + 6r
I
-6r
52
0
tlgure 10.13 Specification for practical low-pass frlter.
our requirements on lH(rD) | (or I a(O) | ) in the passbands and stop bands, by permitting deviations from the ideal response, as well as specifying a transition band
between the passbands and stop bands. Thus, for a continuous-time low-pass filter, the
specifications can be of the form
I -E,s la1rll
<t +0,, lrl s.,
ln1,1l <
or,
l.u,
(10.1.1)
lsto
where ro, and ro, are the passband and stop band cutoff frequencies, respectively. The
range o[ frequencies between o, and o" is the transition band, depicted in Fig'
ure 10.1.3.
Often, the filter is specified to have a peak gain of unity. The corresponding speci'
fications for the filter frequency resPonse can be easily determined from Figure 10.1.3
by amplitude scaling by a factor of 1/(l + 6,). Specifications for discrete-time filters
are given in a similar manner as
llatoll-rl<6,,
lrr(o)l = s,,
lol =o,
o"= lol ="'
(10.1.2)
Given a set of specifications, filter desigr consisLs of obtaining an analytical approximation to the desired filter characteristics in the form of a frlter transfer function II(s)
for continuous-time systems and f/(e) for discrete-time systems. Once the transfer
function has been determined, we can obtain a realization of the filter, as discussed in
earlier chapters. We consider the design of two standard analog filters in Section 103
and examine digital frlter design in Section 10.4.
In our discussion of filter design, we confine ourselves to low-Pass filters' since, as
is shown in the next section, a tow-pass frlter can be converted to one of the other types
of frlters by using appropriate frequency transformations. Thus. given a specilication
for any other type of filter, we can convert this specification into an equivalent one for
Sec.
10.2
455
FrequencyTranslormations
a low-pass filter, obtain the corresponding transfer function H(.r) { or
vert the transfer function back into the desired range.
//(e)), and con-
1O.2 FREQUENCY TRANSFORMATIONS
As indicated before, frequency transformations are useful for converting a frequencyselective frlter from one type to another. For example, supPose u'e are given a continuous-time low-pass filter transfer function H(s) with a normalized cutoff frequencv of
unity. We now verify that the transformation which converts it into a low-pass filter
with a cutoff frequency ro. is
sn
= Jo.
(10.2.1)
where s' represents the transformed frequency variable. Since
(r)
u
= tl(o.
(10.2.2\
it
is clear that the frequency range 0 - lrl - 1 is mapped into the range
0 s lor'l s r,r.. Thus, H(st) represents a low-pass filter with a cutoff frequency of to..
More generally, the transformation
,':r4
(10.2.3)
or.
transforms a low-pass filter with a cutoff frequency r,l. to a low-pass filter with a cutoff
frequency of ro!. Simitarly, the transformation
-o-9c
(10.2.4)
s
transforms a normalized low-pass filter to a high-pass filter with a cutoff frequency of
o". This can be easily verified by noting that in this case we have
rrr'
=-
or
(10.2.s)
0.)
so that the point lorl = 1 corresponds to the point lrol = ,.. Also, the range lt'rl s 1
is mapped onto the ranges defined by."
lr,rol =
Next rve consider the transformation of the normalized low-pass filter to a band-pass
filter with lower and upper cutoff frequencies given by to,j, arld o]r.,, respectively. The
required transformation is given in terms of the bandwidth of the filter,
-
-.
BW=o..-ro.,
(10.2.6)
atrd the frequency,
tuu
ffo".-orn
(10.2,7)
ali
'= #(;;. p)
(10.2.8)
Design of Analog and Digiial
Filters
Chapter 10
This transformation maps ro = 0intothe points r,r0 = + co, and the segment lrrrl
the segments ro., > ltool = to.,.
Finally, the bimd-stop filtei is obtained through the transformation
J=
BW
slto
(10.2.e)
*(*.3)
where BW and o, are defined similarly to the way they were in the case of the bandpass filter. The transformations are summarized in Table 10-1.
TABLE 1Gl
Fr€quency translormEllons lrom low-pass analog llllor roapon8g.
Fllte, Typo
Transtormallon
Low
Pass
High
Pass
Band
Pass
s'
lic
lt
# (* . p),
BandStop
r,rs
= \6.J+
BW = o.,
l"*L;,*
,.l.*
-
ro.,
7/
We now consider similar transformations for the discrete-time problem. Thus, sup
pose that we are given a discrete-time low-pass 6lter with a cutoff ftequency O", and we
want to obtain a low-pass frlter with cutoff ftequency Of . The required transformation is
,'=fi
More conventionally, this is written
(10.2.10)
as
l,r\-l:z-l-o
tz")':l_az-,
(10.2.11)
By setting z = exp [iO] in the right side of Equation (10.2.11), it foltows that
.'=",p[i,un'##H*]
rrc.2.t2)
Thus, the transformation maps the unit circle in the z plane into the unit circle in the
et plane. The required value ofc can be determined by setting zr = expUOS] and
O = O. in Equation (10.2.11), lelding
+drri
a=
^_sin[(O"-oj)/2]
sinid
(10'2'13)
Sec. 10.3
Design ot Analog Filt€rs
457
.
TAELE 1G2
Frcquencry tsanslomadons Irom los-pa88 dlgtlEl llltor'rpsponso.
Fllter Type
Assoclatod Formul,ss
Transtormauon
. o.-o:
-', -'
srn
Low Pass
(z')-r =
i-*+
Ol =
High Pass
Band Pass
'
k+l'
-L;-a
desired cutoff frequency
oi-o_
-' 2-o!+ocos-'7cos
z-l + o
- r;;=i
2ok
O +0!
sin
c=-
O:. + o:,
. k-l
-r+=--
t*2
k+l
"
=
k=
t;. -
n:,
-'-2 O!-niir
cor-1t-
o
tan
f
o:,, o.t, = desired lower and upper
cuioff frequencics, respectively
,-, Band Stop
t_- k
-Lr-, I l+t
O.'. + O:,
-t- '" = ---o=n:.
cos -J
Oi-n!'' r.r
k = lan
2 tanf
cos
Transformations for converting a low-pass filter into a high-pass, band-pass, or bandstop filter can be similarly defined and are summarized in Table 10-2.
The design of practical filters starts with a prescribed set of specifications, such as those
given in Equation (10.1.1) or depicted in Figure 10.1.2. Whereas procedures are available for the design of several different analog filters, we consider the desigp of two
standard filters. namely, the Butterworth and Chebyshev filters. The Butterworth filter provides an approximation to a low-pass characteristic that approaches zero
smoothly. Tbe Chebyshev filter provides an approximation that oscillates in the passband, but monotonically decreases in the transition and stop bands.
458
Design ot Analog and Digital
Filters
Chapt€r 10
10.3.1 Ihe Butterworth Filter
The Butterworth filter is characterized by the magnitude function
la(.)1, =
ilrr-*
(10.3.1)
where N denotes the order of the filter. It is clear from this equation that the magnitude is a monotonically decreasing function of to, with its ma$mum vatue of uiity
occurring at ro 0. For o = l, the magnitude is equal to l/\/r, for all values of N.
Thus, the normalized Butterworth filter has a 3-dB cutoff frequency of unity.
Figure 10.3.1 shows a plot of the magnitude characteristic of this'filter as a function
of ro for various values of N. The parameter N determines how closely the Butterworth
characteristic approximates the ideal filter. clearly. the approximation improves as N
is increased.
The Butterworth approximation is called a maximally fiat approximation, since, for
given
a
N, the maximal number of derivatives of the magnitude function is zero at the
origin. In fact,.the first 2N 1 derivatives of lfflroyl are zero ar o, 0, ali we can see
by expanding la1rll in a power series about ro = 6:
:
:
-
la(r)l'
=
t-
i.il
+
lro+'v
-...
To obtain the filter transfer function l/(s), we use
I Ir(ro)
|
Ideal response
A/= 4
M-3
N-2
Lrv=t
t23
Flgure
103.1 Magnitude plot of normalized Butterworth filrer.
(10.3.2)
Sec.
10.3
Design of Analog Filters
11(s)
459
// t -
s) l, -
p=
' (10.3.3)
lH( ,u)l'
I
l+
i",?']'
so that
H(s)H(-s)
l,.m
=-
(10.3.4)
,.(i)
From Equation (10.3.4), it is clear rhat the poles of H(s) are givcn by the roots of
the equation
(;)- = -,
(10.35)
= expU(zk - l)"1,
k=
0,
l,2.... ,2N
-
1
It follows that the roots are given by
s*
= exp[j(2k + N -
By substituting sr
:
t)r/2N] t
= 0.1,2.....2N
-I
(103.5)
orr + 7to1, we can write the real and imaginary parts :ls
l2k+N-t It)\
cosl--r,.l2k-lrr.\
=""\-r-
or =
t/
,*
=
sin
l2k+N-l
l-ZN
\
(toj.7)
")
/2k-lt\
= cosl\-
N-rJ
As can be seen from Equation (10.3.6), Equation (10.3.5) has 2N roots spaced uniformly around the unit circle at intervals of n/2N radians. Since 2/< - 1 cannot be even,
it is clear that there are no roots on the 7ro axis, so that there are exactly N roots each
in the left and right half planes. Now, the poles and zeros of H(.s) are rhe mirror images
of the poles and zeros of H(-s). Thus. in order to get a stable transfer function, rve
simply associate the roots in the lcft half plane rvith H(s).
As an example. for N = 3, from Equation (10.3.6), the roots are located at
as shown in
[ "tlj.
l.zrl
exO[i:i:].
sr,
= exnll.
',
=.-nfilil. '. =.-n[,T].
Figure
10.3.2.
s, =
sz
= exp[lnl,
ss
=r
,
460r
Design ol Analog and Digital
Filtsrs
Chapt€r iO
I
I
xI
t
\
Flgure 1032 Roots of the
Butterworth polynomial for N =
l.
To get a stable transfer function, we choose as the poles of fl(s) the left-half plane
roots, so that
fl(s):
ls
-
explj?r /3ll[s
(10.3.8)
- exp[lzr]l[s - exp[ianl3]l
The denominator can be expanded to yield
H(s) =
I
Gr.
"lix"
(10.3.e)
.,' 1)
Table l0-3 lists the denominator of the Bunerworth transfer function in factored form
for values of N ranging from lY = I to N = 8. When these factors are multiplied, the
result is a polynomial of the form
s(s):
ansfl +
a,r-,C-r
* "'*
a,s
*I
(10.3.10)
:
These coefficients are listed in Table 104 for N = I to N 8.
To obtain a filter with 3-dB cutoff at toc, we replace s in II(s) by s/to.. The corresponding magnitude characteristic is
TABLE 10€
Eutlsrwoih polynotnlalo (tadored lorm)
I
s+1
2
s2
3
\6s + 1
(s2+s+f)(s+l)
4
(s2
5
6
7
E
+
+ 0.7653s + l)(s2 + l.B476s + l)
(s + l)(s2 + 0.6180r + 1)(r2 + l.6lEtu + l)
(.s2 + 0.5176s + 1)(s2 + V2s + l)(s2 + 1.931& + t)
(s + l)(s2 + 0.4450r + 1)(s2 + 1.2455s + l)(s2 + 1.8()22s + l)
(s2 + 0.3986s + l)(s'? + l.lllos + lxs2 + 1.6630r + l)(s2 + t.g62x +
l)
Sec.
10.3
Design ot Analog Filters
481
TABLE 10.4
Bullemorth polynomlal8
ar
a.
a,
as
c
I
\/i
1
2
2
I
2.613
3.414
2.613
1
3.?36
5.236
5.236
3.2%
1
3.864
7.4U
9.141
7.&4
3.W
4.494
10.103
14.6(b
14.6M
10.103
4.494
5.126
13.128
21.828
25.691
21.84E
r3.13E
1
I
5.126
la(.)l'=r*#Jil
(103.11)
I-et us now consider the design of a low-pass Butterworth filter that satisfies the following specifications:
la1.;l
>t-0,, l.l =,,
s
(10.3.12)
Ez, lrl ,.,
Since the Butterworth filter is defined by the parameters iV and o", we need fwo equations to determine these quantities. From the monotonic nature of the magpitude
respome, it is clear that the specifrcations are satisfied if we choose
la(ror)l
=l-Er
(10.3.13)
: t,
(10.3.14)
and
la1o,;l
Subatituting these relations into Equation (10.3.11) yields
('J.)*=(+)'-,
and
(**)"=#-,
Eliminating o. from these two equations and solving for N regults in
,-,[*ctrHbl
"=
*::
'L
l
(10.3.15)
..462..
Deslgn ol Analog and Digital Fllters.
.
Chapter tO
Since N must be an integer, we round up the value of ,itr' obtained frbm Equation
(10.3.15) to the nearest integer. This value of N can now be used in either Equatiol (103.13) or Equation (10.3.14) ro determine ro.. If ro. is determined ftom Equation
(10.3.13), the passband specifications are met exactly, whereas the stopband specifrcations are exceeded. But if we use Equation (10.3.14).to determine to". the reverse is
true. The steps in finding II(s) are summarized as follows:
1. Determine N from Equation (10.3.15), using the values of 6,, 5r, ror, and o,, and
round-up to the nearest integer.
2. Determine o., using either Equation (10.3.13) or Equation (10.3.14).
3. For the value of N calculated in Step l, determinp the denominator polynomial of
the normnlized Butterworth filter, using either Tdble 10-3 or Tabte 104 (for values
ofN < 8) or using Equation (10.3.8), and form t/(s).
4. Find the unnormalized transfer function by replacing s in H(s) found in Step 3 by
s/o.. The filter so obtained will have a dc gain of unity. If .some other dc gain is
desired, H(s) must be multiplied by the desired gain.
Erample
l0J.l
We will design Butterworth filter to have an attetruation of no more than .l dB for
lrol s 2000 radis and at least 15 dB for l,ol = SOOO rad/s. From rhe specilications
20log,o(l - Er): - I
and 20lo9,o6, = -15
:
that Er 0.10E7 and Ez = 0.1778. substituting these values into Equation (103.15) yields
a value of 2.6045 for /v. Thus we choose N to be 3 and obtain the normalized frlter from
Table 10-3 as
so
'1
H("):
s, + 2.a +
A+
1
of Equation (10.3.14) yields r,r. : 2826.8 radds.
The unnormalized filter is therefore equal to
Use
H(s) =
(s /2826.8)1
+
2(s /2826.8)2
+
2(s / 2826.8)
+t
_128r9!I_
sr + 2(2E26.8)s2 + 2(2826.8)2s + (2826.8)3
Figwe 103.3 shows a plot of the magnitude of the filter as a funciion of o. As can be seen
from the plot, the filter meets the spot-band specifications, and the passband specifications
are exceeded.
10.8.2 the Ghebyshev f ilter
The Butterworth filter provides a good approximation to the ideal low-pass characteristic for values of or near zero, but has a low faltoff rate in the transition band. we now
consider the chebyshev filter, which has ripples in the passband, but has a sharper cutoff in the transition band. Thus. for filters of the same order, the chebyshev filter has
Sec.
IH
10.3
463
Design of Analog Filters
(ull
6 artrad/s
Figure 10.3.3 Nlagnitude function
of the Butterrvorth filter of Example
10.3.1.
a smaller transition band than lhe Butterworth filter. Since the derivation of the
Chebyshev approximation is quite complicated, we do not give the details here, but
only present the steps needed to determine H(s) from the specifications.
The Chebyshev filter is bascd on Chebyshev cosine polynomials, defined as
cos(Ncos-to)' lrl s
= cosh (N cosh-t or), lr,rl ,
C,u(r) =
t
t
(10.3.16)
Chebyshev polynomials are also defined by the recursion formula
C,r(r) = 2oCn-,(or)
with Cs(to) =
1 and C1(<,r)
- Cr-z(.)
(10.3.17)
= 6.
The Chebyshev low-pass characteristic of order N is defined in terms of Cx(o) as
la(,)l'=;-.t-t e'zcfr(to)
(10.3.18)
To determine the behavior of this characteristic, we note thal for any N, the zeros ol'
C^,(ro) are located in the interval l.l = t. Further, for lol ' t. lCrloll < l, and for
l.l , t, lC"(r)l increases rapidly as lrl becomes large. It follorvs that in the interval l.ol - t, lf 1o1l'? oscillares about unity such that the maximum value is l and the
minimum is 1/(l + e'?). es l.ol increases, lg(r) l' approaches zero rapidly, thus providing an approximation to the ideal low-pass characteristic.
The magnitude characteristic corresponding to the Chebyshev filter is shown in
Figure 10.3.4. As can be seen from the figure, lH(r,r) | ripples between I and
l/!t + e2. Since Ci(l) = I for all N, it fotlows that for or = l.
lstrll
=*+
(r 0.3.1e)
o
(!
E
-4,
EO
(J
-EL
<G
CL
a,
.o
o)
E
q!-
oq)
3
o
()
c)
J()
all
GI
(,
oJ
!a
c
it6
2
I
J
-t
.i
c
a)
'E
-Ir
3
4il
Ea
SEc.
10.3
Design ol Analog Fitters
For large values of
,-that
is, values in the stop
lnr,)l
band-we can appr.ximare
-= --',1
E C,\' ( u,
: - 20 log,,, lH{t,r;
: 20 log e + 20 log Cr(or)
=
as
as
I
rt,lN,
For large o, Cn,(to) can be approximated by 2Nloss
|
(10 3'rlr)
The dB attenualion (or loss) from the value at r,r = 0 can thus hc wrirren
loss
lalr)
20
loge + 6(N
-
(10.3.21)
so thar we have
1) + 20N logo
(.'10.3.22)
Equations (10.3.19) and (10.3.22) can he used lo determine rhc rrvo parameters N
and e required for the chebyshev filter. The parameter e is dercrnrined by using the
passband specifications in Equarion (10.3.19). This value is rhen used in nquirion
(10.3.22), along with the stop-band specifications, to determine N. In order ro find
H(s), we introduce the parameter
F
:
(10.3.23)
fisinr,-'1
The poles of H(s), s, = o* -r f ro^, & : 0,
l, ..., N -
1, are givcn b1,
"- =,i"(?51)],i,r,o
,,
=.",(4;,)]
*,nu
(10.3.24)
It follows that the poles are locared on an ellipse in the s plane givcn by
ol rinh'g
ofi
"o.tr:B
=
'
(10.3.25)
The major semiaxis of the ellipse is on the lo axis, the minor senriaxis is on the o axis.
and the foci are at or + l, as shown in Figure 10.3.5. Ttre 3-dB cutolf frequency occurs
at the point wh.-re the ellipse intersects the it,l axis-that is, ar t,,r = cosh B.
It is clear from Equation (10.3.24) rhar the chebyshev poles are relared to the Burterworth poles of the same order. The relation between these poles is shown in Figure
10.3.6 for IV = 3 and can be uscd to determine the locations ol rhe chebyshev poles
geometrically. The corresponding H(s) is obtained from lhe lefr-half plane poles.
:
Elrenrple 1032
we consider the design of a chcbyshev filter to have an attenuariorr o[ ntr more than I dB
for lr,r | - l([0 rads/s and at leasi l0 dB for l, | = 50)0 railsls.
We will first normalize or, to I, so that to, = 5. From the pas:;hand specifications, rve
have. from Equarion ( 10.3. l9 )
466
Design ol Analog and Digital
Filters
Chapter 10
lct
coslt 6
fir
\
sinh P
o
\,
)
Flgure 103.5 Poles of the
Chebyshev filter.
i/f
,t'1
Q
:'t,
X
-x-x- 'n-lV
...' {
z1t
't
,i.
-:t,
t,.
t
,ii
I
'1
sinh P
l6t
J-x
,t/l
Buncrworth poles
Chebyshev poles
t.
lr
r!
\
i-l-+-,
Vt i
ilt
t
I
/(--_
Buttereorth
pole locus
-x-
Ilgure 103.6 Relation between the Chebyshev and Butterworth poles for
N:
3.
I
20loero;-;
It follows that e =
=
-l
0.509. From Equation 10.3.22.
l0 = 20 logro0.509 + 6(N
-
t) +
20N tog,65
so thal N = l.G)43. Thus. we use a value of N = 2. The parameter p can be determined
from Equation (103.23) to b€ equal to 0.714. To find the Chebyshev poles. we determine
the poles for the corresponding Butterworth filter of the same order and multiply the real
parts by sinh p and the imaginary parts by coshp. From Table l0-3. the poles of the nor.
malized Butterworth filter are given by
Sec.
10.3
Dssign ol Anatog Fitters
467
1_
@P
where r,rr
=
10C0.
\/,
I
lj
;
v2
The Chebyshev poles are. rhen.
, = -#
(sinho.7l4)
= _545.31 +
.,#(cosho.7t4)
j892.92
Hence,
H(r) =
(s +
545.31)'z
+
(892.92)2
--1
The corresponding filter with a dc gain of unity is given by
H(s; =
(s4s.3ly + (892.q2)2
(s + 545.31)2 + (8ct2.92)'z
The magnitude characteristic for this filter is shown in Figure 10.3.7.
I Hlti)
I
t.t2
I
qJ
(
kHz)
Ilgure 103.7 Magnitude
characteristic of the Chebyshev
filter of Example 10.3.2.
An approximation to the ideal low-pass characteristic, which, for a given order of
frlter has an even smaller transition band than the Chebyshev filrer. can be obtained in
terms of Jacobi elliptic sine functions. The resulting filter is called an elliptic filter. The
design of this filter is somewhat complicated and is not discussed here. We note, however, that the magnitude characteristic of the elliptic filter has ripples in both the passband and the stop band. Figure 10.3.8 shows a typical elliptic-filrer characteristic.
468
Design ot Analog and Digilal
I
H(utt
t
12
Filters
Chapter 10
ttlot l:
I
I
I
l;?
I
- .'
'ic
N odd
Figure
N
103.8 Magnitude characteristic of an elliptic
even
Jilter.
ITAL FILTERS
In recent years, digital filters have supplanted analog filters in many applications
because of their higher reliability. flexibility, and superior performance. The digital filter is designed to alter the spectral characteristics of a discrete-time input signal in a
specified manner, in much the same rvay as the analog filter does lor continuous-time
signals. The specifications for the digital filter are given in terms of the discrete-time
Fourier-transform variable o. and the design procedure consists of determining the discrete-time transfer function H(l) that meets these specifications. We refer to H(z) as
the digital filter.
In certain applications in which a continuous-time signal is to be filtered, the analog
filter is implemented as a digital filter for the reasons given. Such an implementation
involves an analog-to digital conversion of the continuous-time signal, to obtain a digital signal that is filtered using a digiral filter. The outpur of the digital filter is then converted back into a continuous-time signal by a digital-to-analog converter. In obtaining
this equivalent digital realization of an analog filter, the specifications for the analog
filter, which are in terms of the continuous-time Fourier-transform variable o, must be
transformed into an equivalent set of specifications in terms of the variable O.
As we saw earlier. digital sysrems (and, hence, digital filters) can bc either FIR or
IIR filters. The FIR digital filter. of course, has no counterpart in the analog domain.
However, as we saw in previous sections, there are several well-established techniques
for designing IIR filters. It would appear reasonable, therefore, to try and use these
techniques for the design of IIR digital filters. In the next section, we discuss two commonly used methods for designing IIR digital filters based on analog-filter design techniques. For reasons discussed in the previous section, we confine our discussions to the
design of low-pass filters. The procedure essentially involves converting the given digital-filter specifications to equivalent analog specifications, designing an analog filter
that mects these specifications, and finally, converting the analog-filter transfer function H.(s) into an equivalent discrete-time transfer function I/(a).
Sec.
10.4
Digital
Filters
469
10.4.1 Design of IIR Digital Filters Using
Impulse Invaria"ce
establishing an equivalence betrveen a discrete'
time system and a corresponding analog system is to require that the responses of the
two systems to a test input match in a certain sense. To obtain a meaningful match, we
assume that the output y,(t) of the continuous-time system is sampled at an appropriate rate T. We can then require that the sampled output y,(nf) be equal to the output
y(n ) of the discrete-time system. If we now choose the test input as a unit imPulse, we
require that the impulse responses of the two systems be the same at the sampling
instants. so that
A fairly straightfonvard method for
(10.4.1)
h,(nT) = h(n)
The technique is thus referred to as impulse'invariant design.
It follows from Equation (10.4.1) and our discussions in Section 7.5 that the relation
between the digital frequency O and the analog frequency to undcr this equivalence is
given by Equation (7.5.10),
o
(t0.4.2)
'=i
Equation (10.4.2) can be used to convert the digital-Iilter specifications to equivalent
analog-filter specifications. Once the analog filter H,(s) is dclcrrnined, we can obtain
the digitat filter H(z) by finding the sampled impulse response h,(nT) and taking its
Z-transform. In most casesi we can go directly from H,(s) to H(z) by expanding H.(s)
in partial fractions and determining the corresponding Z-transform of each term from
a table of transforms, as shown in Table 10-5. The steps can be summarized as follows:
1. From the specified passband and stop-band cutoff frequcncics. Q and O" respectively, determine the equivalent analog frequencies, oo and r,r,.
2. Determine the analog transfer function H,(s), using the techniques of Section 10.3.
3. Expand H,(s) in partial fractions, and determine the Z-transform of each term from
a table of transforms. Combine the terms to obtain fI(z).
fairly straightforward to use, it suffers from
one disadvantage, namely, that we are in essence obtaining a discrcte-time system from
a continuous-time system by the process of sampling. We recall that samPling introduces aliasing and that the frequency response corresponding to the sequence h,(n?')
is obtained from Equation (7.5.9) as
while the impulse-invariant technique
H(O) =
is
;_i. a"(a.2]
r)
(10.4.3)
so that
H(o) = ,ir"rn,
only
if
(10.4.4)
470
Oosign ol Analog and Digital
Filtors
Chapter 10
TABLE 1(}5
Laplaco bansrorms and thelr Z-transtorm equlvalents
Transtoh,
Laplaco
Z-Transtorm,
H14
H(s)
I
,
Tz
(.-tl,
s-
r'rS:-U
(z - l)'
2
sl
l-
z
s*a
z
I
G*"i'
(z
-
expl- aTl
Tz
expl- aTl
exp[- aTl)'1
-
1
I
l_
(b-o)\z - exp[-al]
(s+a)(s+b)
a
s2(s + a)
-
expl- bTl
Tz __
(z-
lF_ _(r_-Ip_1-o4)z
a1! - 9["*o-1-n7p
Tz exp[-
1
tr * ,l:
(z
-
aTl
exp[-aI])2
a2
s(s-+ljl
z-l
- -9e s2+-j
_
z2
z
z
z
-
z2
rrro'
-
-
[-aI]
aT expl- aTlz
(z expl-aTl)2
-
sinool
-
cosoo
I
I)
_
2z costl,oT + I
3
__.._.9o..
(s+a)2+(o;
exp
2z cosooT +
_z (z
s
-;- -_-;
t' *
z
exp[:rfl_fiqgsl_
all cos roo T + expl-?aTl
_
_ z' - z expl-rll "gryLl__
-z2 - 2z expl- oI] cosorl + expl- 2aTl
z2
s+d
(s+a):+(o3
H,(.)
=
-
2z expf-
0. lrl = T
7t
(10.4.s)
which is not the case with practical low-pass filters. Thus. the resulting digital filter does
not exactly meet the original design specifications.
It may appear that one way to reduce aliasing effects is to decrease the sampling
interval T. However, since the analog passband cutoff frequency is given by
?r..: Ar/ T, decreasing 7 has the effect of increasing <or. thereby increasing aliasing. It
follows, therefore, that the choice of r has no effect oir the performance of the digital
filter and can be chosen to be unity.
For implementing an analog filter as a digital filter, we can follow exactly the same
procedure as beforc, except that Step 1 is not required. since the specifications are now
Sec. 1 0.4
Digital Filters
471
given directly in the analog domain. From Equation (10.4.-l). rrhen rhe analog filter is
sufficiently band linrited. lhe corresponding digiral filter has a grin of l/I. which can
become extrenrely high for low values of L Gencrally. thereforc. the resulting transfer function H(t) is multiplied by 7'. The choice of I is usuall,"- determined by hardware considcrations. We illustrarc the procedure by the follorving cxample.
Example lo.4.l
Find the digital equivalent of the analog Butterworlh filter derived in Example 10.3,t using
the impu lse-inva rian t method.
From Example 10.3.1. with ro,. = 2826.E, thc filter transfer function is
H(s) =
.r,* + (2sr6.s).
si + 2(2826.E)srl"r'lrll
2826.8
= s + ZSlO.g
2826.8(.r
1.s
+
+
1413.4)+
0 s(2826.8F
+ (2448.1):
1413.4)2
15
+ lJl3..l)r fdd.tl'
We can determine lhe equivalent Z-transfer function from Tablc l().5 as
t
H(:) = 2826.8[.
-
_
- z2-
' sin(2A8.lf[l
,.,ii;;;;.(;44s.rr)
2-.
+ e-2826.8r I
zc-r{r'1'{r[cos(22148.17)
.r]
- "',.',0,
If the sampling interval I is assumed to be I ms, we get
- :',,- :l:*o"t'
+
n(z) = 2s:o.sf :".".
lz - 0.2433
I
(,.()5921
O.37422
which can be amplitude normalized as desired.
n=ample 1O.42
l.et
of a Butterworth lorv-pass digital filt,:r rhar mees the following specifications. The passband magnitude should be constanr to within 2 dB for frequencies helow 0.2rr radians, and the stop-band magnitudc in thc range 0.4n < 0 < n
should be lcss lhan -10 dB. Assume that the magnitude at O = () is normalized to unity.
With ()/ = 0.2rr and O. = 0.4n, since the Butterworth filtcr has a monotonic magnitude characteristic, it is clear that to meet thc specifications. wc rlrust have
us consider the design
20logro lH(0.21r 1l
or
= -2.
lfl(o.zrr
l
l'
-- to-o:
and
logrolr/(0.4r)l = -
or
lH1tt.+" 11t = 19-'
For the impulse-invarianl dcsign techniquc. we obtain the equiralent analog domain specifications by setting r,r = Of. rvith I = l. so that
20
10,
la"1o.zr'11'= to'o'
la"1o.ln;lr = to-'
For the Butterworth filtcr.
l1ltiu,tl:rvhere t,, uttd Nntust
Lrc tL:1,-'r t,tirt,rr.'l
I
I
r
ur
'/r"' )'
'
ito,rt li:c r,rrc1tlj..1i;,r;i,
li,
-r
ie
ltls t hc
l$o cqualions
472
Design ot Analog and Digttal
r*
(94)-
Fllters
Chapter 10
l'r,P,
r*l,o.o")*=,0
\ t'1. /
Solving for lV gives N = 1.918, so that we choose N : 2. With this value of ly, we can solve
for
of the last two equations. If we ,se the first equation, we just meet the
passband specifications, but more than meet the stop-band specifications, whereas if we
use the second equation, the rcverse is true. Assuming we use the first equation, we get
to. from either
ro.
=
0.7185 rads
The corresponding Butterworth filter is given by
I{,(s) =
o.5162
(s/o.)2+!21t1-,1
+t
sz+1.016s+0.5162
with impulse response
h,(t) =
1.01 exp [-0.5081] sin0.508r z
(r)
The impulse response of the digital filter obtained by sampling ft"() with
&(a) = 1.61 exp [-0.508r] sin0.50E n z(n)
I
=I
is
By taking the corresponding Z-rransform and normalizing so that the magnitude at O =
is unity, we obtain
O
H(z\ =
0.58542
zz-l.Ostz+01162
a ptot of lH(o)l for () in the range [0, rrl2]. For rhis parricutar
Figure 10.4.1 shows
example, the analog filter is sufficiently band limited, so that the effects of aliasing are not
noticeable. This is not true in general, however. one possibility in such a case is to choose
a higher value of N than is obtained from the specifications.
I,,(O)
I
I
0.89t
O (rad)
ngure f0Af Magpitude function
of the Butterworth desigp of
Example 10.4.2.
Sec.
10.4
Digiial Filters
473
10.4.2 IIR Desigrr Using the Bilinear Transforrnation
As slated earlier. digital-filter dcsign based on analoq fillers involvcs converting discrete-domain specifications into the analog domain. The impulse -invariant design does
this by using the lransformation
ot =
A/T
or equivalently.
z = exp [?ns]
We saw, however. that because of the nalure of this mapping, :rs discussed in Section
8.8, the impulse-invariant design leads to aliasing problems. ()nc approach to overcoming aliasing is to use a lransformation that maps the Z-domain onto a domain thal
is similar to the.r domain, in that the unit circle in the e plane ntaps into the vertical
axis in the new domain, the interior of the unit circle maps onto the open left half
plane, and the exterior of the circle maps onto the open right half plane. We can then
treat this new plane as if it were the analog domain and use standard techniques for
obtaining the equivalent analog filier. The specific transformation that we use is
2l-zl
Tl*r-'
'
(10.4.6)
or equivalently,
t + (T/Z\s
'
|-
{1o.4.7)
(T/z)s
where Tnis a parameter lhat can be chosen to be any convenient value. It can easily be
verified by setting Z = r-t exp[jO] that this transformation, which is referred to as the
bilinear transformation, does indeed satisfy the three requiremcnts lhat we mentioncd
earlier. We have
s=o+i_:?t#_ffi_L#
2
TI
For r < l, clearly, o
inary, with
>
7-12
.'
2
+i:
+ r2 + 2r cos0
TI
0, and for
(r)
=
r
. From
0
+ r2 + 2r cos ()
) l, we have o < 0. For r = l..r is purely imagsin O
2A
tan
Il+coso = 7'2
This relationship is plotted in Figure 10.4.2.
The procedure for obtaining the digital filter
I
2r sin
//(i)
can be sunrnrarized as follows:
the given digital-tilter spc-cifications. find thc correspon(lin!l analog-filter specifications by using the relation
Design ol Analog and Oigital
474
Filters
Chapter 10
Flgure 10,42 Relation between O
and ro under lhe bilinear
transformation.
20
a=
ltanl
(10.4.8)
where f can be chosen arbitrarily, e.9., T = 2.
2. Find the corresponding analog-filter function H.(s). Then find the equivalent dig-
ital filter
as
H(z) = H,(s) l,=;l;::
(r0.4.e)
The following example illustrates the use of the bilinear transform ln digital filter design.
Exampte 10.4.3
We consider the problem of Example 10.4.2, but will now obtain a Butterworth design
using the bilinear iransform method. With f = 2, we determine the corresponding passband and stop-band cutoff frequencies in the analog domain as
,,)p=
$nY-
= 0.3249
."=tanT=0.7265
To meet the specifications, we now set
1*/9.142\*:16-.,
\ro./
,*/o7265),N_1s_r
\ro"/
and solve for N to get N
=
1.695. Choosing
(,,).
N=
2 and determining o. as before gives
= 0'4195
The corresponding analog filter is
at'l
We can now obtain the digital filter
= 7;fl0re3,6*
qn6
H(r) with gain at O = 0 normalized to be unity
as
Sec.
10.4
Oigital Fillers
475
H(z) = H,,(s)i.-l.
i=
+ l):
- 2.1712 + 1.7 16
{).1355(3
_r
Figure 10.4.3 shorvs the magnitude characteristic of the digital liltcr for 0 in the range
lo,rnl.
I,,(O)
I
0.4n
0.f
zr
Figure 10.4,-1 Magnitude
charactcristic rrf the filter [or
Exanrplc 10..1.3 using the bilinear
method.
f,! (rad)
10.4.3 FIR Filter Design
In our earlier discussions, we noted that it is desirable that a filtcr have a linear phase
characteristic. Although an IIR digital filter does not in gencrirl havc a linear phase,
we can obtain such a characteristic with a FIR digital filter. In rhis section, we consider
a technique for the design of FIR digital filters.
We first establish that a FIR digital filter of length N has a lirrcrrr phase characteristic, provided that its impulse response satisfies the symmetrv condition
h(n)=11111
-t-n)
(10.4.10)
This can be easily verified by determining H(O). We consider thc case of N even and
N odd separately. For N even, we write
N-
|
H(o): ) t(")cxp[-ion]
n =O
N12- |
: 2
x=0
Now we replace n by N
tion (10.4.10) to get
n@lexp[-ioz]
-n-
1
+
)
a
(n) e.rp
[-lon
]
in the second term in the last equation and use Equa-
476
Design ot Analog and Digital
\N/71-
tNlzt
|
+>
n=o
which can be written
H(o) =
-
|
Filters
Chapte, 10
h(n)exp[-lo(N-1 -z)]
as
?)l)"*[-,"(?)]
{r2"'*''*'[n('
Similarly, for N odd, we can show that
H(o) =
{,
(?) .':i" zr,r ... [o(, - ?)]
]
*,
[-,"(?)]
In both these cases, the term in braces is real, so that the phase of H(O) is given by the
complex exponential. It follows that the system has a linear phase shift, with a corresponding delay of (N l)/2 samples.
Given a desired frequency response I/r(O), such as an ideal low-pass characteristic,
which is symmetric about the origin, the corresponding impulse response fta(z) is symmetric about the point n = 0, but, in general, is of infinite duration. The most direct
way of obtaining an equivalent FIR filter of length N is to just truncate this infinite
sequence. The truncation operation, as in our earlier discussion of the DFT in Chapter 9, can be considered to result from multiplying the infinite sequenoe by a window
sequence ut(n).lt hr(n) is symmetric about n
0, we get a linear phase 6lter that is,
however, noncausal. We can get a causal impulse response by shifting the truncated
sequence to the right by (N
1)/2 samples. The desired digital filter H(z) is then
determined as the Z-transform of this truncated, shifted sequence. We summarize
these steps as follows:
-
:
-
1. From the desired frequency-response characteristic Hr(O), find the corresponding
impulse response ftr(r).
2. Multiply ho@) by the window function ar(n ).
3. Find the impulse response of the digital filter as
h(n) =
1,01n
-
(N
-
t)/zlw(n)
and determine H(z). Alternatively, we can find the Z-transform II,(z) of the
sequenoe h/n)u(n) and find H(z) as
H(z) = z-rl,-t't2H' (z)
As noted before, we encountered the truncation operation in step 2 in our earlier
discussion of the DFT in chapter 9. There it was pointed out that truncation causes the
frequency response of the filter to be smeared.
In general, windows with wide main lobes cause more spreading than those with
narrow main lobes. Figure 10.4.4 shows the effect of using the rectangular window on
the ideal low-pass filter characteristic, As can be seen, the transition width of the resulting frlter is approximately equal to the main lobe width of the window function and is,
hence, inversely proportional to the window length N. The choice of N, therefore,
involves a compromise between transition width and filter length.
Sec.
10.4
Digiial Filters
477
i Ir(rt)
|
ltu
l4r
Figure 10.4.4 Frequency response
obtained by using rectangular
window on idcal filter response.
I
F_
The following are some commonly used window functions.
Rectangular:
=
","(,)
{;: :;J;J
_
'
(10.4.11a)
Bartlett:
O=n<
N-t'
2n
1_-------:--
-r,
'
N-l
,
N-l
_
<
2 -"-<N_
N-1'
(r0.4.11b)
elsewhere
{
Hanning:
2rn \
0<n<N-l
-.osr-rt1/,
(10.4.11c)
?.//a,n(n)
{'l'
Hamming:
[o.ro
@Hu'(n) =
elsewheru
0<n<N-l
- o.aocosffi,
(10.4.11d)
l,
elservhere
Blackman:
42
ws(n):
{,
-
O.s.o,
ff1
+ 0.08 cos
ffi
,
0<l=N-l
(10.4.1le)
elsewhcrc
478
oesign ol Analog and Drgital
Filler."
Chapter 10
Kaiser:
(,
("[(';')'zo*
(n) =
1)']'
- 'r-
,,)
' o'n=N-'
!)]
t["("r{:
1ro.a.ttg
elsewhere
where /o-(.r) is the modified zero-oder Bessel function of the first kind given by
/s(x) : Jo"' exp [x cos 0ldl /2r and o is a parameter that effects the relative widths of
the main and side lobes. When o is zero, we get the rectangular window, and for a :
5.414, we get the Hamming window. In general, as o. becomes larger, the main lobe
becomes wider and the side lobes smaller. Of the windows described previously, the most
commonly used is the Hamming window, and the most versatile is the Kaiser window.
Erernple
1O.4.4
I-et us consider the design of a nine-point FIR digital filter to approximate an ideal low-pass
digital filter with a cutoff frequency A, = O.2n. The impulse response of the desired filter is
h,,(o) =
f'z
expllan'ldo = Ino'2'1
nn
zn
^! J _.2a
For a rectangular window of length 9, lhc corresponding inrpulse rcsponse is obtained by
evaluating /r.,(n) for -4 - n s 4. We obtain
to.t47 0.117 0.475 0.588 0.588
0.475 0.317 0.147
--, - l
h,tln)=\
-,1,
-Lzr1Tn,Ifft?[nfrl
1
1
The filter function
is
-'*
u'171 =0'147
n?lTSn
0'1!Z
,, *9'47s.., a 9'588. *
I
0.475 . 0.317
0.147
+ 0.588
2., +
z-, + --_ z-, + --_- z-c
Tr7tIt?I
so that
H(z) = 7-es'P1 =
t
o.475
7t
o't'1?
1r
'i
* .-rl * o*' e-t + 2-t.1
(z-t + z-") +
O't*,r-,
It
+7-s1
For N = 9. the Hamming window defined in Equation (l0.4.lld)
+r-a
is given by the sequence
ta(n) = [0.081, 0.215, 0.541,0.865, l, 0.865, 0.541, 0.215,
t
Hence, we have
0.0811
Sec.
10.4
Digital Filters
rr,,(n)w
(n)-
479
0'012. 0'068, 0'257, 0'508
{
[1r1t7tn
,1.
q10!
0.257
0.(b8
0.0121
lt1Jt
1f
T
The filter funclion is
0.m6E ,
. 0.012
H'(z)=
z" +
-z'+
0.508
0.257
+ -- z-l +-z
7f
1f
0.25't
0.s08
" +.-i*l
------ 22
TT
_r
*
0.068
,_,
a9O_12 ,_o
7t?r
Finally,
H(z)
=
z-o H'(2,
=
o#?
(l + z-8) .. 9'EQ 1.-' + z-') *
94
p-z +
z'c1
*, o'!99 1.-r + .-r) + z-.r
The frequency responses of the filters obtained using both the rectangular and Hamming
,I0.4.5,
windows are shown in Figure
with the gain at O = 0 normalized to unity. As can
be scen from the figure, the response corresponding to the Hamming window is smoother
than the onc for lhe recta[gular window.
I
,,(O)
I
|
,,(O)
I
Sl (rad)
(a)
Q (md)
(b)
tigure 10.45 Response of the FIR digital tilrer of Example
lar window. (b) Hamming window.
10.a.4.
(a) Reclangu-
.l80
Deslgn ol Analog and Digital
Fllters
Chaptor tO
f,'.sample 10.4.5
FIR digital filten can be used to approximate filters such as the ideal differentiator or the
Hilbert transformer, which cannot be implemented in the analog domain. In the analog
domain, the ideal differentiator is described by the frequency respo le
H(r,r) = jro
while the Hilbert transformer is described by the frequency response
H(o) = -jsgp(ro)
To design
discrete-time implementation of such filters, we start by specifying the desired
response in the frequency domain as
a
-l=-=|
Ho@) = H(a),
whcre o, = 2r
/T for
some choice
of 7. Equivalently,
H/A)=Y1'7'''
-rsOsn
where (! = ro7" Since Ifr(O) is periodic in O vith period 2rr, we can expand it in a
Fourier series as
Hd(o) =
,t.oo1n'1"-rn
where the coeflicients llr(a) are the correspondiog impulse response samples, given by
h"@) =
*
I:,Hd(a)eanda
As we have seen earlier, if the desired frequency function Hr(O)is purely real, the
impulse response is even and symmetric; that is, ir(n) = ha? n). On the other hand, if
the frequency response is purely imaginary, the impulse response.is odd and symmetric,
so rhat ir(a) = -hdFn).
We can now design a FIR digital filtdr by following the procedure given earlier. We wilt
illustrate this for the case of the Hilbert transformer. This transformer is used lo generate
signals that are in phase quadrature to an iniut sinusoidal signal (or, rnor" g.n"--Uy,
input narrow-band waveform). That is, if the input to a Hilbert transformer is the signal
.r,(r) cos roor, the output is y,1l; = sinoot. The Hilbert transformer is used in communication systems in various modulation schemes.
'The
impulse response for the Hilbert lransformer is obtained as
*
:
h,@)
:
* I:,-i
(
_)
I
0.
-lz
t nrr
For a rectangular window of length
15, we
ssn
(o)e,,ndo
n even
zodd
obtain
S€c.
10.4
41
Digital Filters
ho@'1
=
{-
* , - fr,0, -*,r.-?.0.?.n..1- , * ,.-ri}
which can be realized with a delay of seven samples by the transfer fuction
H@=:t-
,
l r-,-lr-,-z-6+z-8+Jz-,' *lr "*1,
{
The frequency response H(O) of this filter is shown in Figure 10.4.6. As can be seen, the
response exhibits considerable ripple. As discussed previously, the ripples can be reduced
by using window functions other than the rectangular. Also shown in the frgure is the
response corresponding to the Hamming window.
H@ttj
Figure 10.4.5 Frequency response of Hilbert lransformcr.
10.4.4 Computer.Aided Design of Digital Filters
ln recent
years, the use of computer-aided techniques for thc dcsign of digital filten
has become widespread, and several software packages are available for such design.
Techniques have been developed for both FIR and IIR filters that, in general, involve
the minimization of a suitably chosen cost function. Given a desired frequencyresponse characteristic Hr(O), a filter of either the FIR or IIR type and of fixed order
is selected. We express the frequency response of this filter, H(O), in terms of the vector a of filter coefficients. The difference between the two responses, which represents
the deviation from the desired response, is a function of a. We associate a cost function with thls difference and seek the set of filter coefficients a that minimizes this cost
function. A typical cost function is of the form
482
Design ol Analog and Digilat
fI',
t(a) =
l_ r.y(o)lHi(o)
- H($lzdo
Fllters
Chapter lO
(10.4.12)
where $W(\Omega)$ is a nonnegative weighting function that reflects the significance attached to the deviation from the desired response in a particular range of frequencies. $W(\Omega)$ is chosen to be relatively large over the range of frequencies considered to be important. Quite often, instead of minimizing the deviation at all frequencies, as in Equation (10.4.12), we can choose to do so only at a finite number of frequencies. The cost function then becomes
$$J(\mathbf{a}) = \sum_{i=1}^{M} W(\Omega_i)\,\bigl|H_d(\Omega_i) - H(\Omega_i)\bigr|^2 \tag{10.4.13}$$
where $\Omega_i$, $1 \le i \le M$, are a set of frequency samples over the range of interest. Typically, the minimization problem is quite complex, and the resulting equations cannot be solved analytically. An iterative search procedure is usually employed to determine the optimum set of filter coefficients. We start with an arbitrary initial choice for the filter coefficients and successively adjust them such that the resulting cost function is reduced at each step. The procedure stops when a further adjustment of the coefficients does not result in a reduction in the cost function. Several standard algorithms and software packages are available for determining the optimum filter coefficients.
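A minimal sketch of this idea, using a sampled cost function of the form of Equation (10.4.13), is shown below. The desired response, weighting, grid, filter length, and the use of SciPy's general-purpose optimizer are all our own illustrative assumptions, not the specific algorithm of any particular package.

```python
import numpy as np
from scipy.optimize import minimize

N, M = 11, 64                                  # filter length, number of frequency samples
Omega = np.linspace(0, np.pi, M)
Hd = (Omega <= np.pi / 4).astype(float)        # desired ideal low-pass response
W = np.where(np.abs(Omega - np.pi / 4) > 0.1 * np.pi, 1.0, 0.0)   # ignore a transition band

def H(a, Omega):
    """Frequency response of an FIR filter with coefficient vector a."""
    n = np.arange(len(a))
    return np.exp(-1j * np.outer(Omega, n)) @ a

def cost(a):
    # Sampled, weighted squared deviation, as in Equation (10.4.13).
    return np.sum(W * np.abs(Hd - H(a, Omega)) ** 2)

a0 = np.zeros(N)                               # arbitrary initial coefficients
result = minimize(cost, a0, method="BFGS")     # iterative search
print("final cost:", result.fun)
```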
A popular technique for the design of FIR filters is based on the fact that the frequency response of a linear-phase FIR filter can be expressed as a trigonometric polynomial similar to the Chebyshev polynomial. The filter coefficients are chosen to minimize the maximum deviation from the desired response. Again, computer programs are available to determine the optimum filter coefficients.
10.5 SUMMARY

• Frequency-selective filters are classified as low-pass, high-pass, band-pass, or band-stop filters.
• The passband of a filter is the range of frequencies that are passed without attenuation. The stop band is the range of frequencies that are completely attenuated.
• Filter specifications usually state the permissible deviation from the ideal characteristic in both the passband and the stop band, as well as specifying a transition band between the two.
• Filter design consists of obtaining an analytical approximation to the desired filter characteristic in the form of a filter transfer function, given as H(s) for analog filters and H(z) for digital filters.
• Frequency transformations can be used for converting one type of filter to another.
• Two popular low-pass analog filters are the Butterworth and Chebyshev filters. The Butterworth filter has a monotonically decreasing characteristic that goes to zero smoothly. The Chebyshev filter has a ripple in the passband, but is monotonically decreasing in the transition and stop bands.
• A given set of specifications can be met by a Chebyshev filter of lower order than a Butterworth filter.
• The poles of the Butterworth filter are spaced uniformly around the unit circle in the s plane. The poles of the Chebyshev filter lie on an ellipse in the s plane and can be obtained geometrically from the Butterworth poles.
• Digital filters can be either IIR or FIR.
• Digital IIR filters can be obtained from equivalent analog designs by using either the impulse-invariant technique or the bilinear transformation.
• Digital filters designed using impulse invariance exhibit distortion due to aliasing. No aliasing distortion arises from the use of the bilinear transformation method.
• Digital FIR filters are often chosen to have a linear phase characteristic. One method of obtaining an FIR filter is to determine the impulse response $h_d(n)$ corresponding to the desired filter characteristic $H_d(\Omega)$ and to truncate the resulting sequence by multiplying it by an appropriate window function.
• For a given filter length, the transition band depends on the window function.
10.6 CHECKLIST OF IMPORTANT TERMS

Aliasing error
Analog filter
Band-pass filter
Band-stop filter
Bilinear transformation
Butterworth filter
Chebyshev filter
Digital filter
Filter specifications
FIR filter
Frequency-selective filter
Frequency transformations
High-pass filter
IIR filter
Impulse invariance
Linear phase characteristic
Low-pass filter
Passband
Transition band
Window function

10.7 PROBLEMS
10.1. Design an analog low-pass Butterworth filter to meet the following specifications: the attenuation is to be less than 1.5 dB up to 1 kHz and to be at least 15 dB for frequencies greater than 4 kHz.
10.2. Use the frequency transformations of Section 10.2 to obtain an analog Butterworth filter with an attenuation of less than 1.5 dB for frequencies up to 3 kHz, from your design in Problem 10.1.
10.3. Design a Butterworth band-pass filter to meet the following specifications:
$\omega_{c1}$ = lower cutoff frequency = 200 Hz
$\omega_{c2}$ = upper cutoff frequency = 300 Hz
The attenuation in the passband is to be less than 1 dB. The attenuation in the stop band is to be at least 10 dB.
10.4. A Chebyshev low-pass filter is to be designed to have a passband ripple < 2 dB and a cutoff frequency of 1500 Hz. The attenuation for frequencies greater than 5000 Hz must be at least 20 dB. Find $\epsilon$, N, and H(s).
10.5. Consider the third-order Butterworth and Chebyshev filters with the 3-dB cutoff frequency normalized to 1 in both cases. Compare and comment on the corresponding characteristics in both passbands and stop bands.
10.6. In Problem 10.5, what order of Butterworth filter compares to the Chebyshev filter of order 3?
10.7. Design a Chebyshev filter to meet the specifications of Problem 10.1. Compare the frequency response of the resulting filter to that of the Butterworth filter of Problem 10.1.
10.8. Obtain the digital equivalent of the low-pass filter of Problem 10.1 using the impulse-invariant method. Assume a sampling frequency of (a) 6 kHz, (b) 10 kHz.
10.9. Plot the frequency responses of the digital filters of Problem 10.8. Comment on your results.
10.10. The bilinear-transform technique enables us to design IIR digital filters using standard analog designs. However, if we want to replace an analog filter by an equivalent A/D-digital filter-D/A combination, we have to prewarp the given cutoff frequencies before designing the analog filter. Thus, if we want to replace an analog Butterworth filter by a digital filter, we first design the analog filter by replacing the passband and stop-band cutoff frequencies, $\omega_p$ and $\omega_s$, respectively, by
$$\omega_p' = \frac{2}{T}\tan\frac{\omega_p T}{2}, \qquad \omega_s' = \frac{2}{T}\tan\frac{\omega_s T}{2}$$
The equivalent digital filter is then obtained from the analog design by using Equation (10.4.9). Use this method to obtain a digital filter to replace the analog filter in Problem 10.1. Assume that the sampling frequency is 3 kHz.
10.11. Repeat Problem 10.10 for the band-pass filter of Problem 10.3.
10.12. (a) Show that the frequency response $H(\Omega)$ of a filter is (i) purely real if the impulse response $h(n)$ is even symmetric (i.e., $h(n) = h(-n)$) and (ii) purely imaginary if $h(n)$ is odd symmetric (i.e., $h(n) = -h(-n)$).
(b) Use your result in Part (a) to determine the phase of an N-point FIR filter if (i) $h(n) = h(N-1-n)$ and (ii) $h(n) = -h(N-1-n)$.
10.13. (a) The ideal differentiator has frequency response
$$H_d(\Omega) = j\Omega, \qquad 0 < |\Omega| < \pi$$
Show that the Fourier-series coefficients for $H_d(\Omega)$ are
$$h_d(n) = \frac{(-1)^n}{n}$$
(b) Hence, design a 16-point differentiator using both rectangular and Hanning windows.
10.14. (a) Design an 11-tap FIR filter (N = 12) to approximate the ideal low-pass characteristic with cutoff $\pi/6$ radians.
(b) Plot the frequency response of the filter you designed in Part (a).
(c) Use the Hanning window to modify the results of Part (a). Plot the frequency response of the resulting filter, and comment on it.
Appendix A
Complex Numbers
Many engineering problems can be treated and solved by methods of complex analysis. Roughly speaking, these problems can be subdivided into two large classes. The first class consists of elementary problems for which the knowledge of complex numbers and calculus is sufficient. Applications of this class of problems are in differential equations, electric circuits, and the analysis of signals and systems. The second class of problems requires detailed knowledge of the theory of complex analytic functions. Problems in areas such as electrostatics, electromagnetics, and heat transfer belong to this category.
In this appendix, we concern ourselves with problems of the first class. Problems of the second class are beyond the scope of the text.
A.1
DEFINITION
A complex number $z = x + jy$, where $j = \sqrt{-1}$, consists of two parts, a real part $x$ and an imaginary part $y$.¹ This form of representation for complex numbers is called the rectangular or Cartesian form, since $z$ can be represented in rectangular coordinates by the point $(x, y)$, as shown in Figure A.1.
The horizontal $x$ axis is called the real axis, and the vertical $y$ axis is called the imaginary axis. The $x$-$y$ plane in which the complex numbers are represented in this way is called the complex plane. Two complex numbers are equal if their real parts are equal and their imaginary parts are equal.
The complex number $z$ can also be written in polar form. The polar coordinates $r$ and $\theta$ are related to the Cartesian coordinates $x$ and $y$ by
$$x = r\cos\theta \quad \text{and} \quad y = r\sin\theta \tag{A.1}$$
Hence, a complex number $z = x + jy$ can be written as
$$z = r\cos\theta + jr\sin\theta \tag{A.2}$$
This is known as the polar form, or trigonometric form, of a complex number. By using Euler's identity,
$$\exp[j\theta] = \cos\theta + j\sin\theta$$
we can express the complex number $z$ in Equation (A.2) in exponential form as
$$z = r\exp[j\theta] \tag{A.3}$$
where $r$, the magnitude of $z$, is denoted by $|z|$. From Figure A.1,
$$|z| = r = \sqrt{x^2 + y^2} \tag{A.4}$$
$$\theta = \arctan\frac{y}{x} = \arcsin\frac{y}{\sqrt{x^2+y^2}} = \arccos\frac{x}{\sqrt{x^2+y^2}} \tag{A.5}$$

Figure A.1  The complex number z in the complex plane (real axis, imaginary axis).

¹Mathematicians use $i$ to represent $\sqrt{-1}$, but electrical engineers use $j$ for this purpose because $i$ is usually used to represent current in electric circuits.
The angle $\theta$ is called the argument of $z$, denoted $\arg z$, and is measured in radians. The argument is defined only for nonzero complex numbers and is determined only up to integer multiples of $2\pi$. The value of $\theta$ that lies in the interval $-\pi < \theta \le \pi$ is called the principal value of the argument of $z$. Geometrically, $|z|$ is the length of the vector from the origin to the point $z$ in the complex plane, and $\theta$ is the directed angle from the positive $x$ axis to $z$.

Example A.1

For the complex number $z = 1 + j\sqrt{3}$,
$$r = \sqrt{1^2 + (\sqrt{3})^2} = 2 \quad \text{and} \quad \arg z = \arctan\sqrt{3} = \frac{\pi}{3} + 2n\pi$$
The principal value of $\arg z$ is $\pi/3$, and therefore,
$$z = 2\left(\cos\frac{\pi}{3} + j\sin\frac{\pi}{3}\right)$$
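For readers who want a quick numerical check of this conversion, the short Python snippet below (our own illustration) uses the standard cmath module to compute the magnitude, the principal argument, and the rectangular form of the same number.

```python
import cmath

z = 1 + 1j * 3 ** 0.5

r, theta = cmath.polar(z)      # r = |z|, theta = principal value of arg z
print(r)                       # 2.0
print(theta)                   # 1.0471... = pi/3
print(cmath.rect(r, theta))    # back to rectangular form: (1+1.732...j)
```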
The complex conjugate of $z$ is defined as
$$z^* = x - jy \tag{A.6}$$
Since
$$z + z^* = 2x \quad \text{and} \quad z - z^* = 2jy \tag{A.7}$$
it follows that
$$\operatorname{Re}\{z\} = x = \frac{1}{2}(z + z^*) \quad \text{and} \quad \operatorname{Im}\{z\} = y = \frac{1}{2j}(z - z^*) \tag{A.8}$$
Note that if $z = z^*$, then the number is real, and if $z = -z^*$, then the number is purely imaginary.

A.2 ARITHMETIC OPERATIONS
A.2.1 Addition and Subtraction

The sum and difference of two complex numbers are defined respectively by
$$z_1 + z_2 = (x_1 + x_2) + j(y_1 + y_2) \tag{A.9}$$
and
$$z_1 - z_2 = (x_1 - x_2) + j(y_1 - y_2) \tag{A.10}$$
These are demonstrated geometrically in Figure A.2 and can be interpreted in accordance with the "parallelogram law" by which forces are added in mechanics.

Figure A.2  Addition and subtraction of complex numbers.

A.2.2 Multiplication

The product $z_1 z_2$ is given by
$$z_1 z_2 = (x_1 + jy_1)(x_2 + jy_2) = (x_1 x_2 - y_1 y_2) + j(x_1 y_2 + x_2 y_1) \tag{A.11}$$
In polar form, this becomes
$$z_1 z_2 = r_1\exp[j\theta_1]\,r_2\exp[j\theta_2] = r_1 r_2\exp[j(\theta_1 + \theta_2)] \tag{A.12}$$
That is, the magnitude of the product of two complex numbers is the product of the magnitudes of the two numbers, and the angle of the product is the sum of the two angles.
A.2.3 Division

Division is defined as the inverse of multiplication. The quotient $z_1/z_2$ is obtained by multiplying both the numerator and denominator by the conjugate of $z_2$:
$$\frac{z_1}{z_2} = \frac{x_1 + jy_1}{x_2 + jy_2} = \frac{(x_1 + jy_1)(x_2 - jy_2)}{x_2^2 + y_2^2} = \frac{x_1 x_2 + y_1 y_2}{x_2^2 + y_2^2} + j\,\frac{x_2 y_1 - x_1 y_2}{x_2^2 + y_2^2} \tag{A.13}$$
Division is performed easily in polar form as follows:
$$\frac{z_1}{z_2} = \frac{r_1\exp[j\theta_1]}{r_2\exp[j\theta_2]} = \frac{r_1}{r_2}\exp[j(\theta_1 - \theta_2)] \tag{A.14}$$
That is, the magnitude of the quotient is the quotient of the magnitudes, and the angle
of the quotient is the difference of the angle of the numerator and the angle of the
denominator.
For any complex numbers $z_1$, $z_2$, and $z_3$, we have the following:

Commutative laws:
$$z_1 + z_2 = z_2 + z_1, \qquad z_1 z_2 = z_2 z_1 \tag{A.15}$$
Associative laws:
$$(z_1 + z_2) + z_3 = z_1 + (z_2 + z_3), \qquad z_1(z_2 z_3) = (z_1 z_2)z_3 \tag{A.16}$$
Distributive law:
$$z_1(z_2 + z_3) = z_1 z_2 + z_1 z_3 \tag{A.17}$$
A.3 POWERS AND ROOTS OF COMPLEX NUMBERS
The $n$th power of the complex number $z = r\exp[j\theta]$ is
$$z^n = r^n\exp[jn\theta] = r^n(\cos n\theta + j\sin n\theta)$$
from which we obtain the so-called formula of De Moivre:²
$$(\cos\theta + j\sin\theta)^n = \cos n\theta + j\sin n\theta \tag{A.18}$$
For example,
$$(1 + j1)^5 = \bigl(\sqrt{2}\exp[j\pi/4]\bigr)^5 = 4\sqrt{2}\exp[j5\pi/4] = -4 - j4$$
The $n$th root of a complex number $z$ is the number $w$ such that $w^n = z$. Thus, to find the $n$th root of $z$, we must solve the equation
$$w^n - |z|\exp[j\theta] = 0$$
which is of degree $n$ and, therefore, has $n$ roots. These roots are given by
$$\begin{aligned}
w_1 &= |z|^{1/n}\exp\!\left[j\frac{\theta}{n}\right]\\
w_2 &= |z|^{1/n}\exp\!\left[j\frac{\theta + 2\pi}{n}\right]\\
w_3 &= |z|^{1/n}\exp\!\left[j\frac{\theta + 4\pi}{n}\right]\\
&\;\;\vdots\\
w_n &= |z|^{1/n}\exp\!\left[j\frac{\theta + 2(n-1)\pi}{n}\right]
\end{aligned} \tag{A.19}$$
For example, the five roots of $32\exp[j\pi]$ are
$$w_1 = 2\exp\!\left[j\frac{\pi}{5}\right],\quad w_2 = 2\exp\!\left[j\frac{3\pi}{5}\right],\quad w_3 = 2\exp[j\pi],\quad w_4 = 2\exp\!\left[j\frac{7\pi}{5}\right],\quad w_5 = 2\exp\!\left[j\frac{9\pi}{5}\right]$$
Notice that the roots of a complex number lie on a circle in the complex-number plane. The radius of the circle is $|z|^{1/n}$. The roots are uniformly distributed around the circle, and the angle between adjacent roots is $2\pi/n$ radians. The five roots of $32\exp[j\pi]$ are shown in Figure A.3.

Figure A.3  Roots of 32 exp[jπ].

²Abraham De Moivre (1667-1754) was a French mathematician who introduced imaginary quantities in trigonometry and contributed to the theory of mathematical probability.
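Equation (A.19) translates directly into a few lines of Python. The sketch below (the function name is ours) computes all n roots of a complex number and checks the five roots of 32 exp[jπ] = -32 numerically.

```python
import cmath

def nth_roots(z, n):
    """All n complex roots of z, following Equation (A.19)."""
    r = abs(z) ** (1.0 / n)
    theta = cmath.phase(z)
    return [r * cmath.exp(1j * (theta + 2 * cmath.pi * k) / n) for k in range(n)]

# The five roots of 32*exp(j*pi) = -32; each should have magnitude 2.
for w in nth_roots(-32 + 0j, 5):
    print(w, abs(w), cmath.phase(w))
```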
A.4 INEQUALITIES

For complex numbers, we observe the important triangle inequality,
$$|z_1 + z_2| \le |z_1| + |z_2| \tag{A.20}$$
That is, the magnitude of the sum of two complex numbers is at most equal to the sum of the magnitudes of the numbers. This inequality follows by noting that the points $0$, $z_1$, and $z_1 + z_2$ are the vertices of the triangle shown in Figure A.4 with sides $|z_1|$, $|z_2|$, and $|z_1 + z_2|$, and the fact that one side of a triangle cannot exceed the sum of the other two sides. Other useful inequalities are
$$|\operatorname{Re}\{z\}| \le |z| \quad \text{and} \quad |\operatorname{Im}\{z\}| \le |z| \tag{A.21}$$
These follow by noting that for any $z = x + jy$, we have
$$|z| = \sqrt{x^2 + y^2} \ge |x|$$
and, similarly, $|z| \ge |y|$.

Figure A.4  Triangle inequality.
Appendix
B
Mathematical Relations
Some of the mathematical relations encountered in electrical engineering are listed in
this appendix for convenient reference. However, this appendix is not intended as a substitute for more comprehensive handbooks.
B.1 TRIGONOMETRIC IDENTITIES

$$\exp[\pm j\theta] = \cos\theta \pm j\sin\theta$$
$$\cos\theta = \tfrac{1}{2}\bigl(\exp[j\theta] + \exp[-j\theta]\bigr) = \sin\!\left(\theta + \tfrac{\pi}{2}\right)$$
$$\sin\theta = \tfrac{1}{2j}\bigl(\exp[j\theta] - \exp[-j\theta]\bigr) = \cos\!\left(\theta - \tfrac{\pi}{2}\right)$$
$$\sin^2\theta + \cos^2\theta = 1$$
$$\cos^2\theta - \sin^2\theta = \cos 2\theta$$
$$\cos^2\theta = \tfrac{1}{2}(1 + \cos 2\theta)$$
$$\sin^2\theta = \tfrac{1}{2}(1 - \cos 2\theta)$$
$$\cos^3\theta = \tfrac{1}{4}(3\cos\theta + \cos 3\theta)$$
$$\sin^3\theta = \tfrac{1}{4}(3\sin\theta - \sin 3\theta)$$
$$\sin(\alpha \pm \beta) = \sin\alpha\cos\beta \pm \cos\alpha\sin\beta$$
$$\cos(\alpha \pm \beta) = \cos\alpha\cos\beta \mp \sin\alpha\sin\beta$$
$$\tan(\alpha \pm \beta) = \frac{\tan\alpha \pm \tan\beta}{1 \mp \tan\alpha\tan\beta}$$
$$\sin\alpha\sin\beta = \tfrac{1}{2}[\cos(\alpha - \beta) - \cos(\alpha + \beta)]$$
$$\cos\alpha\cos\beta = \tfrac{1}{2}[\cos(\alpha - \beta) + \cos(\alpha + \beta)]$$
$$\sin\alpha\cos\beta = \tfrac{1}{2}[\sin(\alpha - \beta) + \sin(\alpha + \beta)]$$
$$\sin\alpha + \sin\beta = 2\sin\frac{\alpha+\beta}{2}\cos\frac{\alpha-\beta}{2}$$
$$\cos\alpha + \cos\beta = 2\cos\frac{\alpha+\beta}{2}\cos\frac{\alpha-\beta}{2}$$
$$\cos\alpha - \cos\beta = -2\sin\frac{\alpha+\beta}{2}\sin\frac{\alpha-\beta}{2}$$
$$A\cos\alpha + B\sin\alpha = \sqrt{A^2 + B^2}\,\cos\!\left(\alpha - \tan^{-1}\frac{B}{A}\right)$$
$$\sinh\alpha = \tfrac{1}{2}\bigl(\exp[\alpha] - \exp[-\alpha]\bigr)$$
$$\cosh\alpha = \tfrac{1}{2}\bigl(\exp[\alpha] + \exp[-\alpha]\bigr)$$
$$\tanh\alpha = \frac{\sinh\alpha}{\cosh\alpha}$$
$$\cosh^2\alpha - \sinh^2\alpha = 1$$
$$\cosh\alpha + \sinh\alpha = \exp[\alpha]$$
$$\cosh\alpha - \sinh\alpha = \exp[-\alpha]$$
$$\sinh(\alpha \pm \beta) = \sinh\alpha\cosh\beta \pm \cosh\alpha\sinh\beta$$
$$\cosh(\alpha \pm \beta) = \cosh\alpha\cosh\beta \pm \sinh\alpha\sinh\beta$$
$$\tanh(\alpha \pm \beta) = \frac{\tanh\alpha \pm \tanh\beta}{1 \pm \tanh\alpha\tanh\beta}$$
$$\sinh^2\alpha = \tfrac{1}{2}(\cosh 2\alpha - 1)$$
$$\cosh^2\alpha = \tfrac{1}{2}(\cosh 2\alpha + 1)$$
B.2 EXPONENTIAL AND LOGARITHMIC FUNCTIONS

$$\exp[\alpha]\exp[\beta] = \exp[\alpha + \beta]$$
$$\frac{\exp[\alpha]}{\exp[\beta]} = \exp[\alpha - \beta]$$
$$(\exp[\alpha])^{\beta} = \exp[\alpha\beta]$$
$$\ln\alpha\beta = \ln\alpha + \ln\beta$$
$$\ln\frac{\alpha}{\beta} = \ln\alpha - \ln\beta$$
$$\ln\alpha^{\beta} = \beta\ln\alpha$$
$\ln\alpha$ is the inverse of $\exp[\alpha]$; that is,
$$\exp[\ln\alpha] = \alpha \quad \text{and} \quad \exp[-\ln\alpha] = \exp\!\left[\ln\frac{1}{\alpha}\right] = \frac{1}{\alpha}$$
$$\log\alpha = M\ln\alpha, \qquad M = \log e \approx 0.4343$$
$$\ln\alpha = \frac{1}{M}\log\alpha, \qquad \frac{1}{M} \approx 2.3026$$
$\log\alpha$ is the inverse of $10^{\alpha}$; that is,
$$10^{\log\alpha} = \alpha \quad \text{and} \quad 10^{-\log\alpha} = \frac{1}{\alpha}$$

Figure B.1  Natural logarithmic and exponential functions.
B.3 SPECIAL FUNCTIONS

B.3.1 Gamma Function
$$\Gamma(\alpha) = \int_0^{\infty} t^{\alpha-1}\exp[-t]\,dt$$
$$\Gamma(\alpha+1) = \alpha\Gamma(\alpha)$$
$$\Gamma(k+1) = k!, \qquad k = 0, 1, 2, \ldots$$
$$\Gamma\!\left(\tfrac{1}{2}\right) = \sqrt{\pi}$$

B.3.2 Incomplete Gamma Functions
$$I(\alpha, \beta) = \int_0^{\beta} t^{\alpha-1}\exp[-t]\,dt$$
$$Q(\alpha, \beta) = \int_{\beta}^{\infty} t^{\alpha-1}\exp[-t]\,dt$$
$$\Gamma(\alpha) = I(\alpha, \beta) + Q(\alpha, \beta)$$

B.3.3 Beta Functions
$$\beta(\mu, \nu) = \int_0^{1} t^{\mu-1}(1-t)^{\nu-1}\,dt, \qquad \mu > 0, \ \nu > 0$$
$$\beta(\mu, \nu) = \frac{\Gamma(\mu)\Gamma(\nu)}{\Gamma(\mu+\nu)}$$
B.4 POWER-SERIES EXPANSIONS

$$\ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \cdots + (-1)^{n+1}\frac{x^n}{n} + \cdots, \qquad |x| < 1$$
$$\exp[x] = 1 + x + \frac{x^2}{2!} + \cdots + \frac{x^n}{n!} + \cdots$$
$$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots + \frac{(-1)^n x^{2n+1}}{(2n+1)!} + \cdots$$
$$\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots + \frac{(-1)^n x^{2n}}{(2n)!} + \cdots$$
$$\tan x = x + \frac{x^3}{3} + \frac{2x^5}{15} + \cdots, \qquad |x| < \frac{\pi}{2}$$
$$a^x = 1 + x\ln a + \frac{(x\ln a)^2}{2!} + \cdots + \frac{(x\ln a)^n}{n!} + \cdots$$
$$\sinh x = x + \frac{x^3}{3!} + \frac{x^5}{5!} + \cdots + \frac{x^{2n+1}}{(2n+1)!} + \cdots$$
$$\cosh x = 1 + \frac{x^2}{2!} + \frac{x^4}{4!} + \cdots + \frac{x^{2n}}{(2n)!} + \cdots$$
$$(1+x)^{\alpha} = 1 + \alpha x + \frac{\alpha(\alpha-1)}{2!}x^2 + \frac{\alpha(\alpha-1)(\alpha-2)}{3!}x^3 + \cdots, \qquad |x| < 1$$
where $\alpha$ is negative or a fraction.
$$(1+x)^{-1} = 1 - x + x^2 - x^3 + \cdots = \sum_{k=0}^{\infty}(-1)^k x^k, \qquad |x| < 1$$
$$(1+x)^{1/2} = 1 + \frac{1}{2}x - \frac{1}{2\cdot 4}x^2 + \frac{1\cdot 3}{2\cdot 4\cdot 6}x^3 - \cdots, \qquad |x| < 1$$
B.5 SUMS OF POWERS OF NATURAL NUMBERS

$$\sum_{k=1}^{N} k = \frac{1}{2}N(N+1)$$
$$\sum_{k=1}^{N} k^2 = \frac{1}{6}N(N+1)(2N+1)$$
$$\sum_{k=1}^{N} k^3 = \frac{1}{4}N^2(N+1)^2$$
$$\sum_{k=1}^{N} k^4 = \frac{1}{30}N(N+1)(2N+1)(3N^2 + 3N - 1)$$
$$\sum_{k=1}^{N} k^5 = \frac{1}{12}N^2(N+1)^2(2N^2 + 2N - 1)$$
$$\sum_{k=1}^{N} k^6 = \frac{1}{42}N(N+1)(2N+1)(3N^4 + 6N^3 - 3N + 1)$$
$$\sum_{k=1}^{N} k^7 = \frac{1}{24}N^2(N+1)^2(3N^4 + 6N^3 - N^2 - 4N + 2)$$
$$\sum_{k=1}^{N} (2k-1) = N^2$$
$$\sum_{k=1}^{N} (2k-1)^2 = \frac{1}{3}N(4N^2 - 1)$$
$$\sum_{k=1}^{N} (2k-1)^3 = N^2(2N^2 - 1)$$
$$\sum_{k=1}^{N} k(k+1)^2 = \frac{1}{12}N(N+1)(N+2)(3N+5)$$
$$\sum_{k=1}^{N} k(k!) = (N+1)! - 1$$
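Identities of this kind are easy to sanity-check numerically. The short Python snippet below (our own illustration) verifies a few of the closed-form sums for a sample value of N.

```python
# Quick numerical check of several of the closed-form sums above.
N = 10
ks = range(1, N + 1)

assert sum(k for k in ks) == N * (N + 1) // 2
assert sum(k**2 for k in ks) == N * (N + 1) * (2 * N + 1) // 6
assert sum(k**3 for k in ks) == (N * (N + 1) // 2) ** 2
assert sum(2 * k - 1 for k in ks) == N**2
print("all checked closed-form sums hold for N =", N)
```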
B.5.1 Sums of Binomial Coefficients

$$\sum_{m=0}^{k}\binom{n+m}{m} = \binom{n+k+1}{k}$$
$$\binom{n}{0} + \binom{n}{1} + \binom{n}{2} + \cdots + \binom{n}{n} = 2^n$$
$$\binom{n}{0} + \binom{n}{2} + \binom{n}{4} + \cdots = \binom{n}{1} + \binom{n}{3} + \cdots = 2^{n-1}$$
$$\sum_{k=0}^{n} k\binom{n}{k} = n\,2^{n-1}$$
$$\sum_{k=0}^{n} k^2\binom{n}{k} = 2^{n-2}\,n(n+1)$$
$$\sum_{k=0}^{N} (-1)^k\binom{N}{k}k^N = (-1)^N N!$$
$$\sum_{k=0}^{n}\binom{n}{k}^2 = \binom{2n}{n}$$
B.5.2 Series of Exponentials

$$\sum_{n=0}^{N-1}\exp\!\left[\frac{j2\pi nk}{N}\right] = \begin{cases} N, & k = 0, N, 2N, \ldots \\ 0, & 1 \le k \le N-1 \end{cases}$$
$$\sum_{n=0}^{N-1} a^n = \frac{1-a^N}{1-a}, \qquad a \ne 1$$
$$\sum_{n=0}^{\infty} a^n = \frac{1}{1-a}, \qquad |a| < 1$$
$$\sum_{n=0}^{\infty} n\,a^n = \frac{a}{(1-a)^2}, \qquad |a| < 1$$
$$\sum_{n=0}^{\infty} n^2 a^n = \frac{a(1+a)}{(1-a)^3}, \qquad |a| < 1$$
B.6 DEFINITE INTEGRALS

$$\int_0^{\infty}\exp[-\alpha x^2]\,dx = \frac{1}{2}\sqrt{\frac{\pi}{\alpha}}$$
$$\int_0^{\infty} x^2\exp[-\alpha x^2]\,dx = \frac{1}{4}\sqrt{\frac{\pi}{\alpha^3}}$$
$$\int_0^{\infty} x\exp[-\alpha x^2]\,dx = \frac{1}{2\alpha}$$
$$\int_0^{\infty}\exp[-\alpha x]\cos\beta x\,dx = \frac{\alpha}{\alpha^2+\beta^2}$$
$$\int_0^{\infty}\exp[-\alpha x]\sin\beta x\,dx = \frac{\beta}{\alpha^2+\beta^2}$$
$$\int_0^{\infty}\exp[-\alpha x^2]\cos\beta x\,dx = \frac{1}{2}\sqrt{\frac{\pi}{\alpha}}\exp\!\left[-\frac{\beta^2}{4\alpha}\right]$$
$$\int_0^{\infty}\frac{\sin\alpha x}{x}\,dx = \frac{\pi}{2}\operatorname{sgn}\alpha$$
$$\int_0^{\infty}\frac{1-\cos\alpha x}{x^2}\,dx = \frac{\pi|\alpha|}{2}$$
$$\int_0^{\infty}\frac{\sin^2 x}{x^2}\,dx = \frac{\pi}{2}$$
$$\int_0^{\infty}\frac{\cos\beta x - \cos\alpha x}{x}\,dx = \ln\frac{\alpha}{\beta}, \qquad \alpha > 0, \ \beta > 0$$
$$\int_0^{\infty}\frac{\cos\beta x}{x^2+\alpha^2}\,dx = \frac{\pi}{2\alpha}\exp[-\alpha\beta], \qquad \alpha > 0, \ \beta > 0$$
$$\int_0^{\infty}\frac{\alpha}{\alpha^2+x^2}\,dx = \frac{\pi}{2}, \qquad \alpha > 0$$
$$\int_0^{\pi}\frac{\sin 2nx}{\sin x}\,dx = 0$$
$$\int_0^{\pi/2}\frac{\sin(2n-1)x}{\sin x}\,dx = \frac{\pi}{2}$$
$$\int_0^{\pi/2}\frac{\sin 2nx}{\sin x}\,dx = 2\left[1 - \frac{1}{3} + \frac{1}{5} - \cdots + \frac{(-1)^{n+1}}{2n-1}\right]$$
$$\int_0^{\pi}\frac{\sin(2n-1)x}{\sin x}\,dx = \pi$$
$$\int_0^{\pi}\frac{\cos(2n+1)x}{\cos x}\,dx = (-1)^n\pi$$
B.7 INDEFINITE INTEGRALS

$$\int u\,dv = uv - \int v\,du$$
$$\int x^n\,dx = \frac{x^{n+1}}{n+1} + C, \qquad n \ne -1$$
$$\int \exp[x]\,dx = \exp[x] + C$$
$$\int \exp[ax]\,dx = \frac{1}{a}\exp[ax] + C$$
$$\int x^n\exp[ax]\,dx = \frac{1}{a}x^n\exp[ax] - \frac{n}{a}\int x^{n-1}\exp[ax]\,dx$$
$$\int \frac{dx}{x} = \ln|x| + C$$
$$\int \ln x\,dx = x\ln|x| - x + C$$
$$\int x^n\ln x\,dx = \frac{x^{n+1}}{n+1}\left[\ln|x| - \frac{1}{n+1}\right] + C$$
$$\int \frac{dx}{x\ln x} = \ln|\ln x| + C$$
$$\int \cos x\,dx = \sin x + C$$
$$\int \sin x\,dx = -\cos x + C$$
$$\int \sec^2 x\,dx = \tan x + C$$
$$\int \csc^2 x\,dx = -\cot x + C$$
$$\int \tan x\,dx = -\ln|\cos x| + C$$
$$\int \cot x\,dx = \ln|\sin x| + C$$
$$\int \sec x\,dx = \ln|\sec x + \tan x| + C$$
$$\int \csc x\,dx = \ln|\csc x - \cot x| + C$$
$$\int \sec x\tan x\,dx = \sec x + C$$
$$\int \csc x\cot x\,dx = -\csc x + C$$
$$\int \sin^2 x\,dx = \frac{x}{2} - \frac{1}{4}\sin 2x + C$$
$$\int \cos^2 x\,dx = \frac{x}{2} + \frac{1}{4}\sin 2x + C$$
$$\int \tan^2 x\,dx = \tan x - x + C$$
$$\int \cot^2 x\,dx = -\cot x - x + C$$
$$\int \cos^3 x\,dx = \frac{1}{3}(2 + \cos^2 x)\sin x + C$$
$$\int \sin^n x\,dx = -\frac{1}{n}\sin^{n-1}x\cos x + \frac{n-1}{n}\int \sin^{n-2}x\,dx$$
$$\int \cos^n x\,dx = \frac{1}{n}\cos^{n-1}x\sin x + \frac{n-1}{n}\int \cos^{n-2}x\,dx$$
$$\int x\sin x\,dx = \sin x - x\cos x + C$$
$$\int x\cos x\,dx = \cos x + x\sin x + C$$
$$\int x^n\sin x\,dx = -x^n\cos x + n\int x^{n-1}\cos x\,dx$$
$$\int x^n\cos x\,dx = x^n\sin x - n\int x^{n-1}\sin x\,dx$$
$$\int \sinh x\,dx = \cosh x + C$$
$$\int \cosh x\,dx = \sinh x + C$$
$$\int \tanh x\,dx = \ln\cosh x + C$$
$$\int \coth x\,dx = \ln|\sinh x| + C$$
$$\int \operatorname{sech} x\,dx = \tan^{-1}[\sinh x] + C$$
$$\int \operatorname{csch} x\,dx = \ln\left|\tanh\frac{x}{2}\right| + C$$
$$\int \operatorname{sech}^2 x\,dx = \tanh x + C$$
$$\int \operatorname{csch}^2 x\,dx = -\coth x + C$$
$$\int \operatorname{sech} x\tanh x\,dx = -\operatorname{sech} x + C$$
$$\int \operatorname{csch} x\coth x\,dx = -\operatorname{csch} x + C$$
$$\int \frac{x\,dx}{a+bx} = \frac{1}{b^2}\bigl(a + bx - a\ln|a+bx|\bigr) + C$$
$$\int \frac{dx}{x(a+bx)} = \frac{1}{a}\ln\left|\frac{x}{a+bx}\right| + C$$
$$\int \frac{x\,dx}{(a+bx)^2} = \frac{1}{b^2}\left[\frac{a}{a+bx} + \ln|a+bx|\right] + C$$
$$\int \frac{dx}{\sqrt{x^2 \pm a^2}} = \ln\left|x + \sqrt{x^2 \pm a^2}\right| + C$$
$$\int \frac{dx}{a^2 + x^2} = \frac{1}{a}\tan^{-1}\frac{x}{a} + C$$
$$\int \frac{dx}{x\sqrt{x^2 - a^2}} = \frac{1}{a}\sec^{-1}\frac{x}{a} + C$$
$$\int \frac{dx}{x^2\sqrt{x^2 - a^2}} = \frac{\sqrt{x^2-a^2}}{a^2 x} + C$$
$$\int \frac{dx}{x\sqrt{a^2 + x^2}} = -\frac{1}{a}\ln\left|\frac{a + \sqrt{a^2+x^2}}{x}\right| + C$$
$$\int \frac{dx}{x^2\sqrt{a^2 + x^2}} = -\frac{\sqrt{a^2+x^2}}{a^2 x} + C$$
$$\int \frac{dx}{\sqrt{a^2 - x^2}} = \sin^{-1}\frac{x}{a} + C$$
$$\int \sqrt{a^2 - x^2}\,dx = \frac{x}{2}\sqrt{a^2 - x^2} + \frac{a^2}{2}\sin^{-1}\frac{x}{a} + C$$
$$\int \frac{dx}{\sqrt{2ax - x^2}} = \cos^{-1}\frac{a-x}{a} + C$$
Appendix C
Elementary Matrix Theory
This appendix presents the minimum amount of matrix theory needed to comprehend the material in Chapters 2 and 6 in the text. It is recommended that even those well versed in matrix theory read the material herein to become familiar with the notation. For those not so well versed, the presentation is terse and oriented toward them. For a more comprehensive presentation of matrix theory, we suggest the study of textbooks solely concerned with the subject.
c.1
BASIC DEFINITION
A matrix, denoted by a capital boldface letter such as A or Φ, or by the notation $[a_{ij}]$, is a rectangular array of elements. Such arrays occur in various branches of applied mathematics. Matrices are useful because they enable us to consider an array of many numbers as a single object and to perform calculations on these objects in a compact form. Matrix elements can be real numbers, complex numbers, polynomials, or functions. A matrix that contains only one row is called a row matrix, and a matrix that contains only one column is called a column matrix. Square matrices have the same number of rows and columns. A matrix A is of order m × n (read m by n) if it has m rows and n columns.
The complex conjugate of a matrix A is obtained by conjugating every element in A and is denoted by A*. A matrix is real if all elements of the matrix are real. Clearly, for real matrices, A* = A. Two matrices are equal if their corresponding elements are equal; A = B means $[a_{ij}] = [b_{ij}]$ for all i and j. The matrices must be of the same order.
C.2 BASIC OPERATIONS

C.2.1 Matrix Addition

A matrix C = A + B is formed by adding corresponding elements; that is,
$$[c_{ij}] = [a_{ij}] + [b_{ij}] \tag{C.1}$$
Matrix subtraction is analogously defined. The matrices must be of the same order. Matrix addition is commutative and associative.
C.2.2 Differentiation and Integration

The derivative or integral of a matrix is obtained by differentiating or integrating each element of the matrix.
C.2.3 Matrix Multiplication

Matrix multiplication is an extension of the dot product of vectors. Recall that the dot product of the two N-dimensional vectors u and v is defined as
$$\mathbf{u}\cdot\mathbf{v} = \sum_{i=1}^{N} u_i v_i$$
Elements $[c_{ij}]$ of the product matrix C = AB are found by taking the dot product of the $i$th row of the matrix A and the $j$th column of the matrix B, so that
$$[c_{ij}] = \sum_{k} a_{ik}b_{kj} \tag{C.2}$$
The process of matrix multiplication is, therefore, conveniently referred to as the multiplication of rows into columns, as demonstrated in Figure C.1.
This definition requires that the number of columns of A be the same as the number of rows of B. In that case, the matrices A and B are said to be compatible. Otherwise the product is undefined. Matrix multiplication is associative [(AB)C = A(BC)], but not, in general, commutative (AB ≠ BA). As an example, let
Figure C.1  Matrix multiplication.
$$\mathbf{A} = \begin{bmatrix} 3 & -2 \\ 1 & 5 \end{bmatrix} \quad \text{and} \quad \mathbf{B} = \begin{bmatrix} -4 & 1 \\ 1 & 6 \end{bmatrix}$$
Then, by Equation (C.2),
$$\mathbf{AB} = \begin{bmatrix} (3)(-4)+(-2)(1) & (3)(1)+(-2)(6) \\ (1)(-4)+(5)(1) & (1)(1)+(5)(6) \end{bmatrix} = \begin{bmatrix} -14 & -9 \\ 1 & 31 \end{bmatrix}$$
and
$$\mathbf{BA} = \begin{bmatrix} (-4)(3)+(1)(1) & (-4)(-2)+(1)(5) \\ (1)(3)+(6)(1) & (1)(-2)+(6)(5) \end{bmatrix} = \begin{bmatrix} -11 & 13 \\ 9 & 28 \end{bmatrix}$$
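A two-line numerical check of this example (our own illustration using NumPy) confirms that AB and BA differ, so matrix multiplication is not commutative in general.

```python
import numpy as np

A = np.array([[3, -2], [1, 5]])
B = np.array([[-4, 1], [1, 6]])

print(A @ B)                            # [[-14  -9] [  1  31]]
print(B @ A)                            # [[-11  13] [  9  28]]
print(np.array_equal(A @ B, B @ A))     # False
```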
Matrix multiplication has the following properties:
$$(\kappa\mathbf{A})\mathbf{B} = \kappa(\mathbf{AB}) = \mathbf{A}(\kappa\mathbf{B}) \tag{C.3a}$$
$$\mathbf{A}(\mathbf{BC}) = (\mathbf{AB})\mathbf{C} \tag{C.3b}$$
$$(\mathbf{A} + \mathbf{B})\mathbf{C} = \mathbf{AC} + \mathbf{BC} \tag{C.3c}$$
$$\mathbf{C}(\mathbf{A} + \mathbf{B}) = \mathbf{CA} + \mathbf{CB} \tag{C.3d}$$
$$\mathbf{AB} \ne \mathbf{BA}, \quad \text{in general} \tag{C.3e}$$
$$\mathbf{AB} = \mathbf{0} \quad \text{does not necessarily imply} \quad \mathbf{A} = \mathbf{0} \ \text{or} \ \mathbf{B} = \mathbf{0} \tag{C.3f}$$
These properties hold, provided that A, B, and C are such that the expressions on the left are defined ($\kappa$ is any number). An example of (C.3f) is
$$\begin{bmatrix} 5 & 1 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} -1 & 1 \\ 5 & -5 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$$
The properties expressed by Equations (C.3e) and (C.3f) are quite unusual because they have no counterparts in the standard multiplication of numbers and should, therefore, be carefully observed. As with vectors, there is no matrix division.
C.3 SPECIAL MATRICES

Zero Matrix.  The zero matrix, denoted by 0, is a matrix whose elements are all zero.

Diagonal Matrix.  The diagonal matrix, denoted by D, is a square matrix whose off-diagonal elements are all zeros.

Unit Matrix.  The unit matrix, denoted by I, is a diagonal matrix whose diagonal elements are all ones. (Note: AI = IA = A, where A is any compatible matrix.)

Upper Triangular Matrix.  The upper triangular matrix has all zeros below the main diagonal.

Lower Triangular Matrix.  The lower triangular matrix has all zeros above the main diagonal.

In upper or lower triangular matrices, the diagonal elements need not be zero. An upper triangular matrix added to or multiplied by an upper triangular matrix results in an upper triangular matrix, and similarly for lower triangular matrices.
For example, the matrices
$$\mathbf{T}_1 = \begin{bmatrix} 1 & 4 & 2 \\ 0 & 3 & -2 \\ 0 & 0 & -5 \end{bmatrix} \quad \text{and} \quad \mathbf{T}_2 = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & 0 & 7 \end{bmatrix}$$
are upper and lower triangular matrices, respectively.
Transpose Matrix.  The transpose matrix, denoted by $\mathbf{A}^T$, is the matrix resulting from an interchange of the rows and columns of a given matrix A. If $\mathbf{A} = [a_{ij}]$, then $\mathbf{A}^T = [a_{ji}]$, so that the element in the $i$th row and $j$th column of A becomes the element in the $j$th row and $i$th column of $\mathbf{A}^T$.

Complex Conjugate Transpose Matrix.  The complex conjugate transpose matrix, denoted by $\mathbf{A}^{\dagger}$, is the matrix whose elements are the complex conjugates of the elements of $\mathbf{A}^T$. Note that
$$(\mathbf{AB})^T = \mathbf{B}^T\mathbf{A}^T \quad \text{and} \quad (\mathbf{AB})^{\dagger} = \mathbf{B}^{\dagger}\mathbf{A}^{\dagger}$$
The following definitions apply to square matrices:

Symmetric Matrix.  Matrix A is symmetric if
$$\mathbf{A} = \mathbf{A}^T$$

Hermitian Matrix.  Matrix A is Hermitian if
$$\mathbf{A} = \mathbf{A}^{\dagger}$$

Skew-Symmetric Matrix.  Matrix A is skew symmetric if
$$\mathbf{A} = -\mathbf{A}^T$$

For example, the matrices
$$\mathbf{A} = \begin{bmatrix} 1 & 2 & 4 \\ 2 & 5 & -3 \\ 4 & -3 & 6 \end{bmatrix} \quad \text{and} \quad \mathbf{B} = \begin{bmatrix} 0 & -3 & 4 \\ 3 & 0 & -7 \\ -4 & 7 & 0 \end{bmatrix}$$
are symmetric and skew-symmetric matrices, respectively.

Normal Matrix.  Matrix A is normal if
$$\mathbf{A}^{\dagger}\mathbf{A} = \mathbf{A}\mathbf{A}^{\dagger}$$

Unitary Matrix.  Matrix A is unitary if
$$\mathbf{A}^{\dagger}\mathbf{A} = \mathbf{I}$$
A real unitary matrix is called an orthogonal matrix.
C.4 THE INVERSE OF A MATRIX

The inverse of an n × n matrix A is denoted by $\mathbf{A}^{-1}$ and is an n × n matrix such that
$$\mathbf{A}\mathbf{A}^{-1} = \mathbf{A}^{-1}\mathbf{A} = \mathbf{I}$$
where I is the n × n unit matrix. If the determinant of A is zero, then A has no inverse and is called singular; on the other hand, if the determinant is nonzero, the inverse exists, and A is called a nonsingular matrix.
In general, finding the inverse of a matrix is a tedious process. For some special cases, the inverse is easily determined. For a 2 × 2 matrix
$$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$$
we have
$$\mathbf{A}^{-1} = \frac{1}{a_{11}a_{22} - a_{12}a_{21}}\begin{bmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{bmatrix} \tag{C.4}$$
provided that $a_{11}a_{22} \ne a_{12}a_{21}$. For a diagonal matrix, we have
$$\mathbf{A} = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix}, \qquad \mathbf{A}^{-1} = \begin{bmatrix} \dfrac{1}{a_{11}} & 0 & \cdots & 0 \\ 0 & \dfrac{1}{a_{22}} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \dfrac{1}{a_{nn}} \end{bmatrix} \tag{C.5}$$
provided that $a_{ii} \ne 0$ for any $i$.
The inverse of the inverse is the given matrix A; that is,
$$(\mathbf{A}^{-1})^{-1} = \mathbf{A} \tag{C.6}$$
The inverse of a product AC can be obtained by inverting each factor and multiplying the results in reverse order:
$$(\mathbf{AC})^{-1} = \mathbf{C}^{-1}\mathbf{A}^{-1} \tag{C.7}$$
For higher order matrices, the inverse is computed using Cramer's rule:
$$\mathbf{A}^{-1} = \frac{1}{\det\mathbf{A}}\,\operatorname{adj}\mathbf{A} \tag{C.8}$$
Here, det A is the determinant of A, and adj A is the adjoint matrix of A. The following is a summary of the steps needed to calculate the inverse of an n × n square matrix A:
1. Calculate the matrix of minors. (A minor of the element $a_{ij}$, denoted by $\det\mathbf{M}_{ij}$, is the determinant of the matrix formed by deleting the $i$th row and the $j$th column of the matrix A.)
2. Calculate the matrix of cofactors. (A cofactor of the element $a_{ij}$, denoted by $c_{ij}$, is related to the minor by $c_{ij} = (-1)^{i+j}\det\mathbf{M}_{ij}$.)
3. Calculate the adjoint matrix of A by transposing the matrix of cofactors of A:
$$\operatorname{adj}\mathbf{A} = [c_{ij}]^T$$
4. Calculate the determinant of A using
$$\det\mathbf{A} = \sum_{i=1}^{n} a_{ij}c_{ij} \ \text{ for any column } j \qquad \text{or} \qquad \det\mathbf{A} = \sum_{j=1}^{n} a_{ij}c_{ij} \ \text{ for any row } i$$
5. Use Equation (C.8) to calculate $\mathbf{A}^{-1}$.
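The five steps above translate directly into code. The following Python sketch (our own illustration; in practice a library routine such as numpy.linalg.inv is preferable) implements the cofactor/adjoint procedure and checks it against NumPy on the 2 × 2 example used earlier.

```python
import numpy as np

def inverse_via_adjoint(A):
    """Inverse of a square matrix via minors, cofactors, adjoint, and determinant."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    cof = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)   # delete row i, column j
            cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)      # cofactor c_ij
    det_A = np.sum(A[0, :] * cof[0, :])                             # expansion along row 0
    if det_A == 0:
        raise ValueError("matrix is singular")
    return cof.T / det_A                                            # adj(A) / det(A)

A = np.array([[3.0, -2.0], [1.0, 5.0]])
print(inverse_via_adjoint(A))
print(np.linalg.inv(A))     # the two results should agree
```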
C.5 EIGENVALUES AND EIGENVECTORS

The eigenvalues of an n × n matrix A are the solutions to the equation
$$\mathbf{A}\mathbf{x} = \lambda\mathbf{x} \tag{C.9}$$
Equation (C.9) can be written as $(\mathbf{A} - \lambda\mathbf{I})\mathbf{x} = \mathbf{0}$. Nontrivial solution vectors x exist only if $\det(\mathbf{A} - \lambda\mathbf{I}) = 0$. This is an algebraic equation of degree $n$ in $\lambda$ and is called the characteristic equation of the matrix. There are $n$ roots for this equation, although some may be repeated. An eigenvalue of the matrix A is said to be distinct if it is not a repeated root of the characteristic equation. The polynomial $g(\lambda) = \det[\mathbf{A} - \lambda\mathbf{I}]$ is called the characteristic polynomial of A. Associated with each eigenvalue $\lambda_i$ is a nonzero solution vector $\mathbf{x}_i$ of the eigenvalue equation $\mathbf{A}\mathbf{x}_i = \lambda_i\mathbf{x}_i$. This solution vector is called an eigenvector. For example, the eigenvalues of the matrix
$$\mathbf{A} = \begin{bmatrix} 3 & 4 \\ 1 & 3 \end{bmatrix}$$
are obtained by solving the equation
$$\det\begin{bmatrix} 3-\lambda & 4 \\ 1 & 3-\lambda \end{bmatrix} = 0 \quad \text{or} \quad (3-\lambda)(3-\lambda) - 4 = 0$$
This second-degree equation has two real roots, $\lambda_1 = 1$ and $\lambda_2 = 5$. There are two eigenvectors. The eigenvector associated with $\lambda_1 = 1$ is the solution to
$$\begin{bmatrix} 3 & 4 \\ 1 & 3 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \quad \text{or} \quad \begin{bmatrix} 2 & 4 \\ 1 & 2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$
Then $2x_1 + 4x_2 = 0$ and $x_1 + 2x_2 = 0$, from which it follows that $x_1 = -2x_2$. By choosing $x_2 = 1$, we find that the eigenvector is
$$\mathbf{x}_1 = \begin{bmatrix} -2 \\ 1 \end{bmatrix}$$
The eigenvector associated with $\lambda_2 = 5$ is the solution to
$$\begin{bmatrix} 3 & 4 \\ 1 & 3 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = 5\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \quad \text{or} \quad \begin{bmatrix} -2 & 4 \\ 1 & -2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$
which has the solution $x_1 = 2x_2$. Choosing $x_2 = 1$ gives
$$\mathbf{x}_2 = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$$
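The same eigenvalues and eigenvectors can be obtained numerically. The snippet below (our own illustration) rescales each computed eigenvector so that its second component is 1, which makes the comparison with the hand calculation immediate.

```python
import numpy as np

A = np.array([[3.0, 4.0], [1.0, 3.0]])
eigvals, eigvecs = np.linalg.eig(A)

print(eigvals)                 # 5 and 1 (ordering may differ)
for lam, v in zip(eigvals, eigvecs.T):
    print(lam, v / v[1])       # proportional to [2, 1] for lambda = 5, [-2, 1] for lambda = 1
```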
C.6 FUNCTIONS OF A MATRIX

Any analytic scalar function $f(t)$ of a scalar $t$ can be uniquely expressed in a convergent Maclaurin series as
$$f(t) = \sum_{k=0}^{\infty}\left\{\frac{d^k f(t)}{dt^k}\bigg|_{t=0}\right\}\frac{t^k}{k!}$$
The same type of expansion can be used to define functions of matrices. Thus, the function $f(\mathbf{A})$ of the n × n matrix A can be expanded as
$$f(\mathbf{A}) = \sum_{k=0}^{\infty}\left\{\frac{d^k f(t)}{dt^k}\bigg|_{t=0}\right\}\frac{\mathbf{A}^k}{k!} \tag{C.10}$$
For example,
$$\sin\mathbf{A} = (\sin 0)\mathbf{I} + (\cos 0)\mathbf{A} + (-\sin 0)\frac{\mathbf{A}^2}{2!} + \cdots = \mathbf{A} - \frac{\mathbf{A}^3}{3!} + \frac{\mathbf{A}^5}{5!} - \cdots = \sum_{k=0}^{\infty}\frac{(-1)^k\mathbf{A}^{2k+1}}{(2k+1)!}$$
and
$$\exp[\mathbf{A}t] = \exp[0]\mathbf{I} + \exp[0]\mathbf{A}t + \exp[0]\frac{(\mathbf{A}t)^2}{2!} + \cdots = \mathbf{I} + \mathbf{A}t + \frac{\mathbf{A}^2 t^2}{2!} + \cdots + \frac{\mathbf{A}^n t^n}{n!} + \cdots$$
The Cayley-Hamilton (C-H) theorem states that any matrix satisfies its own characteristic equation. That is, given an arbitrary n × n matrix A with characteristic polynomial $g(\lambda) = \det(\mathbf{A} - \lambda\mathbf{I})$, it follows that $g(\mathbf{A}) = \mathbf{0}$. As an example, if
$$\mathbf{A} = \begin{bmatrix} 3 & 4 \\ 1 & 3 \end{bmatrix}$$
so that
$$\det[\mathbf{A} - \lambda\mathbf{I}] = g(\lambda) = \lambda^2 - 6\lambda + 5$$
then, by the Cayley-Hamilton theorem, we have
$$g(\mathbf{A}) = \mathbf{A}^2 - 6\mathbf{A} + 5\mathbf{I} = \mathbf{0}$$
or
$$\mathbf{A}^2 = 6\mathbf{A} - 5\mathbf{I} \tag{C.11}$$
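Equation (C.11) can be confirmed with a two-line numerical check (our own illustration):

```python
import numpy as np

A = np.array([[3, 4], [1, 3]])
I = np.eye(2, dtype=int)

print(A @ A - 6 * A + 5 * I)                  # [[0 0] [0 0]]
print(np.array_equal(A @ A, 6 * A - 5 * I))   # True
```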
In general, the Cayley-Hamilton theorem enables us to express any power of a matrix in terms of a linear combination of $\mathbf{A}^k$ for $k = 0, 1, 2, \ldots, n-1$. For example, $\mathbf{A}^3$ can be found from Equation (C.11) by multiplying both sides by A to obtain
$$\mathbf{A}^3 = 6\mathbf{A}^2 - 5\mathbf{A} = 6[6\mathbf{A} - 5\mathbf{I}] - 5\mathbf{A} = 31\mathbf{A} - 30\mathbf{I}$$
Similarly, higher powers of A can be obtained by this method. Multiplying Equation (C.11) by $\mathbf{A}^{-1}$, we obtain
$$\mathbf{A}^{-1} = \frac{6\mathbf{I} - \mathbf{A}}{5}$$
assuming that $\mathbf{A}^{-1}$ exists. As a consequence of the C-H theorem, it follows that any function $f(\mathbf{A})$ can be expressed as
$$f(\mathbf{A}) = \sum_{k=0}^{n-1}\gamma_k\mathbf{A}^k$$
The calculation of $\gamma_0, \gamma_1, \ldots, \gamma_{n-1}$ can be carried out by the iterative method used in the calculation of $\mathbf{A}^n$ and $\mathbf{A}^{n+1}$. It can be shown that if the eigenvalues of A are distinct, then the set of coefficients $\gamma_0, \gamma_1, \ldots, \gamma_{n-1}$ satisfies the following equations:
$$\begin{aligned}
f(\lambda_1) &= \gamma_0 + \gamma_1\lambda_1 + \cdots + \gamma_{n-1}\lambda_1^{\,n-1}\\
f(\lambda_2) &= \gamma_0 + \gamma_1\lambda_2 + \cdots + \gamma_{n-1}\lambda_2^{\,n-1}\\
&\;\;\vdots\\
f(\lambda_n) &= \gamma_0 + \gamma_1\lambda_n + \cdots + \gamma_{n-1}\lambda_n^{\,n-1}
\end{aligned}$$
As an example, let us calculate $\exp[\mathbf{A}t]$, where
$$\mathbf{A} = \begin{bmatrix} 3 & 4 \\ 1 & 3 \end{bmatrix}$$
The eigenvalues of A are $\lambda_1 = 1$ and $\lambda_2 = 5$, with $f(\mathbf{A}) = \exp[\mathbf{A}t]$. Then
$$\exp[\mathbf{A}t] = \sum_{k=0}^{1}\gamma_k(t)\mathbf{A}^k = \gamma_0(t)\mathbf{I} + \gamma_1(t)\mathbf{A}$$
where $\gamma_0(t)$ and $\gamma_1(t)$ are the solutions to
$$\exp[t] = \gamma_0(t) + \gamma_1(t)$$
$$\exp[5t] = \gamma_0(t) + 5\gamma_1(t)$$
so that
$$\gamma_1(t) = \tfrac{1}{4}\bigl(\exp[5t] - \exp[t]\bigr), \qquad \gamma_0(t) = \tfrac{1}{4}\bigl(5\exp[t] - \exp[5t]\bigr)$$
Therefore,
$$\exp[\mathbf{A}t] = \gamma_0(t)\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \gamma_1(t)\begin{bmatrix} 3 & 4 \\ 1 & 3 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{2}\exp[5t] + \tfrac{1}{2}\exp[t] & \exp[5t] - \exp[t] \\[4pt] \tfrac{1}{4}\exp[5t] - \tfrac{1}{4}\exp[t] & \tfrac{1}{2}\exp[5t] + \tfrac{1}{2}\exp[t] \end{bmatrix}$$
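The closed form just derived can be checked against the matrix exponential computed by SciPy at any sample time (our own illustration):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[3.0, 4.0], [1.0, 3.0]])
t = 0.3

e1, e5 = np.exp(t), np.exp(5 * t)
closed_form = np.array([[0.5 * (e5 + e1),        e5 - e1],
                        [0.25 * (e5 - e1), 0.5 * (e5 + e1)]])

print(np.allclose(expm(A * t), closed_form))   # True
```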
If the eigenvalues are not distinct, then we have fewer equations than unknowns. By differentiating the equation corresponding to the repeated eigenvalue with respect to $\lambda$, we obtain a new equation that can be used to solve for $\gamma_0(t), \gamma_1(t), \ldots, \gamma_{n-1}(t)$. For example, consider
$$\mathbf{A} = \begin{bmatrix} -1 & 0 & 0 \\ 0 & -4 & 4 \\ 0 & -1 & 0 \end{bmatrix}$$
This matrix has eigenvalues $\lambda_1 = -1$ and $\lambda_2 = \lambda_3 = -2$. The coefficients $\gamma_0(t)$, $\gamma_1(t)$, and $\gamma_2(t)$ are obtained as the solution to the following set of equations:
$$\exp[-t] = \gamma_0(t) - \gamma_1(t) + \gamma_2(t)$$
$$\exp[-2t] = \gamma_0(t) - 2\gamma_1(t) + 4\gamma_2(t)$$
$$t\exp[-2t] = \gamma_1(t) - 4\gamma_2(t)$$
Solving for the $\gamma_i$ yields
$$\gamma_0(t) = 4\exp[-t] - 3\exp[-2t] - 2t\exp[-2t]$$
$$\gamma_1(t) = 4\exp[-t] - 4\exp[-2t] - 3t\exp[-2t]$$
$$\gamma_2(t) = \exp[-t] - \exp[-2t] - t\exp[-2t]$$
Thus,
$$\exp[\mathbf{A}t] = \gamma_0(t)\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} + \gamma_1(t)\begin{bmatrix} -1 & 0 & 0 \\ 0 & -4 & 4 \\ 0 & -1 & 0 \end{bmatrix} + \gamma_2(t)\begin{bmatrix} 1 & 0 & 0 \\ 0 & 12 & -16 \\ 0 & 4 & -4 \end{bmatrix}$$
$$= \begin{bmatrix} \exp[-t] & 0 & 0 \\ 0 & \exp[-2t] - 2t\exp[-2t] & 4t\exp[-2t] \\ 0 & -t\exp[-2t] & \exp[-2t] + 2t\exp[-2t] \end{bmatrix}$$
Appendix D
Partial Fractions
Expansion in partial fractions is a technique used to reduce proper rational functions¹ of the form N(s)/D(s) into sums of simple terms. Each term by itself is a proper rational function with denominator of degree 2 or less. More specifically, if N(s) and D(s) are polynomials, and the degree of N(s) is less than the degree of D(s), then it follows from the theorem of algebra that
$$\frac{N(s)}{D(s)} = F_1 + F_2 + \cdots + F_r \tag{D.1}$$
where each $F_i$ has one of the forms
$$\frac{A}{(s+b)^{\mu}} \quad \text{or} \quad \frac{Bs + C}{(s^2 + ps + q)^{\nu}}$$
where the polynomial $s^2 + ps + q$ is irreducible, and $\mu$ and $\nu$ are nonnegative integers. The sum on the right-hand side of Equation (D.1) is called the partial-fraction decomposition of N(s)/D(s), and each $F_i$ is called a partial fraction. By using long division, improper rational functions can be written as a sum of a polynomial of degree M - N and a proper rational function, where M is the degree of the polynomial N(s) and N is the degree of the polynomial D(s). For example, given
$$\frac{s^4 + 3s^3 - 5s^2 - 1}{s^3 + 2s^2 - s + 1}$$
we obtain, by long division,

¹A proper rational function is a ratio of two polynomials, with the degree of the numerator less than the degree of the denominator.
$$\frac{s^4 + 3s^3 - 5s^2 - 1}{s^3 + 2s^2 - s + 1} = s + 1 - \frac{6s^2 + 2}{s^3 + 2s^2 - s + 1}$$
The partial-fraction decomposition is then found for $(6s^2 + 2)/(s^3 + 2s^2 - s + 1)$. Partial fractions are very useful in integration and also in finding the inverse of many transforms, such as Laplace, Fourier, and Z-transforms. All these operators share one property in common: linearity.
The first step in the partial-fraction technique is to express D(s) as a product of linear factors $s + b$ or irreducible quadratic factors $s^2 + ps + q$. Repeated factors are then collected, so that D(s) is a product of distinct factors of the form $(s+b)^{\mu}$ or $(s^2 + ps + q)^{\nu}$, where $\mu$ and $\nu$ are nonnegative integers. The form of the partial fractions depends on the type of factors we have for D(s). There are four different cases.

D.1 CASE I: NONREPEATED LINEAR FACTORS
To every nonrepeated factor $s + b$ of D(s), there corresponds a partial fraction $A/(s+b)$. In general, the rational function can be written as
$$\frac{N(s)}{D(s)} = \frac{A}{s+b} + \cdots$$
where
$$A = \left[(s+b)\frac{N(s)}{D(s)}\right]_{s=-b} \tag{D.2}$$
Example D.1

Consider the rational function
$$\frac{37 - 11s}{s^3 - 4s^2 + s + 6}$$
The denominator has the factored form $(s+1)(s-2)(s-3)$. All these factors are linear, nonrepeated factors. Thus, for the factor $s+1$, there corresponds a partial fraction of the form $A/(s+1)$. Similarly, for the factors $s-2$ and $s-3$, there correspond partial fractions $B/(s-2)$ and $C/(s-3)$, respectively. The decomposition of Equation (D.1) then has the form
$$\frac{37 - 11s}{s^3 - 4s^2 + s + 6} = \frac{A}{s+1} + \frac{B}{s-2} + \frac{C}{s-3}$$
The values of A, B, and C are obtained using Equation (D.2):
$$A = \left[\frac{37 - 11s}{(s-2)(s-3)}\right]_{s=-1} = 4$$
$$B = \left[\frac{37 - 11s}{(s+1)(s-3)}\right]_{s=2} = -5$$
$$C = \left[\frac{37 - 11s}{(s+1)(s-2)}\right]_{s=3} = 1$$
The partial-fraction decomposition is, therefore,
$$\frac{37 - 11s}{s^3 - 4s^2 + s + 6} = \frac{4}{s+1} - \frac{5}{s-2} + \frac{1}{s-3}$$
Example D.2

Let us find the partial-fraction decomposition of
$$\frac{2s + 1}{s^3 + 3s^2 - 4s}$$
We factor the polynomial as
$$D(s) = s^3 + 3s^2 - 4s = s(s+4)(s-1)$$
and then use the partial-fraction form
$$\frac{2s + 1}{s^3 + 3s^2 - 4s} = \frac{A}{s} + \frac{B}{s+4} + \frac{C}{s-1}$$
Using Equation (D.2), we find that the coefficients are
$$A = \left[\frac{2s+1}{(s+4)(s-1)}\right]_{s=0} = -\frac{1}{4}, \qquad B = \left[\frac{2s+1}{s(s-1)}\right]_{s=-4} = -\frac{7}{20}, \qquad C = \left[\frac{2s+1}{s(s+4)}\right]_{s=1} = \frac{3}{5}$$
The partial-fraction decomposition is, therefore,
$$\frac{2s + 1}{s^3 + 3s^2 - 4s} = -\frac{1}{4s} - \frac{7}{20(s+4)} + \frac{3}{5(s-1)}$$
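For a quick cross-check, SciPy's partial-fraction routine reproduces the residues of Example D.2 (our own illustration; the ordering of the returned poles may vary):

```python
from scipy.signal import residue

# Numerator 2s + 1, denominator s^3 + 3s^2 - 4s.
r, p, k = residue([2, 1], [1, 3, -4, 0])

print(r)   # residues: approximately -0.35, 0.6, -0.25  (i.e., -7/20, 3/5, -1/4)
print(p)   # poles:    approximately -4, 1, 0
print(k)   # empty polynomial part, since the function is proper
```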
D.2 CASE II: REPEATED LINEAR FACTORS

To each repeated factor $(s+b)^p$, there corresponds the partial fraction
$$\frac{A_1}{s+b} + \frac{A_2}{(s+b)^2} + \cdots + \frac{A_p}{(s+b)^p}$$
The coefficients $A_k$ can be determined by the formulas
$$A_p = \left[(s+b)^p\frac{N(s)}{D(s)}\right]_{s=-b} \tag{D.3}$$
$$A_k = \frac{1}{(p-k)!}\left[\frac{d^{p-k}}{ds^{p-k}}\left((s+b)^p\frac{N(s)}{D(s)}\right)\right]_{s=-b}, \qquad k = 1, 2, \ldots, p-1 \tag{D.4}$$
Example D.3

Consider the rational function
$$\frac{2s^2 - 25s - 33}{s^3 - 3s^2 - 9s - 5}$$
The denominator has the factored form $D(s) = (s+1)^2(s-5)$. For the factor $s-5$, there corresponds a partial fraction of the form $B/(s-5)$. The factor $(s+1)^2$ is a linear, repeated factor to which there corresponds a partial fraction of the form $A_1/(s+1) + A_2/(s+1)^2$. The decomposition of Equation (D.1) then has the form
$$\frac{2s^2 - 25s - 33}{s^3 - 3s^2 - 9s - 5} = \frac{B}{s-5} + \frac{A_1}{s+1} + \frac{A_2}{(s+1)^2} \tag{D.5}$$
The values of B, $A_1$, and $A_2$ are obtained using Equations (D.2), (D.3), and (D.4) as follows:
$$B = \left[\frac{2s^2 - 25s - 33}{(s+1)^2}\right]_{s=5} = -3$$
$$A_2 = \left[\frac{2s^2 - 25s - 33}{s-5}\right]_{s=-1} = 1$$
$$A_1 = \left[\frac{d}{ds}\left(\frac{2s^2 - 25s - 33}{s-5}\right)\right]_{s=-1} = \left[\frac{2s^2 - 20s + 158}{(s-5)^2}\right]_{s=-1} = 5$$
Hence, the rational function in Equation (D.5) can be written as
$$\frac{2s^2 - 25s - 33}{s^3 - 3s^2 - 9s - 5} = -\frac{3}{s-5} + \frac{5}{s+1} + \frac{1}{(s+1)^2}$$
Example D.4

Let us find the partial-fraction decomposition of
$$\frac{3s^3 - 18s^2 + 29s - 4}{s^4 - 5s^3 + 6s^2 + 4s - 8}$$
The denominator can be factored as $(s+1)(s-2)^3$. Since we have a repeated factor of order 3, the corresponding partial fraction is
$$\frac{3s^3 - 18s^2 + 29s - 4}{s^4 - 5s^3 + 6s^2 + 4s - 8} = \frac{B}{s+1} + \frac{A_1}{s-2} + \frac{A_2}{(s-2)^2} + \frac{A_3}{(s-2)^3} \tag{D.6}$$
The coefficient B can be obtained using Equation (D.2):
$$B = \left[\frac{3s^3 - 18s^2 + 29s - 4}{(s-2)^3}\right]_{s=-1} = 2$$
The coefficients $A_i$, $i = 1, 2, 3$, are obtained using Equations (D.3) and (D.4). First,
$$A_3 = \left[\frac{3s^3 - 18s^2 + 29s - 4}{s+1}\right]_{s=2} = 2$$
$$A_2 = \left[\frac{d}{ds}\left(\frac{3s^3 - 18s^2 + 29s - 4}{s+1}\right)\right]_{s=2} = -3$$
Similarly, $A_1$ can be found using Equation (D.4). In many cases, it is much easier to use the following technique, especially after finding all but one coefficient: Multiplying both sides of Equation (D.6) by $(s+1)(s-2)^3$ gives
$$3s^3 - 18s^2 + 29s - 4 = B(s-2)^3 + A_1(s+1)(s-2)^2 + A_2(s+1)(s-2) + A_3(s+1)$$
If we compare the coefficient of $s^3$ on both sides, we obtain
$$3 = B + A_1$$
Since $B = 2$, it follows that $A_1 = 1$. The resulting partial-fraction decomposition is then
$$\frac{3s^3 - 18s^2 + 29s - 4}{s^4 - 5s^3 + 6s^2 + 4s - 8} = \frac{2}{s+1} + \frac{1}{s-2} - \frac{3}{(s-2)^2} + \frac{2}{(s-2)^3}$$

D.3 CASE III: NONREPEATED IRREDUCIBLE SECOND-DEGREE FACTORS

In the case of a nonrepeated irreducible second-degree polynomial, we set up fractions of the form
$$\frac{As + B}{s^2 + ps + q} \tag{D.7}$$
The best way to find the coefficients is to equate the coefficients of different powers of $s$, as is demonstrated in the following example.
Example D.5

Consider the rational function
$$\frac{s^2 - s - 21}{2s^3 - s^2 + 8s - 4}$$
We factor the polynomial as $D(s) = (s^2 + 4)(2s - 1)$ and use the partial-fraction form
$$\frac{s^2 - s - 21}{2s^3 - s^2 + 8s - 4} = \frac{As + B}{s^2 + 4} + \frac{C}{2s - 1}$$
Multiplying by the lowest common denominator gives
$$s^2 - s - 21 = (As + B)(2s - 1) + C(s^2 + 4) \tag{D.8}$$
The values of A, B, and C can be found by comparing the coefficients of different powers of $s$ or by substituting values for $s$ that make various factors vanish. For example, substituting $s = 1/2$, we obtain $(1/4) - (1/2) - 21 = C(17/4)$, which has the solution $C = -5$. The remaining coefficients can be found by comparing different powers of $s$. Rearranging the right-hand side of Equation (D.8) gives
$$s^2 - s - 21 = (2A + C)s^2 + (2B - A)s - B + 4C$$
Comparing the coefficient of $s^2$ on both sides, we see that $2A + C = 1$. Knowing C results in $A = 3$. Similarly, comparing the constant terms yields $-B + 4C = -21$, or $B = 1$. Thus, the partial-fraction decomposition of the rational function is
$$\frac{s^2 - s - 21}{2s^3 - s^2 + 8s - 4} = \frac{3s + 1}{s^2 + 4} - \frac{5}{2s - 1}$$
D.4 CASE IV: REPEATED IRREDUCIBLE SECOND-DEGREE FACTORS

For repeated irreducible second-degree factors, we have factors of the form
$$\frac{A_1 s + B_1}{s^2 + ps + q} + \frac{A_2 s + B_2}{(s^2 + ps + q)^2} + \cdots + \frac{A_{\nu}s + B_{\nu}}{(s^2 + ps + q)^{\nu}} \tag{D.9}$$
Again, the best way to find the coefficients is to equate the coefficients of the different powers of $s$.
Example D.6

As an example of repeated irreducible second-degree factors, consider
$$\frac{s^2 - 6s + 7}{(s^2 - 4s + 5)^2} \tag{D.10}$$
Note that the denominator can be written as $[(s-2)^2 + 1]^2$. Therefore, applying Equation (D.9) with $\nu = 2$, we can write the partial fractions for Equation (D.10) as
$$\frac{s^2 - 6s + 7}{(s^2 - 4s + 5)^2} = \frac{A_1 s + B_1}{(s-2)^2 + 1} + \frac{A_2 s + B_2}{[(s-2)^2 + 1]^2} \tag{D.11}$$
Multiplying both sides of Equation (D.11) by $[(s-2)^2 + 1]^2$ and rearranging terms, we obtain
$$s^2 - 6s + 7 = A_1 s^3 + (B_1 - 4A_1)s^2 + (5A_1 - 4B_1 + A_2)s + 5B_1 + B_2 \tag{D.12}$$
The constants $A_1$, $B_1$, $A_2$, and $B_2$ can be determined by comparing the coefficients of $s$ in the left- and right-hand sides of Equation (D.12). The coefficient of $s^3$ yields $A_1 = 0$; the coefficient of $s^2$ yields
$$1 = B_1 - 4A_1, \quad \text{or} \quad B_1 = 1$$
Comparing the coefficient of $s$ on both sides, we obtain
$$-6 = 5A_1 - 4B_1 + A_2, \quad \text{or} \quad A_2 = -2$$
Comparing the constant terms yields
$$7 = 5B_1 + B_2, \quad \text{or} \quad B_2 = 2$$
Finally, the partial-fraction decomposition of Equation (D.10) is
$$\frac{s^2 - 6s + 7}{(s^2 - 4s + 5)^2} = \frac{1}{(s-2)^2 + 1} - \frac{2s - 2}{[(s-2)^2 + 1]^2}$$
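This Case IV result can be cross-checked symbolically (our own illustration using SymPy):

```python
import sympy as sp

s = sp.symbols('s')
expr = (s**2 - 6*s + 7) / (s**2 - 4*s + 5)**2

# Expected: 1/(s**2 - 4*s + 5) + (2 - 2*s)/(s**2 - 4*s + 5)**2 (printed form may differ)
print(sp.apart(expr))
```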
Bibliography

1. Brigham, E. Oran. The Fast Fourier Transform and Its Applications. Englewood Cliffs, NJ: Prentice-Hall, 1988.
2. Gabel, Robert A., and Richard A. Roberts. Signals and Linear Systems, 3d ed. New York: Wiley, 1987.
3. Johnson, Johnny R. Introduction to Digital Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1989.
4. Lathi, B. P. Signals and Systems. Carmichael, CA: Berkeley-Cambridge Press, 1987.
5. McGillem, Clare D., and George R. Cooper. Continuous and Discrete Signal and System Analysis, 2d ed. New York: Holt, Rinehart and Winston, 1984.
6. O'Flynn, Michael, and Eugene Moriarity. Linear Systems: Time Domain and Transform Analysis. New York: Harper and Row, 1987.
7. Oppenheim, Alan V., and Ronald W. Schafer. Discrete-Time Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1989.
8. Oppenheim, Alan V., Alan S. Willsky, and S. Hamid Nawab. Signals and Systems, 2d ed. Englewood Cliffs, NJ: Prentice-Hall, 1997.
9. Papoulis, Athanasios. The Fourier Integral and Its Applications. New York: McGraw-Hill, 1962.
10. Phillips, Charles L., and John Parr. Signals, Systems and Transforms. Englewood Cliffs, NJ: Prentice-Hall, 1995.
11. Poularikas, Alexander D., and Samuel Seely. Elements of Signals and Systems. Boston: PWS-Kent, 1988.
12. Proakis, John G., and Dimitris G. Manolakis. Introduction to Digital Signal Processing. New York: Macmillan, 1988.
13. Scott, Donald E. An Introduction to Circuit Analysis: A Systems Approach. New York: McGraw-Hill, 1987.
14. Siebert, William M. Circuits, Signals, and Systems. New York: McGraw-Hill, 1986.
15. Strum, Robert D., and Donald E. Kirk. First Principles of Discrete Systems and Digital Signal Processing. Reading, MA: Addison-Wesley, 1988.
16. Swisher, George M. Introduction to Linear Systems Analysis. Beaverton, OR: Matrix, 1976.
17. Ziemer, Roger E., William H. Tranter, and D. Ronald Fannin. Signals and Systems: Continuous and Discrete, 2d ed. New York: Macmillan, 1989.
lndex
A
A/D conversion 33,364 (see
also Sampling)
Adders, 69, 306
Amplitude modulation, l9()
Analog signals, 33
Anticipatory system, 48
Aperiodic signal, 4
Average power, 7
B
Band edge, 202
BandJimited sign al, 122, 185, 197
Band-pass filter, 200
Bandwidth:
absolute, 205
defDition, 204
equivalcnt,206
half-power (3-dB), 205
null-to-null, 206
rms, 207
27o,207
Basic system components, 6E
Bilateral l-aplace transform, see Laplacr
Transform
Bilateral Z-transform, see Z-transform
Bilinear lransform. 473
Binomial coefficients, 496
Bounded-input/bounded-outpur
slability, see Stable sysrem
Buttervorth Iilter, 458
c
Canonical forms, continuous-time:
first form, 70, 250
second form, 71, 52
Canonical forms, discrete-lime:
Iint form, 307-308
second form, 30E-309
Cascade interconnection, 252
Causal signal, 65
Causal sptem, tE, 64
Cayley-Hamilton theorem, 83. 91. 314. 509
clraracteristic equation,
Chebyshev filter, ,152
91
Circular convolution. see Periodic
convolution
Circular shift, see Pcriodic shift
Classification oI continuous-time sysrems, 42
Closed-loop system. 256
Coefficient multipliers, 306
Cofactor, 507
Complex exponcntial, 4, 285
Complex numbers, .185
arithmetical opcrations. zl{17
conjugate,.l8T
polar form, 485
powers and roots.489
trigonometric [orm, el86
Conlinuous-time signals, 2
piecewise continuous, 3
Continuous-lime systems:
causal,4li
classification.
.ll-52
detinition,.53
ditferential equation representation, 67
impulse responsc representatioa, 53
invertibility and invcrse, 50
linear and nonlinenr, 42
with and rryithour memory. 47
simulation diagrams, 70
stable,5'l
stability con:;idcrations, 91
state-variable rcprcsentation, 76
lime-varying and t ime-invariant, t[6
Convolution intcgral:
de{inition, -52
graphical intcrprctation. 58
properlies, 54
Convolution propcrry:
continuous-limc Fourier transform, l8l
discrele Fourier transform. 423
discrete-Timc Fourier trans[orm, 346
Fourier series, ll I
Z-transform. 375
Convolution sum:
defrnition. 287
graphical interpretalion, 2E8
properties, 293
tabular forms tor finite sequences, 291-92
521
522
lndex
D
frequenry function, 347
D/A conversion, 364
Decimation, 359
Detinite integrals. 496
6-function:
definition, 22
derivatives, 30
properties, 5-29
De Moiwe &eorem, zE9
Design of analog filters:
Butterwosth tilter. 458
Chebyshev filler,462
frequency transformations, 455
ideal filters, 452-53
specifications for low-pass filter. 454
Design of digital filten, 468
computer-aided design, zEl
FIR design using windows, 475
frequency transformations, 456
IIR filter design by bilinear
transformation, 473
IIR filter design by impulse invariance,
inverse relation, 342
469
I
linear-phase filten, 475
Diagonal matrix, 5(X
Difference equations:
characteristic equation, 300
characteristic roots, 300
homogeneous solution, 300
impulse response from, 305{E
initial conditions, 299
particular solution, 302
solution by iteration, 298-29
Differential equations, see a/so Continuoustime systems
solution, 67
Digiral sitnals, 33
Dirichlet Conditions. 122
Discrete Fourier lransform:
circular shift, 422
definition, 421
inverse transform (lDFf), 421
linear convolulion using the DF"I,426
matrix interpretation, 425
periodicity of DFT and IDFT, 421
properties, 422-25
zero-augmenting, 426
Discrete impulse function, 283
Discrete step function, see Unit step function
Discrete-time Fourier series (DTFS):
convergence of, 333
evaluation of co€fficients, 333
prop,erties, 338-39
representation of periodic sequences, 331
table of properties, 339
Discrete-time Fourier transform (DTFT):
delinition. 340
periodic property, 341
properties, 345-51
table of properties, 351
table of transform pairs, 352
use in convolution, 346
Discrete-time sign al, 3, 27 E
Discrete-time systems, 41, 287
difference-equation representation, 298
finite impulse response, 288
impulse response, 287
infinite impulse response, 288
simulation diagrams, 307
stabiliry ol 316
state-variable representatiotr, 310
Distortionless sy,stem, 139, 347
Down Sampling, 359
Duration, 2M, 208
E
Eigenvalues, 83,
9l
Eigenvectors, 507
Elementary signals, 19, ?52
Energy signal, 7
Euler's form, 35, zlE6
Exponential function, 5, 492
Exponential sequence, 284
F
Fast Fourier transforms
bit-reversal in, 433
(FFf):
decimation-in-frequency (DIF) algorirhm,
4t1
decimation-in-time (DIT) algorithm, 429
in-place mmpuiation, 432
signal-flow graph for DIF, 436
signal-flow graph for DIT, 412
spectral estimation of analog signals, 436
Feedback connection, 256
Feedback sptem, 256
Filters, 200, see alsa Design of filters
Final-value theorem:
laplace transform, 244, 269
Z-transform, 389
Finite impulse response system, 2E8
First difference, 283
Fourier series:
coeflicienrs, 109, 113
exponential, I l2
generaliz-ed, 109
of periodic sequences, s€e
Discrete-time Fourier series
properties:
convolution of two signals, 13l
integration, 134
525
lndex
least squares approximation, 125
linearity, 129
product of two signals, 130
shift in rime, 133
symmetry, 127
trigonometric, 114
Fourier transform:
applications of:
amplitude modulation, 190
multiplexing, l92
sampling theorem, 194
signal fillering, 2fl)
deEnition, 164-65
development, 163
examples, 166
existencc, 165
ProPerties:
Inverse l:place transform, 2z16-17
Inverse of a matrix. 506
Inverse syslem, 50
Inverse Z-translbrm. 392
Inversion propert)'. tl6
lnvertible LTI systems, 65
K
Kirchhofl's current law. 259
Kirchhoffs voltage law. 137, 259
Kronecker delta, 107
L
laplace transform:
applications:
control, 260
RLC circuit analysis, 25E
solution of dilfcrcntial equations. 257
bilateral,225
bilateral using unilateral. Z
inverse,246-47
l8l
differentiation, 17
convolution,
duality, 184
linearity, 171
modulation, 185
symmetry. 173
lime scaling, 175
time shifiing, 175
discrcte-time, see Discrete-time Fourier
transform
G
Gaussian pulse, 28, 210
Cibbr phenomenon, 142
Gram-Schmidt orthogonaliz:tion, 148
H
Harmonic content, 159
Harmonic signals:
continuous-time, 4
discrete-time, 2E6
Hermitian malrix, 505
Hold Circuits. 357
I
Ideal tilten, 198, 2ffi
Ideal sampling, 194
IIR tilter, see Design of digital filters
Impulse function, see 6-function
derivative of, 30
Impulse train, 186, 194
Impulse modulation model, 195, 35-5
Impulse response. 73
Indefinite integrals, 49&-501
lnitial-value theorem:
l:place transform, 24!, 259
Z{ransform.389
Integrator,69
Interpolation, 199, 359
ProPerties:
convolution. 2.lt)
differentiation in the s-d<
differentiation in time do
final-valuc lhcorcm, 244
initial-valuc thcorcm, 243
integralion in tinle domai
linearity,232
modulation,239
shifting in thc s-Domain,
timc scaling. 23.1
rime shitting. 232
region of convcrgenca, 226
unilateral,22tl
Left half-plane,267
Linear constant coelficiens Dl
Linear convolution. 295
Linearity, 42, 2tt7
Linear time-invariant syslem, i
properties,64
Logarithmic function, 492
M
Marginally stahle system, 267
Malched filrer, 56
Matrices:
definition, 502
delerminant. -506
inverse,506
minor, 507
operations,503
special kinds. 504
Memoryless systems. 47, 64
Modulation. see Amplitude modulation
rtl
I
lS -rl
524
lndex
Ivlultiple-order polcs, 268
Multiplication of matrices. 503
N
Nonanticipatory syste:n, 4E
(see a/so Causal system)
Noncausal system, see Anticipatory system
Nonperiodic signals. see Aperiodic signals
o
Orthogonal representation of signals,
to7-tt2
Orthogonal signals, 107
Orthonormal signals, 108
Overshoot. 143, 262
P
Parallel interconnection, 255, 293
Parseval's theorem, 132, 179
Partial fraction expansion, 247, 512
Passband, 2(D, 453
Period, 4, 285
Periodic convolution:
continuous-time, 131
discretc-time, 295-96
Periqlic sequence:
definition, 285
fundamental period, 285
Periodic shifl, 296
Periodic signals:
defrnition, 4
fundamental period, 4
representatioD, I l3
Plant, 260
Power signals, 7
Sampled-data system, 352
Sampling function, 22
Sampling of continuous-time functions:
aliasing in, 197, 354
Fourier transfgrm of, 351
impulse-modulation model, 195, 355
Nyquist rate, 195
Nyquist sampling theorem, 196, 354
Sampling property, see E-function
Sanrpling raie conversion, 359
Scalar multipliers, 80
Scaling property, see 6-function
Schwartz's inequality, 209
Separation property,
E6
Series of exponentials, 496
Shifting operation, 10
Sifting propeny, see 6-function
Signum function, 21, l7E
Simple-order poles, 267
Simulation diagrams:
continuous-time systems, 70
discrete-time systems, 306
in the Z-domain, ,l(2-8
Sinc function, 22
Singularity functions, 29
Sinusoidal signal, 4
Special functions:
beta, 494
gamma, 493
incomplete gamma, 494
Spectrum:
amplitude, 113
enegy, 179, 1M
estimation, see Discrele Fourier transform
line, I 13
Principle value, tlE6
a
Quantization, 33, 364
R,
Ramp function, 2l
Random signals, 32
Rational function ,226, 247 , 394
Reconstruction Filters:
ideal, 195-6, 356
practical,357
Reclangular function, 3, 20
Reflection operation, 13
Region of convergence (ROC):
laplace transform, 226
Z-transform, 37&{0
Rectifier, I 18
Residue theorem, 394
Rise time, 262
' '; , . .. ,
y,,ittt,i
,--.,ii-'
s
ll
phase, 113
power-density, 180
two-sided, I l5
State equalions, continuous-time:
definition, 77
first canonical form, 86
sccond canonical form, 87
timc-domain solution, 78
State equations, discrete-time:
fint canonical form, 310-t3
frequency-domain solution, 40E
parallel and cascade forms, {D
second canonical form, 310-13
timedomain solution, 313
Z-transform analysis, .102
State-transition matrix. continuous-time:
definition, 80
determination using liplace transform,
263-$
propcrties, 85, 86
iime-domain evaluation, 81-5
State-transition matrix, discrete:time:
lndex
525
definition,315
delermination using Z-transform, 408
properties,315-16
relalion lo impulse response, 315
time-domain evaluation of. 3l{
State-variable representation, 76, 310
equivalence, 89, 313
Stable LTI systems, 65
Stable system, 51. 9l
Stability considerations. 9l
Stability in the s-domain, 266
Stop band, 20
Subtractors, 69
Summers, 306
Symrnetry:
effects of, 127
even symmetric signal, l5
odd symmetric signal, l5
Time-scaling:
continuous-time srgnals. l7
discrete-time signals, 281
Trvo-sided exponential, I 68
U
Uncertainty principlc. 2M
Unilateral l:place transform, see laplace
transform
Unilateraf Z{ransform. see Z-lransform
System:
causal.
,18
continuous-time and discrete.time,
distortionless, I39
Transformation:
of independent variahle, 281
of state Yector. 89
Transition matrix, ic., Stale-transition matrix
Transition propcrly. sj
Triangle inequalirv. .l9t)
Triangular pulse,60
Fourier transform of. 167
Trigonometric idenrirics. 491 -92
4
I
function, 135
inverse,50
linear a-od nonlinear, 42
memoryless, 47
with periodic inpurs, 135
lime-varying and timc-invariant, 46
Uni lorm quantizes.
Unit delay, 307
-165
Unit doublet. 30
Unit impulse function.
Unit step function:
continuous-limc. l9
see
6.function
discrete-time, 2S3
Up sarnpling, 359
T
w
Tables:
effects of symmerry, 128
Walsh [unctions. l5(] 51
Window functions:
in FIR digital trlrer design,476-Tl
in spectral estimation, 439-41
Fourier series properlies:
discrelc-time, 339
Fourier transform pairs:
continuous-time, 172-73
discrete-time, 352
Fourier transform properties:
continuous{ime, 189
discrete-timc, 351
Frequency transformalions:
analog, 456
digital, 457
laplace transform:
Z-transform:
convolution propcrty, 390
dcfinition,3T6
inversion by series cxpansion. 394
invcnion integral. -1()2
invcrsion by partial-f raction expansion,
395
pairs, 230
properties of the unilateral Z-transform.
prop€rties,246
383
[:place transforms and their Z-transform
equivalens. 470
Z-transform:
pairs,393
properlies, 392
Time average, 49-56
Time domain solution. 78
Time limited, 9
Transfer function, 135, 242,
open-loop, 256
z
26
region of convergencc, 378-79
relation to Laplacc transform, 410
solution of dilfcrcncc equations, 386
lable of propertics. 392
table of transforms. 393
Zero-input componcnt. 2
Zero-order hold.357
Zero padding.2Sti, a/so srz Discrelc
Fourier lransform
Zero-state componcnt. 26zl