Contents
1 Introduction to Signals and Systems
  1.1 Introduction
  1.2 Continuous and Discrete Time Signals
  1.3 Signal Properties
    1.3.1 Signal Energy and Power
    1.3.2 Periodic Signals
    1.3.3 Even and Odd Symmetry Signals
    1.3.4 Elementary Signal Transformations
  1.4 Some Important Signals
    1.4.1 The Delta Function
    1.4.2 The Unit-Step Function
    1.4.3 Piecewise Constant Functions
    1.4.4 Exponential Signals
  1.5 Continuous-Time and Discrete-Time Systems
    1.5.1 System Properties

2 Linear, Time-Invariant (LTI) Systems
  2.1 The Impulse Response and the Convolution Integral
    2.1.1 The Continuous-Time Case
    2.1.2 The Discrete-Time Case
  2.2 The Laplace Transform and its Use to Study LTI Systems
    2.2.1 Properties of the Laplace Transform
    2.2.2 The Laplace Transform of Some Useful Functions
    2.2.3 Inversion by Partial Fraction Expansion
    2.2.4 Solution of Differential Equations Using the Laplace Transform
    2.2.5 Interconnection of Linear Systems
    2.2.6 The Sinusoidal Steady-State Response of a LTI System

3 The Fourier Series
  3.1 Signal Space
    3.1.1 Signal Energy
    3.1.2 Signal Orthogonality
    3.1.3 The Euclidean Distance
  3.2 The Complex Fourier Series
    3.2.1 Complex Fourier Series Examples
    3.2.2 An Alternative Form of the Fourier Series
    3.2.3 Average Power Of Periodic Signals
    3.2.4 Parseval's Theorem
    3.2.5 The Root-Mean-Square (RMS) Value of a Periodic Signal
  3.3 Steady-State Response of a LTI System to Periodic Inputs

4 The Fourier Transform
  4.1 Properties of the Fourier Transform
  4.2 Fourier Transforms of Selected Functions
  4.3 Application of Fourier Transform to Communication
    4.3.1 Baseband vs Passband Signals
    4.3.2 Amplitude Modulation
  4.4 Sampling and the Sampling Theorem
    4.4.1 Ideal (Impulse) Sampling
    4.4.2 Natural Sampling
    4.4.3 Zero-Order Hold Sampling
List of Figures
1.1 Examples of audio signals.
1.2 The Dow Jones daily average starting January 5, 2009.
1.3 The Dow Jones Industrial average over a 50 day period starting on January 5, 2009.
1.4 Example finite energy signal.
1.5 A continuous-time periodic signal.
1.6 A discrete-time periodic signal.
1.7 Example of time-shift: original signal x(t), delayed signal x(t − 3) and advanced signal x(t + 3).
1.8 The continuous-time and discrete-time delta functions.
1.9 A unit-area rectangular pulse.
1.10 Illustration for proof of the sifting property of ∆(t).
1.11 Illustration of f(t)δ(t − a) = f(a)δ(t − a).
1.12 The continuous and discrete-time unit-step functions.
1.13 A piecewise constant function and its derivative.
1.14 A system with one input and one output.
1.15 A system with one input and one output.
1.16 Illustration of the linearity property.
1.17 Illustration of the commutation property of linear systems.
1.18 Illustration of the time-invariance property.
2.1 Illustration of convolution for Example 1.
2.2 Illustration of convolution for Example 2.
2.3 Illustration of discrete convolution for Example 1.
2.4 Illustration of discrete convolution for Example 2.
2.5 Illustration of ROC for e^{jω0 t}.
2.6 A parallel interconnection of two systems.
2.7 A series interconnection of two systems.
2.8 A feedback interconnection of two systems.
2.9 Illustration of equalization.
2.10 Circuit for Example 4.
3.1 Illustration of signal-space concepts.
3.2 Rectangular pulse periodic signal with period T.
3.3 Amplitude spectrum of the periodic signal in Figure 3.2.
3.4 Rectangular pulse periodic signal with period T.
3.5 Spectrum of periodic signal in Figure 3.4.
3.6 Half-wave rectified signal with period T.
3.7 Spectrum of periodic signal in Figure 3.6.
3.8 Half-wave rectified signal with period T.
3.9 Spectrum of periodic signal in Figure 3.6.
3.10 Fourier series using 10 (red) and 50 terms (green).
3.11 Fourier series using 5 (red) and 10 terms (green).
3.12 Simple low-pass filter for extracting the dc component.
3.13 Rectangular pulse periodic signal with period T = 4.
3.14 The magnitude of the Fourier coefficients of x(t) and y(t).
4.1 Rectangular pulse.
4.2 The Fourier transform of a rectangular pulse as a function of normalized frequency fT.
4.3 The frequency response of an ideal low-pass filter.
4.4 (a) The Fourier transform of a baseband signal x(t); (b) the Fourier transform of x(t) modulated by a pure sinusoid; (c) the upper sideband; (d) the lower sideband.
4.5 The response of a LTI system to an input in the time-domain and frequency domains.
4.6 Triangular and rectangular pulses.
4.7 The sign function.
4.8 The Fourier transform of a pure cosine carrier.
4.9 Amplitude modulation (DSB-SC).
4.10 Demodulation of DSB-SC.
4.11 Envelope demodulation of AM radio signals.
4.12 The sampling function h(t).
4.13 Illustration of the sampled signal xs(t).
4.14 Illustration of sampling above and below the Nyquist rate.
4.15 The low-pass filter G(f) and its impulse response g(t).
4.16 The pulse approximation to an impulse.
4.17 The sample-and-hold signal.
4.18 Reconstructing the original signal from zero-order hold samples.
4.19 Generating 1/P(f).
Chapter 1
Introduction to Signals and Systems
1.1 Introduction
In general, a signal is any function of one or more independent variables that describes some
physical behavior. In the one-variable case, the independent variable is often time. A speech signal at the output of a microphone, as a function of time, is an example of an audio signal. Two such signals are shown in Figure 1.1.
Figure 1.1: Examples of audio signals.
An example of a signal which is a function of two independent variables is an image, which
presents light intensity and possibly color as a function of two spatial variables, say x and y.
A system in general is any device or entity that processes signals (inputs) and produces other
signals (outputs) in response to them. There are numerous examples of systems that we design to
manipulate signals in order to produce some desirable outcome. For example, an audio amplifier receives as input a weak audio signal produced, say, by a microphone, and amplifies it and possibly shapes its frequency content (more on this later) to produce a higher-power signal that can drive a speaker. Another system is the stock market, where a large number of variables (inputs) affect the daily average value of the Dow Jones Industrial Average Index (and indeed the value of all stocks). The value of this index as a function of time is itself a signal (an output of the underlying financial system). The Dow Jones Industrial Average closing values over a 100-day period starting on January 5, 2009 are shown in Figure 1.2.
Figure 1.2: The Dow Jones daily average starting January 5, 2009.
1.2 Continuous and Discrete Time Signals
There is a variety of signals and systems that we encounter in everyday life. In particular, signals can be continuous-time or discrete-time. Continuous-time signals, as the name implies, take values on a continuum in time, whereas discrete-time signals are defined only over a discrete set of normally equally spaced times. Examples of continuous-time signals are the audio signals at the output of a condenser (electrostatic) microphone, such as those shown in Figure 1.1. An example
of a discrete-time signal is the Dow Jones daily average shown in Figure 1.2, which takes values
only over discrete times, i.e. at the end of each day. Unlike in the case of Figure 1.2 where the
discrete-time signal is plotted as a continuous-time signal by connecting the points with straight
lines, in general, discrete-time signals are plotted as shown in Figure 1.3, which plots the Dow Jones
daily average over a 50 day period.
Figure 1.3: The Dow Jones Industrial average over a 50 day period starting on January 5, 2009.
We will use the continuous variable t to denote time when dealing with continuous-time signals and the discrete time variable n when dealing with discrete-time signals. For example, we will represent a continuous-time signal mathematically by x(t) and a discrete-time one by x[n].
Note that both the continuous-time and discrete-time signals described above have amplitudes
that take values on a continuum (in other words the real line). Although we will only focus on
these two classes of signals in this course, there are in fact two more signal classes which are very
important and in fact necessary when signals are processed digitally (by DSPs or computers).
Computers have a finite word length (although quite large currently) which can only describe
a countable number of amplitudes. Therefore, so that signals can be processed by digital signal
processors one must forgo the infinity of possible amplitudes and only consider a finite set of such
amplitudes, i.e. by discretizing also the amplitude axis. The process of discretizing the amplitude
axis is referred to as quantization. The process of discretizing the time axis if the original signal is
a continuous-time one is referred to as sampling. We will consider sampling later in the course and
we will show that if it is done appropriately (sampling theorem) there need not be any information
loss. In other words, one can reconstruct in this case the original continuous-time signals from
its samples. In contrast, quantization always entails information loss (infinitely many amplitudes
mapped into one) and all we can do is limit it. The two additional classes of signals then are derived
from the continuous and discrete-time signals described above by discretizing the amplitude axis.
1.3 Signal Properties
Practically, signals can be very diverse in nature and shape, of course, but they are often characterized by some "bulk" properties that provide signal users with information without the need for a complete description of the signal.
1.3.1 Signal Energy and Power
Consider a circuit in which the voltage across a resistor of resistance R is v(t) and the current through it is i(t). Then the instantaneous power delivered to the resistor is
p(t) = i(t)v(t) = \frac{v^2(t)}{R} = i^2(t)R.
If the current i(t) flows through the resistor for a time (t2 − t1 ) seconds, from some time t1 to
another time t2 , then the energy expended is:
E = \int_{t_1}^{t_2} p(t)\,dt = \frac{1}{R}\int_{t_1}^{t_2} v^2(t)\,dt = R\int_{t_1}^{t_2} i^2(t)\,dt.
We also define the average power dissipated in the resistor over the (t2 − t1)-second interval by the time-average:
P = \frac{E}{t_2 - t_1} = \frac{1}{t_2 - t_1}\int_{t_1}^{t_2} R\,i^2(t)\,dt = \frac{1}{t_2 - t_1}\int_{t_1}^{t_2} \frac{1}{R}\,v^2(t)\,dt.
In general and analogous to the above definition of energy which depends on the value of the
resistance R, we abstract the definition and define the energy for any signal x(t) (in general complex
and not just currents or voltages) over some time period T to be:
E = \int_{t_1}^{t_2} |x(t)|^2\,dt,
where |x(t)| is the magnitude of the (possibly) complex signal x(t). Similarly, the average power of
any signal is defined by
P = \frac{1}{t_2 - t_1}\int_{t_1}^{t_2} |x(t)|^2\,dt.
In a similar way the energy and average power of a discrete-time signal x[n] are defined as
E = \sum_{n=n_1}^{n_2} |x[n]|^2,

P = \frac{1}{n_2 - n_1 + 1}\sum_{n=n_1}^{n_2} |x[n]|^2.
We often are interested in the energy and power of signals defined over all time, i.e. for
continuous-time signals for −∞ < t < ∞ and for discrete-time signals for −∞ < n < ∞. Thus, for
continuous-time signals we are interested in the limits:
E_\infty = \lim_{T\to\infty}\int_{-T}^{T} |x(t)|^2\,dt,

P_\infty = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} |x(t)|^2\,dt.
Similarly, for discrete-time signals, we have:
E_\infty = \lim_{N\to\infty}\sum_{n=-N}^{N} |x[n]|^2,

P_\infty = \lim_{N\to\infty}\frac{1}{2N+1}\sum_{n=-N}^{N} |x[n]|^2.
We distinguish classes of signals based on whether the total energy and the total average power are finite or infinite. Based on this binary-valued classification, there would normally be 4 classes of signals, i.e.:
1. Signals with finite energy and finite power;
2. Signals with finite energy and infinite power;
3. Signals with infinite energy and finite power; and
4. Signals with infinite energy and infinite power.
However, the second of the four classes above, i.e. finite energy and infinite power, cannot possibly exist since a signal with finite energy must necessarily have finite (and in fact zero) average power. This is easily shown. Consider for example a signal with finite energy A, i.e.,
E_\infty = \lim_{T\to\infty}\int_{-T}^{T} |x(t)|^2\,dt = \int_{-\infty}^{\infty} |x(t)|^2\,dt = A.
Clearly, we can write (since the term in the integral cannot be negative, and thus the integral
cannot decrease as we increase the region of integration):
\int_{-T}^{T} |x(t)|^2\,dt \le A.

Thus,

P_\infty = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} |x(t)|^2\,dt \le \lim_{T\to\infty}\frac{A}{2T} = 0.
Figure 1.4: Example finite energy signal.
For example, consider the signal x(t) in the form of the rectangular pulse shown in Figure 1.4. The energy in this signal is clearly E∞ = 1, so this is a finite energy signal. The total average power is thus zero. In general, finite energy signals imply finite power signals.
An example of a signal that has infinite energy but finite power is x(t) = 1. Clearly

E_\infty = \int_{-\infty}^{\infty} 1^2\,dt = \infty.

However,

P_\infty = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} 1^2\,dt = \lim_{T\to\infty}\frac{2T}{2T} = 1.
An example of a signal with infinite energy and infinite power is x(t) = t. We have
E_\infty = \lim_{T\to\infty}\int_{-T}^{T} t^2\,dt = \lim_{T\to\infty}\frac{2T^3}{3} = \infty,

P_\infty = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} t^2\,dt = \lim_{T\to\infty}\frac{2T^3}{6T} = \infty.
Similar examples can be found in the discrete-time case.
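As a quick numerical illustration (a sketch, not part of the original text), the finite-horizon energy and average power definitions can be approximated in Matlab by replacing the integrals with Riemann sums; the signal, time step and horizon below are arbitrary choices made only for the example.

T  = 20;                        % observation horizon (arbitrary)
dt = 1e-3;                      % time step (arbitrary)
t  = -T:dt:T;
x  = exp(-abs(t));              % a finite-energy signal
E  = sum(abs(x).^2) * dt;       % approximates the energy integral (exact value is 1)
P  = E / (2*T);                 % average power over [-T, T]; tends to 0 as T grows
disp([E, P])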
1.3.2 Periodic Signals
A continuous-time signal is periodic with period T if for all values of t
x(t) = x(t + T ).
In the discrete-time case, a periodic signal is defined similarly as one in which for all n,
x[n] = x[n + N ].
Perhaps the most prevalent class of periodic signals are the sinusoidal signals. For example, x(t) = sin(2πt/T) is clearly periodic with period T. To see this, we have:
x(t + T) = \sin\left(2\pi\,\frac{t + T}{T}\right) = \sin\left(2\pi\,\frac{t}{T} + 2\pi\right) = \sin\left(2\pi\,\frac{t}{T}\right) = x(t).
Another example of a periodic signal is shown in Figure 1.5.
An example of a discrete-time periodic signal is given in Figure 1.6.
Periodic signals are a special class of signals and we will deal with them in more detail later.
Figure 1.5: A continuous-time periodic signal.
Figure 1.6: A discrete-time periodic signal.
1.3.3 Even and Odd Symmetry Signals
A signal x(t) is said to have even symmetry when
x(t) = x(−t)
and to have an odd symmetry if
x(t) = −x(−t).
Examples of signals with even symmetry are x(t) = e^{-|t|}, x(t) = cos(t) and x(t) = t^2. Examples
of signals with odd symmetry are x(t) = t and x(t) = sin(t).
Similar definitions apply to discrete-time signals. In other words, a discrete-time signal has
even symmetry if
x[n] = x[−n]
and odd symmetry if
x[n] = −x[−n].
Any signal can be decomposed into the sum of an even signal and an odd signal. To show this,
consider an arbitrary signal x(t) that need not have any symmetry. We can write:
x(t) = x(t) + \frac{x(-t) - x(-t)}{2} = \underbrace{\frac{x(t) + x(-t)}{2}}_{x_e(t):\ \text{Even}} + \underbrace{\frac{x(t) - x(-t)}{2}}_{x_o(t):\ \text{Odd}}
To show that xe (t) is even, we have:
x_e(-t) = \frac{x(-t) + x(t)}{2} = x_e(t).
To show xo (t) is odd, we have:
x_o(-t) = \frac{x(-t) - x(t)}{2} = -\frac{x(t) - x(-t)}{2} = -x_o(t).
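A small Matlab sketch (an assumed example, not from the text) forms the even and odd parts of a sampled signal and checks that they add back to the original; fliplr reverses the sample order, which plays the role of x(−t) for a vector indexed symmetrically about n = 0.

n  = -5:5;
x  = n.^2 + n;                  % an arbitrary signal with no particular symmetry
xr = fliplr(x);                 % x[-n] for this symmetric index range
xe = (x + xr)/2;                % even part
xo = (x - xr)/2;                % odd part
disp(max(abs(x - (xe + xo))))   % 0: the two parts reconstruct x
disp(max(abs(xe - fliplr(xe)))) % 0: xe is even
disp(max(abs(xo + fliplr(xo)))) % 0: xo is odd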
1.3.4 Elementary Signal Transformations
We often need to transform signals in ways that better allow us to extract information from them
or that make them more useful in implementing a given task. Signal processing is a huge field of research and spans numerous everyday applications. Indeed, as humans we do signal processing
continuously by interpreting audio and optical (visual) signals and taking action in response to
them.
In this section we will deal only with the most basic operations, which will also be useful in
our later development of the material. The transformations we consider thus only involve the
independent variable which is time in many cases. We will illustrate the concepts using mainly
continuous-time signals, but the principles are exactly the same for discrete-time signals.
Time Shift
Let x(t) be a continuous-time signal. Then x(t − t0 ) is a time-shifted version of x(t) that is identical
in shape to x(t) but is translated in time by t0 seconds (delayed by t0 seconds if t0 > 0 or advanced
by t0 seconds if t0 < 0.) Similarly, if x[n] is a discrete-time signal, then x[n − n0 ] is a translated
version of x[n]. Figure 1.7 shows a signal x(t) and a version of it that is delayed by 3 seconds and
one that is advanced by 3 seconds.
Figure 1.7: Example of time-shift: original signal x(t), delayed signal x(t − 3) and advanced signal
x(t + 3).
Time Reversal
Let x(t) be a continuous-time signal. Then x(−t) is a time-reversed version of x(t). Similarly, if
x[n] is a discrete-time signal, then x[−n] is its time-reversed version. The time reversed signals
look the same as their originals, but they are reflected around t = 0 and n = 0 for continuous-time
and discrete-time signals, respectively.
Time Scaling
Time scaling is another transformation of the time-axis often encountered in practice. Thus, if x(t)
is a continuous-time signal, then x(at) is a time-scaled version of it. If |a| > 1, then the signal is
time-compressed and if |a| < 1 it is time-stretched. For example, if x(t) represents a recorded audio
signal of duration T seconds, x(2t) will be of duration T /2 and will sound high pitched (it will be
equivalent to being recorded at some speed and then played out at twice that speed). Similarly,
x(t/2) will last 2T and will be equivalent to being played out at half the speed.
The following Matlab script loads a music signal (a piece by Handel) into the variable y, with sampling rate Fs = 8,192 samples/s. The Matlab sound command can be used to listen to the signal at the original sampling speed, at twice that speed and at half that speed. The script also defines y[−n] in the variable z and plays it, which corresponds to playing the music piece backwards.
load handel;                 % loads the signal into y and the sampling rate into Fs
sound(y,Fs);                 % play at the original speed
sound(y,2*Fs);               % play at twice the speed (shorter, higher pitched)
sound(y,Fs/2);               % play at half the speed (longer, lower pitched)
z=y(length(y):-1:1);         % reverse the sample order: z corresponds to y[-n]
sound(z,Fs);                 % play the piece backwards
In general, time-reversal, time-translation and scaling can be combined to produce the signal
x(at + b). See examples in the text.
1.4 Some Important Signals
In this section we define some basic signals which will be used later on as we analyze systems.
1.4.1 The Delta Function
The “delta” or impulse function is a mathematically defined function that can never arise exactly
in nature but is extremely useful in characterizing the response of (linear) systems. Mathematically
the delta function is defined as δ(t) in the continuous-time case and by δ[n] in the discrete-time
case. In the latter case, the delta function is defined by:
\delta[n] = \begin{cases} 1, & n = 0, \\ 0, & n \ne 0. \end{cases}
In the continuous-time case, the delta function is defined by its two important properties:
1. Unit area under the function:

\int_{-\infty}^{\infty} \delta(t)\,dt = 1 \qquad (1.1)

2. The sifting property:

\int_{-\infty}^{\infty} x(a)\delta(t - a)\,da = x(t). \qquad (1.2)
A similar sifting property (but easier to see) holds for the discrete-time delta function. Let x[n]
be a discrete-time function (signal). Then, it is easy to see the following equation holds:
x[n] = \sum_{m=-\infty}^{\infty} x[m]\delta[n - m] \qquad (1.3)
since δ[n − m] will be zero for all m in the sum except for m = n for which it will be equal to one.
Also, it is easy to see that
\sum_{m=-\infty}^{\infty} \delta[m] = 1.
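The discrete sifting property (1.3) is easy to verify numerically; the short Matlab sketch below (an assumed example) rebuilds a sequence as a sum of scaled, shifted impulses.

x  = [3 -1 4 1 5];              % an arbitrary finite-length sequence, n = 0,...,4
n  = 0:4;
xr = zeros(size(x));
for m = 0:4
    xr = xr + x(m+1) * double(n == m);   % add x[m]*delta[n-m]
end
disp(isequal(x, xr))            % 1: the sum of shifted impulses reproduces x[n]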
Figure 1.8: The continuous-time and discrete-time delta functions.
Both the continuous-time and discrete-time delta functions are shown in Figure 1.8.
The unit impulse functions can be scaled to yield impulses of any value. For example, cδ(t) will
naturally yield c when integrated over all time. Similarly for the discrete-time delta function.
In the continuous-time case, there is no naturally occurring signal that will have a unit area
under a zero support interval (clearly in this case the amplitude must be infinite.) Continuous-time
delta functions can be constructed as limiting cases of other functions. For example, consider the
rectangular pulse function
\Delta(t) = \begin{cases} \frac{1}{2T}, & -T \le t \le T, \\ 0, & \text{otherwise}. \end{cases}
This function is shown in Figure 1.9.
Figure 1.9: A unit-area rectangular pulse.
Clearly, the area under this rectangular pulse is one, irrespective of T. A delta function can be defined as a limiting case of ∆(t) as T → 0, i.e.,
\delta(t) = \lim_{T\to 0} \Delta(t).
To verify that ∆(t) satisfies the sifting property in the limit as T → 0, let f(t) be an arbitrary function and F(t) its indefinite integral, i.e.

\frac{dF(t)}{dt} = f(t).
Let us now consider the integral
\int_{-\infty}^{\infty} f(a)\Delta(t - a)\,da.
We need to show that in the limit as T → 0, this integral is f (t), and thus in that limit, ∆(t) has
both defining properties of a delta function. We have:
\int_{-\infty}^{\infty} f(a)\Delta(t - a)\,da = \frac{1}{2T}\int_{t-T}^{t+T} f(a)\,da = \frac{1}{2T}\left[F(t + T) - F(t - T)\right].
Thus,
\lim_{T\to 0}\int_{-\infty}^{\infty} f(a)\Delta(t - a)\,da = \lim_{T\to 0}\frac{1}{2T}\left[F(t + T) - F(t - T)\right] = f(t)
since the limit on the right above can be seen as the definition of the derivative of F (t) at time
t, which must be f (t) since F (t) was defined as the integral of f (t). The scenario is illustrated in
Figure 1.10.
Figure 1.10: Illustration for proof of the sifting property of ∆(t).
It can be shown that the delta function is an even function, i.e. δ(t) = δ(−t) and δ[n] = δ[−n].
Thus, in the continuous-time case, for example,
\int_{-\infty}^{\infty} f(a)\delta(a - t)\,da = \int_{-\infty}^{\infty} f(a)\delta(t - a)\,da = f(t).
Also, since the delta function is zero everywhere except where it is located, we have:
\int_{-\infty}^{\infty} f(a)\delta(t - a)\,da = \int_{t^-}^{t^+} f(a)\delta(t - a)\,da = f(t),
where t− is the time just before t and t+ is the time just after t, where t is the location of the delta
function δ(t − a) on the a-axis. Indeed, the above integrals will evaluate to the same result f (t) for
any limits that include the delta function within their scope.
Since the delta function is non-zero only at the value of the horizontal axis where it is located,
we can write:
f (t)δ(t − a) = f (a)δ(t − a).
This is illustrated in Figure 1.11 below. In other words, when a function is multiplied by a delta function, the result is to "sample" the function at the time where the delta function is located: the product of a delta function located at some time a on the t-axis and a function f(t) is a delta function of height (area) f(a).
1.4.2 The Unit-Step Function
The continuous-time unit-step function is defined in terms of the delta function as:
u(t) = \int_{-\infty}^{t} \delta(a)\,da.
Figure 1.11: Illustration of f (t)δ(t − a) = f (a)δ(t − a).
Clearly, the integral above is zero for all t < 0 and it is one for t ≥ 0, i.e.,

u(t) = \begin{cases} 1, & t > 0 \\ 0, & t < 0. \end{cases}

The unit-step function is plotted in Figure 1.12. Clearly, the step function is discontinuous at t = 0.
Figure 1.12: The continuous and discrete-time unit-step functions.
In view of the definition of the unit-step function in terms of the delta function, we can also write

\delta(t) = \frac{du(t)}{dt}.

Another expression for the unit-step function can be obtained by making use of the sifting property of the delta function in (1.2). Thus, we can write

u(t) = \int_{-\infty}^{\infty} u(a)\delta(t - a)\,da.
Since u(a) = 0 for a < 0 and 1 for a > 0, we have
u(t) = \int_{0}^{\infty} \delta(t - a)\,da.
The discrete-time unit-step is defined similarly in terms of the discrete-time delta function:
u[n] = \sum_{m=-\infty}^{n} \delta[m].
Another expression can be derived by using (1.3):
u[n] = \sum_{m=-\infty}^{\infty} u[m]\delta[n - m] = \sum_{m=0}^{\infty} \delta[n - m].
1.4.3 Piecewise Constant Functions
The unit-step can be used to describe any piecewise constant function. For example, the function
described in Figure 1.13 can be represented mathematically in terms of step functions as:
x(t) = 2u(t) − u(t − 1) − 2u(t − 3) + u(t − 4).
The derivative of x(t) is also shown in the figure and mathematically it is
Figure 1.13: A piecewise constant function and its derivative.
\frac{dx(t)}{dt} = 2\delta(t) - \delta(t - 1) - 2\delta(t - 3) + \delta(t - 4).
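The step-function description translates directly into code; the Matlab sketch below (an assumed example) builds the piecewise constant x(t) of Figure 1.13 from shifted unit steps and plots it on a fine time grid.

u = @(t) double(t >= 0);                         % unit-step function
x = @(t) 2*u(t) - u(t-1) - 2*u(t-3) + u(t-4);    % the signal of Figure 1.13
t = -1:0.001:5;
plot(t, x(t)); ylim([-1.5 2.5]);                 % levels 2, 1, -1, 0 as in the figure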
1.4.4 Exponential Signals
The class of exponential signals is another important class of signals often encountered in the
analysis of systems. The general form of such signals in the continuous-time case is

x(t) = c\,e^{at},
where both c and a can in general be complex. If both c and a are real, then x(t) is of course a
real exponential. When a = jω0 is purely imaginary and c is real, x(t) is a complex exponential:
x(t) = ceat = cejω0 t = c[cos(ω0 t) + j sin(ω0 t)].
This signal is periodic. To find its period, T , we have:
x(t + T) = c\,e^{j\omega_0(t+T)} = c\,e^{j\omega_0 t}e^{j\omega_0 T} = x(t)\,e^{j\omega_0 T}.
Thus, for x(t + T ) to equal x(t) for all t as required for a periodic signal, it must be that
ejω0 T = 1
which means that ω0 T = 2mπ for some integer m. Thus, T = 2mπ/ω0 and the period (the smallest
positive T for which x(t) is periodic) is
T = \frac{2\pi}{|\omega_0|}.
ω0 > 0 is the fundamental frequency and it has the units of radians/s. We also define the
fundamental frequency in hertz (Hz) as ω0 = 2πf0 which implies
f_0 = \frac{1}{T}.
Clearly the complex exponential relates closely to the sinusoidal signals in view of Euler’s
identity. In other words,
A\cos(2\pi f_0 t) = \Re\{A e^{j2\pi f_0 t}\}, \qquad A\sin(2\pi f_0 t) = \Im\{A e^{j2\pi f_0 t}\}.
As in the complex exponential case, each sinusoidal signal above has fundamental frequency
f0 Hz and period T = 1/f0 . Note that the period decreases monotonically as f0 increases.
Similar definitions apply to the discrete-time case, although there are some very important and
distinct differences to the continuous-time case. The discrete-time complex exponential is
x[n] = ejω0 n = ej2πf0 n .
Unlike the continuous-time case, though, where the signal period decreases monotonically as the frequency increases, here only values of ω0 within a 2π interval yield distinct signals. This can be seen easily since (k is any integer)

e^{j(\omega_0 + 2k\pi)n} = e^{j\omega_0 n}\,\underbrace{e^{j2k\pi n}}_{=1} = e^{j\omega_0 n}.
In other words one might say the discrete-time complex exponential is periodic with period 2π in
its frequency variable ω0 for any fixed time n.
Thus, in the discrete-time case we only need to consider the frequency ω0 to be within a 2π
interval, usually taken either from 0 ≤ ω0 ≤ 2π or −π ≤ ω0 ≤ π.
Now let us determine the period of a discrete-time complex exponential. We have
x[n + N ] = ejω0 (n+N ) = ejω0 n ejω0 N ,
and for the right-hand side to equal x[n] we must have (for some integer m) ω0 N = 2mπ or,
equivalently,
\frac{\omega_0}{2\pi} = \frac{m}{N}.
In other words, for x[n] = ejω0 n to be periodic, ω0 /2π must be rational. The period in this case is
N if m and N do not have common factors.
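A short numerical check (a sketch with assumed values): for ω0 = 2π·3/8 the ratio ω0/2π = 3/8 is rational, so x[n] = e^{jω0 n} repeats every N = 8 samples, while an irrational ratio such as ω0 = 1 gives a sequence that never repeats exactly.

w0 = 2*pi*3/8;                          % w0/(2*pi) = 3/8, so the period is N = 8
n  = 0:31;
x  = exp(1j*w0*n);
disp(max(abs(x(1:24) - x(9:32))))       % ~0: x[n+8] = x[n]
w1 = 1;                                 % w1/(2*pi) is irrational: not periodic
y  = exp(1j*w1*n);
disp(max(abs(y(1:24) - y(9:32))))       % clearly nonzero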
1.5 Continuous-Time and Discrete-Time Systems
In general a system is any device that processes signals (inputs) to produce other signals (outputs),
as illustrated abstractly in Figure 1.14 below. In the figure, x(t) (or x[n] in the discrete-time)
Figure 1.14: A system with one input and one output.
represents the input to the system, labeled by S, and y(t) (or y[n] in the discrete-time) the corresponding output. We say the system S operates on the input x(t) to produce the output y(t) and
write mathematically
y(t) = S[x(t)].
Similarly, in the discrete-time case we write
y[n] = S [x[n]] .
As an example of an (electrical) system, consider the circuit in Figure 1.15. In this electrical
Figure 1.15: A system with one input and one output.
system the input is a voltage and so is the output. The output relates to the input through a
differential equation. If we let the current through the loop be i(t), then
i(t) = C\frac{dy(t)}{dt}.
Then, writing a single loop equation, we have:
x(t) = Ri(t) + y(t).
Thus,

x(t) = RC\frac{dy(t)}{dt} + y(t).
For an input signal x(t), we can compute the output y(t) by solving the above first-order, linear
differential equation.
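As a sketch of what solving this differential equation looks like numerically (the component values and step size below are assumed, not from the text), a simple forward-Euler loop gives the zero-state step response of the RC circuit:

R = 1e3; C = 1e-6;              % assumed values: 1 kOhm, 1 uF (time constant RC = 1 ms)
dt = 1e-6;                      % integration step
t  = 0:dt:5e-3;
x  = ones(size(t));             % unit-step input applied at t = 0
y  = zeros(size(t));            % zero initial state, y(0) = 0
for k = 1:length(t)-1
    y(k+1) = y(k) + dt * (x(k) - y(k)) / (R*C);   % from x = RC*dy/dt + y
end
plot(t, y)                      % rises toward 1 with time constant RC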
In general, continuous and discrete-time systems can be deterministic or stochastic, linear or
non-linear, time-variant or time-invariant. Stochastic systems are ones which contain components
whose behavior cannot be deterministically described for a given input signal. An example of such
a system is a circuit which contains a thermistor, whose resistance is a function of temperature.
Since the temperature as a function of time cannot be exactly predicted in advance, one can only
assume the resistance provided by the thermistor is random, and, thus, any system that contains
it will be stochastic. Stochastic systems are studied using statistical signal processing techniques.
We will not deal with stochastic systems in this class but rather we will only study deterministic
systems. In fact, we will focus only on linear, time-invariant (LTI) systems. We define linearity
and time-invariance, as well as other system properties next.
1.5.1 System Properties
Linearity
Consider the single-input, single-output system in Figure 1.14 and two inputs x1 (t) and x2 (t) (in
the continuous-time case). Let y1 (t) and y2 (t) be the corresponding system outputs, i.e.,
y1 (t) = S[x1 (t)],
y2 (t) = S[x2 (t)].
Then if the system is linear, the following holds:
S[c1 x1 (t) + c2 x2 (t)] = c1 S[x1 (t)] + c2 S[x2 (t)] = c1 y1 (t) + c2 y2 (t),
for any pair of real numbers c1 and c2 . The definition for discrete-time systems is exactly the same.
Thus, a discrete-time system S is linear if
y1 [n] = S [x1 [n]] ,
y2 [n] = S [x2 [n]] ,
then,
S [c1 x1 [n] + c2 x2 [n]] = c1 S [x1 [n]] + c2 S [x2 [n]] = c1 y1 [n] + c2 y2 [n].
In particular, linear systems possess the scaling (or homogeneity) property. I.e., if, for example,
c2 = 0, then
S[c1 x1 (t)] = c1 S[x1 (t)] = c1 y1 (t).
Thus, when the input is scaled by a certain amount, the output corresponding to it is scaled by
the same amount.
They also have the superposition property. I.e., if c1 = c2 = 1,
S[x1 (t) + x2 (t)] = S[x1 (t)] + S[x2 (t)] = y1 (t) + y2 (t).
The linearity property is obviously very important as it allows us to easily compute the response
of a linear system to an infinite variety of inputs, i.e. to any linear combination of inputs for which
the outputs are known. The corresponding output is the same linear combination applied to the
outputs.
The linearity concept is illustrated in Figure 1.16 below.
Figure 1.16: Illustration of the linearity property.
A very important property of the linear (and, as we shall assume throughout, time-invariant) operators that describe systems operating on their inputs to produce outputs is that they commute. In other words, if S1 and S2 are two such operators (systems) and x(t) is an input, we have

S_2\left[S_1[x(t)]\right] = S_1\left[S_2[x(t)]\right]. \qquad (1.4)
This concept is illustrated in Figure 1.17 below. Thus, if the input to the concatenated system is x(t), the output will be the same, y(t), whether S1 acts on the input first (top configuration in the figure) or S2 acts on the input first (bottom configuration). There are some very important implications of this property. For example, let the response of a linear system to a unit-step, u(t)
Figure 1.17: Illustration of the commutation property of linear systems.
(i.e. the step response) be y(t). Then the response of the system to a unit impulse δ(t), say h(t),
referred to as the system impulse response, is the derivative of the step response, i.e.,
h(t) = \frac{dy(t)}{dt}.
This can be seen mathematically as follows. Let the system be defined by the linear operator S.
Then,
y(t) = S[u(t)].
Now, we know that the delta function is the derivative of the unit-step and that the derivative
operator is linear. Thus,
\delta(t) = \frac{du(t)}{dt}.
We are interested in the impulse response, h(t), of the system; we have:
h(t) = S[\delta(t)] = S\left[\frac{du(t)}{dt}\right] = \frac{d}{dt}\,S[u(t)] = \frac{dy(t)}{dt},
where the third equality above is because both the derivative operator and the system are linear
and we exchanged their order.
In general, let S be a linear system and L be a linear operator. Let y(t) be the output of a
system when the input is x(t), i.e. y(t) = S[x(t)]. Now let z(t) = L[x(t)] be another signal obtained
after the linear operator L operates on x(t). Then
S[z(t)] = S [L[x(t)]] = L [S[x(t)]] = L[y(t)].
The above result will be used in the next chapter to obtain an important result in computing
the response of linear systems.
Examples of Linear Systems
1. y(t) = 2x(t). Clearly this is a linear system since when the input is c1 x1 (t) + c2 x2 (t) the
output is
2[c1 x1 (t) + c2 x2 (t)] = c1 [2x1 (t)] + c2 [2x2 (t)] = c1 y1 (t) + c2 y2 (t).
2. Consider the system in which the output relates to the input as:
y(t) = \int_{-\infty}^{t} x(a)\,da.
This system is also linear since:
\int_{-\infty}^{t} [c_1 x_1(a) + c_2 x_2(a)]\,da = c_1\int_{-\infty}^{t} x_1(a)\,da + c_2\int_{-\infty}^{t} x_2(a)\,da = c_1 y_1(t) + c_2 y_2(t).

3. y[n] = \sum_{m=n-N}^{n+N} x[m].
This is again easily shown using the definition of linearity:
\sum_{m=n-N}^{n+N} (c_1 x_1[m] + c_2 x_2[m]) = c_1\sum_{m=n-N}^{n+N} x_1[m] + c_2\sum_{m=n-N}^{n+N} x_2[m] = c_1 y_1[n] + c_2 y_2[n].
4. The current i(t) through a resistor with resistance R relates to the voltage across it through
v(t) = Ri(t).
If the input is v(t) and the output is i(t) this is clearly a linear system.
Similarly, the voltage v(t) across an inductor of inductance L relates to the current i(t)
through it through
v(t) = L\frac{di(t)}{dt}.
If we consider the input to be i(t) and the output to be v(t), one can easily show (using the
properties of derivatives) that this is a linear system as well. However, if we consider the
input to be the voltage v(t) across the inductor and the output to be the current i(t) through
it, we have
i(t) = i_0 + \frac{1}{L}\int_{-\infty}^{t} v(a)\,da,
where i0 is some initial current through the inductor. As is shown below, unless i0 = 0, the
system is not linear.
Similar conclusions hold if one considers the voltage v(t) across a capacitor of capacitance C
as the input and the current through it as the output. We have in this case,
i(t) = C\frac{dv(t)}{dt},
and the system is clearly linear. If we instead consider the input to be i(t) and the output to
be v(t), we have
v(t) = v_0 + \frac{1}{C}\int_{-\infty}^{t} i(a)\,da,
and the system is not linear unless the initial voltage v0 across the capacitor is zero.
In general, an electrical system that consists of resistors, inductors and capacitors is a linear system provided the initial state (initial currents through inductors and initial voltages across capacitors) is zero (i.e. we are only considering the zero-state response of the system and not the total response).
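As a numerical illustration of the linearity of the integrator in Example 2 above (a sketch; the test signals are arbitrary), the running integral can be approximated by cumsum and the scaling/superposition property checked directly:

dt = 1e-3; t = 0:dt:5;
S  = @(x) cumsum(x) * dt;            % approximates y(t), the integral of x up to time t
x1 = sin(2*pi*t);  x2 = exp(-t);     % two arbitrary test inputs
c1 = 3;  c2 = -0.5;
lhs = S(c1*x1 + c2*x2);              % response to the combined input
rhs = c1*S(x1) + c2*S(x2);           % combination of the individual responses
disp(max(abs(lhs - rhs)))            % ~0 (up to round-off), as linearity requires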
Examples of Non-linear Systems
1. y(t) = x2 (t). To show that a system is not linear, it suffices to show the scaling or superposition
property does not hold for some specific case. In the above example, if the input signal is
scaled by a factor of 2, for example, then the output will be
[2x(t)]^2 = 4x^2(t) \ne 2x^2(t) = 2y(t).
2. y[n] = c + x[n], i.e. the output is the input plus a fixed constant c. One might be tempted
to say this is a linear system, but it is not in general. For example, let us again see what
happens when we scale the input x[n] by a factor of 2; we have:
S[2x[n]] = c + 2x[n] \ne 2[c + x[n]] = 2y[n].
One can actually show that for a linear system, when the input is zero, the output MUST
be zero as well. The above system clearly does not satisfy this property since the output is c
when the input is zero (unless of course c = 0). To show that for a linear system the output
must be zero when the input is zero, we argue as follows: If the system is linear, it must be
that:
S[c1 x1 (t) + c2 x2 (t)] = c1 S[x1 (t)] + c2 S[x2 (t)],
for any pair of real numbers c1 and c2 . If we choose c1 = c2 = 0, then, if the system is linear,
in which case the above equation must hold, we have:
S[0] = 0.
3. y(t) = u(x(t)), where u(·) is the unit-step function. Since the unit-step function takes only 2
possible values, 0 or 1, the above system cannot possibly be linear since the scaling property,
for example, cannot possibly hold.
Time Invariance
Let y(t) = S[x(t)] be the response of a system S to an input x(t). Then the system is time-invariant
if:
S[x(t − t0 )] = y(t − t0 ).
In other words, the output of a time-invariant system to an input x(t) shifted in time by t0 has
exactly the same shape as the output when the input is x(t) but it is also shifted by the same
amount t0 as the input. The concept is illustrated in Figure 1.18 below.
Figure 1.18: Illustration of the time-invariance property.
Examples of time-invariant systems include the following:
1. The discrete-time system
y[n] = \sum_{m=-\infty}^{n} x[m]
is time-invariant. To show this, we have
S[x[n - n_0]] = \sum_{m=-\infty}^{n} x[m - n_0].
Changing variables in the summation by letting k = m − n0 , we have:
S[x[n - n_0]] = \sum_{k=-\infty}^{n - n_0} x[k] = y[n - n_0],
and therefore the system is time-invariant.
2. y(t) = u (x(t)) , where u(·) is the unit-step function. We have:
S[x(t − t0 )] = u (x(t − t0 )) = y(t − t0 ),
and therefore the system is time-invariant.
3. y[n] = x2 [n]. To show it is time-invariant, we have:
S [x[n − n0 ]] = x2 [n − n0 ] = y[n − n0 ].
Examples of systems that are not time-invariant include the following:
1. y(t) = S[x(t)] = x(2t). Let x1 (t) = x(t − t0 ) be a time-shifted version of x(t). Then,
S[x_1(t)] = x_1(2t) = x(2t - t_0) \ne x(2(t - t_0)) = y(t - t_0),
and therefore the system is not time-invariant.
2. y(t) = f (t)x(t), for some arbitrary function f (t). In other words the system multiplies the
input by the function f (t) to produce the output. We have
S[x(t - t_0)] = f(t)\,x(t - t_0) \ne y(t - t_0) = f(t - t_0)\,x(t - t_0),
and thus the system is not time-invariant in general, unless of course f (t) = c for some
constant c.
3. y[n] = S [x[n]] = f [n] + x[n], for some arbitrary function f [n]. We have
S[x[n - n_0]] = f[n] + x[n - n_0] \ne y[n - n_0] = f[n - n_0] + x[n - n_0],
and therefore the system is not time-invariant.
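A small numerical check of the last example (a sketch with an assumed f[n]): delaying the input of y[n] = f[n] + x[n] does not simply delay the output.

n  = 0:9;
f  = n;                               % an arbitrary, non-constant f[n]
x  = double(n == 0);                  % unit impulse at n = 0
n0 = 3;
xs = double(n == n0);                 % the same impulse delayed by n0 samples
y_of_shifted_input = f(n0+1) + xs(n0+1);   % system output at n = n0 for the delayed input (= 4)
y_shifted_output   = f(1) + x(1);          % y[n - n0] at n = n0, i.e. y[0] (= 1)
disp([y_of_shifted_input, y_shifted_output])  % 4 vs 1: shifting the input did not shift the output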
Memory
In continuous-time, a system is memoryless when the output y(t) at time t is only a function of
the input, x(t), at time t. In discrete-time, the definition is the same. A discrete-time system is
memoryless when the output y[n] at some time n is only a function of the input at that same time n. For
example, the systems described by:
y[n] = nx2 [n]
y(t) = x(t) − 2x2 (t)
are memoryless, but the ones described by
y(t) = x(t) − x(t − 1)
y[n] = \sum_{m=-\infty}^{n} x[m],
are not.
Causality
A system (continuous-time or discrete-time) is causal if the output at any time t (or n) is possibly a function of the input at times from −∞ up to time t (or n) but not a function of the input for times τ > t (or m > n). Thus, the systems described by

y[n] = \sum_{m=-\infty}^{n} x[m],

y(t) = x(t)\,e^{-(t+1)},
are causal, but the systems described by
y(t) = x(t/2),
y[n] = \sum_{m=-\infty}^{n+1} x[m]
are not. Clearly, any memoryless system, such as the second causal system above, must by definition be causal. The first non-causal system above is easily seen to be non-causal if one considers t < 0 (then t/2 > t, so the output depends on a future value of the input). So, one must be careful to make sure the condition for causality holds for all times.
Invertibility
A system is invertible if the input signal at any time can be uniquely determined from the output
signal. In other words, distinct inputs yield distinct outputs. For example, the system
y[n] = \sum_{m=-\infty}^{n} x[m]
is invertible since x[n] = y[n] − y[n − 1], but the system
y(t) = |x(t)|
is not since it is impossible to determine the sign of the input from the output.
Stability
A stable system is one in which any bounded input (one whose amplitude remains finite) necessarily results in an output that is bounded. For example, the system
y(t) = cos (x(t)) ,
is naturally bounded since the output can never exceed ±1. The linear, time-invariant system
y[n] = \sum_{m=-\infty}^{\infty} h[m]\,x[n - m],
is stable under some conditions on h[n]. Let |x[n]| ≤ A < ∞ be bounded. Then we must find a
condition under which the corresponding output |y[n]| is bounded. We have
|y[n]| = \left|\sum_{m=-\infty}^{\infty} h[m]\,x[n - m]\right| \le \sum_{m=-\infty}^{\infty} |h[m]\,x[n - m]| = \sum_{m=-\infty}^{\infty} |h[m]|\,|x[n - m]| \le A\sum_{m=-\infty}^{\infty} |h[m]| < \infty.
Thus, a sufficient condition for a stable system is that
\sum_{m=-\infty}^{\infty} |h[m]| < \infty.
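For instance, the impulse response h[n] = a^n u[n] with |a| < 1 satisfies this condition; the Matlab sketch below (assumed values) checks that the partial sums of |h[n]| settle at 1/(1 − a).

a = 0.8;
n = 0:200;                      % long enough for the tail to be negligible
h = a.^n;                       % h[n] = a^n u[n]
disp([sum(abs(h)), 1/(1-a)])    % both are approximately 5, so the system is stable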
Examples of systems that are not stable are:
y(t) = tx(t),
y[n] = x[n]/n.
Chapter 2
Linear, Time-Invariant (LTI) Systems
The rest of the course will focus almost solely on causal, linear, time-invariant systems. Linear, time-invariant systems are described by linear differential equations with constant coefficients. Invariably,
to compute the response of such systems to a specific input, one must solve a differential equation.
As we saw above during the discussion of linearity, such systems are linear only if the initial state
of the system is zero (i.e. there is no initial energy in the system), or, in general, one only considers
the zero-state response of the system to an input. For example, consider the system described by
the circuit in Figure 1.15 with input-output described by the first-order differential equation:
x(t) = RC\frac{dy(t)}{dt} + y(t). \qquad (2.1)
The output of this system is the voltage across the capacitor and it is assumed to be zero at the
time the input is applied (which we consider to be t = 0). Thus, y(0) = 0. In general, for an n-th
order system described by an n-th order differential equation, the state of the system is determined
by y(t) and its first (n − 1) derivatives and for a zero state, they should all be zero, i.e.,
y(0) = \left.\frac{dy(t)}{dt}\right|_{t=0} = \cdots = \left.\frac{d^{n-1}y(t)}{dt^{n-1}}\right|_{t=0} = 0.
There are various ways to solve the above differential equation, but a strong mathematical
tool in this regard is the Laplace transform. We will thus focus on the Laplace transform and its
properties next. Before doing so, though, we derive an important result in linear system theory.
2.1 The Impulse Response and the Convolution Integral

2.1.1 The Continuous-Time Case
As stated earlier, the response of an LTI system to a unit impulse, δ(t), is the impulse response of
the system. We will see next that this response is important in that using it we can write a general
expression for the output of the system to any input.
We know from the definition of the delta function earlier that it possesses the sifting property,
defined by
x(t) = \int_{-\infty}^{\infty} x(a)\delta(t - a)\,da,
for an arbitrary signal x(t). Let the impulse-response of a linear, time-invariant system S be h(t).
We are interested in the response of the system when the input is the arbitrary signal x(t), i.e.,
y(t) = S[x(t)]. We have
y(t) = S[x(t)] = S\left[\int_{-\infty}^{\infty} x(a)\delta(t - a)\,da\right] = \int_{-\infty}^{\infty} x(a)\,S[\delta(t - a)]\,da

\Rightarrow\quad y(t) = \int_{-\infty}^{\infty} x(a)\,h(t - a)\,da. \qquad (2.2)
The integral in the equation (2.2) above is known as the convolution integral. Thus, the output
of a linear system with impulse response h(t) to an arbitrary input x(t) is obtained by convolving
x(t) with h(t). Mathematically, we write
y(t) = x(t) * h(t) = \int_{-\infty}^{\infty} x(a)\,h(t - a)\,da.
It is easy to show that x(t) ∗ h(t) = h(t) ∗ x(t). To show this, we have:
x(t) * h(t) = \int_{-\infty}^{\infty} x(a)\,h(t - a)\,da = -\int_{\infty}^{-\infty} x(t - b)\,h(b)\,db = \int_{-\infty}^{\infty} x(t - b)\,h(b)\,db = h(t) * x(t),
where the second equality above was obtained through a change of variables t − a = b.
If the system is causal, then the impulse response h(t) must be zero for t < 0. In this case h(−a) will be zero for a > 0 and h(t − a) will be zero for a > t. Thus, the convolution integral can be written in this case as

y(t) = \int_{-\infty}^{t} x(a)\,h(t - a)\,da.
If in addition x(t) = 0 for t < 0, then clearly y(t) = 0 for t < 0 and for t ≥ 0 we have
y(t) = \int_{0}^{t} x(a)\,h(t - a)\,da. \qquad (2.3)
However, the expression in (2.2) is the most general one and will always hold.
Let us now consider some examples of convolving continuous-time signals.
Continuous-Time Convolution Examples
1. Let x(t) = u(t) and h(t) = e−t u(t). We wish to compute y(t) = x(t) ∗ h(t) = h(t) ∗ x(t). Let
us use

y(t) = h(t) * x(t) = \int_{-\infty}^{\infty} h(a)\,x(t - a)\,da = \int_{0}^{t} h(a)\,x(t - a)\,da.
Since both x(t) and h(t) are zero for t < 0, so will be y(t), so we only need consider t > 0, as
illustrated in Figure 2.1. For t > 0, we have
y(t) = \int_{0}^{t} h(a)\,u(t - a)\,da = \int_{0}^{t} e^{-a}\,da = 1 - e^{-t}.

(Note here that a is the variable in the convolution integral in Equation (2.2) and t is fixed.)
Thus, the convolution between x(t) and h(t) over all time is
y(t) = \left(1 - e^{-t}\right)u(t).
Figure 2.1: Illustration of convolution for Example 1.
2. Let x(t) = u(t) − u(t − 1) be a rectangular pulse. Let’s compute
y(t) = x(t) * x(t) = \int_{-\infty}^{\infty} x(a)\,x(t - a)\,da.
As illustrated in Figure 2.2, we have 4 regions of the time axis to consider: t < 0, 0 ≤ t ≤ 1, 1 ≤ t ≤ 2 and t > 2. Clearly, y(t) = 0 for t < 0 and for t > 2, as seen from the figure. For 0 ≤ t ≤ 1, we have

y(t) = \int_{0}^{t} da = t,
and for 1 ≤ t ≤ 2,
y(t) = \int_{t-1}^{1} da = 2 - t.
The final convolution result is the triangular pulse at the bottom of Figure 2.2. Mathematically, in
terms of the ramp-function,
y(t) = r(t) − 2r(t − 1) + r(t − 2),
Figure 2.2: Illustration of convolution for Example 2.
or, explicitly,
y(t) = \begin{cases} 0, & t < 0 \\ t, & 0 \le t \le 1 \\ 2 - t, & 1 \le t \le 2 \\ 0, & t > 2. \end{cases}
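Continuous convolutions such as the two examples above can be sanity-checked numerically: on a fine grid, conv scaled by the time step approximates the convolution integral. The sketch below (an assumed check, not part of the text) reproduces Example 1.

dt = 1e-3; t = 0:dt:10;
x  = ones(size(t));                    % u(t) sampled on t >= 0
h  = exp(-t);                          % e^{-t} u(t) sampled on t >= 0
y  = conv(x, h) * dt;                  % Riemann-sum approximation of the integral
y  = y(1:length(t));                   % keep the part aligned with the time grid t
disp(max(abs(y - (1 - exp(-t)))))      % small (order dt): matches y(t) = (1 - e^{-t})u(t)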
2.1.2 The Discrete-Time Case
The same results corresponding to the convolution integral in equation (2.2) in the continuous-time
case can be derived for the discrete-time case by using equation (1.3). Thus, if the response of a
linear, time-invariant system to a unit impulse is h[n], then the output of the system to an arbitrary
input x[n] is
y[n] = \sum_{m=-\infty}^{\infty} x[m]\,h[n - m] = \sum_{m=-\infty}^{\infty} h[m]\,x[n - m]. \qquad (2.4)
where the second equality above can be easily established, as for the continuous-time case, by a
change of variable. If both functions x[n] and h[n] are zero for negative time, then the convolution
reduces to
y[n] = \sum_{m=0}^{n} x[m]\,h[n - m],
but be careful using this expression; you must make sure that both functions are zero for n < 0.
Discrete-Time Convolution Examples
1. Let x[n] = u[n+2]−u[n−3] and h[n] = x[n]. Figure 2.3 shows graphically the process and the
different time regions when there is a change in the functional dependence. We distinguish
4 different regions: n < −4 and n > 4, where y[n] = 0; −4 ≤ n ≤ 0; and 0 ≤ n ≤ 4. For −4 ≤ n ≤ 0, we have:

y[n] = \sum_{m=-\infty}^{\infty} x[m]\,x[n - m] = \sum_{m=-2}^{n+2} 1 = n + 5.
For 0 ≤ n ≤ 4, we have
y[n] = \sum_{m=-\infty}^{\infty} x[m]\,x[n - m] = \sum_{m=n-2}^{2} 1 = 5 - n.
The result is illustrated in the last plot in Figure 2.3.
2. Let x[n] = u[n] − u[n − L] and h[n] = an u[n] where 0 ≤ a < 1. Then
y[n] = \sum_{m=-\infty}^{\infty} h[m]\,x[n - m] = \sum_{m=-\infty}^{\infty} a^m u[m]\left(u[n - m] - u[n - m - L]\right).
The convolution is illustrated in Figure 2.4 for a = 1/2 and L = 5. We distinguish 3 regions:
n < 0, 0 ≤ n < L and n ≥ L. For n < 0, clearly y[n] = 0. For 0 ≤ n < L, we have:
y[n] = \sum_{m=0}^{n} a^m = \frac{1 - a^{n+1}}{1 - a}.

For n \ge L we have:

y[n] = \sum_{m=n-L+1}^{n} a^m = \frac{a^{n+1}\left(a^{-L} - 1\right)}{1 - a}.
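The two discrete examples above can be reproduced with Matlab's conv, which implements exactly the finite-length version of the sum in (2.4) (a sketch; the values for Example 2 are those used in Figure 2.4).

x = ones(1,5);                  % Example 1: x[n] = u[n+2] - u[n-3] (5 unit samples)
disp(conv(x, x))                % [1 2 3 4 5 4 3 2 1], the triangle supported on n = -4..4
a = 1/2; L = 5;                 % Example 2
h = a.^(0:30);                  % h[n] = a^n u[n], truncated well beyond the samples checked
y = conv(ones(1,L), h);
disp([y(3), (1 - a^3)/(1 - a)])          % n = 2 < L: both give 1.75
disp([y(6), a^6*(a^(-L) - 1)/(1 - a)])   % n = 5 >= L: both give 0.96875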
2.2 The Laplace Transform and its Use to Study LTI Systems
The Laplace transform is an extremely useful mathematical tool in the analysis and design of
continuous-time, linear, time-invariant systems. A parallel transform in the discrete-time case,
the z-transform, is as powerful and important in analyzing and designing discrete-time systems
and has similar properties to the Laplace transform. The power of the Laplace transform is in
converting linear differential equations with constant coefficients into algebraic equations which are
easier to solve. It does so by moving the design from the time-domain (where input-output relations
are described by differential equations) to the frequency-domain, where input-output relations are
described by algebraic equations. Once a solution is obtained in the frequency domain, we can use
Figure 2.3: Illustration of discrete convolution for Example 1.
the inverse Laplace transform to obtain the solution in the time-domain. In this course, we will not
deal extensively with the mathematical subtleties inherent in the study of the Laplace transform
but rather focus on its properties and how they can be used in the context of LTI systems.
The Laplace transform is an operator, L, which transforms a time-domain signal into the
frequency domain. It is customary to designate the Laplace transform of a signal x(t) by X(s)
where s = σ + jω is the complex frequency variable (with real part σ and imaginary part ω.)
The Laplace transform of a function (signal) x(t) is defined by
X(s) = \mathcal{L}[x(t)] = \int_{-\infty}^{\infty} x(t)e^{-st}\,dt. \qquad (2.5)
Not all functions possess a Laplace transform, and for those that do, the transform often converges only over a region of the so-called s-plane, defined by the real axis (σ) versus the imaginary axis (jω); this region is referred to as the region of convergence (ROC).
As an example, let us compute the Laplace transform of x(t) = e^{jω0 t}u(t). We have:
X(s) = \int_{-\infty}^{\infty} x(t)e^{-st}\,dt = \int_{0}^{\infty} e^{j\omega_0 t}e^{-st}\,dt = \int_{0}^{\infty} e^{-(s - j\omega_0)t}\,dt
= \frac{1}{s - j\omega_0} - \lim_{t\to\infty}\frac{e^{-(s - j\omega_0)t}}{s - j\omega_0}
= \frac{1}{s - j\omega_0} - \lim_{t\to\infty}\frac{e^{-\sigma t}\,e^{-j(\omega - \omega_0)t}}{s - j\omega_0}.

Figure 2.4: Illustration of discrete convolution for Example 2.
Clearly, if σ > 0, the second term above goes to zero, and thus,
\mathcal{L}\left[e^{j\omega_0 t}u(t)\right] = \frac{1}{s - j\omega_0},
and the region of convergence is the right-hand s-plane (σ > 0), as indicated by the shaded region
in Figure 2.5:
Figure 2.5: Illustration of ROC for ejω0 t .
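The closed form just derived can be sanity-checked numerically at any point inside the ROC; the sketch below (assumed test values) compares Matlab's numerical integral of the defining formula (2.5) against 1/(s − jω0).

w0 = 2*pi*5;
s  = 2 + 3j;                                         % a test point with Re{s} > 0 (inside the ROC)
X_numeric = integral(@(t) exp(1j*w0*t).*exp(-s*t), 0, Inf);
X_formula = 1/(s - 1j*w0);
disp([X_numeric, X_formula])                         % the two values agree closely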
The Laplace transform is invertible; the inverse Laplace transform of X(s) is
x(t) = \frac{1}{2\pi j}\int_{\sigma - j\infty}^{\sigma + j\infty} X(s)e^{st}\,ds. \qquad (2.6)
Because of the specific nature of the Laplace transform of linear, constant coefficient differential
equations describing linear time-invariant systems, the Laplace transforms encountered are in the
form of rational functions (ratios of polynomials in s). In these cases we have an easier way of
inverting without having to resort to the contour integral in (2.6).
2.2.1 Properties of the Laplace Transform
Next, we will study several very important properties of the Laplace transform that allow us to
compute Laplace transforms of functions often without resorting to (2.5). Some of these properties
also provide a fundamental insight into the analysis and design of LTI systems.
Linearity
The Laplace transform is linear. To see this, let
X_1(s) = L[x_1(t)] = ∫_{−∞}^{∞} x_1(t) e^{−st} dt,
and
X_2(s) = L[x_2(t)] = ∫_{−∞}^{∞} x_2(t) e^{−st} dt.
Then, the Laplace transform of z(t) = c1 x1 (t) + c2 x2 (t) is:
Z(s) = L[z(t)] = ∫_{−∞}^{∞} [c_1 x_1(t) + c_2 x_2(t)] e^{−st} dt
= c_1 ∫_{−∞}^{∞} x_1(t) e^{−st} dt + c_2 ∫_{−∞}^{∞} x_2(t) e^{−st} dt
= c_1 L[x_1(t)] + c_2 L[x_2(t)],
and, thus, linearity is established.
Clearly the linearity property is very important as it can be used to break up the Laplace
transform of the linear combination of several functions to a linear combination of the Laplace
transforms of the individual functions. The ROC for the composite function is the intersection of
the ROCs for the individual functions.
Time Shift
Let X(s) = L[x(t)]. Then the Laplace transform of x(t − t0 ) is
L[x(t − t0 )] = e−st0 X(s).
The proof is easily obtained from the definition of the Laplace transform and a simple change
of variables; thus, we have:
L[x(t − t_0)] = ∫_{−∞}^{∞} x(t − t_0) e^{−st} dt
= ∫_{−∞}^{∞} x(a) e^{−s(a+t_0)} da
= e^{−st_0} ∫_{−∞}^{∞} x(a) e^{−sa} da
= e^{−st_0} X(s),
where the second equality is obtained by a change of variables a = t − t0 .
Frequency Shift
Multiplication by eat in the time-domain corresponds to a frequency shift in the frequency domain:
L[eat x(t)] = X(s − a).
The proof is also easy:
L[e^{at} x(t)] = ∫_{−∞}^{∞} e^{at} x(t) e^{−st} dt
= ∫_{−∞}^{∞} x(t) e^{−(s−a)t} dt
= X(s − a).
Time-domain Differentiation
Differentiation in the time-domain corresponds to multiplication by s in the frequency domain:
L[dx(t)/dt] = sX(s).
To show this we use (2.6). We have:
x(t) = (1/2πj) ∫_{σ−j∞}^{σ+j∞} X(s) e^{st} ds.
Taking derivatives with respect to t on both sides, we obtain:
dx(t)/dt = (1/2πj) ∫_{σ−j∞}^{σ+j∞} sX(s) e^{st} ds,
which indicates that the Laplace transform of dx(t)/dt is sX(s).
By inductive reasoning, we conclude that
L[d^n x(t)/dt^n] = s^n X(s).
Frequency-domain Differentiation
Multiplication by t in the time-domain corresponds to differentiation in the frequency-domain (with
negation):
L[t x(t)] = −dX(s)/ds.
This is easily seen from the definition of the Laplace transform:
X(s) = ∫_{−∞}^{∞} x(t) e^{−st} dt.
Taking derivatives with respect to s on both sides, we have:
dX(s)/ds = −∫_{−∞}^{∞} t x(t) e^{−st} dt,
from which the property follows.
By induction we can generalize this to
L[t^n x(t)] = (−1)^n d^n X(s)/ds^n.
Frequency-Domain Integration
L[x(t)/t] = ∫_s^{∞} X(a) da.
To show this property, we have:
L[x(t)/t] = ∫_{−∞}^{∞} (x(t)/t) e^{−st} dt
= ∫_{−∞}^{∞} x(t) [∫_s^{∞} e^{−at} da] dt
= ∫_s^{∞} [∫_{−∞}^{∞} x(t) e^{−at} dt] da
= ∫_s^{∞} X(a) da.
Time Domain Integration
The Laplace transform of the integral of a function in the time-domain corresponds to division by
s in the frequency-domain:
L[∫_{−∞}^{t} x(a) da] = X(s)/s.
We have
x(a) = (1/2πj) ∫_{σ−j∞}^{σ+j∞} X(s) e^{sa} ds.
Integrating on both sides:
∫_{−∞}^{t} x(a) da = (1/2πj) ∫_{σ−j∞}^{σ+j∞} X(s) [∫_{−∞}^{t} e^{sa} da] ds = (1/2πj) ∫_{σ−j∞}^{σ+j∞} (X(s)/s) e^{st} ds,
from which the property follows.
The Convolution Property
This is one of the most important properties of the Laplace transform as it pertains to the analysis
and design of LTI systems. It states that convolution between two functions in the time domain
corresponds to multiplication of their Laplace transforms in the frequency-domain. Let h(t) and
x(t) be two functions that have Laplace transforms H(s) and X(s) respectively. Then
L[x(t) ∗ h(t)] = X(s)H(s).
The proof is as follows:
L[x(t) ∗ h(t)] = ∫_{−∞}^{∞} [∫_{−∞}^{∞} x(a) h(t − a) da] e^{−st} dt
= ∫_{−∞}^{∞} x(a) [∫_{−∞}^{∞} h(t − a) e^{−st} dt] da   (the inner integral equals e^{−sa} H(s))
= H(s) ∫_{−∞}^{∞} x(a) e^{−sa} da
= H(s) X(s).
The Initial Value Theorem
If x(t) is zero for t < 0 and has no impulses at the origin, then
lim_{s→∞} sX(s) = lim_{t→0^+} x(t) = x(0^+).
The initial value theorem allows us to obtain the initial value of a signal in the frequency domain
without having to convert to the time domain by taking an inverse Laplace transform.
The Final Value Theorem
If x(t) is zero for t < 0 and sX(s) has poles only in the left-hand s-plane, then
lim_{s→0} sX(s) = lim_{t→∞} x(t).
The final value theorem allows us to obtain the steady-state value of a signal in the frequency
domain without having to convert to the time domain by taking an inverse Laplace transform.
2.2.2  The Laplace Transform of Some Useful Functions
Here we compute the Laplace transforms of some functions often encountered in the study of
LTI systems. The Laplace transform properties discussed above in conjunction with the Laplace
transform of the functions given below can be used to obtain the Laplace transforms of a much
larger variety of functions.
1. The Laplace transform of the delta function is:
L[δ(t)] = ∫_{−∞}^{∞} δ(t) e^{−st} dt = 1.
2. The Laplace transform of a unit-step is:
L[u(t)] = ∫_{−∞}^{∞} u(t) e^{−st} dt = ∫_0^{∞} e^{−st} dt = 1/s − lim_{t→∞} e^{−st}/s = 1/s,   σ > 0.
3. Let x(t) = e−at u(t). Then
L[x(t)] = ∫_{−∞}^{∞} e^{−at} u(t) e^{−st} dt
= ∫_0^{∞} e^{−(s+a)t} dt
= 1/(s + a) − lim_{t→∞} e^{−(s+a)t}/(s + a)
= 1/(s + a),   σ > −a.
4. x(t) = ejωt u(t).
L[x(t)] = ∫_{−∞}^{∞} e^{jωt} u(t) e^{−st} dt
= ∫_0^{∞} e^{−(s−jω)t} dt
= 1/(s − jω) − lim_{t→∞} e^{−(s−jω)t}/(s − jω)
= 1/(s − jω),   σ > 0.
5. x(t) = cos(ωt)u(t). Since
ejωt = cos(ωt) + j sin(ωt)
and the Laplace transform is linear, the Laplace transform of cos(ωt) is the real part of the Laplace transform of e^{jωt} obtained above. Thus, we have (Re[·] denotes the real part of the expression in brackets):
L[cos(ωt)] = Re[1/(s − jω)] = s/(s^2 + ω^2),   σ > 0.
Similarly, the Laplace transform of x(t) = sin(ωt)u(t) is the imaginary part of the Laplace transform of e^{jωt} obtained above (Im[·] denotes the imaginary part of the expression in brackets):
L[sin(ωt)] = Im[1/(s − jω)] = ω/(s^2 + ω^2),   σ > 0.
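Entries 3–5 of this list can be reproduced symbolically with sympy's one-sided Laplace transform; the sketch below is only an illustration and the symbol names are arbitrary.

import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')
a, w = sp.symbols('a omega', positive=True)

# One-sided Laplace transforms of the entries above (the u(t) factor is implicit).
print(sp.laplace_transform(sp.exp(-a*t), t, s, noconds=True))  # 1/(a + s)
print(sp.laplace_transform(sp.cos(w*t), t, s, noconds=True))   # s/(omega**2 + s**2)
print(sp.laplace_transform(sp.sin(w*t), t, s, noconds=True))   # omega/(omega**2 + s**2)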
2.2.3  Inversion by Partial Fraction Expansion
In the study of LTI systems, we often encounter transfer functions and Laplace domain representations of signals in the form of rational functions in s, i.e. in the form of ratio of polynomials in
s:
H(s) = N(s)/D(s).
In these cases the inverse Laplace transform can be obtained by expanding the rational function
using a partial fraction expansion. We will introduce the technique using several examples:
1. Distinct Real Roots
Let
H(s) = (s + 1)/[(s + 2)(s + 3)(s + 4)].
We have, for some scalars A, B, C:
H(s) = (s + 1)/[(s + 2)(s + 3)(s + 4)] = A/(s + 2) + B/(s + 3) + C/(s + 4)  ⇒
s + 1 = A(s + 3)(s + 4) + B(s + 2)(s + 4) + C(s + 2)(s + 3).
The last equation above is true for all s. To compute A, B, C, we choose three different
values of s to obtain the three equations that will allow us to obtain the three variables.
Solving for the three unknowns is made easy by choosing to evaluate the above equation at
the roots of D(s): s = −2, − 3, − 4.
For s = −2, we have: −1 = 2A ⇒ A = −1/2.
For s = −3, we have: −2 = −B ⇒ B = 2.
For s = −4, we have: −3 = 2C ⇒ C = −3/2.
Thus,
H(s) = −(1/2)/(s + 2) + 2/(s + 3) − (3/2)/(s + 4),
from which
h(t) = [−(1/2) e^{−2t} + 2 e^{−3t} − (3/2) e^{−4t}] u(t).
As an application of the initial and final value theorems, let us compute h(0) and h(∞) using
the theorems and also from the time-domain expressions. We have:
h(0) = lim_{s→∞} sH(s) = lim_{s→∞} s(s + 1)/[(s + 2)(s + 3)(s + 4)] = 0.
From the time-domain expression for h(t), we have
h(0) = −1/2 + 2 − 3/2 = 0,
which confirms the result obtained using the initial value theorem. Similarly, applying the
final value theorem, we have:
h(∞) = lim_{s→0} sH(s) = lim_{s→0} s(s + 1)/[(s + 2)(s + 3)(s + 4)] = 0.
From the time-domain expression for h(t), we have
h(∞) = 0.
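The same partial fraction expansion can be obtained numerically with scipy.signal.residue, which returns the residues (the A, B, C above), the poles, and any direct polynomial term; a small sketch:

import numpy as np
from scipy.signal import residue

# H(s) = (s + 1)/((s + 2)(s + 3)(s + 4)); expect residues -1/2, 2, -3/2
num = [1, 1]                                            # s + 1
den = np.polymul(np.polymul([1, 2], [1, 3]), [1, 4])    # (s+2)(s+3)(s+4)
r, p, k = residue(num, den)
print(r)    # residues, one per pole (order matches p)
print(p)    # poles: approximately -2, -3, -4
print(k)    # direct term: empty, since H(s) is strictly proper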
2. Distinct Complex Roots
H(s) = (s + 2)/(s^2 + 2s + 2) = (s + 2)/[(s + 1 − j)(s + 1 + j)] = A/(s + 1 − j) + B/(s + 1 + j).
Therefore, we have
(s + 2) = A(s + 1 + j) + B(s + 1 − j).
Letting s = −1 + j, we have
1 + j = A(−1 + j + 1 + j) ⇒ A = (1 + j)/(2j) = 1/2 − j(1/2).
Letting s = −1 − j, we have
1 − j = B(−1 − j + 1 − j) ⇒ B = −(1 − j)/(2j) = 1/2 + j(1/2).
Note that B = A∗ . This is not a coincidence and it is always the case for quadratic terms
with complex roots, which must come in complex conjugate pairs. Thus, in writing a partial
fraction expansion for such quadratic terms one need not use (and compute) two variables A
and B but only one, A, as the other is A∗ . Taking inverse Laplace transforms, we obtain
h(t) = [A e^{−(1−j)t} + A^* e^{−(1+j)t}] u(t)
= [A e^{−t} e^{jt} + A^* e^{−t} e^{−jt}] u(t)
= e^{−t} [A e^{jt} + (A e^{jt})^*] u(t)
= 2 e^{−t} Re[A e^{jt}] u(t)
= e^{−t} [cos(t) + sin(t)] u(t).
Note that despite the fact that complex coefficients are involved, the final time-domain result
is real. This is due to the fact again that the complex roots come in complex-conjugate pairs.
Of course, another way to invert H(s) is to use the properties of the Laplace transform. Thus,
we can write:
H(s) = (s + 2)/(s^2 + 2s + 2) = (s + 1)/[(s + 1)^2 + 1] + 1/[(s + 1)^2 + 1].
Noting that the inverse Laplace transform of s/(s^2 + 1) is cos(t) and the inverse Laplace transform of 1/(s^2 + 1) is sin(t), and using the frequency shift property of the Laplace transform on both terms above,
we obtain (the same expression, as expected):
h(t) = e−t [cos(t) + sin(t)] u(t).
As an application of the initial and final value theorems, let us compute h(0) and h(∞) using
the theorems and also from the time-domain expressions. We have:
h(0) = lim_{s→∞} sH(s) = lim_{s→∞} s(s + 2)/(s^2 + 2s + 2) = 1,
which is the same answer obtained from the time-domain expression. Applying the final value
theorem, we have:
h(∞) = lim_{s→0} sH(s) = lim_{s→0} s(s + 2)/(s^2 + 2s + 2) = 0.
From the time-domain expression for h(t), we have
h(∞) = 0.
3. Repeated Real Roots
H(s) = (s^2 + 3)/[(s + 1)^2 (s + 2)(s + 3)]
= A/(s + 2) + B/(s + 3) + C/(s + 1) + D/(s + 1)^2  ⇒
(s^2 + 3) = A(s + 1)^2 (s + 3) + B(s + 1)^2 (s + 2) + C(s + 1)(s + 2)(s + 3) + D(s + 2)(s + 3).
Evaluating the last equation at s = −1, we obtain D = 2. Evaluating at s = −2, we obtain
A = 7 and for s = −3 we have B = −3. To obtain C, we need to evaluate at another value
of s; choosing s = 0, we have
3 = 3A + 2B + 6C + 6D  ⇒  C = −4.
Thus,
H(s) = 7/(s + 2) − 3/(s + 3) − 4/(s + 1) + 2/(s + 1)^2,
from which
h(t) = [7 e^{−2t} − 3 e^{−3t} − 4 e^{−t} + 2t e^{−t}] u(t).
Let us again verify using the initial and final value theorems. We have
h(0) = lim_{s→∞} sH(s) = lim_{s→∞} s(s^2 + 3)/[(s + 1)^2 (s + 2)(s + 3)] = 0,
which is the same answer obtained from the time-domain expression. Applying the final value
theorem, we have:
h(∞) = lim_{s→0} sH(s) = lim_{s→0} s(s^2 + 3)/[(s + 1)^2 (s + 2)(s + 3)] = 0,
which is also what one obtains using the time-domain expression.
2.2.4  Solution of Differential Equations Using the Laplace Transform
Consider the general n-th order linear differential equation with constant coefficients:
d^n y(t)/dt^n + a_{n−1} d^{n−1} y(t)/dt^{n−1} + ··· + a_1 dy(t)/dt + a_0 y(t) = x(t) + b_1 dx(t)/dt + ··· + b_{k−1} d^{k−1} x(t)/dt^{k−1} + b_k d^k x(t)/dt^k,
where the output y(t) is a function of the input x(t) and possibly some of its derivatives. We are
interested in the zero-state response of this system. Taking Laplace transforms on both sides of
the differential equation above, we obtain
s^n Y(s) + a_{n−1} s^{n−1} Y(s) + ··· + a_1 s Y(s) + a_0 Y(s) = X(s) + b_1 s X(s) + ··· + b_{k−1} s^{k−1} X(s) + b_k s^k X(s).
Solving for the ratio of Y(s) over X(s), we obtain
H(s) = Y(s)/X(s) = (1 + b_1 s + ··· + b_{k−1} s^{k−1} + b_k s^k)/(s^n + a_{n−1} s^{n−1} + ··· + a_1 s + a_0).   (2.7)
We refer to H(s) in (2.7), defined as the ratio of the Laplace transform of the output to the Laplace transform of the input, as the transfer function of the system. Rewriting the above equation
slightly, we can write
Y (s) = H(s)X(s),
i.e., the output Y (s) can be obtained by multiplying the transfer function with the Laplace transform of the input, X(s). Since multiplication in the frequency-domain is convolution in the time
domain (in view of the convolution property of the Laplace transform), we have
y(t) = h(t) ∗ x(t).
We conclude that h(t), the inverse Laplace transform of the transfer function H(s), is the impulse
response of the system, i.e. the impulse response and the transfer function are Laplace transform
pairs.
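As a quick illustration of this transfer-function/impulse-response pairing, consider the hypothetical system y'' + 3y' + 2y = x, so that H(s) = 1/(s^2 + 3s + 2) and, by partial fractions, h(t) = (e^{−t} − e^{−2t})u(t). The sketch below checks this numerically with scipy.

import numpy as np
from scipy.signal import TransferFunction, impulse

# H(s) = 1/(s^2 + 3s + 2) for the assumed system y'' + 3y' + 2y = x
H = TransferFunction([1], [1, 3, 2])
t, h = impulse(H, T=np.linspace(0, 8, 400))
h_exact = np.exp(-t) - np.exp(-2*t)       # inverse Laplace transform of H(s)
print(np.max(np.abs(h - h_exact)))        # numerically small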
2.2.5  Interconnection of Linear Systems
Very often, complicated systems are put together by breaking down the overall system into a number of subsystems which are then connected together to implement the final system. This kind of design isolates subsystems by function, making it easy to design each subsystem to perform its particular function without complications that may arise from other parts of the system. It is also beneficial when a subsystem breaks down, since it can then be replaced without having to replace the whole system.
In general it is relatively easy to find the transfer function of the overall system when one knows
the transfer functions of the individual subsystems. Here, we will do this for a small set of examples.
Example 1 Consider the system in Figure 2.6 comprised of two subsystems with transfer functions
H1 (s) and H2 (s). If the input is x(t) and the output y(t), then the transfer function of the overall
system is, by definition:
H(s) = Y(s)/X(s).
We have
Y (s) = X(s)H1 (s) + X(s)H2 (s) = X(s)[H1 (s) + H2 (s)]
Figure 2.6: A parallel interconnection of two systems.
and therefore, the transfer function of the system is
H(s) = Y(s)/X(s) = H_1(s) + H_2(s).
Example 2 Now consider the series interconnection of two systems shown in Figure 2.7. The output of the first system, which is H_1(s)X(s), is the input to the second. Thus,
Y(s) = X(s) H_1(s) H_2(s)
and, thus,
H(s) = Y(s)/X(s) = H_1(s) H_2(s).
Figure 2.7: A series interconnection of two systems.
Example 3 Figure 2.8 shows a system interconnection often encountered in control systems, known
as a feedback system (where the name is in view of the fact that the output is fed back to the input).
To compute the overall transfer function, we have:
Y (s) = E(s)H1 (s)
and,
E(s) = X(s) − Y (s)H2 (s).
Thus,
Y (s) = [X(s) − Y (s)H2 (s)]H1 (s) ⇒
Y (s)[1 + H1 (s)H2 (s)] = X(s)H1 (s)
from which we obtain
H(s) = Y(s)/X(s) = H_1(s)/[1 + H_1(s) H_2(s)].
Figure 2.8: A feedback interconnection of two systems.
Working in the Laplace domain makes designing systems to perform certain functions easier.
For example, suppose that a signal (for example an audio music signal read off an album using a
stylus) is distorted because of imperfections on the disc. If one can model the distortion effect in
the signal as a linear system operating on the original undistorted signal, x(t), then what we have
available is the output of this linear system, say, y(t). The problem is to extract x(t) from y(t),
thus removing the distortion in the signal. A common approach is to build another linear system
appropriately designed through which to pass y(t) to hopefully recover x(t) as closely as possible.
This designed system is often referred to as an equalizer and the process by which the original is
recovered (as much as possible) is known as equalization. This is illustrated in Figure 2.9 below.
We need to design H2 (s) such that it recovers the original signal x(t) from the distorted signal y(t).
Figure 2.9: Illustration of equalization (the distorting system H_1(s) followed by the equalizer H_2(s)).
We have
X(s) = X(s)H1 (s)H2 (s),
and therefore
H1 (s)H2 (s) = 1
from which
H_2(s) = 1/H_1(s).
Of course, before implementing the above system, one must make sure 1/H_1(s) is stable. Also,
often it is not possible to design H2 (s) to perfectly extract the original signal. In these cases, we
aim to minimize the error between the original signal and the reconstructed signal.
Class Project 1 is based on exactly the above concept.
2.2.6  The Sinusoidal Steady-State Response of a LTI System
Consider a stable LTI system with a rational transfer function
H(s) = N(s)/D(s).
The system being stable means the poles of H(s) (the roots of D(s)) have negative real parts. Let
the input to this system be the complex exponential
x(t) = Aej(ωt+φ) .
We are interested in the steady-state response of the system (after a long time). We have:
Y(s) = H(s) X(s)
= H(s) A e^{jφ}/(s − jω)
= N(s) A e^{jφ}/[(s − jω) D(s)]
= C/(s − jω) + Σ (partial fraction terms corresponding to D(s))  ⇒
H(s) A e^{jφ}/(s − jω) = C/(s − jω) + Σ (partial fraction terms corresponding to D(s))
for some constant C to be evaluated.
The terms in the partial fraction expansion above corresponding to D(s) will die out in time
since all such terms are multiplied by exponentials with negative exponents (since the roots of D(s)
have negative real parts).
Multiplying both sides of the last equation above by (s − jω), we have
H(s) A e^{jφ} = C + (s − jω) Σ (partial fraction terms corresponding to D(s)).
The above equation is true for all values of s. Letting s = jω in the equation above, we have
C = H(jω)Aejφ .
Thus,
Y(s) = H(jω) A e^{jφ}/(s − jω) + Σ (partial fraction terms corresponding to D(s)).
Taking inverse Laplace transforms and noting that all time-domain signals corresponding to the
terms in the sum will go to zero at steady-state, we have
y(t) = H(jω) A e^{j(ωt+φ)}   (2.8)
= A |H(jω)| e^{j(ωt+φ+θ(ω))}   (2.9)
= A |H(jω)| cos(ωt + φ + θ(ω)) + j A |H(jω)| sin(ωt + φ + θ(ω)),   (2.10)
where θ(ω) is the angle of H(jω).
Thus the output at steady-state is just the input complex exponential multiplied by the transfer function evaluated at s = jω (where ω is the frequency of the input complex exponential).
Now, since we can write using Euler’s identity
x(t) = Aej(ωt+φ) = A cos(ωt + φ) + jA sin(ωt + φ)
using linearity and Equation (2.10) above, we infer that the output when the input is A cos(ωt + φ) is A|H(jω)| cos(ωt + φ + θ(ω)), and the output when the input is A sin(ωt + φ) is A|H(jω)| sin(ωt + φ + θ(ω)).
Example 4 Consider the system described by the circuit in Figure 2.10. We are interested in
obtaining the steady-state output when the input is x(t) = 120 cos(10t).
Figure 2.10: Circuit for Example 4 (a series RC circuit with R = 1000 Ω and C = 1000 µF; the output y(t) is the capacitor voltage).
The input-output relation for the above circuit is described by the following differential equation (writing a single loop equation):
x(t) = RC dy(t)/dt + y(t).
Taking Laplace transforms on both sides and solving for the transfer function, H(s), we obtain
H(s) = 1/(RCs + 1) = 1/(s + 1).
Thus,
H(j10) = 1/(j10 + 1) = (1/√101) e^{−j84.3°}.
Thus, the output at steady-state is
y(t) = (120/√101) cos(10t − 84.3°).
Chapter 3
The Fourier Series
3.1  Signal Space
Definition 1 (Orthonormal signals) A set of N signals {g1 (t), g2 (t), · · · , gN (t)} defined over a
time interval 0 ≤ t ≤ T is said to be orthonormal if
∫_0^T g_i(t) g_j^*(t) dt = 1 if i = j, and 0 otherwise.   (3.1)
Definition 2 (Linear combination) A linear combination of N orthonormal signals is any signal of the form
x(t) = Σ_{i=1}^{N} x_i g_i(t)   (3.2)
where x = (x1 , x2 , · · · , xN ) is some real vector.
We say that N orthonormal signals span an N -dimensional (signal) space, which consists of all
possible linear combinations of the N orthonormal signals. We say that the N orthonormal signals
are the basis for the N -dimensional signal space. Thus, any signal in the signal space spanned by
N orthonormal signals can be expressed as in (3.2) for some N -dimensional vector x. If we know
that some signal x(t) is in the space spanned by N orthonormal vectors, we can find the vector x
representing that signal by projecting the signal on each of the N orthonormal signals:
x_k = ∫_0^T x(t) g_k^*(t) dt,   k = 1, 2, ···, N.   (3.3)
This can be shown easily as follows:
∫_0^T x(t) g_k^*(t) dt = ∫_0^T [Σ_{i=1}^{N} x_i g_i(t)] g_k^*(t) dt = Σ_{i=1}^{N} x_i ∫_0^T g_i(t) g_k^*(t) dt = x_k,
where the last equality is in view of (3.1). Thus, (3.3) allows us to compute the signal-space (vector)
representation of any signal in the signal-space, whereas (3.2) takes us from a signal-space (vector)
representation to the waveform representation of the signal.
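The projection formula (3.3) is straightforward to evaluate numerically. The sketch below uses an assumed orthonormal pair g_1(t) = √(2/T) cos(2πt/T), g_2(t) = √(2/T) sin(2πt/T) on 0 ≤ t ≤ T and recovers the coordinates of a signal built from them.

import numpy as np

T = 1.0
dt = T / 200000
t = np.arange(0, T, dt)
g1 = np.sqrt(2/T) * np.cos(2*np.pi*t/T)     # an orthonormal basis on [0, T]
g2 = np.sqrt(2/T) * np.sin(2*np.pi*t/T)

x = 3*g1 - 2*g2                             # a signal in the span of g1, g2
x1 = np.sum(x * g1) * dt                    # projection integrals, eq. (3.3)
x2 = np.sum(x * g2) * dt
print(x1, x2)                               # approximately 3 and -2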
3.1.1  Signal Energy
The energy, E, of a signal x(t) is defined by:
E = ∫_0^T |x(t)|^2 dt.
If the signal-space representation, x, of a signal x(t) is known, then the signal energy can be
expressed in terms of x as:
E = ∫_0^T |x(t)|^2 dt = ∫_0^T |Σ_{i=1}^{N} x_i g_i(t)|^2 dt = Σ_{i=1}^{N} Σ_{j=1}^{N} x_i x_j^* ∫_0^T g_i(t) g_j^*(t) dt = Σ_{i=1}^{N} |x_i|^2.
3.1.2  Signal Orthogonality
Two signals x(t) and y(t) are said to be orthogonal if
∫_0^T x(t) y^*(t) dt = 0.
The integral above is referred to as the correlation between the two signals x(t) and y(t). Thus,
two signals are orthogonal when their correlation is zero.
In signal space, the orthogonality condition becomes
∫_0^T x(t) y^*(t) dt = ∫_0^T [Σ_{i=1}^{N} x_i g_i(t)] [Σ_{j=1}^{N} y_j^* g_j^*(t)] dt = Σ_{i=1}^{N} Σ_{j=1}^{N} x_i y_j^* ∫_0^T g_i(t) g_j^*(t) dt = 0  ⇔  Σ_{i=1}^{N} x_i y_i^* = 0.
3.1.3  The Euclidean Distance
The Euclidean distance, d(x, y), between two signals x(t) and y(t), is given by
d(x, y) = √( ∫_0^T |x(t) − y(t)|^2 dt ).   (3.4)
Following similar derivations as above, we can show that the Euclidean distance between two signals
can be expressed in the signal-space as
d(x, y) = √( Σ_{i=1}^{N} |x_i − y_i|^2 ).   (3.5)
Example 5 (Signal-space Representation) Figure 3.1 shows an example of a signal-space spanned
by two basis signals and the representation of three signals as points in the signal-space spanned
by the two basis signals. The distance between, say signals s1 (t) and s3 (t), can be computed in
signal-space as
d_{13} = √((1 − 2)^2 + (1 − 0)^2) = √2.
The same answer can be obtained in the time-domain through
d_{13} = √( ∫_0^2 |s_1(t) − s_3(t)|^2 dt ) = √2.
Figure 3.1: Illustration of signal-space concepts: (a) the basis signals; (b) the waveforms to be represented in signal-space; (c) the signal-space representation of the three signals.
3.2  The Complex Fourier Series
Claim 1 The signal set
g_k(t) = (1/√T) e^{j2πkt/T},   −T/2 ≤ t ≤ T/2,   k ∈ Z,   (3.6)
is an orthonormal set (Z is the set of all integers, positive and negative).
To prove this claim, we have:
∫_{−T/2}^{T/2} g_k(t) g_n^*(t) dt = (1/T) ∫_{−T/2}^{T/2} e^{j2π(k−n)t/T} dt
= [e^{jπ(k−n)} − e^{−jπ(k−n)}]/[j2π(k − n)]
= sin[π(k − n)]/[π(k − n)]
= 1 if n = k, and 0 otherwise.
The above signals can also easily be seen to be periodic with period T :
g_k(t + T) = (1/√T) e^{j2πk(t+T)/T} = (1/√T) e^{j2πkt/T} e^{j2πk} = (1/√T) e^{j2πkt/T} = g_k(t).
The orthonormal signal set in (3.6) spans an infinite-dimensional space. Signals in this space can be expressed as (note the slight change in the definition of x_k by the 1/√T factor):
x(t) = Σ_k x_k e^{j2πkt/T},   (3.7)
where
x_k = (1/T) ∫_{−T/2}^{T/2} x(t) e^{−j2πkt/T} dt,   k ∈ Z.   (3.8)
Note that for periodic functions x(t) with period T , the expansion in (3.7) is valid over all time
−∞ < t < ∞, and not just over (a period) −T /2 ≤ t ≤ T /2.
The representation of periodic signals in (3.7) is known as the (complex) Fourier Series. The
series converges in the mean-squared-error (MSE) sense to the signal x(t) under any one of the
following sufficient conditions:
1. The periodic signal x(t) is a continuous function of t.
2. x(t) is square-integrable: ∫_T |x(t)|^2 dt < ∞.
3. The following (Dirichlet) conditions hold:
(a) ∫_{−T/2}^{T/2} |x(t)| dt < ∞;
(b) The function x(t) is single-valued.
(c) The function x(t) has a finite number of discontinuities within a period.
(d) The function x(t) has a finite number of maxima and minima within a period.
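In practice the coefficients (3.8) can also be approximated numerically by discretizing the integral. As a sketch (using the rectangular pulse of Figure 3.2 with the assumed values T = 4 and T_p = 2, so that (3.9) below applies):

import numpy as np

T, Tp = 4.0, 2.0
dt = T / 100000
t = np.arange(-T/2, T/2, dt)
x = (np.abs(t) < Tp/2).astype(float)        # one period of the pulse

def fourier_coeff(k):
    # Riemann-sum approximation of (3.8)
    return np.sum(x * np.exp(-1j * 2*np.pi * k * t / T)) * dt / T

for k in range(4):
    print(k, fourier_coeff(k))              # compare with (3.9): 1/2, 1/pi, 0, -1/(3 pi)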
Figure 3.2: Rectangular pulse periodic signal with period T (a pulse of height 1 and width T_p centered at the origin).
3.2.1  Complex Fourier Series Examples
Example 6 Consider the T -periodic signal in Figure 3.2. To compute the complex Fourier series
coefficients, we have:
x_k = (1/T) ∫_{−T/2}^{T/2} x(t) e^{−j2πkt/T} dt
= (1/T) ∫_{−T_p/2}^{T_p/2} e^{−j2πkt/T} dt
= (T_p/T) · sin(πk T_p/T)/(πk T_p/T).
For the special case of T_p = T/2 in particular, we have:
x_k = (1/2) · sin(πk/2)/(πk/2) = { 1/2, k = 0;  (−1)^{(k−1)/2}/(πk), k odd;  0, k ≠ 0 and even. }   (3.9)
Figure 3.3 plots the Fourier coefficients above from k = −10 to k = 10 (i.e. up to the tenth
harmonic). As can be seen, there is a substantial dc component in the periodic signal as well as
substantial energy in the first harmonic (multiple of the fundamental frequency) but no energy in
even harmonics.
Example 7 Consider the T -periodic signal in Figure 3.4. To compute the complex Fourier series
coefficients, we have:
x_k = (1/T) ∫_{−T/2}^{T/2} x(t) e^{−j2πkt/T} dt
= −(j/T) ∫_{−T/2}^{T/2} x(t) sin(2πkt/T) dt   (the cosine term integrates to zero since x(t) is odd)
= −(2j/T) ∫_0^{T/2} sin(2πkt/T) dt
= [1 − cos(πk)]/(jπk)
= [1 − (−1)^k]/(jπk).
Figure 3.3: Amplitude spectrum of the periodic signal in Figure 3.2.
Thus, we have:
x_k = { 2/(jkπ), k odd;  0, k even. }   (3.10)
Figure 3.5 plots the (imaginary in this case) Fourier coefficients above from k = −10 to k = 10
(i.e. up to the tenth harmonic). As can be seen, there is no dc component in the periodic signal
and in fact no energy in any even harmonics of the signal.
Example 8 Consider the half-wave rectified periodic signal in Figure 3.6. Clearly it has period
T . This signal is obtained when a pure sinusoid A cos(2πt/T ) passes through an ideal diode. To
compute the complex Fourier series coefficients, we have:
x_k = (1/T) ∫_{−T/2}^{T/2} x(t) e^{−j2πkt/T} dt
= (A/T) ∫_{−T/4}^{T/4} cos(2πt/T) e^{−j2πkt/T} dt
= −A cos(πk/2)/[π(k^2 − 1)].
Figure 3.4: Rectangular pulse periodic signal with period T (a square wave taking the values +1 and −1).
Note that for k = 1 and k = −1, we must take the limit to evaluate the coefficient using L'Hospital's rule. We have in this case:
lim_{k→1} −A cos(πk/2)/[π(k^2 − 1)] = A/4.
Figure 3.7 plots the Fourier coefficients above from k = −10 to k = 10 and for A = 1. As can
be seen, there is a substantial dc component in the periodic signal with terms for higher harmonics
diminishing quickly.
Example 9 Consider the full-wave rectified periodic signal in Figure 3.8. Clearly it has period
T /2. This signal is obtained when a pure sinusoid A cos(2πt/T ) passes through a rectifier bridge
with ideal diodes. To compute the complex Fourier series coefficients, we have:
x_k = (2/T) ∫_{−T/4}^{T/4} x(t) e^{−j4πkt/T} dt
= (2A/T) ∫_{−T/4}^{T/4} cos(2πt/T) e^{−j4πkt/T} dt
= −2A cos(πk)/[π(4k^2 − 1)]
= 2A (−1)^{k+1}/[π(4k^2 − 1)].
Figure 3.9 plots the Fourier coefficients above from k = −10 to k = 10 and for A = 1. As can be
seen, there is a substantial dc component in the periodic signal (twice as large as for the half-wave
rectified signal) with terms for higher harmonics attenuating quickly.
3.2.2  An Alternative Form of the Fourier Series
For real signals, clearly the complex series in (3.7) must ultimately yield a real function, and it
does. This is a result of the fact that the complex Fourier coefficients for positive and negative
values of k have a certain complex conjugate relation to each other. To see this, let’s compute x−k :
x_{−k} = (1/T) ∫_{−T/2}^{T/2} x(t) e^{j2πkt/T} dt = [ (1/T) ∫_{−T/2}^{T/2} x(t) e^{−j2πkt/T} dt ]^* = x_k^*  ⇒  x_k = x_{−k}^*.
Figure 3.5: Spectrum of periodic signal in Figure 3.4.
Let’s now derive an alternative form of the Fourier series from the complex one in (3.7). We
have:
x_k = (1/T) ∫_{−T/2}^{T/2} x(t) e^{−j2πkt/T} dt
= (1/T) ∫_{−T/2}^{T/2} x(t) cos(2πkt/T) dt − j (1/T) ∫_{−T/2}^{T/2} x(t) sin(2πkt/T) dt
= x_{kr} + j x_{ki},
where
x_{kr} = (1/T) ∫_{−T/2}^{T/2} x(t) cos(2πkt/T) dt,   x_{ki} = −(1/T) ∫_{−T/2}^{T/2} x(t) sin(2πkt/T) dt.
Then,
x(t) = Σ_{k=−∞}^{∞} x_k e^{j2πkt/T}
Figure 3.6: Half-wave rectified signal with period T.
= Σ_{k=−∞}^{−1} x_k e^{j2πkt/T} + x_0 + Σ_{k=1}^{∞} x_k e^{j2πkt/T}
= Σ_{k=1}^{∞} x_{−k} e^{−j2πkt/T} + x_0 + Σ_{k=1}^{∞} x_k e^{j2πkt/T}
= x_0 + Σ_{k=1}^{∞} [x_k^* e^{−j2πkt/T} + x_k e^{j2πkt/T}]
= x_0 + 2 Σ_{k=1}^{∞} Re[x_k e^{j2πkt/T}]
= x_0 + 2 Σ_{k=1}^{∞} [x_{kr} cos(2πkt/T) − x_{ki} sin(2πkt/T)].
Now, for k = 1, 2, 3, · · · let ak = 2xkr and bk = −2xki , av = x0 , i.e.,
a_k = (2/T) ∫_{−T/2}^{T/2} x(t) cos(2πkt/T) dt   (3.11)
b_k = (2/T) ∫_{−T/2}^{T/2} x(t) sin(2πkt/T) dt   (3.12)
a_v = (1/T) ∫_{−T/2}^{T/2} x(t) dt,   (3.13)
and the Fourier series above becomes:
x(t) = a_v + Σ_{k=1}^{∞} [a_k cos(2πkt/T) + b_k sin(2πkt/T)].   (3.14)
Equations (3.11), (3.12), (3.13) and (3.14) define the alternative (but perfectly equivalent) form
of the Fourier series defined in equations (3.7) and (3.8).
Another useful form of the Fourier series can be obtained easily from (3.14). Let
A_k = √(a_k^2 + b_k^2)   (3.15)
θ_k = −tan^{−1}(b_k/a_k).
We now have:
x(t) = a_v + Σ_{k=1}^{∞} [a_k cos(2πkt/T) + b_k sin(2πkt/T)]   (3.16)
Figure 3.7: Spectrum of periodic signal in Figure 3.6.
= a_v + Σ_{k=1}^{∞} A_k [(a_k/A_k) cos(2πkt/T) + (b_k/A_k) sin(2πkt/T)]
= a_v + Σ_{k=1}^{∞} A_k [cos(θ_k) cos(2πkt/T) − sin(θ_k) sin(2πkt/T)]  ⇒
x(t) = a_v + Σ_{k=1}^{∞} A_k cos(2πkt/T + θ_k).   (3.17)
Equations (3.15), (3.16) and (3.17) define a third (but equivalent) version of the Fourier series.
Example 10 Consider the periodic signal in Figure 3.2 and let Tp = T /2 and T = 4. The complex
Fourier coefficients for this signal were computed earlier and clearly the new coefficients, av , ak
and bk can be obtained by using the relations ak = 2xkr and bk = −2xki , av = x0 . However, let’s
compute them directly using equations (3.11)-(3.13). We have:
x(t) = a_v + Σ_{k=1}^{∞} [a_k cos(πkt/2) + b_k sin(πkt/2)],
Figure 3.8: Full-wave rectified signal with period T/2.
where
a_v = (1/4) ∫_{−2}^{2} x(t) dt = (1/4) ∫_{−1}^{1} dt = 1/2,
a_k = (1/2) ∫_{−2}^{2} x(t) cos(πkt/2) dt = (1/2) ∫_{−1}^{1} cos(πkt/2) dt = sin(πk/2)/(πk/2),
b_k = (1/2) ∫_{−2}^{2} x(t) sin(πkt/2) dt = (1/2) ∫_{−1}^{1} sin(πkt/2) dt = 0.
Thus,
x(t) = 1/2 + Σ_{k=1}^{∞} [sin(πk/2)/(πk/2)] cos(πkt/2).
Figure 3.10 plots the Fourier series above for 10 and 50 terms in the sum and compares it to the exact signal. As can be seen from Figure 3.10, the largest discrepancy is at the discontinuities of the function. In fact this discrepancy (the maximum overshoot) remains the same as more terms in the infinite series are used, but the "oscillations" increase in frequency (as can also be seen in the figure). As the number of terms goes to infinity, the energy in the error (the squared difference between the exact function and its Fourier series representation) goes to zero, which means the Fourier series has converged. This overshoot phenomenon for discontinuous signals is known as the Gibbs phenomenon.
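The Gibbs behaviour is easy to observe numerically: the partial sums below (a sketch based on the series just derived) keep peaking at roughly 1.09 no matter how many terms are included, even though the mean-squared error keeps shrinking.

import numpy as np

t = np.linspace(-2, 2, 8001)

def partial_sum(M):
    x = 0.5 * np.ones_like(t)
    for k in range(1, M + 1):
        x += np.sin(np.pi*k/2) / (np.pi*k/2) * np.cos(np.pi*k*t/2)
    return x

for M in (10, 50, 500):
    print(M, partial_sum(M).max())          # overshoot stays near 1.09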
Example 11 As another example, consider the half-wave rectified signal in Figure 3.6. Clearly in
this case bk = 0 and
a_k = −2A cos(πk/2)/[π(k^2 − 1)],   a_v = A/π.
Figure 3.11 plots the above Fourier series for 5 and 10 terms in the sum and compares it to the exact signal. Note that in this case far fewer terms are needed in the series to approximate the exact solution than in the rectangular pulse case. This is because the cosine pulse is much "smoother" and thus has most of its energy in the low-frequency terms; as such it is easier to approximate with fewer terms. This can also be seen from the equation for the Fourier coefficients above: whereas the coefficients at increasing harmonics of the fundamental die out as 1/k in the rectangular pulse case, they die out as 1/k^2 in the cosine pulse case.
Figure 3.9: Spectrum of periodic signal in Figure 3.8.
3.2.3  Average Power Of Periodic Signals
We will now use the Fourier series expansion of periodic signals to compute average power. In
a linear, time-invariant system driven by a periodic signal at its input, we know (in view of the
sinusoidal steady-state response analysis we went through previously) that the same harmonics
(integer multiples of the fundamental frequency) in the input signal will be present throughout
the system. In particular, let's look at the voltage v(t) across two terminals and the current i(t) going through them. Each has a Fourier series representation (we will use the complex form):
v(t) = Σ_k v_k e^{j2πkt/T},
i(t) = Σ_k i_k e^{j2πkt/T}.
The average power is:
P = (1/T) ∫_{−T/2}^{T/2} v(t) i^*(t) dt
Figure 3.10: Fourier series using 10 (red) and 50 (green) terms, compared with the original signal.
= (1/T) ∫_{−T/2}^{T/2} [Σ_k v_k e^{j2πkt/T}] [Σ_n i_n e^{j2πnt/T}]^* dt
= (1/T) ∫_{−T/2}^{T/2} Σ_k Σ_n v_k i_n^* e^{j2π(k−n)t/T} dt
= Σ_k Σ_n v_k i_n^* (1/T) ∫_{−T/2}^{T/2} e^{j2π(k−n)t/T} dt
= Σ_k Σ_n v_k i_n^* sin[π(k − n)]/[π(k − n)]
= Σ_k v_k i_k^*
= v_0 i_0 + Σ_{k=1}^{∞} v_k i_k^* + Σ_{k=−∞}^{−1} v_k i_k^*
= v_0 i_0 + 2 Σ_{k=1}^{∞} Re{v_k i_k^*}.
Figure 3.11: Fourier series using 5 (red) and 10 (green) terms, compared with the exact signal.
Converting to polar form, we have
v_k = |v_k| e^{jθ_{Vk}} = √(v_{kr}^2 + v_{ki}^2) e^{jθ_{Vk}} = (1/2) √(a_{Vk}^2 + b_{Vk}^2) e^{jθ_{Vk}} = (1/2) V_k e^{jθ_{Vk}},
i_k = |i_k| e^{jθ_{Ik}} = √(i_{kr}^2 + i_{ki}^2) e^{jθ_{Ik}} = (1/2) √(a_{Ik}^2 + b_{Ik}^2) e^{jθ_{Ik}} = (1/2) I_k e^{jθ_{Ik}}.
Using the above expressions in the final expression for average power derived above, and recognizing that v_0 and i_0 are the dc values (V_dc and I_dc) of the voltage and current, we have:
P = V_dc I_dc + (1/2) Σ_{k=1}^{∞} V_k I_k cos(θ_{Vk} − θ_{Ik}).
3.2.4  Parseval's Theorem
Theorem 1 Parseval’s Theorem
Let x(t) be periodic with period T . Then,
(1/T) ∫_{−T/2}^{T/2} |x(t)|^2 dt = Σ_k |x_k|^2 = |x_0|^2 + 2 Σ_{k=1}^{∞} |x_k|^2.   (3.18)
Proof:
(1/T) ∫_{−T/2}^{T/2} |x(t)|^2 dt = (1/T) ∫_{−T/2}^{T/2} Σ_k Σ_n x_k x_n^* e^{j2π(k−n)t/T} dt
= Σ_k Σ_n x_k x_n^* (1/T) ∫_{−T/2}^{T/2} e^{j2π(k−n)t/T} dt
= Σ_k Σ_n x_k x_n^* sin[π(k − n)]/[π(k − n)]
= Σ_k |x_k|^2
= |x_0|^2 + 2 Σ_{k=1}^{∞} |x_k|^2,
where the last equality is in view of the fact that x−k = x∗k and, thus, |x−k | = |xk |.
Parseval’s theorem shows that the squared magnitude of the k-th complex Fourier coefficient of a
periodic signal is equal to the energy of the signal at the k-th harmonic frequency.
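A numerical check of (3.18) for the square wave of Figure 3.4 (whose coefficients are given by (3.10)) is sketched below; both sides come out equal to 1.

import numpy as np

T = 2.0
dt = T / 200000
t = np.arange(-T/2, T/2, dt)
x = np.sign(t)                                       # +1 on (0, T/2), -1 on (-T/2, 0)

power_time = np.sum(np.abs(x)**2) * dt / T           # left-hand side of (3.18)
ks = np.arange(1, 2001, 2)                           # odd harmonics (truncated)
power_freq = 2 * np.sum(np.abs(2/(1j*np.pi*ks))**2)  # x0 = 0 for this signal
print(power_time, power_freq)                        # both approximately 1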
3.2.5  The Root-Mean-Square (RMS) Value of a Periodic Signal
Let us now compute the RMS value of a periodic signal (you have already done so for sinusoidal
signals, but this analysis extends the results to arbitrary periodic signals). We have:
x_rms = √( (1/T) ∫_T |x(t)|^2 dt )
= √( Σ_k |x_k|^2 )
= √( |x_0|^2 + 2 Σ_{k=1}^{∞} |x_k|^2 ),
where the last two equalities above are in view of Parseval’s theorem. Now, using the relations,
derived earlier, between the complex Fourier coefficients and the real Fourier coefficients defined in
(3.11)-(3.13), i.e., ak = 2xkr and bk = −2xki where xkr and xki are the real and imaginary parts of
xk respectively, we have:
x_rms = √( |x_0|^2 + 2 Σ_{k=1}^{∞} |x_k|^2 )
= √( a_v^2 + (1/2) Σ_{k=1}^{∞} (a_k^2 + b_k^2) )
= √( a_v^2 + Σ_{k=1}^{∞} (X_k/√2)^2 ),
where
X_k = √(a_k^2 + b_k^2).
√k can be recognized as the rms values of each sinusoidal component of the periodic
The terms X
2
signal. Thus, the rms value of a periodic signal is the square root of the sum of the squares of the
rms values of its sinusoidal components.
3.3  Steady-State Response of a LTI System to Periodic Inputs
We showed previously that the steady-state response, y(t), of a stable LTI system with transfer
function H(s) to a complex exponential x(t) = Aej(ωt+φ) is
y(t) = H(jω)Aej(ωt+φ) .
(3.19)
Now we know that any periodic signal x(t) of period, say, T , can be represented in a Fourier series
as
x(t) = Σ_k x_k e^{j2πkt/T},
for some Fourier coefficients xk . If now this periodic signal is applied to the LTI system with
transfer function H(s), then, from linearity, the output y(t) is
y(t) = Σ_k x_k H(j2πk/T) e^{j2πkt/T}.   (3.20)
The output is clearly another periodic signal with the same fundamental frequency as the input
signal x(t). Considering the expression for y(t) above, clearly the complex Fourier coefficients of
this signal are
yk = xk H(j2πk/T ).
In other words, the Fourier coefficients of the output signal are the Fourier coefficients of the input
periodic signal multiplied by H(j2πk/T), the transfer function evaluated at each harmonic of the fundamental frequency. Thus, by appropriately designing the transfer function H(s) we can "shape" the spectrum of y(t) to have some desirable form. For example, if we wanted y(t) to have no dc component, we would make sure that H(0) = 0 (or very close to zero). If we wanted to extract a pure sinusoid (complex exponential) out of the periodic signal x(t) at some multiple of the fundamental frequency 1/T, say 10/T, then assuming x_{10} is not zero, we would make H(j20π/T) non-zero (and as large as possible) and all other values H(j2πk/T) for k ≠ 10 zero (or as close to zero as possible).
Example 12 Consider the special case of the periodic signal x(t) in Figure 3.2 shown also below
in Figure 3.13 for T = 4 applied to the linear system described by the circuit below. Clearly the
circuit represents a simple low-pass filter which will extract the dc component of x(t) and attenuate
the higher harmonics. The transfer function of the above system is easily computed to be:
Figure 3.12: Simple low-pass filter for extracting the dc component (a series RC circuit with R = 1000 Ω and C = 1000 µF).
Figure 3.13: Rectangular pulse periodic signal with period T = 4.
H(s) = 1/(s + 1),
with H(0) = 1 and
H(j2πk/T) = H(jkπ/2) = 1/(jkπ/2 + 1).
From a previous example when we computed the complex Fourier series coefficients of x(t) given
in Figure 3.13, we can express x(t) in a complex Fourier series as follows:
x(t) = (1/2) Σ_k [sin(πk/2)/(πk/2)] e^{jπkt/2}.
The average energy of this signal over a period is
E = (1/T) ∫_{−T/2}^{T/2} |x(t)|^2 dt = (1/4) ∫_{−1}^{1} 1^2 dt = 1/2.
We can also compute the average energy of the signal using Parseval’s theorem. We have:
E = x_0^2 + 2 Σ_{k=1}^{∞} |x_k|^2
= (1/2)^2 + 2 Σ_{k odd} (1/(πk))^2
= 1/4 + 2 Σ_{k=1}^{∞} 1/[π^2 (2k − 1)^2]
= 1/4 + 1/4 = 1/2,
where the second equality above is because with the exception of x0 , the even Fourier coefficients are
zero in this case and the third equality is obtained by making a change of variable. The resulting
infinite sum can be evaluated in this case and it equals 1/8. One can see that 50% of the energy in
x(t) is in the harmonics above dc.
When x(t) is applied at the input of the circuit above, the output y(t) at steady-state is (using
(3.20))
y(t) = (1/2) Σ_k H(jkπ/2) [sin(πk/2)/(πk/2)] e^{jπkt/2}.   (3.21)
Thus, the complex Fourier coefficients of the output y(t) are in this case
y_k = (1/2) H(jkπ/2) sin(πk/2)/(πk/2).
It can be seen that the output consists of the dc component
y_0 = (1/2) H(0) = 1/2
(which has not been attenuated at all by the low-pass filter) but it also includes a large number of
harmonics (theoretically infinite, but practically a small number as their energies die out rather
quickly as k increases). Figure 3.14 plots the absolute values of the complex Fourier coefficients
of x(t) and y(t). It can be seen that the dc value is not attenuated by the low-pass filter whereas
other harmonics are significantly reduced at the output of the system. Clearly, the above system
represents a simple rectifier that converts a periodic signal to dc. The quality of the output can be
measured by the fraction of the energy that is contained in the (undesirable in this case) harmonics
above dc compared to the total signal energy (computed to be 1/2 above). It can be seen there is
substantial energy in the output signal only at the fundamental frequency (k = 1) besides dc. The
quality of the filter in producing dc from the input periodic signal can be quantified using Parseval's theorem (3.18). The total energy in the output signal is:
E_o = x_0^2 H(0)^2 + 2 Σ_{k=1}^{∞} |x_k|^2 |H(jkπ/2)|^2
= 1/4 + 2 Σ_{k odd} (1/(πk))^2 · 1/(1 + π^2 k^2/4)
= 1/4 + 2 Σ_{k=1}^{∞} [1/(π^2 (2k − 1)^2)] · 4/[4 + π^2 (2k − 1)^2]
= 1/4 + [1 − tanh(1)]/4
= [2 − tanh(1)]/4 = 0.3096,
where the sum above was evaluated using Maple.
Figure 3.14: The magnitude of the Fourier coefficients of x(t) and y(t): (a) the spectrum of x(t); (b) the spectrum of y(t).
A measure of the quality of the rectifier circuit above is the fraction, say F , of the energy in the
harmonics above dc to the total energy in the output signal:
F = (2/E_o) Σ_{k=1}^{∞} |x_k|^2 |H(jkπ/2)|^2 = [1 − tanh(1)]/[2 − tanh(1)] = 0.1925.
Thus, less than 20% of the energy in the output is in the harmonics above dc. In contrast, before
filtering, 50% of the signal energy was in the harmonics. For a good system, F must be as close to
zero as possible.
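The two numbers above are easy to reproduce by truncating the sums numerically (a sketch; the truncation length is arbitrary):

import numpy as np

ks = np.arange(1, 200001, 2)                 # odd harmonics
xk2 = (1.0 / (np.pi * ks))**2                # |x_k|^2 for k odd
Hk2 = 1.0 / (1.0 + (np.pi * ks / 2)**2)      # |H(j k pi / 2)|^2

Eo = 0.25 + 2 * np.sum(xk2 * Hk2)            # total output energy
F = 2 * np.sum(xk2 * Hk2) / Eo               # fraction above dc
print(Eo, F)                                 # about 0.3096 and 0.1925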
Chapter 4
The Fourier Transform
The Fourier Transform is another transform, similar to the Laplace Transform, which facilitates
analysis. It can be thought of as a special case of the (bi-lateral) Laplace transform where the
complex frequency variable s = σ + jω is evaluated on the imaginary axis, i.e. s = jω = j2πf .
As such, the Fourier transform of a signal describes the frequency content of that signal, much
like the Fourier series did for periodic signals. However, in contrast to the “discrete” nature of the
frequency content of periodic signals, where as we saw energy was possibly present only at integer
multiples (harmonics) of the fundamental frequency of the periodic signal, for arbitrary signals the
frequency spectrum (content) can vary in a continuum as a function of frequency. Thus, the Fourier
transform is more general than the Fourier series in that it handles a larger class of signals.
The Fourier transform of the impulse response of a system yields the frequency response of the
system and it is similar to the transfer function obtained by taking the Laplace transform of the
impulse response.
Definition 3 (The Fourier Transform) The Fourier Transform of a signal x(t) is defined as:
X(ω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt   (4.1)
in terms of the angular frequency ω in radians/s, or
X(f) = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt   (4.2)
in terms of the frequency in Hz. The Fourier Transform is invertible; thus, given X(f ) or X(ω),
the time-domain signal can be obtained as:
x(t) = (1/2π) ∫_{−∞}^{∞} X(ω) e^{jωt} dω   (4.3)
in terms of the angular frequency ω in radians/s, or
x(t) = ∫_{−∞}^{∞} X(f) e^{j2πft} df   (4.4)
in terms of the frequency in Hz.
Example 13 The Fourier transform of a delta function δ(t) is:
F[δ(t)] = ∫_{−∞}^{∞} δ(t) e^{−j2πft} dt = 1.
Figure 4.1: Rectangular pulse of height 1/T and width T.
Example 14 Find the Fourier transform of the rectangular pulse in Figure 4.1 below. We have:
X(f) = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt
= (1/T) ∫_{−T/2}^{T/2} e^{−j2πft} dt
= [e^{jπfT} − e^{−jπfT}]/(j2πfT)
= sin(πfT)/(πfT).
Figure 4.2 plots the above Fourier transform of the rectangular pulse as a function of the normalized
frequency f T .
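A direct numerical evaluation of (4.2) confirms this; the sketch below compares the discretized integral with numpy's normalized sinc (np.sinc(u) = sin(πu)/(πu)) at a few arbitrarily chosen frequencies.

import numpy as np

T = 2.0
dt = T / 200000
t = np.arange(-T/2, T/2, dt)
x = np.ones_like(t) / T                      # the pulse of Figure 4.1

for f in (0.1, 0.35, 0.8):
    Xf = np.sum(x * np.exp(-1j * 2*np.pi * f * t)) * dt
    print(f, Xf.real, np.sinc(f * T))        # the two values agree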
Example 15 (Ideal Low-Pass Filter) Find the impulse response h(t) of the ideal low-pass filter
whose Fourier transform is given in Figure 4.3 below. A system having this frequency response is an
ideal low-pass filter which passes all frequencies below W/2Hz without any distortion and completely
blocks all signal frequencies above W/2Hz. We have:
h(t) = ∫_{−∞}^{∞} H(f) e^{j2πft} df
= ∫_{−W/2}^{W/2} e^{j2πft} df
= [e^{jπWt} − e^{−jπWt}]/(j2πt)
= W sin(πWt)/(πWt).
Thus, the impulse response of an ideal low-pass filter is a sinc function.
4.1  Properties of the Fourier Transform
The Fourier transform has properties that are very similar to those of the Laplace transform we
covered earlier (as indeed expected given the close relation between the two transforms).
1. Linearity: The Fourier transform is linear, i.e.:
F [ax1 (t) + bx2 (t)] = aF[x1 (t)] + bF[x2 (t)],
(4.5)
where F[·] denotes the Fourier transform of the signal in brackets. The proof is simple and
it follows from the linearity properties of integrals.
Figure 4.2: The Fourier transform of a rectangular pulse, sin(πfT)/(πfT), as a function of the normalized frequency fT.
2. Time-Shift:
F[x(t − a)] = e−j2πf a X(f ).
(4.6)
Again, this is easily shown through a simple change of variables.
3. Time-Scaling: Let a ≠ 0 be a scalar. Then
F[x(at)] = (1/|a|) X(f/a).   (4.7)
Proof: First, let’s assume a > 0. Then
F[x(at)] = ∫_{−∞}^{∞} x(at) e^{−j2πft} dt   (letting τ = at)
= (1/a) ∫_{−∞}^{∞} x(τ) e^{−j2π(f/a)τ} dτ
= (1/a) X(f/a).
Now, assume a < 0. Then:
F[x(at)] = ∫_{−∞}^{∞} x(at) e^{−j2πft} dt   (letting τ = at)
Figure 4.3: The frequency response of an ideal low-pass filter.
= (1/a) ∫_{∞}^{−∞} x(τ) e^{−j2π(f/a)τ} dτ
= −(1/a) ∫_{−∞}^{∞} x(τ) e^{−j2π(f/a)τ} dτ
= −(1/a) X(f/a).
Combining the two expressions for a > 0 and a < 0, we obtain (4.7) for any a ≠ 0. In either case the proof is a simple change of variables. Clearly, if |a| > 1 the frequency content of the scaled signal moves to higher frequencies, otherwise it moves to lower frequencies.
4. Time Reversal:
F[x(−t)] = X(−f ),
(4.8)
which is easily shown through a simple change of variables, or even more simply as a special
case of the time-scaling property above for a = −1. If the signal x(t) is real, then it can be
further easily shown that
X(−f ) = X ∗ (f ).
Thus, for real signals x(t), we have:
F[x(−t)] = X(−f ) = X ∗ (f )
5. Multiplication by t:
F[t x(t)] = −(1/(j2π)) dX(f)/df.   (4.9)
Proof:
F[t x(t)] = ∫_{−∞}^{∞} t x(t) e^{−j2πft} dt
= −(1/(j2π)) ∫_{−∞}^{∞} x(t) (d/df) e^{−j2πft} dt
= −(1/(j2π)) (d/df) ∫_{−∞}^{∞} x(t) e^{−j2πft} dt
= (j/2π) dX(f)/df.
Similarly, it can be shown that
F[t^n x(t)] = (j/2π)^n d^n X(f)/df^n.   (4.10)
6. Multiplication by a complex exponential (Modulation):
F[x(t) e^{j2πf_0 t}] = X(f − f_0),   (4.11)
and
F[x(t) e^{−j2πf_0 t}] = X(f + f_0).   (4.12)
Since
cos(2πf_0 t) = (1/2)[e^{j2πf_0 t} + e^{−j2πf_0 t}]
and
sin(2πf_0 t) = (1/2j)[e^{j2πf_0 t} − e^{−j2πf_0 t}],
we have
F[x(t) cos(2πf_0 t)] = (1/2)[X(f − f_0) + X(f + f_0)]   (4.13)
and
F[x(t) sin(2πf_0 t)] = (1/2j)[X(f − f_0) − X(f + f_0)].   (4.14)
Figure 4.4 shows an example of the Fourier transform of a signal x(t) which is centered around
the origin and its corresponding Fourier transform when it is multiplied by a cosine carrier
at f0 . In telecommunications we refer to signals x(t) that have a Fourier transform centered
at the origin as “baseband” signals. Also, the signal x(t) whose Fourier transform is zero for
frequencies |f | > W Hz we say is (strictly) “bandlimited” to W Hz (or, it has bandwidth W ).
Note that only the transform for positive frequencies is used to determine the bandwidth. In
fact, in view of the property of the Fourier transform of real signals that X(−f ) = X ∗ (f ),
there is redundancy in the Fourier transform in the sense that knowing the Fourier transform
for positive frequencies we can extract the whole Fourier transform and, thus, the signal x(t).
When the signal is modulated by a high frequency carrier, it becomes a “passband” signal,
i.e. one with a Fourier transform centered at some frequency away from the origin. Note that
whereas the baseband signal had bandwidth W, after it is modulated its passband bandwidth becomes 2W. This is illustrated in part (b) of the figure, while (c) and (d) show the upper sideband and lower sideband, respectively. We will briefly discuss the issue of upper and lower sidebands later.
7. Time-domain differentiation:
F[dx(t)/dt] = (j2πf) X(f).   (4.15)
Proof: We have:
x(t) = ∫_{−∞}^{∞} X(f) e^{j2πft} df.
Differentiating both sides of the above equation, we obtain:
dx(t)/dt = ∫_{−∞}^{∞} X(f) (d/dt) e^{j2πft} df
= ∫_{−∞}^{∞} (j2πf) X(f) e^{j2πft} df.
Figure 4.4: (a) The Fourier transform of a baseband signal x(t); (b) the Fourier transform of x(t) modulated by a pure sinusoid; (c) the upper sideband; (d) the lower sideband.
The last equation above says that the inverse Fourier transform of (j2πf)X(f) is dx(t)/dt. Therefore, the Fourier transform of dx(t)/dt is (j2πf)X(f). Similarly, the following more general property can be shown:
F[d^n x(t)/dt^n] = (j2πf)^n X(f).   (4.16)
8. Convolution in the time domain:
F[x(t) ∗ y(t)] = X(f )Y (f ).
(4.17)
Proof: We have
F[x(t) ∗ y(t)] = ∫_{−∞}^{∞} [∫_{−∞}^{∞} x(a) y(t − a) da] e^{−j2πft} dt
= ∫_{−∞}^{∞} x(a) [∫_{−∞}^{∞} y(t − a) e^{−j2πft} dt] da
= ∫_{−∞}^{∞} x(a) [e^{−j2πfa} Y(f)] da
= X(f) Y(f),
where the third equality above is in view of the time-shift property of the Fourier transform.
This is the same property the Laplace transform has which is very important in computing
the zero-state response of linear systems. Since this response can be obtained as a convolution
of the input signal with the impulse response of the system, in the frequency domain this
corresponds to multiplying the Fourier transform of the input with the Fourier transform of
the impulse response, i.e., the frequency response of the system. This is illustrated in the
figure below.
Figure 4.5: The response of a LTI system to an input in the time-domain and frequency domains: y(t) = x(t) ∗ h(t) and Y(f) = H(f)X(f).
Example 16 Let us see how the convolution property can also be used to evaluate the Fourier
transform of a given function. Consider the triangle function x(t) shown in Figure 4.6 (a)
below. We can express x(t) as a convolution of the rectangular pulse p(t) in Figure 4.6 (b)
below and itself, i.e. x(t) = p(t) ∗ p(t). Thus,
F[x(t)] = F[p(t) ∗ p(t)] = P(f)^2 = [sin(πf)/(πf)]^2,
where the last equality is from a previous example where we computed the Fourier transform
of a rectangular pulse.
Figure 4.6: Triangular and rectangular pulses: (a) the triangle x(t); (b) the rectangular pulse p(t).
9. Multiplication in the time-domain:
F[x(t) y(t)] = X(f) ∗ Y(f) = ∫_{−∞}^{∞} X(a) Y(f − a) da.   (4.18)
10. Time-domain integration:
F[∫_{−∞}^{t} x(a) da] = X(f)/(j2πf) + X(0) δ(f)/2.   (4.19)
Note that whereas the Fourier transform of the derivative of a signal x(t) is (j2πf )X(f ),
the Fourier transform of the integral of a signal (i.e. the inverse of the derivative) does not
simply become division by j2πf , the extra delta function term accounting for the constant
(dc) term.
11. Parseval’s Theorem:
∫_{−∞}^{∞} x(t) y^*(t) dt = ∫_{−∞}^{∞} X(f) Y^*(f) df,   (4.20)
is known as the generalized Parseval's theorem. As a special case, Parseval's theorem states:
∫_{−∞}^{∞} |x(t)|^2 dt = ∫_{−∞}^{∞} |X(f)|^2 df.   (4.21)
Proof: The above equation is a direct result of (4.18) where we have:
∫_{−∞}^{∞} x(t) y(t) e^{−j2πft} dt = ∫_{−∞}^{∞} X(a) Y(f − a) da.
Using the fact that
F[y^*(t)] = ∫_{−∞}^{∞} y^*(t) e^{−j2πft} dt = [∫_{−∞}^{∞} y(t) e^{j2πft} dt]^* = Y^*(−f),
the above equation becomes
∫_{−∞}^{∞} x(t) y^*(t) e^{−j2πft} dt = ∫_{−∞}^{∞} X(a) Y^*(a − f) da.
Setting f = 0 on both sides of the above equation we obtain (4.20). Now, if we let x(t) = y(t)
in (4.20), we obtain Parseval’s theorem in (4.21).
Parseval’s theorem shows that the Fourier transform is energy preserving. As an example of
how it can be applied, consider the Fourier transform of the rectangular pulse x(t) in Figure
4.1. Using Parseval’s theorem we have
∫_{−∞}^{∞} [sin(πfT)/(πfT)]^2 df = ∫_{−∞}^{∞} x^2(t) dt = 1/T.
12. The Duality Property:
Let F[x(t)] = X(f). Then
F[X(t)] = x(−f).   (4.22)
Proof: We have
X(t) = ∫_{−∞}^{∞} x(a) e^{−j2πat} da.
Thus,
F[X(t)] = ∫_{−∞}^{∞} X(t) e^{−j2πft} dt
= ∫_{−∞}^{∞} [∫_{−∞}^{∞} x(a) e^{−j2πat} da] e^{−j2πft} dt
= ∫_{−∞}^{∞} x(a) [∫_{−∞}^{∞} e^{−j2π(f+a)t} dt] da
= ∫_{−∞}^{∞} x(a) δ(f + a) da
= x(−f),
where in the second to last equality above we made use of the fact that ∫_{−∞}^{∞} e^{−j2π(f+a)t} dt = δ(f + a), i.e., that the Fourier transform of the constant 1 is a delta function (see also Example 17 below).
Example 17 (Duality Property) We saw in a previous example that the Fourier transform of a delta function is 1, i.e. F[δ(t)] = 1. By duality then,
F[1] = δ(−f ) = δ(f ).
In other words,
∫_{−∞}^{∞} e^{−j2πft} dt = ∫_{−∞}^{∞} e^{j2πft} dt = δ(f).
Example 18 We saw in Example 14 that the Fourier transform of the rectangular pulse
x(t) = u(t + T /2) − u(t − T /2) is a sinc function
X(f) = sin(πfT)/(πfT).
Thus, by duality, and replacing T by 1/T on both sides (the equality still holds),
F[sin(πt/T)/(πt/T)] = u(−f + 1/(2T)) − u(−f − 1/(2T)) = u(f + 1/(2T)) − u(f − 1/(2T)),
where the second equality above is due to the even symmetry of u(−f + 1/2T ) − u(−f − 1/2T ).
13. The following properties are directly obtained from the definitions of the Fourier transform
and its inverse by setting f = 0 in the first and t = 0 in the second:
X(0) = ∫_{−∞}^{∞} x(t) dt,   (4.23)
x(0) = ∫_{−∞}^{∞} X(f) df.   (4.24)
4.2  Fourier Transforms of Selected Functions
Clearly the set of all functions that possess a Fourier transform is uncountably infinite. Next, we
look at a small subset of interest.
1. We already saw above that
F[δ(t)] = 1
and by duality
F[1] = δ(f ).
2. Let p(t) = u(t + T /2) − u(t − T /2) be a rectangular pulse. Then as we saw above
F[p(t)] = P(f) = sin(πfT)/(πfT),
and by duality
F[sin(πt/T)/(πt/T)] = u(f + 1/(2T)) − u(f − 1/(2T)).
3. The Fourier transform of a unit-step:
F[u(t)] = U(f) = 1/(j2πf) + δ(f)/2.
Proof: Using the integration property we have
∫_{−∞}^{t} x(a) da = ∫_{−∞}^{∞} x(a) u(t − a) da = x(t) ∗ u(t),
where u(t) is the unit-step. Thus, in view of the convolution theorem (first equality below)
and the integration property (second equality) we have
F[∫_{−∞}^{t} x(a) da] = X(f) U(f) = X(f)/(j2πf) + X(0) δ(f)/2.
Solving for U (f ) we obtain
U(f) = 1/(j2πf) + [X(0)/X(f)] δ(f)/2 = 1/(j2πf) + δ(f)/2,
where we used the property of the delta function x(t)δ(t) = x(0)δ(t).
4. The signum (or sign) function is defined by:
sgn(t) = 1 for t > 0, 0 for t = 0, and −1 for t < 0,
and it is shown in Figure 4.7 below. Clearly we can express the sign function as
sgn(t) = 2u(t) − 1.
Thus
F[sgn(t)] = 2U(f) − δ(f) = 1/(jπf).
By duality
F[1/(jπt)] = sgn(−f) = −sgn(f).
Figure 4.7: The sign function.
5. The Fourier transform of pure sinusoidal signals:
F[ej2πf0 t ] = δ(f − f0 ).
This can be derived directly by using duality on the Fourier transform of a delta function (as we in fact did earlier) or by using the multiplication-by-a-complex-exponential property 6 (where in (4.11) x(t) = 1 and thus X(f) = δ(f)).
Similarly, in view of Euler's identity
e^{jx} = cos(x) + j sin(x)
and therefore
cos(x) = (1/2)[e^{jx} + e^{−jx}]
and
sin(x) = (1/2j)[e^{jx} − e^{−jx}],
we have
F[cos(2πf_0 t)] = (1/2)[δ(f − f_0) + δ(f + f_0)],
F[sin(2πf_0 t)] = (1/2j)[δ(f − f_0) − δ(f + f_0)].
The Fourier transform of cos(2πf0 t) is shown in Figure 4.8 and it consists of two delta functions at f0 and −f0 .
4.3  Application of Fourier Transform to Communication
By far the most prevalent medium for everyday communication is through radio-frequency (sinusoidal) carriers. And whereas the prevailing methods of communication in recent times have been
digital (as for example in cellular telephony) there are still many analog communication systems in
use. Examples include AM (Amplitude Modulation) radio, FM (Frequency Modulation) radio, CB
radios, etc... In this section we will only deal briefly with analog amplitude modulation (AM) systems as an illustration of the application of the Fourier transform in analyzing and designing such
systems. All diagrams presented below are high level diagrams meant to illustrate the concepts.
Figure 4.8: The Fourier transform of a pure cosine carrier (two impulses of weight 1/2 at f_0 and −f_0).
4.3.1  Baseband vs Passband Signals
A baseband signal x(t) is one whose Fourier transform is centered around the origin (dc) and it
usually has no energy or very little energy above a certain frequency W , referred to as the bandwidth
of the signal. The Fourier transform of such a signal is shown in Figure 4.4 (a). When such a signal
is multiplied by a pure sinusoidal carrier at some carrier frequency f0 , it becomes a “passband”
signal, i.e. one whose Fourier transform is centered around a high frequency away from the origin,
as illustrated in Figure 4.4 (b). In practice, f0 >> W .
4.3.2  Amplitude Modulation
In amplitude modulation, a baseband information signal x(t) modulates the amplitude of a sinusoidal signal at some carrier frequency f0 which is usually orders of magnitude larger than the
bandwidth of the baseband signal x(t). This is done by multiplying the information signal with a
sinusoidal signal (carrier) as shown in Figure 4.9. The resulting signal (also shown in the frequency
Figure 4.9: Amplitude modulation (DSB-SC): the baseband signal x(t) is multiplied by the carrier A cos(2πf_0 t) produced by a local oscillator (LO) to give s(t); S(f) is centered at ±f_0.
domain in the figure) is
s(t) = A x(t) cos(2πf_0 t),
where we assume the phase of the carrier is zero without loss of generality since it is not absolute
phase that matters as we will see next. The modulation method is referred to as double sideband
suppressed carrier (DSB-SC), where DSB refers to the fact that both sidebands are present and
suppressed carrier refers to the fact that there is no (unmodulated) carrier in s(t) (i.e. its Fourier
transform does not have delta functions at the carrier frequency).
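The frequency shift produced by the multiplication is easy to verify numerically. The sketch below uses made-up values (a 20 Hz cosine as a stand-in for the baseband signal and a 1 kHz carrier) and confirms that all the energy of s(t) sits in the two sidebands at f0 ± 20 Hz, with nothing at the carrier frequency itself:

import numpy as np

fs = 10_000.0    # simulation sampling rate (Hz), arbitrary
f0 = 1_000.0     # carrier frequency (Hz), arbitrary
A = 1.0

t = np.arange(0, 1.0, 1 / fs)
x = np.cos(2 * np.pi * 20 * t)           # toy baseband signal with W = 20 Hz
s = A * x * np.cos(2 * np.pi * f0 * t)   # DSB-SC signal

S = np.abs(np.fft.rfft(s)) / len(s)
freqs = np.fft.rfftfreq(len(s), d=1 / fs)

# Energy appears at f0 - 20 Hz and f0 + 20 Hz (the two sidebands), not at f0.
print("spectral peaks (Hz):", freqs[S > 0.1])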
In practice, the bandpass signal s(t) is further bandpass filtered to make sure its frequency
content is contained within specified limits (to avoid interfering with other users that may be
in adjacent frequency bands), amplified and then coupled to an antenna for propagation as an
electromagnetic wave to some receiving antenna.
Assuming the intervening channel has not distorted our signal in any way other than
attenuating it (in practice, communication channels are not this benign), the received signal at the
output of a low-noise amplifier at the receiver side is given (again ideally) by
r(t) = cs(t) + n(t)
where n(t) is thermal noise (stochastic in nature) and c is some attenuation factor. In order to
minimize the amount of noise (which can mask the signal if not taken care of, especially since c is
in general very small) we use at the receiver passband filters of as high a quality as possible, so that the signal part is admitted with no (or little) distortion and as little of the noise as possible enters
(by making the passband of the filters as close to that of the information signal as possible). Here,
we will ignore the noise as well in our simple analysis. Our purpose at the receiver is to extract x(t)
from r(t). This is done by the process of demodulation, which for DSB-SC consists of multiplying
the received signal by a sinusoidal carrier at the same frequency as the carrier in the received signal
(produced by another LO at the receiver) and then passing the result through a low-pass filter.
This is illustrated in Figure 4.10.

Figure 4.10: Demodulation of DSB-SC: the received signal r(t) is multiplied by cos(2πf0 t + θ), generated by a local oscillator (LO), and the product z(t) is passed through a low-pass filter (LPF) of bandwidth W to produce m(t).

Note that the LO at the receiver side matches the frequency of the carrier embedded in the received signal exactly, but not the phase. This is modeled by the
addition of the phase θ. In practice, as long as the relative velocity between the transmitting and
receiving antennas is small, in which case the Doppler frequency shift will be small, the received
carrier frequency can be reproduced at the receiver by using good quality oscillators (crystal based).
However, at high carrier frequencies, the phase of the received carrier is spatially dependent and
goes through a 2π radians change every wavelength (which could be of the order of centimeters for
high frequencies). Thus, unless a phase tracking loop is used at the receiver (such as a phase-locked
loop (PLL)), the phase θ of the locally generated carrier may be significantly different from the phase of the received carrier, which is taken as the reference and assumed to be zero.
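For example, at a carrier frequency of f0 = 2.4 GHz (a value chosen here only for illustration), the wavelength is λ = c/f0 ≈ (3 × 10^8 m/s)/(2.4 × 10^9 Hz) ≈ 12.5 cm, so displacing the receiving antenna by just a few centimeters already changes the received carrier phase substantially.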
Considering the signal z(t), we have
z(t) = r(t) cos(2πf0 t + θ)
     = cAx(t) cos(2πf0 t) cos(2πf0 t + θ)
     = (cA/2) x(t) [cos(θ) + cos(4πf0 t + θ)]
= ax(t) cos(θ) + ax(t) cos(4πf0 t + θ)
where a = cA/2 is a constant that can be set to any desired value through linear amplification. The second term in the last equation above, the so-called “double-frequency” term, has a bandpass Fourier transform centered at 2f0 and will be filtered out by the low-pass filter. Thus, at the output of
the LPF, we have
m(t) = ax(t) cos(θ).
Note that when the phase of the locally generated carrier (at the receiver) perfectly matches the phase of the received carrier (assumed to be zero), i.e. when θ = 0, cos(θ) = 1 is maximum. If, however, θ happens to be close to π/2, then cos(θ) ≈ 0 and the signal is wiped out. In practice
we cannot of course depend on chance, so a carrier tracking loop is used that tries to generate a
carrier that matches the received carrier phase as much as possible (i.e. making θ as close to zero
as possible). This makes the receiver more complex, and thus more costly.
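A short simulation makes the cos(θ) attenuation concrete. All parameters below (carrier, message, attenuation, and the crude moving-average low-pass filter) are invented for illustration; with θ = 0 the message is recovered at full amplitude, while θ near π/2 nearly wipes it out:

import numpy as np

fs, f0 = 100_000.0, 10_000.0       # assumed sampling and carrier frequencies (Hz)
t = np.arange(0, 0.05, 1 / fs)
x = np.cos(2 * np.pi * 100 * t)    # toy message with W = 100 Hz
c, A = 0.1, 2.0                    # channel attenuation and carrier amplitude

r = c * A * x * np.cos(2 * np.pi * f0 * t)   # received DSB-SC signal (noise ignored)

def demodulate(r, theta):
    z = r * np.cos(2 * np.pi * f0 * t + theta)
    # Crude low-pass filter: moving average over one carrier period,
    # which averages the double-frequency term at 2*f0 to (nearly) zero.
    n = int(fs / f0)
    return np.convolve(z, np.ones(n) / n, mode="same")

for theta in (0.0, np.pi / 3, np.pi / 2):
    m = demodulate(r, theta)
    peak = np.max(np.abs(m[50:-50]))          # ignore filter edge effects
    print(f"theta = {theta:4.2f} rad: peak |m(t)| ~ {peak:.4f} "
          f"(theory: {c * A / 2 * abs(np.cos(theta)):.4f})")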
In AM radio, where there are thousands of receivers per single broadcasting station (transmitter), it makes sense to make the receiver as low-cost as possible (even if it means making the single transmitter more costly). AM radio uses another modulation technique, which produces a modulated signal containing an unmodulated carrier that can be used at the receiver to “self-demodulate” without the need for a complex system. The modulated signal for AM radio is:
s(t) = A[1 + mx(t)] cos(2πf0 t),
where the “modulation index” m is chosen such that 1 + mx(t) > 0 for all t. In this case a simple “envelope” detector, such as the one shown in Figure 4.11 below, suffices to extract the information signal.
In general, some non-linearity (such as the one introduced by the diode below) followed by simple
low-pass filtering will do the job.
Figure 4.11: Envelope demodulation of AM radio signals: the AM input signal drives a diode followed by a parallel RC network, whose output is the demodulated signal.
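A discrete-time mimic of this detector (a rough sketch only: the ideal-diode model, the one-pole smoother standing in for the RC network, and all numerical values are assumptions) recovers a waveform that tracks the message:

import numpy as np

fs, f0 = 200_000.0, 10_000.0       # assumed sampling and carrier frequencies (Hz)
t = np.arange(0, 0.02, 1 / fs)
x = np.cos(2 * np.pi * 200 * t)    # toy message
m_idx, A = 0.5, 1.0                # modulation index and carrier amplitude

s = A * (1 + m_idx * x) * np.cos(2 * np.pi * f0 * t)   # AM signal, 1 + m*x(t) > 0

rect = np.maximum(s, 0.0)          # ideal diode: keep the positive half only

# One-pole smoother standing in for the RC low-pass network.
alpha = 0.01                       # assumed smoothing constant
env = np.zeros_like(rect)
for i in range(1, len(rect)):
    env[i] = (1 - alpha) * env[i - 1] + alpha * rect[i]

# After the start-up transient, env follows (a scaled, offset version of) x(t).
print("correlation with message:",
      round(float(np.corrcoef(env[2000:], x[2000:])[0, 1]), 3))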
4.4 Sampling and the Sampling Theorem

4.4.1 Ideal (Impulse) Sampling
Mathematically, sampling can be accomplished by multiplying the signal x(t) to be sampled with
an infinite series of impulses, h(t), Ts seconds apart. Ts is the sampling period, while fs = 1/Ts is
the sampling frequency:
h(t) = Σ_k δ(t − kTs).    (4.25)
Figure 4.12: The sampling function h(t): an infinite train of unit impulses at t = kTs, k = ..., −2, −1, 0, 1, 2, ...
The ideal sampling function h(t) is shown in Figure 4.12. Defining by xs (t) the sampled version of
x(t), we have
xs(t) = x(t) · h(t) = Σ_k x(t) δ(t − kTs) = Σ_k x(kTs) δ(t − kTs).    (4.26)
An illustration of the sampled signal xs(t) is shown in Figure 4.13.

Figure 4.13: Illustration of the sampled signal xs(t): impulses at Ts, 2Ts, 3Ts, 4Ts, ... weighted by the corresponding sample values.

The basic question is whether one can reconstruct the original signal x(t) from the sampled signal xs(t). We investigate
next under what conditions, if any, this may be possible.
We first compute the Fourier transform of the sampled signal. Using the convolution property
of the Fourier transform, we have
xs(t) = x(t) · h(t) ⇔ Xs(f) = X(f) ∗ H(f),    (4.27)
where ∗ denotes convolution. Since h(t) is periodic with period Ts , we can express it in a Fourier
series expansion:
h(t) = Σ_k ck e^{j2πkt/Ts},    (4.28)
where
ck = (1/Ts) ∫_{−Ts/2}^{Ts/2} h(t) e^{−j2πkt/Ts} dt = (1/Ts) ∫_{−Ts/2}^{Ts/2} δ(t) e^{−j2πkt/Ts} dt = 1/Ts.    (4.29)
Thus,
h(t) = (1/Ts) Σ_k e^{j2πkt/Ts}.    (4.30)
Then,
H(f) = F[ (1/Ts) Σ_k e^{j2πkt/Ts} ] = (1/Ts) Σ_k δ(f − k/Ts).    (4.31)
Substituting above, we obtain
Xs(f) = H(f) ∗ X(f) = (1/Ts) Σ_k X(f − k/Ts).    (4.32)
In other words, the Fourier transform of the sampled process xs (t) consists of the superposition of
an infinite number of frequency-shifted and scaled versions of X(f ). Assuming that x(t) is strictly
bandlimited to W Hz (i.e. its Fourier transform is zero outside the frequency band [−W, W ]),
Figure 4.14 plots Xs (f ) for two important cases (assuming a simple triangular shape for X(f ) for
illustration purposes). In the first, the relation between W and Ts is such that there is no overlap
between the shifted spectra, and in the other overlap exists. It is easy to see that the smallest
sampling rate fs above which no overlap exists is 2W . This (smallest) sampling rate is referred to
as the Nyquist rate. When the sampling rate is below the Nyquist rate, we have overlap in the
shifted spectra, a phenomenon referred to as aliasing.

Figure 4.14: Illustration of sampling above the Nyquist rate ((a): fs > 2W, no overlap between the shifted replicas of X(f)) and below it ((b): fs < 2W, overlapping replicas).

It is also easy to see that when fs > 2W, the original signal x(t) can be obtained from its sampled version xs(t) through simple low-pass filtering
to extract the component of Xs (f ) centered around zero Hz. In the frequency domain, the low-pass
filtering can be expressed as
X(f ) = Xs (f ) · G(f ),
where
G(f) = Ts for |f| ≤ B (with W ≤ B ≤ fs − W), and G(f) = 0 otherwise.    (4.33)
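Before continuing, a quick numerical illustration of the aliasing behavior pictured in Figure 4.14 (the tone frequency and sampling rates below are arbitrary): a 30 Hz tone sampled above the Nyquist rate shows up where it should, while sampled at 40 samples/sec it masquerades as a 10 Hz tone.

import numpy as np

f1 = 30.0                           # tone frequency (Hz), arbitrary; here W = 30 Hz
for fs in (100.0, 40.0):            # above and below the Nyquist rate 2W = 60 Hz
    n = np.arange(256)
    samples = np.cos(2 * np.pi * f1 * n / fs)
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1 / fs)
    print(f"fs = {fs:5.1f} Hz: strongest component near {freqs[np.argmax(spectrum)]:.1f} Hz")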
The impulse response of the low-pass filter, g(t), is then
g(t) = F^{−1}[G(f)] = ∫_{−B}^{B} Ts e^{j2πf t} df = 2BTs sin(2πBt)/(2πBt).    (4.34)
Figure 4.15 illustrates G(f ) and g(t).
Figure 4.15: The low-pass filter G(f) (height Ts for |f| ≤ B, zero elsewhere) and its impulse response g(t) (a sinc pulse).
From (4.32), and the convolution property of the Fourier transform we have
x(t) = ∫_{−∞}^{∞} xs(a) g(t − a) da = Σ_k x(kTs) g(t − kTs).    (4.35)
Thus, we have the following interpolation formula:
x(t) = Σ_k x(kTs) g(t − kTs),    (4.36)
where g(t) is given by (4.34). The minimum sampling rate for perfect reconstruction is known as the Nyquist rate. This simple observation was stated by Shannon in the following sampling theorem.
Theorem 2 (The Sampling Theorem) A bandlimited signal with no spectral components above W Hz can be recovered uniquely from its samples taken every Ts seconds, provided that fs = 1/Ts ≥ 2W. Extraction of x(t) from its samples can be done by passing the sampled signal through a low-pass filter having an impulse response given by (4.34). Mathematically, x(t) can be expressed in terms of its samples by (4.36).
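The interpolation formula (4.36) can be tried out directly. The sketch below (all parameters are arbitrary) reconstructs a bandlimited test signal from its samples using g(t) with B = fs/2, truncating the sum to finitely many samples; the residual error is due only to that truncation and shrinks as more samples are included.

import numpy as np

W = 50.0                  # assumed signal bandwidth (Hz)
fs = 4 * W                # sample well above the Nyquist rate 2W
Ts = 1 / fs
B = fs / 2                # satisfies W <= B <= fs - W

def x(t):
    # A bandlimited test signal: tones below W.
    return np.sin(2 * np.pi * 20 * t) + 0.5 * np.cos(2 * np.pi * 45 * t)

def g(t):
    # Impulse response (4.34); with B = fs/2 we have 2*B*Ts = 1.
    return 2 * B * Ts * np.sinc(2 * B * t)     # np.sinc(u) = sin(pi u)/(pi u)

k = np.arange(-400, 401)                       # finitely many samples (truncation)
t_fine = np.linspace(-0.5, 0.5, 1001)

# Interpolation formula (4.36): x(t) = sum_k x(kTs) g(t - kTs)
x_hat = sum(x(ki * Ts) * g(t_fine - ki * Ts) for ki in k)

print("max reconstruction error:", float(np.max(np.abs(x_hat - x(t_fine)))))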
4.4.2 Natural Sampling
Impulse sampling is not practically achievable because of the unavailability of ideal impulses. In
practice, a delta function can be approximated by a rectangular pulse p(t) of some small width T and height 1/T, as shown in Figure 4.16. Mathematically,
p(t) = 1/T for −T/2 ≤ t ≤ T/2, and p(t) = 0 otherwise.
Figure 4.16: The pulse approximation to an impulse: a rectangular pulse of width T and height 1/T.
The infinite sequence of impulses h(t) is approximated by hp (t) in the following way
hp(t) = Σ_k p(t − kTs).
The resulting sampled signal xs (t) is
xs (t) = hp (t) · x(t) ⇒ Xs (f ) = Hp (f ) ∗ X(f ).
Since hp (t) is periodic, we can write
hp(t) = Σ_k ck e^{j2πkfs t},
where
ck = (1/Ts) ∫_{−Ts/2}^{Ts/2} p(t) e^{−j2πkfs t} dt = (1/Ts) ∫_{−T/2}^{T/2} (1/T) e^{−j2πkfs t} dt = fs sin(πkfs T)/(πkfs T).
Then:
Hp(f) = Σ_k ck δ(f − kfs),
and
Xs(f) = Σ_k ck X(f − kfs).
Again we notice that the Fourier transform of xs(t) is a superposition of infinitely many frequency-shifted and scaled replicas of X(f). Also, if fs ≥ 2W, x(t) can be recovered from xs(t) through simple low-pass filtering. To recover x(t) from xs(t), we again low-pass filter:
X(f ) = Xs (f ) · G(f ),
where G(f ) is as defined previously.
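The closed form ck = fs sin(πkfs T)/(πkfs T) can be checked numerically against a direct evaluation of the Fourier-series integral of the pulse train (the values of Ts and T below are arbitrary):

import numpy as np

Ts, T = 1e-3, 1e-4          # assumed sampling period and pulse width (seconds)
fs = 1 / Ts

def ck_formula(k):
    return fs * np.sinc(k * fs * T)     # np.sinc(u) = sin(pi u)/(pi u)

def ck_numeric(k, npts=100_000):
    # (1/Ts) * integral over one period of p(t) exp(-j 2 pi k fs t),
    # with p(t) = 1/T on [-T/2, T/2] and zero elsewhere.
    t = np.linspace(-T / 2, T / 2, npts)
    dt = t[1] - t[0]
    return np.sum((1 / T) * np.exp(-1j * 2 * np.pi * k * fs * t)).real * dt / Ts

for k in (0, 1, 2, 5):
    print(f"k = {k}: formula = {ck_formula(k):.2f}, numeric = {ck_numeric(k):.2f}")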
4.4.3 Zero-Order Hold Sampling
In practice, the simplest way to sample a signal is through a sample-and-hold operation, as illustrated in Figure 4.17. Mathematically, this operation can be described by
xs (t) = p(t) ∗ [h(t) · x(t)] ,
Figure 4.17: The sample-and-hold signal xs(t) = p(t) ∗ [x(t) · h(t)], together with the transform P(f) of the hold pulse.
where p(t) is a unit-height pulse of duration Ts seconds,
p(t) = 1 for −Ts/2 ≤ t ≤ Ts/2, and p(t) = 0 otherwise,
and h(t) is the ideal sampling function defined earlier. Letting xs0(t) = x(t) · h(t), we see that xs(t) for sample-and-hold is equivalent to passing an ideally sampled process (we don't actually do this in practice) through a filter with impulse response p(t). Then
Xs(f) = P(f) · Xs0(f),
where
P(f) = F[p(t)] = ∫_{−∞}^{∞} p(t) e^{−j2πf t} dt = Ts sin(πf Ts)/(πf Ts).
Notice that the sample-and-hold operation distorts the spectrum of the sampled signal in a nonuniform way through the multiplication by P (f ). Thus, in recovering x(t) we have to compensate
(equalize) for the effects of P(f), as illustrated in Figure 4.18.

Figure 4.18: Reconstructing the original signal from zero-order hold samples: xs(t) is passed through the equalizer 1/P(f) and the low-pass filter G(f) to recover x(t).

The equalizer filter with transfer function 1/P(f) can be obtained from a filter with transfer function P(f) through the feedback circuit shown in Figure 4.19.

Note: Since in practice low-pass filters are not ideal and have a finitely steep roll-off, the sampling frequency fs is
taken to satisfy fs ≥ 2.2W , which is 10% larger than the theoretical smallest rate (i.e. the Nyquist
rate).
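The droop sin(πf Ts)/(πf Ts) introduced by the hold pulse, and hence the gain the equalizer 1/P(f) must supply across the band, can be tabulated quickly (the sampling rate below is an arbitrary choice):

import numpy as np

fs = 8_000.0              # assumed sampling rate (Hz)
Ts = 1 / fs

def P(f):
    # Fourier transform of the unit-height, Ts-wide hold pulse: Ts * sinc(f*Ts).
    return Ts * np.sinc(f * Ts)

for f in (0.0, 1000.0, 2000.0, 3000.0, 3900.0):
    droop_db = 20 * np.log10(abs(P(f)) / Ts)    # attenuation relative to dc
    eq_gain = Ts / abs(P(f))                    # (normalized) gain 1/P(f) must provide
    print(f"f = {f:6.0f} Hz: droop = {droop_db:6.2f} dB, equalizer gain = {eq_gain:.2f}")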
Example (sampling)
Music in general has a spectrum with frequency components in the range up to 20 kHz. The ideal, smallest sampling frequency fs (i.e. the Nyquist rate) is then 40 Ksamples/sec. The smallest practical sampling frequency, following the fs ≥ 2.2W guideline above, is 44 Ksamples/sec. In fact, compact discs use a sampling frequency of 44.1 Ksamples/sec.
Figure 4.19: Generating 1/P(f) from a filter with transfer function P(f) using a feedback circuit.