EE 179, Lecture 5, Handout #7
Review: Fourier Series of Square Wave
Square wave with period 2π.
[Figure: square wave w(t), plotted for −8 ≤ t ≤ 8; the amplitude switches between 0 and 1.]
The square wave with period 2π is determined by its values in [−π, π]:

    w(t) = \begin{cases} 1 & |t| < \pi/2 \\ 0 & \pi/2 < |t| < \pi \end{cases}
The square wave is a 1-bit quantization of the cosine:

    w(t) = \begin{cases} 1 & \cos t > 0 \\ 0 & \cos t < 0 \end{cases}
Crystal oscillators for digital circuits convert their sinusoidal outputs to square waves using comparators and amplifiers.
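A minimal Python sketch (not from the handout) of this 1-bit quantization; the sample grid and step count are arbitrary illustration choices:

    import numpy as np

    # Build the square wave by 1-bit quantizing a cosine: w(t) = 1 where cos t > 0, else 0.
    t = np.linspace(-8, 8, 2001)
    w = np.where(np.cos(t) > 0, 1.0, 0.0)
    # w has period 2*pi and equals 1 for |t| < pi/2 (mod 2*pi), matching the definition above.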
Fourier Series Examples (cont.)
If n ≠ 0,

    D_n = \frac{1}{2\pi} \int_{-\pi}^{\pi} w(t)\, e^{-int}\, dt
        = \frac{1}{2\pi} \int_{-\pi/2}^{\pi/2} 1 \cdot e^{-int}\, dt
        = \frac{1}{2\pi} \left[ \frac{e^{-int}}{-in} \right]_{-\pi/2}^{\pi/2}
        = \frac{1}{2\pi} \cdot \frac{e^{i\pi n/2} - e^{-i\pi n/2}}{in}
        = \frac{\sin(\pi n/2)}{\pi n}
        = \frac{1}{2}\,\mathrm{sinc}(n/2)

For odd n this equals (−1)^{(n−1)/2}/(πn); for even n ≠ 0 it is zero.

If n = 0,

    D_0 = \frac{1}{2\pi} \int_{-\pi/2}^{\pi/2} 1 \cdot e^{-i0t}\, dt = \frac{1}{2\pi} \cdot \pi = \frac{1}{2}
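As a sanity check (not in the original slides), the coefficients can be approximated numerically; this sketch assumes Python with numpy and a simple Riemann sum:

    import numpy as np

    def D(n, num_points=200000):
        """Approximate D_n = (1/2pi) * integral_{-pi}^{pi} w(t) exp(-i n t) dt by a Riemann sum."""
        t = np.linspace(-np.pi, np.pi, num_points, endpoint=False)
        dt = t[1] - t[0]
        w = (np.abs(t) < np.pi / 2).astype(float)
        return np.sum(w * np.exp(-1j * n * t)) * dt / (2 * np.pi)

    for n in range(5):
        closed_form = 0.5 if n == 0 else np.sin(np.pi * n / 2) / (np.pi * n)
        print(n, round(float(D(n).real), 4), round(float(closed_form), 4))   # the two columns agree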
Fourier Series Coefficients of Square Wave
[Figure: Fourier series coefficients D_n of the square wave, plotted for −20 ≤ n ≤ 20.]
Trigonometric Series Approximating Square Wave
[Figure: trigonometric partial-sum approximations of the square wave over −3 ≤ t ≤ 3, for N = 3 and N = 9 (top) and N = 11 and N = 19 (bottom).]
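A short Python sketch (mine, not from the handout) that reproduces these partial sums using D_0 = 1/2 and D_n = sin(πn/2)/(πn); plotting is omitted and only the peak overshoot is printed:

    import numpy as np

    def partial_sum(t, N):
        """Sum of the Fourier series terms with |n| <= N; the real D_n pair into cosines."""
        g = np.full_like(t, 0.5)
        for n in range(1, N + 1):
            Dn = np.sin(np.pi * n / 2) / (np.pi * n)
            g += 2 * Dn * np.cos(n * t)
        return g

    t = np.linspace(-3, 3, 6001)
    for N in (3, 9, 11, 19):
        print(N, round(float(partial_sum(t, N).max()), 3))   # overshoot near the jumps (Gibbs phenomenon)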
Fourier Series of Periodic Exponential Decay
The Fourier series representation is

    g(t) = 0.504 \left( 1 + \sum_{n=1}^{\infty} \frac{2}{1 + 16n^2}\, (\cos 2nt + 4n \sin 2nt) \right)
The phase is θ_n = −tan⁻¹(4n). Note that lim_{n→∞} θ_n = −π/2.
This calculation was done offline by copying from the textbook. Other approved approaches are the WWW and Mathematica.
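A Python sketch (not part of the slides) that evaluates this series; it assumes the underlying signal is the usual textbook example g(t) = e^{−t/2} on [0, π), repeated with period π, which is consistent with the coefficients above:

    import numpy as np

    def g_series(t, N=500):
        """Truncated series 0.504 * (1 + sum_{n=1}^{N} 2/(1+16n^2) * (cos 2nt + 4n sin 2nt))."""
        n = np.arange(1, N + 1)[:, None]
        terms = 2.0 / (1 + 16 * n**2) * (np.cos(2 * n * t) + 4 * n * np.sin(2 * n * t))
        return 0.504 * (1 + terms.sum(axis=0))

    t = np.linspace(0.1, np.pi - 0.1, 5)          # stay away from the jump at t = 0
    print(np.round(g_series(t), 3))               # close to exp(-t/2) under the assumption above
    print(np.round(np.exp(-t / 2), 3))

    # Phase of the n-th harmonic: theta_n = -arctan(4n), which tends to -pi/2.
    print(np.round(-np.arctan(4 * np.arange(1, 5)), 3), round(-np.pi / 2, 3))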
Types of Systems
For theoretical and practical reasons, we restrict attention to systems that have useful properties and represent the physical world:
◮ Causal
◮ Continuous
◮ Stable
◮ Linear
◮ Time invariant
Fundamental fact: every linear, time-invariant system (LTIS) is characterized by its
◮ Impulse response: in the time domain, w(t) = h(t) ∗ v(t)
◮ Transfer function: in the frequency domain, W(f) = H(f) · V(f) (see the sketch below)
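A small numerical sketch (not from the slides) of this fact, using discrete signals and the DFT as stand-ins for continuous time; the impulse response and input are arbitrary illustration choices:

    import numpy as np

    rng = np.random.default_rng(0)
    v = rng.standard_normal(64)                  # arbitrary input signal
    h = np.exp(-np.arange(64) / 10.0)            # arbitrary decaying impulse response

    w_time = np.convolve(v, h)                   # time domain: w = h * v (length 127)
    W = np.fft.fft(v, 128) * np.fft.fft(h, 128)  # frequency domain: W = H . V (zero-padded)
    w_freq = np.fft.ifft(W).real[:w_time.size]

    print(np.allclose(w_time, w_freq))           # True: convolution <-> multiplication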
Signals as Vectors
A signal g(t) defined at a finite number of time instants,

    g(t_k) = g_k,    k = 1, ..., n

can be considered to be a vector of dimension n:

    g = (g_1, g_2, ..., g_n)
The Euclidean norm (or size or magnitude) is

    \|g\| = \sqrt{|g_1|^2 + \cdots + |g_n|^2} = \left( \sum_{k=1}^{n} |g_k|^2 \right)^{1/2}
(This definition works for complex-valued signals.)
The norm is used to measure how far apart two signals are, e.g., to tell how good an estimate is:

    \text{square error} = \|\hat{g} - g\|^2 = \sum_{k=1}^{n} |\hat{g}_k - g_k|^2
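A tiny Python sketch (illustration only; the numbers are made up) of the norm and the squared-error distance:

    import numpy as np

    g     = np.array([1.0, -2.0, 0.5, 3.0])           # "true" signal samples
    g_hat = np.array([0.9, -1.8, 0.7, 2.5])           # an estimate of g

    norm_g = np.sqrt(np.sum(np.abs(g) ** 2))          # Euclidean norm, same as np.linalg.norm(g)
    square_error = np.sum(np.abs(g_hat - g) ** 2)     # ||g_hat - g||^2
    print(round(float(norm_g), 4), round(float(square_error), 4))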
Orthogonality
If two signals are orthogonal, then by Pythagoras' theorem,

    \|f\|^2 + \|g\|^2 = \|f + g\|^2 = (f + g) \cdot (f + g) = f \cdot f + f \cdot g + g \cdot f + g \cdot g = \|f\|^2 + \|g\|^2 + 2(f \cdot g)
Thus two signals are orthogonal if and only if the inner product f · g = 0.
(In this case the energy of the sum is the sum of the energies.)
Recall that f · g = ‖f‖ ‖g‖ cos θ, where θ is the angle between f and g.
◮ θ = 0 ⇒ f and g point in exactly the same direction
◮ 0 < θ < π/2 ⇒ f and g point in the same general direction
◮ θ = π/2 ⇒ f and g are perpendicular
◮ π/2 < θ < π ⇒ f and g point in opposite general directions
◮ θ = π ⇒ f and g point in exactly opposite directions
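A quick numerical check (not in the handout): sampled cosine and sine over one period are orthogonal, and the angle follows from f · g = ‖f‖ ‖g‖ cos θ:

    import numpy as np

    t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
    f, g = np.cos(t), np.sin(t)

    inner = float(np.dot(f, g))
    cos_theta = inner / (np.linalg.norm(f) * np.linalg.norm(g))
    theta = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    print(round(inner, 6), round(float(theta), 2))     # ~0 and ~90 degrees: orthogonal

    # Energy adds for orthogonal signals: ||f + g||^2 = ||f||^2 + ||g||^2
    print(round(float(np.sum((f + g) ** 2)), 3), round(float(np.sum(f**2) + np.sum(g**2)), 3))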
Signals as Vectors, II
For most signal processing applications, the number of samples is much
larger than 3, so we cannot easily visualize signals as vectors.
In fact, n might be infinite, e.g., tk = k∆t for k = 0, 1, 2, . . . or even
−∞ < k < ∞.
When the dimension is infinite, the signal's norm may be infinite. If

    \|g\|^2 = \sum_{k} |g_k|^2 < \infty

the signal has finite energy and is said to belong to L2.
The energy of the sampled signal depends on the sampling interval ∆t. The power in each interval is |g_k|^2, so the total energy is

    \sum_{k} |g_k|^2 \, \Delta t
Gauss determined the orbit of Ceres using only 24 samples and an early version of the FFT.
Signals as Vectors, III
If the sample interval goes to zero, we obtain a continuous-time signal with energy

    \int_{-\infty}^{\infty} |g(t)|^2 \, dt = \lim_{\Delta t \to 0} \sum_{k} |g_k|^2 \, \Delta t
The class of finite energy signals is named L2 or L2 (−∞, +∞).
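A sketch (not from the slides) of this Riemann-sum view of energy; the example signal g(t) = e^{−|t|} is an arbitrary choice whose energy integral equals 1:

    import numpy as np

    # Energy of g(t) = exp(-|t|): integral of exp(-2|t|) dt over the real line equals 1.
    for dt in (0.1, 0.01, 0.001):
        t = np.arange(-20.0, 20.0, dt)
        g = np.exp(-np.abs(t))
        energy = float(np.sum(np.abs(g) ** 2) * dt)    # sum |g_k|^2 * dt
        print(dt, round(energy, 6))                    # approaches 1 as dt -> 0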
The two major tasks of signal processing are estimation and detection.
◮ Estimation: finding a good estimate ĝ(t) of an unknown signal g(t) using information from other signals
◮ Detection: identifying which one of a finite number of possible signals is present, again based on signals that depend on g(t)
In communication systems, the available signal is the output of a channel:
w(t) = h(t) ∗ v(t) + z(t),
where the channel attenuates and distorts the input and adds noise.
Component of a Vector along Another Vector
A simple example of a channel with noise results in output

    g = c x + e

where the gain c and error e are unknown.
There are infinitely many solutions. To minimize the error, we choose e = g − cx to be orthogonal to x.
Component of a Vector along Another Vector, II
The component of g along x is ‖g‖ cos θ. Therefore

    c\,\|x\|^2 = \|x\|\,\|g\|\cos\theta = \langle g, x \rangle

So we can solve for c:

    c = \frac{\langle g, x \rangle}{\|x\|^2} = \frac{\langle g, x \rangle}{\langle x, x \rangle}
The same answer applies to continuous-time signals. Suppose g(t) is defined for t_1 < t < t_2. We want to approximate g(t) by c x(t). The energy of the error

    E_e = \int_{t_1}^{t_2} |g(t) - c\,x(t)|^2 \, dt

is minimized by solving ∂E_e/∂c = 0:

    c = \frac{\int_{t_1}^{t_2} g(t)\,x(t)\, dt}{\int_{t_1}^{t_2} x^2(t)\, dt} = \frac{1}{E_x} \int_{t_1}^{t_2} g(t)\,x(t)\, dt
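A Python sketch (illustration only, with made-up signals) of both forms of the projection coefficient:

    import numpy as np

    # Vector form: c = <g, x> / <x, x>
    x = np.array([1.0, 2.0, -1.0, 0.5])
    g = 3.0 * x + np.array([0.1, -0.2, 0.05, 0.0])        # g = c*x plus a small error
    print(round(float(np.dot(g, x) / np.dot(x, x)), 4))   # near 3; the error term shifts it slightly

    # Continuous-time form: c = (1/E_x) * integral g(t) x(t) dt, approximated by sums
    t = np.linspace(0.0, 1.0, 10001)
    x_t = np.sin(2 * np.pi * t)
    g_t = 2.5 * x_t + 0.1 * np.cos(2 * np.pi * t)         # component along x plus an orthogonal part
    c = np.sum(g_t * x_t) / np.sum(x_t ** 2)              # the dt factors cancel
    print(round(float(c), 4))                             # close to 2.5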
Application to Signal Detection
Consider a radar pulse g(t).

The return signal depends on whether the transmit signal encounters a target. Here w(t) represents noise and interference.

    y(t) = \begin{cases} a\,g(t - t_0) & \text{target present} \\ w(t) & \text{target absent} \end{cases}
A typical pulse is 1 µs at 3 GHz; repetition rates range from 2 kHz to 200 kHz.
Application to Signal Detection, II
A key assumption (usually justified) is that the noise w(t) is uncorrelated with the signal:

    \langle w(t), g(t - t_0) \rangle = \int_{t_1}^{t_2} w(t)\, g(t - t_0)\, dt = 0
To see whether a target is present at the distance corresponding to delay t_0, we correlate the return signal with a delayed version of the radar pulse:

    \langle y(t), g(t - t_0) \rangle = \begin{cases} \int_{t_1}^{t_2} a\,|g(t - t_0)|^2\, dt & \text{target present} \\ 0 & \text{target absent} \end{cases}
                                     = \begin{cases} a E_g & \text{target present} \\ 0 & \text{target absent} \end{cases}
Obviously, we decide based on whether the correlation is nonzero.
To detect targets at all possible distances, we vary t0 .
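A simulation sketch (not from the handout; pulse shape, delay, gain, and noise level are invented) of the correlation detector:

    import numpy as np

    rng = np.random.default_rng(1)
    n_samples, pulse_len, t0, a = 1000, 50, 300, 0.8

    g = np.zeros(n_samples)
    g[:pulse_len] = 1.0                                   # rectangular radar pulse g(t)
    template = np.roll(g, t0)                             # delayed copy g(t - t0)

    noise = 0.1 * rng.standard_normal(n_samples)
    y_present = a * np.roll(g, t0) + noise                # target present
    y_absent = noise                                      # target absent

    print(round(float(np.dot(y_present, template)), 2))   # ~ a * E_g = 0.8 * 50 = 40
    print(round(float(np.dot(y_absent, template)), 2))    # ~ 0; in practice we compare to a threshold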
Parseval’s Theorem
Complex exponentials are orthogonal. Suppose m and n are integers and m ≠ n. Suppose the period is T_0 = 1. Then

    \int_0^1 e^{j2\pi m t} \cdot e^{-j2\pi n t}\, dt = \int_0^1 e^{j2\pi(m-n)t}\, dt = \left[ \frac{e^{j2\pi(m-n)t}}{j2\pi(m-n)} \right]_0^1 = \frac{1 - 1}{j2\pi(m-n)} = 0
It follows that the power of a Fourier series is the sum of the powers of its terms:

    \|g(t)\|^2 = \int_0^1 |g(t)|^2\, dt = \left\| \sum_{n=-\infty}^{\infty} D_n e^{j2\pi n t} \right\|^2 = \sum_{n=-\infty}^{\infty} \left\| D_n e^{j2\pi n t} \right\|^2 = \sum_{n=-\infty}^{\infty} |D_n|^2
We can calculate power in either time or frequency domain.
Also called Parseval’s identity (1799), Plancherel’s theorem (1910) and Rayleigh’s theorem
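A numerical check (mine, not from the slides) of Parseval's theorem for the square wave, using D_0 = 1/2 and D_n = sin(πn/2)/(πn) from earlier and rescaling one period to T_0 = 1:

    import numpy as np

    # Frequency domain: sum of |D_n|^2 over a large symmetric range of n
    n = np.arange(-2000, 2001)
    D = np.where(n == 0, 0.5, np.sin(np.pi * n / 2) / (np.pi * np.where(n == 0, 1, n)))
    power_freq = float(np.sum(np.abs(D) ** 2))

    # Time domain: (1/T0) * integral over one period of |g(t)|^2, with T0 = 1
    t = np.linspace(0.0, 1.0, 100000, endpoint=False)
    g = (np.minimum(t, 1.0 - t) < 0.25).astype(float)    # 50% duty-cycle square wave, one period
    power_time = float(np.mean(np.abs(g) ** 2))

    print(round(power_freq, 3), round(power_time, 3))    # both ~0.5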
Negative Frequencies
We cannot tell by looking at a sine or cosine individually which direction the signal is moving. However, complex exponentials involve both sine and cosine, so we can tell which way the signal is rotating.

By convention, positive frequency corresponds to counterclockwise rotation and negative frequency to clockwise rotation.
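A brief numerical sketch (not from the slides): the phase of e^{j2πft} advances with time for f > 0 and retreats for f < 0:

    import numpy as np

    t = np.array([0.00, 0.01, 0.02, 0.03])
    for f in (+5.0, -5.0):
        z = np.exp(1j * 2 * np.pi * f * t)
        print(f, np.round(np.angle(z), 3))   # phase increases (counterclockwise) for +f, decreases for -f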