An Introduction to
Digital Communications
Costas N. Georghiades
Electrical Engineering Department
Texas A&M University
These notes are made available for students of EE 455, and they are to be used to enhance understanding of the course. Any unauthorized copy and distribution of these notes is prohibited.
Course Outline
- Introduction
  - Analog vs. Digital Communication Systems
  - A General Communication System
- Some Probability Theory
  - Probability space, random variables, density functions, independence
  - Expectation, conditional expectation, Bayes' rule
  - Stochastic processes, autocorrelation function, stationarity, spectral density
Outline (cont’d)
- Analog-to-digital conversion
  - Sampling (ideal, natural, sample-and-hold)
  - Quantization, PCM
- Source coding (data compression)
  - Measuring information, entropy, the source coding theorem
  - Huffman coding, run-length coding, Lempel-Ziv
- Communication channels
  - Bandlimited channels
  - The AWGN channel, fading channels
Outline (cont’d)
- Receiver design
  - General binary and M-ary signaling
  - Maximum-likelihood receivers
  - Performance in an AWGN channel
    - The Chernoff and union/Chernoff bounds
    - Simulation techniques
  - Signal spaces
  - Modulation: PAM, QAM, PSK, DPSK, coherent FSK, incoherent FSK
Outline (cont’d)
- Channel coding
  - Block codes, hard- and soft-decision decoding, performance
  - Convolutional codes, the Viterbi algorithm, performance bounds
  - Trellis-coded modulation (TCM)
- Signaling through bandlimited channels
  - ISI, Nyquist pulses, sequence estimation, partial response signaling
  - Equalization
Outline (cont’d)
- Signaling through fading channels
  - Rayleigh fading, optimum receiver, performance
  - Interleaving
- Synchronization
  - Symbol synchronization
  - Frame synchronization
  - Carrier synchronization
Introduction
A General Communication System
Source → Transmitter → Channel → Receiver → User

- Source: speech, video, etc.
- Transmitter: conveys the information
- Channel: invariably distorts signals
- Receiver: extracts the information signal
- User: utilizes the information
Digital vs. Analog Communication
- Analog systems have an alphabet which is uncountably infinite.
  - Example: analog amplitude modulation (AM)
[Figure: an AM transmitter mixing the message with an RF oscillator, followed by the channel and a receiver.]
Analog vs. Digital (cont’d)
- Digital systems transmit signals from a discrete alphabet.
  - Example: binary digital communication systems
[Figure: the transmitter maps the data stream ...0110010... into rectangular pulses of duration T representing 1 and 0. Data rate = 1/T bits/s.]
Digital systems are resistant to noise...
[Figure: one of two signals, s1(t) for a 1 or s2(t) for a 0, is transmitted; the channel adds noise, and the receiver must decide from r(t) which bit was sent.]
Optimum (correlation) receiver: r(t) is multiplied by s1(t), integrated over [0, T], sampled at t = T, and compared with a threshold; the decision is 1 if the output exceeds the threshold and 0 otherwise.
Advantages of Digital Systems
- Error correction/detection
- Better encryption algorithms
- More reliable data processing
- Easily reproducible designs
  - Reduced cost
- Easier data multiplexing
- Facilitate data compression
A General Digital Communication System
Transmitter side: Source → A/D Conversion → Source Encoder → Channel Encoder → Modulator → Channel
Receiver side: Channel → Demodulator → Channel Decoder → Source Decoder → D/A Conversion → User
(A synchronization block assists the receiver chain.)
Some Probability Theory
Definition: A non-empty collection of subsets α = {A₁, A₂, ...} of a set Ω, i.e., α = {Aᵢ ; Aᵢ ⊆ Ω}, is called an algebra of sets if:
1) ∀ Aᵢ ∈ α and Aⱼ ∈ α ⇒ Aᵢ ∪ Aⱼ ∈ α
2) ∀ Aᵢ ∈ α ⇒ Āᵢ ∈ α (closed under complementation)
Example: Let Ω = {0, 1, 2}.
1) α = {Ω, ∅} is an algebra.
2) α = {Ω, ∅, {1}, {2}, {0}, {1,2}, {1,0}, {0,2}} is an algebra.
3) α = {Ω, ∅, {0}, {1}, {2}} is not an algebra.
Probability Measure
Definition: A class of subsets, α , of a space Ω is a σ-algebra
(or a Borel algebra) if:
1) Aᵢ ∈ α ⇒ Āᵢ ∈ α.
2) Aᵢ ∈ α, i = 1, 2, 3, ... ⇒ ∪_{i=1}^{∞} Aᵢ ∈ α.
Definition: Let α be a σ-algebra of a space Ω. A function P that maps α to [0, 1] is called a probability measure if:
1) P[Ω] = 1
2) P[A] ≥ 0 ∀ A ∈ α
3) P[∪_{i=1}^{∞} Aᵢ] = Σ_{i=1}^{∞} P[Aᵢ] for Aᵢ ∩ Aⱼ = ∅, i ≠ j.
Probability Measure
Let Ω = ℜ (the real line) and α be the set of all intervals (x1, x2] in ℜ.
Also, define a real valued function f which maps ℜ → ℜ such that:
1) f ( x) ≥ 0 ∀ x ∈ℜ .
2) ∫_{−∞}^{∞} f(x) dx = 1.
Then
P[{x ∈ ℜ : x₁ < x ≤ x₂}] = P[(x₁, x₂]] = ∫_{x₁}^{x₂} f(x) dx
is a valid probability measure.
Probability Space
The following conclusions can be drawn from the above definition:
1) P[∅] = 0
2) P[Ā] = 1 − P[A]   (since P(A ∪ Ā) = P(Ω) = 1 = P(A) + P(Ā))
3) If A₁ ⊂ A₂ ⇒ P(A₁) ≤ P(A₂)
4) P[A₁ ∪ A₂] = P[A₁] + P[A₂] − P[A₁ ∩ A₂].
Definition: Let Ω be a space, α be a σ-algebra of subsets of Ω, and P a probability measure on α. Then the ordered triple (Ω, α, P) is a probability space.
Ω : the sample space
α : the event space
P : the probability measure
Random Variables and Density Functions
Definition: A real valued function X(ω) that maps ω ∈ Ω into the real line ℜ
is a random variable.
Notation: For simplicity, in the future we will refer to X(ω) by X.
Definition: The distribution function of a random variable X is defined by
F X ( x ) = P[ X ≤ x ] = P[ −∞ < X ≤ x ] .
From the previous discussion, we can express the above probability in terms of a non-negative function f_X(·) such that ∫_{−∞}^{∞} f_X(x) dx = 1, as follows:
F_X(x) = P[X ≤ x] = ∫_{−∞}^{x} f_X(α) dα.
We will refer to f X (⋅) as the density function of random variable X.
Density Functions
We have the following observations based on the above definitions:
1) F_X(−∞) = ∫_{−∞}^{−∞} f_X(x) dx = 0
2) F_X(∞) = ∫_{−∞}^{∞} f_X(x) dx = 1
3) If x₁ ≥ x₂ ⇒ F_X(x₁) ≥ F_X(x₂)   (F_X(x) is non-decreasing)
Examples of density functions:
a) The Gaussian density function (Normal)
f_X(x) = (1/√(2πσ²))·e^{−(x−µ)²/(2σ²)}
Example Density Functions
b) Uniform in [0, 1]:
f_X(x) = 1 for x ∈ [0, 1], and 0 otherwise.
c) The Laplacian density function:
f_X(x) = (a/2)·exp(−a|x|).
[Figure: plots of the uniform and Laplacian densities.]
Conditional Probability
Let A and B be two events from the event space α. Then the probability of event A, given that event B has occurred, P[A | B], is given by
P[A | B] = P[A ∩ B] / P[B].
Example: Consider the tossing of a die:
P[{2} | "even outcome"] = 1/3,   P[{2} | "odd outcome"] = 0.
Thus, conditioning can increase or decrease the probability of an event, compared to its unconditioned value.
The Law of Total Probability
Let A₁, A₂, ..., A_M be a partition of Ω, i.e., ∪_{i=1}^{M} Aᵢ = Ω and Aᵢ ∩ Aⱼ = ∅ ∀ i ≠ j. Then, the probability of occurrence of event B ∈ α can be expressed as
P[B] = Σ_{i=1}^{M} P[B | Aᵢ]·P[Aᵢ],   ∀ B ∈ α.
Illustration, Law of Total Probability
[Figure: a partition A₁, A₂, A₃ of Ω and an event B overlapping the partition sets, with the conditional probabilities P(B | A₂) and P(B | A₃) indicated.]
Example, Conditional Probability
Consider a binary channel with equally likely inputs, Pr(0) = Pr(1) = 1/2.
[Figure: binary channel transition diagram with crossover probabilities P01 and P10.]
P00 = P[receive 0 | 0 sent]    P01 = P[receive 1 | 0 sent]
P10 = P[receive 0 | 1 sent]    P11 = P[receive 1 | 1 sent]
P01 = 0.01 ⇒ P00 = 1 − P01 = 0.99
P10 = 0.01 ⇒ P11 = 1 − P10 = 0.99
By the law of total probability, the probability of error is
Pr(e) = Pr(0)·P01 + Pr(1)·P10 = (1/2)·0.01 + (1/2)·0.01 = 0.01
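As a quick numerical check of this total-probability calculation, here is a minimal Python sketch using the same assumed priors and crossover probabilities:

```python
# Error probability of the binary channel example above.
# Assumed numbers: equally likely inputs, P01 = P10 = 0.01.
prior = {0: 0.5, 1: 0.5}     # Pr(0), Pr(1)
P01 = 0.01                   # P[receive 1 | 0 sent]
P10 = 0.01                   # P[receive 0 | 1 sent]

# Law of total probability: Pr(e) = Pr(0)*P01 + Pr(1)*P10
Pe = prior[0] * P01 + prior[1] * P10
print(Pe)                    # 0.01
```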
Bayes' Law
Bayes' Law: Let Aᵢ, i = 1, 2, ..., M be a partition of Ω and B an event in α. Then
P[Aⱼ | B] = P[B | Aⱼ]·P[Aⱼ] / Σ_{i=1}^{M} P[B | Aᵢ]·P[Aᵢ]
Proof:
P[Aⱼ | B] = P[Aⱼ ∩ B] / P[B] ⇒ P[Aⱼ ∩ B] = P[Aⱼ | B]·P[B] = P[B | Aⱼ]·P[Aⱼ]
⇒ P[Aⱼ | B] = P[B | Aⱼ]·P[Aⱼ] / P[B] = P[B | Aⱼ]·P[Aⱼ] / Σ_{i=1}^{M} P[B | Aᵢ]·P[Aᵢ]
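As an illustration only (reusing the channel example from the previous slide, with its assumed numbers), the sketch below applies Bayes' law to find the probability that a 0 was sent given that a 1 was received:

```python
# Posterior P[0 sent | 1 received] for the binary channel example,
# assuming Pr(0) = Pr(1) = 0.5, P01 = 0.01 and P11 = 0.99.
prior0, prior1 = 0.5, 0.5
P01, P11 = 0.01, 0.99        # P[receive 1 | 0 sent], P[receive 1 | 1 sent]

P_receive_1 = prior0 * P01 + prior1 * P11     # law of total probability
posterior = prior0 * P01 / P_receive_1        # Bayes' law
print(posterior)                              # 0.01
```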
Statistical Independence of Events
Two events A and B are said to be statistically independent if
P[ A ∩ B] = P[ A] ⋅ P[ B] .
In intuitive terms, two events are independent if the occurrence of one does not affect the
occurrence of the other, i.e.,
P ( A| B ) = P ( A )
when A and B are independent.
Example: Consider tossing a fair coin twice. Let
A={heads occurs in first tossing}
B={heads occurs in second tossing}.
Then
P[A ∩ B] = P[A]·P[B] = 1/4.
The assumption we made (which is reasonable in this case) is that the outcome of one coin toss does not affect the other.
Expectation
Consider a random variable X with density fX (x). The expected (or mean) value of X is
given by
E[X] = ∫_{−∞}^{∞} x f_X(x) dx.
In general, the expected value of some function g(X) of a random variable X is given by
E[g(X)] = ∫_{−∞}^{∞} g(x) f_X(x) dx.
When g(X) = Xⁿ for n = 0, 1, 2, ..., the corresponding expectations are referred to as the n-th moments of random variable X. The variance of a random variable X is given by
var(X) = ∫_{−∞}^{∞} [x − E(X)]² f_X(x) dx = ∫_{−∞}^{∞} x² f_X(x) dx − E²(X) = E(X²) − E²(X).
Example, Expectation
Example: Let X be Gaussian with
f_X(x) = (1/√(2πσ²))·exp[−(x − µ)²/(2σ²)].
Then:
E(X) = (1/√(2πσ²)) ∫_{−∞}^{∞} x·e^{−(x−µ)²/(2σ²)} dx = µ,
Var(X) = E[X²] − E²(X) = ∫_{−∞}^{∞} x² f_X(x) dx − µ² = σ².
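A minimal Monte Carlo sanity check of these two results (the values µ = 2 and σ = 3 below are arbitrary choices for illustration, not part of the notes):

```python
import numpy as np

# Check E(X) = mu and Var(X) = E(X^2) - E^2(X) = sigma^2 for Gaussian samples.
rng = np.random.default_rng(0)
mu, sigma = 2.0, 3.0
x = rng.normal(mu, sigma, size=1_000_000)

print(x.mean())                       # ~ 2.0  (mu)
print((x**2).mean() - x.mean()**2)    # ~ 9.0  (sigma^2)
```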
Random Vectors
Definition: A random vector is a vector whose elements are random variables, i.e., if X1,
X2, ..., Xn are random variables, then
X = ( X 1 , X 2 ,..., X n )
is a random vector.
Random vectors can be described statistically by their joint density function
f_X(x) = f_{X₁X₂...Xₙ}(x₁, x₂, ..., xₙ).
Example: Consider tossing a coin twice. Let X1 be the random variable associated with
the outcome of the first toss, defined by
X₁ = 1 if heads, 0 if tails.
Similarly, let X₂ be the random variable associated with the second toss, defined as
X₂ = 1 if heads, 0 if tails.
The vector X = (X₁, X₂) is a random vector.
Independence of Random Variables
Definition: Two random variables X and Y are independent if
f X ,Y ( x , y ) = f X ( x ) ⋅ f Y ( y ) .
The definition can be extended to independence among an arbitrary number of random
variables, in which case their joint density function is the product of their marginal
density functions.
Definition: Two random variables X and Y are uncorrelated if
E [ XY ] = E [ X ] ⋅ E [ Y ] .
It is easily seen that independence implies uncorrelatedness, but not necessarily the other
way around. Thus, independence is the stronger property.
The Characteristic Function
Definition: Let X be a random variable with density f X ( x ) . Then the characteristic
function of X is
Ψ_X(jω) = E[e^{jωX}] = ∫_{−∞}^{∞} e^{jωx} f_X(x) dx.
Example: The characteristic function of a Gaussian random variable X having mean µ and variance σ² is
Ψ_X(jω) = (1/√(2πσ²)) ∫_{−∞}^{∞} e^{jωx}·e^{−(x−µ)²/(2σ²)} dx = e^{jωµ − ω²σ²/2}.
Definition: The moment-generating function of a random variable X is defined by
Φ_X(s) = E[e^{sX}] = ∫_{−∞}^{∞} e^{sx} f_X(x) dx.
Fact: The moment-generating function of a random variable X can be used to obtain its moments according to
E[Xⁿ] = dⁿΦ_X(s)/dsⁿ |_{s=0}.
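A small symbolic sketch of this fact, assuming the Gaussian moment-generating function Φ_X(s) = exp(µs + σ²s²/2) (obtained from the characteristic function above with jω replaced by s):

```python
import sympy as sp

# Moments of a Gaussian X from its moment-generating function.
s, mu, sigma = sp.symbols('s mu sigma', real=True)
Phi = sp.exp(mu*s + sigma**2 * s**2 / 2)

m1 = sp.diff(Phi, s, 1).subs(s, 0)        # E[X]
m2 = sp.diff(Phi, s, 2).subs(s, 0)        # E[X^2]
print(sp.simplify(m1))                    # mu
print(sp.simplify(m2 - m1**2))            # sigma**2 (the variance)
```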
Stochastic Processes
A stochastic process {X (t ); − ∞ < t < ∞} is an ensemble of signals, each of which can be
realized (i.e. it can be observed) with a certain statistical probability. The value of a
stochastic process at any given time, say t1, (i.e., X(t1)) is a random variable.
Definition: A Gaussian stochastic process is one for which X(t) is a Gaussian random
variable for every time t.
[Figure: two sample realizations of a stochastic process plotted against time, one fast-varying and one slow-varying.]
Characterization of Stochastic Processes
Consider a stochastic process { X (τ );−∞ < τ < ∞} . The random variable X(t), t ∈ℜ ,
has a density function f X ( t ) ( x; t ) . The mean and variance of X(t) are
E[X(t)] = µ_X(t) = ∫_{−∞}^{∞} x f_{X(t)}(x; t) dx,
VAR[X(t)] = E[(X(t) − µ_X(t))²].
Example: Consider the Gaussian random process whose value X(t) at time t is a Gaussian random variable having density
f_X(x; t) = (1/√(2πt))·exp[−x²/(2t)].
We have E[X(t)] = 0 (a zero-mean process) and var[X(t)] = t.
[Figure: the density f_X(x; t) for t = 1 and t = 2; the density spreads out as t increases.]
Autocovariance and Autocorrelation
Definition: The autocovariance function of a random process X(t) is:
C_XX(t₁, t₂) = E[(X(t₁) − µ_X(t₁))·(X(t₂) − µ_X(t₂))],   t₁, t₂ ∈ ℜ.
Definition: The autocorrelation function of a random process X(t) is defined by
R_XX(t₁, t₂) = E[X(t₁)·X(t₂)],   t₁, t₂ ∈ ℜ.
Definition: A random process X is uncorrelated if for every pair (t1,t2)
E [ X ( t 1 ) X ( t 2 )] = E [ X ( t 1 )] ⋅ E [ X ( t 2 )] .
Definition: A process X is mean-value stationary if its mean is not a function of time.
Definition: A random process X is correlation stationary if the autocorrelation function
R XX (t1 , t 2 ) is a function only of τ = (t1 − t 2 ) .
Definition: A random process X is wide-sense stationary (W.S.S.) if it is both mean value
stationary and correlation stationary.
Spectral Density
Example (correlation stationary process):
µ_X(t) = a,   R_XX(t₁, t₂) = exp[−|t₁ − t₂|] = exp[−|τ|],   τ = t₁ − t₂.
Definition: For a wide-sense stationary process we can define a spectral density, which is the Fourier transform of the stochastic process's autocorrelation function:
S_X(f) = ∫_{−∞}^{∞} R_XX(τ)·e^{−j2πfτ} dτ.
The autocorrelation function is the inverse Fourier transform of the spectral density:
R_XX(τ) = ∫_{−∞}^{∞} S_X(f)·e^{j2πfτ} df.
Fact: For a zero-mean process X,
var(X) = R_XX(0) = ∫_{−∞}^{∞} S_X(f) df.
Linear Filtering of Stochastic Signals
x(t) → H(f) → y(t)
S_Y(f) = S_X(f)·|H(f)|²
The spectral density at the output of a linear filter is the product of the spectral density of the input process and the squared magnitude of the filter transfer function.
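A quick discrete-time sanity check of this relation (an assumption made here for illustration: white noise of variance σ² through an FIR filter h[n], for which the output variance equals σ²·Σ h[k]², the discrete counterpart of integrating S_X(f)·|H(f)|² over frequency):

```python
import numpy as np

# White noise of variance sigma^2 through an FIR filter h[n]:
# output variance should equal sigma^2 * sum(h^2).
rng = np.random.default_rng(1)
sigma2 = 2.0
x = rng.normal(0.0, np.sqrt(sigma2), size=1_000_000)

h = np.array([0.5, 0.3, 0.2])            # an arbitrary example filter
y = np.convolve(x, h, mode='valid')

print(y.var())                            # empirical output variance
print(sigma2 * np.sum(h**2))              # theoretical value: 0.76
```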
White Gaussian Noise
Definition: A stochastic process X is white Gaussian if:
a) µ_X(t) = µ (constant)
b) R_XX(τ) = (N₀/2)·δ(τ),   τ = t₁ − t₂
c) X(t) is a Gaussian random variable for every t, and X(tᵢ) is independent of X(tⱼ) for all tᵢ ≠ tⱼ.
Note:
1) A white Gaussian process is wide-sense stationary.
2) S_X(f) = N₀/2 is not a function of f.
[Figure: the flat spectral density S_X(f) = N₀/2.]
Analog-to-Digital Conversion
Two steps:
  - Discretize time: sampling
  - Discretize amplitude: quantization
[Figure: an analog signal is continuous in both time and amplitude.]
Sampling
- Signals are characterized by their frequency content.
- The Fourier transform of a signal describes its frequency content and determines its bandwidth:
X(f) = ∫_{−∞}^{∞} x(t)·e^{−j2πft} dt   ⇔   x(t) = ∫_{−∞}^{∞} X(f)·e^{j2πft} df
[Figure: a time-domain signal x(t) and its Fourier transform X(f), whose extent determines the bandwidth.]
Ideal Sampling
Mathematically, the sampled version, xs(t), of signal x(t) is:
x_s(t) = h(t)·x(t)   ⇔   X_s(f) = H(f) ∗ X(f),
where the sampling function is
h(t) = Σ_{k=−∞}^{∞} δ(t − kT_s) = (1/T_s) Σ_{k=−∞}^{∞} e^{j2πkt/T_s}.
[Figure: the impulse train h(t) with period T_s, and the resulting sampled signal x_s(t).]
Ideal Sampling
H(f) = ℑ{(1/T_s) Σ_{k=−∞}^{∞} e^{j2πkt/T_s}} = (1/T_s) Σ_{k=−∞}^{∞} δ(f − k/T_s).
Then:
X_s(f) = H(f) ∗ X(f) = (1/T_s) Σ_{k=−∞}^{∞} X(f − k/T_s).
[Figure: (a) the sampled spectrum X_s(f) when f_s > 2W, so the replicas of X(f) do not overlap (no aliasing); (b) X_s(f) when f_s < 2W, so the replicas overlap (aliasing).]
Ideal Sampling
If f_s > 2W, the original signal x(t) can be obtained from x_s(t) through simple low-pass filtering. In the frequency domain we have
X(f) = X_s(f)·G(f),
where
G(f) = T_s for |f| ≤ B, and 0 otherwise,
for any W ≤ B ≤ f_s − W. The impulse response of the low-pass filter, g(t), is then
g(t) = ℑ⁻¹[G(f)] = ∫_{−B}^{B} G(f)·e^{j2πft} df = 2BT_s·sin(2πBt)/(2πBt).
From the convolution property of the Fourier transform we have
x(t) = ∫_{−∞}^{∞} x_s(a)·g(t − a) da = Σ_k x(kT_s)·∫_{−∞}^{∞} δ(a − kT_s)·g(t − a) da = Σ_k x(kT_s)·g(t − kT_s).
Thus, we have the following interpolation formula:
x(t) = Σ_k x(kT_s)·g(t − kT_s).
Ideal Sampling
[Figure: the ideal low-pass reconstruction filter G(f), of height T_s over |f| ≤ B, and its sinc-shaped impulse response g(t).]
The Sampling Theorem: A bandlimited signal with no spectral components above W Hz can be recovered uniquely from its samples taken every T_s seconds, provided that
T_s ≤ 1/(2W), or, equivalently, f_s ≥ 2W (the Nyquist rate).
Extraction of x(t) from its samples can be done by passing the sampled signal through a low-pass filter. Mathematically, x(t) can be expressed in terms of its samples by
x(t) = Σ_k x(kT_s)·g(t − kT_s).
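A brief sketch of this interpolation formula in action (assumed example: a 50 Hz sinusoid sampled at f_s = 400 Hz; with B = f_s/2 the reconstruction filter becomes g(t) = sinc(t/T_s)):

```python
import numpy as np

# Reconstruct x(t) = sum_k x(kTs) * g(t - kTs), with g(t) = sinc(t/Ts)
# (the B = fs/2 case).  Example: a 50 Hz sine sampled at 400 samples/s.
fs = 400.0
Ts = 1.0 / fs
k = np.arange(200)                         # sample indices, covering 0 .. 0.5 s
samples = np.sin(2*np.pi*50*k*Ts)          # x(kTs)

t = np.linspace(0.1, 0.3, 1000)            # interior times (away from edge effects)
x_hat = np.array([np.sum(samples * np.sinc((ti - k*Ts)/Ts)) for ti in t])

print(np.max(np.abs(x_hat - np.sin(2*np.pi*50*t))))   # small reconstruction error
```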
Natural Sampling
A delta function can be approximated by a rectangular pulse p(t) of width T and unit area:
p(t) = 1/T for −T/2 ≤ t ≤ T/2, and 0 elsewhere.
The sampling waveform is then the pulse train
h_p(t) = Σ_{k=−∞}^{∞} p(t − kT_s).
It can be shown that in this case as well the original signal can be reconstructed from its samples at or above the Nyquist rate through simple low-pass filtering.
Zero-Order-Hold Sampling
x_s(t) = p(t) ∗ [x(t)·h(t)]
[Figure: zero-order-hold sampling holds each sample value for the sampling interval T_s. Reconstruction passes x_s(t) through an equalizer 1/P(f) followed by a low-pass filter G(f) to recover x(t).]
Reconstruction is possible, but an equalizer may be needed.
Practical Considerations of Sampling
Since practical low-pass filters are not ideal and have a finitely steep roll-off, in practice the sampling frequency f_s is about 20% higher than the Nyquist rate:
f_s ≥ 2.2W
Example:
Music in general has a spectrum with frequency components up to about 20 kHz. The ideal, smallest sampling frequency f_s is then 40 Ksamples/sec. The smallest practical sampling frequency is 44 Ksamples/sec. In compact disc players, the sampling frequency is 44.1 Ksamples/sec.
Summary of Sampling (Nyquist) Theorem
- An analog signal of bandwidth W Hz can be reconstructed exactly from its samples taken at a rate at or above 2W samples/s (known as the Nyquist rate).
[Figure: an analog signal x(t) and its sampled version x_s(t), taken at f_s = 1/T_s > 2W.]
Summary of Sampling Theorem (cont’d)
Signal reconstruction
[Figure: the sampled signal x_s(t) is passed through a low-pass filter to recover x(t).]
Amplitude still takes values on a continuum => Infinite number of bits
Need to have a finite number of possible amplitudes => Quantization
Quantization
- Quantization is the process of discretizing the amplitude axis. It involves mapping an infinite number of possible amplitudes to a finite set of values.
- N bits can represent L = 2^N amplitudes; this corresponds to N-bit quantization.
- Quantization can be uniform vs. nonuniform, and scalar vs. vector.
Example (Quantization)
Let N = 3 bits. This corresponds to L = 8 quantization levels:
[Figure: a waveform x(t) quantized by a 3-bit uniform quantizer; the eight levels are labeled 000 through 111.]
Quantization (cont’d)
- There is an irrecoverable error due to quantization. It can be made small through appropriate design.
- Examples:
  - Telephone speech signals: 8-bit quantization
  - CD digital audio: 16-bit quantization
Input-Output Characteristic
[Figure: input-output characteristic x̂ = Q(x) of a 3-bit (8-level) uniform quantizer with step size ∆. Inputs in each interval of width ∆ map to the output levels ±∆/2, ±3∆/2, ±5∆/2, ±7∆/2.]
Quantization error: d = (x − x̂) = (x − Q(x))
Signal-to-Quantization Noise Ratio (SQNR)
For random variables:
SQNR = P_X / D,  where P_X = E[X²] and D = E[(X − Q(X))²].
(This form can also be used for stationary processes.)
For stochastic signals:
SQNR = P_X / D,  where
P_X = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} E[X²(t)] dt,
D = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} E{[X(t) − Q(X(t))]²} dt.
SQNR for Uniform Scalar Quantizers
Let the input x(t) be a sinusoid of amplitude V volts. It can be argued that all amplitudes in [−V, V] are equally likely. Then, if the step size is ∆, the quantization error is uniformly distributed in the interval [−∆/2, ∆/2], so
D = (1/∆) ∫_{−∆/2}^{∆/2} e² de = ∆²/12.
The signal power is
P_X = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} V² sin²(ωt) dt = V²/2.
For an N-bit quantizer, ∆ = 2V/2^N, and therefore
SQNR = 10·log₁₀(P_X/D) = 6.02·N + 1.76 dB.
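A short numerical check of the 6.02·N + 1.76 dB rule (assumed setup: a full-scale sinusoid through an N-bit mid-rise uniform quantizer spanning [−V, V]):

```python
import numpy as np

# Empirical SQNR of a full-scale sinusoid quantized with an N-bit
# mid-rise uniform quantizer of step size Delta = 2V / 2^N.
V, N = 1.0, 8
delta = 2 * V / 2**N

t = np.linspace(0.0, 1.0, 1_000_000, endpoint=False)
x = V * np.sin(2 * np.pi * 7 * t)

# Quantize: pick the centroid of the cell containing x, clipped to the range.
xq = np.clip((np.floor(x / delta) + 0.5) * delta, -V + delta/2, V - delta/2)

sqnr = 10 * np.log10(np.mean(x**2) / np.mean((x - xq)**2))
print(sqnr, 6.02 * N + 1.76)     # both close to 49.9 dB for N = 8
```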
Example
A zero-mean, stationary Gaussian source X(t) having spectral density as given below
is to be quantized using a 2-bit quantization. The quantization intervals and levels are
as indicated below. Find the resulting SQNR.
S_X(f) = 200 / (1 + (2πf)²)   ⇔   R_XX(τ) = 100·e^{−|τ|}
P_X = R_XX(0) = 100
Since X(t) is zero-mean Gaussian with variance 100,
f_X(x) = (1/√(200π))·e^{−x²/200}.
The quantizer boundaries are at 0 and ±10, with quantized values ±5 and ±15. By symmetry,
D = E[(X − Q(X))²] = 2·∫₀^{10} (x − 5)² f_X(x) dx + 2·∫_{10}^{∞} (x − 15)² f_X(x) dx = 11.885
SQNR = 10·log₁₀(100 / 11.885) = 9.25 dB
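The distortion integral above can be checked numerically, for example with scipy (same density, boundaries and quantized values as in the example):

```python
import numpy as np
from scipy.integrate import quad

# Distortion of the 2-bit quantizer (boundaries 0, +/-10; levels +/-5, +/-15)
# for a zero-mean Gaussian source with variance 100.
f_X = lambda x: np.exp(-x**2 / 200.0) / np.sqrt(200.0 * np.pi)

D = 2 * quad(lambda x: (x - 5)**2 * f_X(x), 0, 10)[0] \
    + 2 * quad(lambda x: (x - 15)**2 * f_X(x), 10, np.inf)[0]

print(D)                           # ~ 11.885
print(10 * np.log10(100 / D))      # ~ 9.25 dB
```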
Non-uniform Quantization
- In general, the optimum quantizer is non-uniform.
- Optimality conditions (Lloyd-Max):
  - The boundaries of the quantization intervals are the mid-points of the corresponding quantized values.
  - The quantized values are the centroids of the quantization regions.
  - Optimum quantizers are designed iteratively using the above rules (a brief sketch follows below).
- We can also talk about optimal uniform quantizers. These have equal-length quantization intervals (except possibly the two at the boundaries), and the quantized values are at the centroids of the quantization intervals.
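As a sketch of this iterative design (an assumed toy setup: a 4-level quantizer for a zero-mean, unit-variance Gaussian source, with the centroids approximated from a large Monte Carlo sample rather than computed analytically):

```python
import numpy as np

# Lloyd-Max iteration: boundaries = mid-points of the levels,
# levels = centroids of the resulting regions.  4-level quantizer
# for a unit-variance Gaussian source, centroids estimated by sampling.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=500_000)

levels = np.array([-1.5, -0.5, 0.5, 1.5])            # initial quantized values
for _ in range(50):
    boundaries = (levels[:-1] + levels[1:]) / 2       # mid-point condition
    cells = np.digitize(x, boundaries)                # region of each sample
    levels = np.array([x[cells == i].mean() for i in range(len(levels))])  # centroid condition

print(levels)   # approximately [-1.51, -0.45, 0.45, 1.51] for a Gaussian source
```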
Optimal Quantizers for a Gaussian Source
Companding (compressing-expanding)
[Block diagram: Compressor → Uniform Quantizer → ... → Low-pass Filter → Expander.]
µ-Law companding:
Compressor: g(x) = sgn(x)·ln(1 + µ|x|) / ln(1 + µ),   −1 ≤ x ≤ 1
Expander:  g⁻¹(x) = sgn(x)·[(1 + µ)^{|x|} − 1] / µ,   −1 ≤ x ≤ 1
[Figure: compressor and expander characteristics for µ = 0, µ = 10, and µ = 255.]
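A small sketch of the compressor/expander pair above (µ = 255, the value highlighted in the figure, is assumed here), checking that expanding undoes compressing:

```python
import numpy as np

# mu-law compressor g(x) and its inverse (expander) for x in [-1, 1].
mu = 255.0

def compress(x):
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def expand(y):
    return np.sign(y) * ((1.0 + mu) ** np.abs(y) - 1.0) / mu

x = np.linspace(-1.0, 1.0, 11)
print(compress(x))                                 # compressed values
print(np.max(np.abs(expand(compress(x)) - x)))     # ~ 0: expander inverts the compressor
```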
Examples: Sampling, Quantization
- Speech signals have a bandwidth of about 3.4 kHz. The sampling rate in telephone channels is 8 kHz. With an 8-bit quantization, this results in a bit-rate of 64,000 bits/s to represent speech.
- In CDs, the sampling rate is 44.1 kHz. With a 16-bit quantization, the bit-rate to represent each channel is 705,600 bits/s (without coding).
Data Compression
Analog source → Sampler → Quantizer → Source Encoder → 001011001...
(The sampler and quantizer together form the A/D converter; its output is a discrete source.)
Discrete source → 01101001... → Source Encoder → 10011...
The job of the source encoder is to represent the digitized source efficiently, i.e., using the smallest number of bits.
Discrete Memoryless Sources
Definition: A discrete source is memoryless if successive
symbols produced by it are independent.
For a memoryless source, the probability of a
sequence of symbols being produced equals the
product of the probabilities of the individual
symbols.
Measuring “Information”
Not all sources are created equal. Example:
- Discrete Source 1: P(0) = 1, P(1) = 0 (no information provided)
- Discrete Source 2: P(0) = 0.99, P(1) = 0.01 (little information is provided)
- Discrete Source 3: P(0) = 0.5, P(1) = 0.5 (much information is provided)
Measuring “Information” (cont’d)
The amount of information provided is a function of the probabilities of occurrence of the symbols.
Definition: The self-information of a symbol x which has probability of occurrence p is
I(x) = −log₂(p) bits.
Definition: The average amount of information in bits/symbol provided by a binary source with P(0) = p is
H(x) = −p·log₂(p) − (1 − p)·log₂(1 − p).
H(x) is known as the entropy of the binary source.
The Binary Entropy Function
Maximum information is conveyed when the
probabilities of the two symbols are equal
[Figure: the binary entropy function H(x) versus p; it equals 0 at p = 0 and p = 1 and reaches its maximum of 1 bit at p = 0.5.]
Non-Binary Sources
In general, the entropy of a source that produces L symbols with
probabilities p1, p2, …,pL, is
H(X) = −Σ_{i=1}^{L} pᵢ·log₂(pᵢ) bits
Property: The entropy function satisfies
0 ≤ H(X) ≤ log₂(L),
with equality on the right if and only if the source probabilities are equal.
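A one-function sketch of this definition, checked against the upper bound log₂(L) and against the variable-length-code example on the next slide:

```python
import numpy as np

def entropy(p):
    """Entropy in bits of a source with symbol probabilities p (0*log 0 taken as 0)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

print(entropy([0.5, 0.5]))                 # 1.0 bit
print(entropy([0.25] * 4), np.log2(4))     # equal probabilities reach log2(L) = 2 bits
print(entropy([3/8, 3/8, 1/8, 1/8]))       # ~ 1.811 bits (next slide's example)
```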
Encoding of Discrete Sources
• Fixed-length coding: Assigns source symbols binary
sequences of the same length.
• Variable-length coding: Assigns source symbols
binary sequences of different lengths.
Example: Variable-length code
Symbol   Probability   Codeword   Length
a        3/8           0          1
b        3/8           11         2
c        1/8           100        3
d        1/8           101        3

Average codeword length:
M = Σ_{i=1}^{4} mᵢ·pᵢ = 1×(3/8) + 2×(3/8) + 3×(1/8) + 3×(1/8) = 1.875 bits/symbol
Entropy:
H(X) = −Σ_{i=1}^{4} pᵢ·log₂(pᵢ) = 1.811 bits/symbol
Theorem (Source Coding)
- The smallest possible average number of bits/symbol needed to exactly represent a source equals the entropy of that source.
- Example 1: A binary file of length 1,000,000 bits contains 100,000 "1"s. This file can be compressed by more than a factor of 2:
H(x) = −0.9·log₂(0.9) − 0.1·log₂(0.1) = 0.47 bits
S = 10⁶ × H(x) = 4.7 × 10⁵ bits
Compression ratio = 2.13
Some Data Compression Algorithms
- Huffman coding
- Run-length coding
- Lempel-Ziv
There are also lossy compression algorithms that do not exactly represent the source, but do a good job. These provide much better compression ratios (more than a factor of 10, depending on reproduction quality).
Huffman Coding (by example)
A binary source produces bits with P(0)=0.1. Design a Huffman code that
encodes 3-bit sequences from the source.
Source bits   Probability   Length   Code bits
111           0.729         1        1
110           0.081         3        011
101           0.081         3        010
011           0.081         3        001
100           0.009         5        00011
010           0.009         5        00010
001           0.009         5        00001
000           0.001         5        00000

[Figure: the Huffman code tree. The two least probable nodes are merged repeatedly (intermediate node probabilities 0.01, 0.018, 0.028, 0.109, 0.162, 0.271, 1.0), and codewords are read off the branch labels.]

H(x) = −0.1·log₂(0.1) − 0.9·log₂(0.9) = 0.469
M = (1/3)·(1×0.729 + 3×3×0.081 + 5×3×0.009 + 5×0.001) = 0.53 bits per source bit
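A compact sketch of the Huffman construction for these eight 3-bit blocks (using Python's heapq module; tie-breaking among equal probabilities may yield codewords that differ from the table, but the codeword lengths and the average of about 0.53 bits per source bit are the same):

```python
import heapq
from itertools import count, product

# Huffman code for 3-bit blocks of a binary source with P(0) = 0.1, P(1) = 0.9.
P = {'0': 0.1, '1': 0.9}
blocks = {''.join(b): P[b[0]] * P[b[1]] * P[b[2]] for b in product('01', repeat=3)}

tie = count()
# Heap entries: (probability, tie-breaker, {block: partial codeword}).
heap = [(prob, next(tie), {blk: ''}) for blk, prob in blocks.items()]
heapq.heapify(heap)
while len(heap) > 1:
    p0, _, c0 = heapq.heappop(heap)          # two least probable nodes
    p1, _, c1 = heapq.heappop(heap)
    merged = {b: '0' + w for b, w in c0.items()}
    merged.update({b: '1' + w for b, w in c1.items()})
    heapq.heappush(heap, (p0 + p1, next(tie), merged))

code = heap[0][2]
avg = sum(blocks[b] * len(w) for b, w in code.items()) / 3
print(code)
print(avg)        # ~ 0.533 bits per source bit, versus the entropy H = 0.469
```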
Run-length Coding (by example)
A binary source produces binary digits with P(0)=0.9. Design a run-length code
to compress the source.
Source bits   Run length   Probability   Codeword
1             0            0.100         1000
01            1            0.090         1001
001           2            0.081         1010
0001          3            0.073         1011
00001         4            0.066         1100
000001        5            0.059         1101
0000001       6            0.053         1110
00000001      7            0.048         1111
00000000      8            0.430         0

Average number of code bits per codeword:
M₁ = 4×(1 − 0.43) + 1×0.43 = 2.71
Average number of source bits per codeword:
M₂ = 0.1 + 2×0.09 + 3×0.081 + 4×0.073 + 5×0.066 + 6×0.059 + 7×0.053 + 8×0.048 + 8×0.430 = 5.710
Code rate:
M = M₁/M₂ = 2.710/5.710 = 0.475 bits per source bit
H(X) = −0.9·log₂(0.9) − 0.1·log₂(0.1) = 0.469 < 0.475
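A rough empirical check of the 0.475 figure (an assumed simulation: encode a long i.i.d. sequence with P(0) = 0.9 using the codeword table above, then count output bits per input bit):

```python
import numpy as np

# Simulate a source with P(0) = 0.9 and run-length encode it:
# a run terminated by a '1' costs 4 code bits, a full run of eight
# zeros costs 1 code bit ('0'), as in the table above.
rng = np.random.default_rng(0)
bits = rng.random(1_000_000) < 0.1      # True means a '1' (probability 0.1)

out_bits, run = 0, 0
for b in bits:
    if b:
        out_bits += 4                   # codeword 1xxx for a run of 0..7 zeros
        run = 0
    else:
        run += 1
        if run == 8:
            out_bits += 1               # codeword '0' for eight zeros
            run = 0
# any trailing partial run is ignored in this rough estimate

print(out_bits / len(bits))             # ~ 0.475 bits out per source bit
```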
Examples: Speech Compression
- Toll-quality speech signals can be produced at 8 Kbps (a factor of 8 compression compared to uncompressed telephone signals).
- Algorithms that produce speech at 4.8 Kbps or even 2.4 Kbps are available, but have reduced quality and require complex processing.
Example: Video Compression
- Uncompressed video of a 640×480-pixel image at 8 bits/pixel and 30 frames/s requires a data rate of 72 Mbps.
- Video-conference systems operate at 384 Kbps.
- MPEG-2 (standard) operates at 3 Mbps.