ch5.2 (Waveform Coding).ppt

PCM & DPCM & DM
Pulse-Code Modulation (PCM) :

In PCM each sample of the signal is quantized to one of 2^B amplitude levels, where B is the number of bits used to represent each sample. The rate from the source is B·Fs bps.

The quantized waveform is modeled as

    \tilde{s}(n) = s(n) + q(n)

where q(n) represents the quantization error, which we treat as additive noise.
Pulse-Code Modulation (PCM) :
The quantization noise is characterized as a realization of a stationary random process q in which each of the random variables q(n) has a uniform pdf:

    p(q) = 1/\Delta, \qquad -\Delta/2 \le q \le \Delta/2

where the step size of the quantizer is \Delta = 2^{-B} (for a signal normalized to unit maximum amplitude).
Pulse-Code Modulation (PCM) :
If Amax is the maximum amplitude of the signal, the step size is

    \Delta = \frac{A_{\max}}{2^{B}}

The mean square value of the quantization error is

    \sigma_q^2 = \frac{1}{\Delta}\int_{-\Delta/2}^{\Delta/2} q^2 \, dq
               = \frac{1}{3\Delta}\, q^3 \Big|_{-\Delta/2}^{\Delta/2}
               = \frac{\Delta^2}{12}
               = \frac{A_{\max}^2}{12 \cdot 2^{2B}}

Measured in dB (with Amax = 1), the mean square value of the noise is

    10\log_{10}\sigma_q^2 = 10\log_{10}\frac{2^{-2B}}{12} = -6B - 10.8 \text{ dB}
Pulse-Code Modulation (PCM) :
The quantization noise decreases by 6 dB per bit.

If the headroom factor is h, then

    X_{rms} = \frac{A_{\max}}{h} = \frac{2^{B}\Delta}{h}

The signal-to-noise (S/N) ratio is given by (with Amax = 1)

    SNR = \frac{S}{N} = \frac{X_{rms}^2}{\Delta^2/12} = \frac{12 \cdot 2^{2B}}{h^2}

In dB, this is

    SNR_{dB} = 10\log_{10}\frac{12 \cdot 2^{2B}}{h^2} = 6B + 10.8 - 20\log_{10} h
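As a sanity check on the formula above, here is a minimal numpy sketch (not from the slides): it quantizes a full-scale test sine with a mid-tread uniform quantizer using the slides' step-size convention Δ = Amax/2^B and compares the measured SNR with 6B + 10.8 − 20·log10(h). The rounding quantizer, test signal, and sample count are illustrative assumptions.

    # Minimal sketch: check SNR_dB = 6B + 10.8 - 20*log10(h)
    # for a uniform B-bit quantizer with step size Delta = Amax / 2**B, Amax = 1.
    import numpy as np

    def uniform_quantize(x, B, Amax=1.0):
        delta = Amax / 2**B                      # step size convention from the slides
        return delta * np.round(x / delta)       # mid-tread uniform quantizer

    B = 8
    n = np.arange(100_000)
    x = np.sin(2 * np.pi * 0.01234 * n)          # full-scale sine: h = Amax/Xrms = sqrt(2)

    q = uniform_quantize(x, B) - x               # quantization error
    snr_measured = 10 * np.log10(np.mean(x**2) / np.mean(q**2))
    h = 1.0 / np.sqrt(np.mean(x**2))
    snr_theory = 6 * B + 10.8 - 20 * np.log10(h)

    print(f"measured {snr_measured:.1f} dB, formula {snr_theory:.1f} dB")
    # both should be close to 6*8 + 10.8 - 3.0 = 55.8 dB for a full-scale sine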
Pulse-Code Modulation (PCM) :
Example:

We require an S/N ratio of 60 dB, and a headroom factor of 4 is acceptable. The required word length is then

    60 = 10.8 + 6B - 20\log_{10} 4 \quad\Rightarrow\quad B \ge 10.2 \Rightarrow B = 11 \text{ bits}

If we sample at 8 kHz, PCM requires 8000 × 11 = 88000 bit/s.
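The same design calculation can be scripted; a minimal sketch follows (the helper name pcm_word_length is assumed, not from the slides).

    # Word length and bit rate for a target SNR, using SNR_dB = 6B + 10.8 - 20*log10(h).
    import math

    def pcm_word_length(snr_db, headroom):
        b = (snr_db - 10.8 + 20 * math.log10(headroom)) / 6   # solve for B
        return math.ceil(b)                                    # round up to whole bits

    B = pcm_word_length(60, 4)          # -> 11 bits (B = 10.2 before rounding)
    print(B, 8000 * B, "bit/s")         # -> 11 88000 bit/s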
Pulse-Code Modulation (PCM) :
A nonuniform quantizer characteristic is usually obtained by passing the signal through a nonlinear device that compresses the signal amplitude, followed by a uniform quantizer.

[Block diagram: Compressor -> A/D ... D/A -> Expander. The compressor-expander pair is called a compander.]
Pulse-Code Modulation (PCM) :
A logarithmic compressor employed in North American telecommunications systems (the μ-law) has an input-output magnitude characteristic of the form

    |y| = \frac{\log(1 + \mu|s|)}{\log(1 + \mu)}

where μ is a parameter that is selected to give the desired compression characteristic.
Pulse-Code Modulation (PCM) :
The logarithmic compressor used in European telecommunications systems is called the A-law and is defined as

    |y| = \frac{\log(1 + A|s|)}{1 + \log A}

where A is the compression parameter.
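A minimal numpy sketch of companding around a uniform quantizer, using the μ-law compressor formula above; the values μ = 255 and B = 8 and the small-amplitude test signal are assumptions for illustration, not taken from the slides.

    # mu-law compander sketch (assumptions: mu = 255, B = 8).
    import numpy as np

    def mu_compress(s, mu=255.0):
        # |y| = log(1 + mu|s|) / log(1 + mu), sign preserved
        return np.sign(s) * np.log1p(mu * np.abs(s)) / np.log1p(mu)

    def mu_expand(y, mu=255.0):
        # inverse of the compressor
        return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

    def compand_quantize(s, B=8, mu=255.0):
        delta = 1.0 / 2**B                       # uniform step after compression
        y = mu_compress(s, mu)
        yq = delta * np.round(y / delta)         # uniform quantizer
        return mu_expand(yq, mu)                 # expander restores the scale

    x = np.clip(0.01 * np.random.randn(10_000), -1, 1)    # small-amplitude input
    err_lin = x - np.round(x * 2**8) / 2**8                # plain uniform quantizer
    err_cmp = x - compand_quantize(x)
    print(10 * np.log10(np.mean(err_lin**2) / np.mean(err_cmp**2)), "dB improvement")

For low-level signals the companded quantizer gives a markedly smaller error than the uniform one, which is the point of the nonuniform characteristic.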
DPCM :
• Consider a sampled sequence u(m), m = 0 to m = n-1.
• Let ũ(n-1), ũ(n-2), ... be the values of the reproduced (decoded) sequence.
DPCM:
At m = n, when u(n) arrives, a quantity ū(n), an estimate of u(n), is predicted from the previously decoded samples ũ(n-1), ũ(n-2), ..., i.e.,

    \bar{u}(n) = \psi(\tilde{u}(n-1), \tilde{u}(n-2), \ldots)

where ψ(·) is the "prediction rule".

Prediction error:

    e(n) = u(n) - \bar{u}(n)
DPCM :
If ẽ(n) is the quantized value of e(n), then the reproduced value of u(n) is

    \tilde{u}(n) = \bar{u}(n) + \tilde{e}(n)

Note:

    u(n) = \bar{u}(n) + e(n)

    u(n) - \tilde{u}(n) = (\bar{u}(n) + e(n)) - (\bar{u}(n) + \tilde{e}(n))
                        = e(n) - \tilde{e}(n)
                        = q(n), \text{ the quantization error in } e(n)
DPCM CODEC:
[Block diagram: Coder — u(n) minus the predictor output ū(n) gives e(n); e(n) is quantized to ẽ(n) and sent over the communication channel; ẽ(n) plus ū(n) gives ũ(n), which feeds the predictor. Decoder — the received ẽ(n) plus the predictor output ū(n) gives ũ(n).]
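A minimal Python sketch of the DPCM loop in the diagram above, assuming a first-order predictor ū(n) = a·ũ(n-1) and a uniform quantizer; the coefficient a = 0.95 and the step size are illustrative assumptions.

    # Minimal DPCM sketch (assumptions: first-order predictor a = 0.95,
    # uniform quantizer with fixed step). Mirrors the codec diagram above.
    import numpy as np

    def dpcm_encode(u, a=0.95, step=0.05):
        codes, u_rec = [], 0.0                   # u_rec plays the role of u~(n-1)
        for x in u:
            pred = a * u_rec                     # u_bar(n): prediction
            e = x - pred                         # e(n): prediction error
            code = int(np.round(e / step))       # quantizer index (transmitted)
            u_rec = pred + code * step           # u~(n) = u_bar(n) + e~(n)
            codes.append(code)
        return codes

    def dpcm_decode(codes, a=0.95, step=0.05):
        out, u_rec = [], 0.0
        for code in codes:
            u_rec = a * u_rec + code * step      # same recursion as the encoder
            out.append(u_rec)
        return np.array(out)

    u = np.sin(2 * np.pi * 0.01 * np.arange(500))        # slowly varying test signal
    u_hat = dpcm_decode(dpcm_encode(u))
    print("max reconstruction error:", np.max(np.abs(u - u_hat)))   # ~ step/2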
DPCM:
Remarks:

• The pointwise coding error in the input sequence is exactly equal to q(n), the quantization error in e(n).
• With a reasonable predictor, the mean square value of the differential signal e(n) is much smaller than that of u(n).
DPCM:
Conclusion:

• For the same mean square quantization error, e(n) requires fewer quantization bits than u(n).
• The number of bits required for transmission has been reduced while the quantization error is kept the same.
DPCM modified by the addition of
linearly filtered error sequence
[Block diagram: DPCM coder and decoder in which the quantized error sequence ẽ(n) is passed through a linear filter {b̂(i)} and the reconstructed signal through a linear filter {â(i)}; the two filter outputs are summed to form the prediction. The decoder mirrors the same two-filter structure.]
Adaptive PCM and Adaptive DPCM
• Speech signals are quasi-stationary in nature: the variance and the autocorrelation function of the source output vary slowly with time.
• PCM and DPCM assume that the source output is stationary.
• The efficiency and performance of these encoders can be improved by adapting to the slowly time-variant statistics of the speech signal.
• The adaptive quantizer can be either feedforward or feedback (see the sketch below).
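A minimal sketch of the feedforward case, assuming block-wise scaling of the quantizer step by the measured RMS of each block (sent as side information); the block length, bit count, and scaling rule are illustrative assumptions, not values from the slides.

    # Feedforward adaptive quantization sketch (assumptions: block length 128,
    # step proportional to the block RMS, transmitted as side information).
    import numpy as np

    def adaptive_pcm_feedforward(x, B=4, block=128):
        out = np.empty_like(x)
        for start in range(0, len(x), block):
            seg = x[start:start + block]
            rms = np.sqrt(np.mean(seg**2)) + 1e-12      # side information per block
            delta = 2 * rms / 2**B                       # step matched to the local level
            out[start:start + block] = delta * np.round(seg / delta)
        return out

    # Speech-like test: a signal whose level changes slowly over time
    n = np.arange(4096)
    x = np.sin(2 * np.pi * 0.03 * n) * (0.05 + 0.95 * (n / n.max()))
    err = x - adaptive_pcm_feedforward(x)
    print("SNR:", 10 * np.log10(np.mean(x**2) / np.mean(err**2)), "dB")

The feedback (backward) alternative adapts the step from previously quantized outputs only, so no side information is needed; the next slide shows such a quantizer.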
Example of quantizer with an
adaptive step size
[Quantizer characteristic: a 3-bit midrise quantizer with output codes 000–111, output levels ±Δ/2, ±3Δ/2, ±5Δ/2, ±7Δ/2, and decision thresholds at 0, ±Δ, ±2Δ, ±3Δ. Each output level is associated with a step-size multiplier M(1)–M(4), from the innermost to the outermost level; the multiplier selected by the previous output scales the step size Δ for the next sample.]
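A minimal sketch of the backward (feedback) step-size adaptation suggested by the figure: a 3-bit quantizer whose step is multiplied by M(k), with k determined by the previous output level. The specific multiplier values and step limits below are illustrative assumptions.

    # Backward-adaptive quantizer sketch matching the figure: 3-bit midrise
    # quantizer, step scaled by a multiplier chosen from the previous output.
    import numpy as np

    M = {1: 0.85, 2: 1.0, 3: 1.3, 4: 2.0}        # assumed M(1)..M(4), inner to outer

    def adaptive_quantize(x, delta0=0.05, dmin=1e-4, dmax=1.0):
        delta, out = delta0, []
        for s in x:
            level = int(np.clip(np.floor(abs(s) / delta), 0, 3)) + 1   # 1..4
            y = np.sign(s) * (2 * level - 1) * delta / 2               # ±Δ/2 .. ±7Δ/2
            out.append(y)
            delta = np.clip(delta * M[level], dmin, dmax)              # adapt the step
        return np.array(out)

    x = np.concatenate([0.02 * np.random.randn(500), 0.5 * np.random.randn(500)])
    err = x - adaptive_quantize(x)
    print("SNR:", 10 * np.log10(np.mean(x**2) / np.mean(err**2)), "dB")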
ADPCM with adaptation of the predictor
[Block diagram: ADPCM encoder with step-size adaptation on the quantizer and adaptation of the predictor; u(n) minus the predictor output gives e(n), which is quantized to ẽ(n) and sent over the communication channel. The decoder mirrors the same adaptive quantizer and adaptive predictor, driven by the received ẽ(n).]
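The slides do not specify how the predictor is adapted; one common backward-adaptive choice is a gradient (LMS-style) update of the predictor coefficient driven by the quantized error, sketched below with an assumed first-order predictor, step size, and adaptation constant.

    # Predictor-adaptation sketch (assumption: LMS-style update of a first-order
    # predictor coefficient, using only quantities the decoder also has).
    import numpy as np

    def adpcm_encode(u, step=0.05, mu=0.01):
        a, u_rec, codes = 0.0, 0.0, []
        for x in u:
            pred = a * u_rec                                  # prediction from u~(n-1)
            code = int(np.round((x - pred) / step))           # quantized prediction error
            e_q = code * step
            a = np.clip(a + mu * e_q * u_rec, -0.99, 0.99)    # LMS update, kept stable
            u_rec = pred + e_q                                # reconstructed sample
            codes.append(code)
        return codes

Because the update uses only the quantized error and the reconstructed sample, the decoder can repeat the identical recursion from the received codes, so the adapted coefficient never has to be transmitted.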
Delta Modulation : (DM)
• Predictor: one-step delay function
• Quantizer: 1-bit quantizer

    \bar{u}(n) = \tilde{u}(n-1)

    e(n) = u(n) - \tilde{u}(n-1)
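A minimal sketch of delta modulation with the one-step-delay predictor and 1-bit quantizer above; the fixed step size and test signal are illustrative assumptions.

    # Delta modulation sketch: 1-bit quantizer + one-step-delay predictor.
    import numpy as np

    def dm_encode(u, step=0.05):
        bits, u_rec = [], 0.0
        for x in u:
            b = 1 if x >= u_rec else 0           # sign of e(n) = u(n) - u~(n-1)
            u_rec += step if b else -step        # integrate the 1-bit decisions
            bits.append(b)
        return bits

    def dm_decode(bits, step=0.05):
        u_rec, out = 0.0, []
        for b in bits:
            u_rec += step if b else -step        # same integrator as the encoder
            out.append(u_rec)
        return np.array(out)

    u = np.sin(2 * np.pi * 0.01 * np.arange(400))
    u_hat = dm_decode(dm_encode(u))
    # the coder can track a slope of at most step * sampling rate: too small a
    # step gives slope overload, too large a step gives granular noise.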
Delta Modulation : (DM)
Primary limitations of DM:

• Slope overload: large-jump regions
  (max. slope the coder can track = (step size) × (sampling freq.))
• Granular noise: almost-constant regions
• Instability to channel noise
DM:
[Block diagram: Coder — u(n) minus the one-step-delayed reconstruction ū(n) gives e(n), which the 1-bit quantizer maps to ẽ(n); ẽ(n) is accumulated through a unit delay (integrator) to form ũ(n). Decoder — the received ẽ(n) drives the same integrator/unit-delay loop to reproduce ũ(n).]
DM:
Step-size effect:

[Waveform sketch: for a given sampling frequency, too small a step size produces (i) slope overload, while too large a step size produces (ii) granular noise.]
Adaptive DM:
Ek 1
sk 1
 k , E k ,  min
Adaptive
Function
Xk
Unit Delay
X k 1
Stored
 k 1
Ek 1  sgn [ S K 1  X k ]
E


 |  k | [ Ek 1  k ] if |  k |  min 
 k 1  

2

if |  k |  min 
 min Ek 1

X k 1  X k   k 1
This adaptive approach simultaneously minimizes the effects of both
slope overload and granular noise
24
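A minimal sketch of the adaptive delta modulator defined by the recursions above; the initial step, Δ_min, and the test signal are assumptions for illustration.

    # Adaptive DM sketch implementing the recursions above.
    import numpy as np

    def adaptive_dm(s, delta_min=0.01, delta0=0.05):
        X, E_prev, delta = 0.0, 1.0, delta0
        X_hat = []
        for s_next in s:
            E = 1.0 if s_next - X >= 0 else -1.0          # E_{k+1} = sgn[s_{k+1} - X_k]
            if abs(delta) >= delta_min:
                delta = abs(delta) * (E + 0.5 * E_prev)   # grow on same sign, shrink on reversal
            else:
                delta = delta_min * E
            X = X + delta                                  # X_{k+1} = X_k + Delta_{k+1}
            X_hat.append(X)
            E_prev = E
        return np.array(X_hat)

    s = np.sin(2 * np.pi * 0.02 * np.arange(600))
    err = s - adaptive_dm(s)
    print("rms tracking error:", np.sqrt(np.mean(err**2)))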
Vector Quantization
(VQ)
Vector Quantization :
Quantization is the process of approximating continuous-amplitude signals by discrete symbols.

[Figure: partitioning of two-dimensional space into 16 cells.]
Vector Quantization :
• The LBG algorithm first computes a 1-vector codebook, then uses a splitting algorithm on the codeword to obtain the initial 2-vector codebook, and continues the splitting process until the desired M-vector codebook is obtained.
• This algorithm is known as the LBG algorithm, proposed by Linde, Buzo and Gray.
Vector Quantization :

The LBG Algorithm:

Step 1: Set M (the number of partitions or cells) = 1. Find the centroid of all the training data.

Step 2: Split the M partitions into 2M partitions by splitting each current codeword: find two points that are far apart in each partition using a heuristic method, and use these two points as the new centroids for the 2M-vector codebook. Now set M = 2M.

Step 3: Use an iterative algorithm (see the sketch below) to reach the best set of centroids for the new codebook.

Step 4: If M equals the required VQ codebook size, STOP; otherwise go to Step 2.
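A minimal numpy sketch of the split-and-refine procedure in Steps 1–4, assuming Lloyd (k-means) iterations for Step 3 and a small perturbation of each centroid as the splitting heuristic in Step 2 (the slides instead suggest picking two far-apart points); the target codebook size is assumed to be a power of two.

    # LBG sketch: split each codeword, then refine with Lloyd (k-means) iterations.
    import numpy as np

    def lbg(train, codebook_size, n_iter=20, eps=1e-3):
        codebook = train.mean(axis=0, keepdims=True)         # Step 1: single centroid
        while len(codebook) < codebook_size:                  # Step 4: repeat until size reached
            codebook = np.vstack([codebook + eps,             # Step 2: split every codeword
                                  codebook - eps])
            for _ in range(n_iter):                           # Step 3: refine the centroids
                # assign each training vector to its nearest codeword
                d = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
                nearest = d.argmin(axis=1)
                for k in range(len(codebook)):                # move codewords to cell centroids
                    members = train[nearest == k]
                    if len(members):
                        codebook[k] = members.mean(axis=0)
        return codebook

    train = np.random.randn(2000, 2)                          # 2-D training vectors
    cb = lbg(train, 16)                                        # 16-cell partition, as in the figure
    print(cb.shape)                                            # -> (16, 2)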