
Lecture 6: SCALAR QUANTIZATION

Chapter 8  SCALAR QUANTIZATION
Yeuan-Kuen Lee  [ MCU, CSIE ]

Outline
8.1 Overview
8.2 Introduction
8.3 The Quantization Problem
8.4 Uniform Quantizer
8.5 Adaptive Quantization
8.6 Nonuniform Quantization
8.7 Entropy-Coded Quantization
8.1 Overview

In this chapter, we begin our study of quantization, one of the simplest and most general ideas in lossy compression:
- scalar quantization (in this chapter), and
- vector quantization (in the next chapter).

8.2 Introduction

In many lossy compression applications we are required to represent each source output using one of a small number of codewords.
The number of possible distinct source output values is generally much larger than the number of codewords available to represent them.
The process of representing a large (possibly infinite) set of values with a much smaller set is called quantization.

Consider a source that produces values in the range -10.0 to 10.0 (an infinite number of possible values) that must be represented with only 21 values, { -10, -9, ..., 0, ..., 9, 10 }.
A simple quantization scheme would be to represent each output of the source with the integer value closest to it
(if the source output is equally close to two integers, we randomly pick one of them).
For example, 2.47 is represented by 2, and 3.1415926 by 3.
At the same time, we have forever lost the original value of the source output: a reconstructed 3 could have come from 2.95, 3.16, 3.057932, or any other of an infinite set of values.
We have lost some information - this is lossy compression.
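As a quick illustration of this rounding scheme, here is a minimal Python sketch (added here, not part of the original slides); the clamping to the 21-value output set and the random tie-breaking follow the description above:

```python
import random

def quantize_to_integer(x):
    """Map a source value in [-10.0, 10.0] to the nearest of the 21 integers
    {-10, ..., 10}; ties (e.g., 2.5) are broken by a random choice."""
    lower = int(x // 1)          # largest integer <= x
    upper = lower + 1
    if x - lower < upper - x:
        q = lower
    elif x - lower > upper - x:
        q = upper
    else:                        # exactly halfway between two integers
        q = random.choice([lower, upper])
    return max(-10, min(10, q))  # clamp to the 21-value output set

print(quantize_to_integer(2.47))       # -> 2
print(quantize_to_integer(3.1415926))  # -> 3
```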
8.2 Introduction

The inputs and outputs of a quantizer can be scalars or vectors:
- scalar input and output: scalar quantizer (Chapter 8);
- vector input and output: vector quantizer (Chapter 9).

8.3 The Quantization Problem

In practice, the quantizer consists of two mappings, an encoder mapping and a decoder mapping.

Encoder mapping (irreversible):
- Divides the range of values that the source generates into a number of intervals. Each interval is represented by a distinct codeword.
- When the sample value comes from an analog source, the encoder is called an analog-to-digital (A/D) converter.

Decoder mapping:
- Because a codeword represents an entire interval, and there is no way of knowing which value in the interval was actually generated by the source, the decoder puts out a value that, in some sense, best represents all the values in the interval, e.g., the midpoint of the interval.
- If the reconstruction is analog, the decoder is often referred to as a digital-to-analog (D/A) converter.
8.3 The Quantization Problem

Figure 8.1 Mapping for a 3-bit encoder.

  Input              Code
  x ≤ -3.0           000
  -3.0 < x ≤ -2.0    001
  -2.0 < x ≤ -1.0    010
  -1.0 < x ≤ 0       011
  0 < x ≤ 1.0        100
  1.0 < x ≤ 2.0      101
  2.0 < x ≤ 3.0      110
  x > 3.0            111

Figure 8.2 Mapping for a 3-bit D/A converter.

  Code   Output
  000    -3.5
  001    -2.5
  010    -1.5
  011    -0.5
  100     0.5
  101     1.5
  110     2.5
  111     3.5
8.3 The Quantization Problem

Example 8.3.1
Suppose a sinusoid 4cos(2πt) was sampled every 0.05 second.
The samples were digitized using the A/D mapping shown in Figure 8.1,
and then reconstructed using the D/A mapping shown in Figure 8.2.
The first few inputs, codewords, and reconstruction values are given in Table 8.1.

Table 8.1 Digitizing a sine wave.

  t      4cos(2πt)   A/D Output   D/A Output   Error
  0.05   3.804       111          3.5           0.304
  0.10   3.236       111          3.5          -0.264
  0.15   2.351       110          2.5          -0.149
  0.20   1.236       101          1.5          -0.264
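The mappings of Figures 8.1 and 8.2 are simple lookup rules, so Table 8.1 can be reproduced with a short Python sketch (an illustration added here, not part of the original slides):

```python
import math

# Decision boundaries of Figure 8.1 and reconstruction levels of Figure 8.2.
BOUNDARIES = [-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0]
LEVELS = [-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]

def ad_convert(x):
    """Encoder (A/D): return the 3-bit code of the interval containing x."""
    index = sum(1 for b in BOUNDARIES if x > b)   # number of boundaries below x
    return format(index, "03b")

def da_convert(code):
    """Decoder (D/A): return the reconstruction level for a 3-bit code."""
    return LEVELS[int(code, 2)]

# Digitize the sinusoid 4cos(2*pi*t) sampled every 0.05 s (Table 8.1).
for n in range(1, 5):
    t = 0.05 * n
    x = 4 * math.cos(2 * math.pi * t)
    code = ad_convert(x)
    y = da_convert(code)
    print(f"{t:4.2f}  {x:6.3f}  {code}  {y:4.1f}  {x - y:6.3f}")
```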
8.3 The Quantization Problem

Construction of the intervals (their location, etc.) can be viewed as part of the design of the encoder.
Selection of the reconstruction values is part of the design of the decoder.
The fidelity of the reconstruction depends on both the intervals and the reconstruction values.
We call this encoder-decoder pair a quantizer.

Figure 8.3 Quantizer input-output map: the encoder of Figure 8.1 followed by the decoder of Figure 8.2 gives a staircase input-output map (for example, an input of 1.7 is reconstructed as 1.5, and an input of -0.3 as -0.5).
8.3 The Quantization Problem

Distortion: the average squared difference between the quantizer input and output.
We call this the mean squared quantization error (msqe) and denote it by σq².
The rate of the quantizer is the average number of bits required to represent a single quantizer output.
We would like to get the lowest distortion for a given rate, or the lowest rate for a given distortion.

Suppose we have an input modeled by a random variable X with pdf fX(x).
If we wished to quantize this source using a quantizer with M intervals, we would have to specify M + 1 endpoints for the intervals, and a representative value for each of the M intervals.
The endpoints of the intervals are known as decision boundaries, while the representative values are called reconstruction levels.

For the 8-level quantizer of Figures 8.1 and 8.2, the decision boundaries are { -∞, -3.0, -2.0, -1.0, 0, 1.0, 2.0, 3.0, ∞ }, the codes are 000 through 111, and the reconstruction levels are { -3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5 }.
8.3 The Quantization Problem

Let us denote the decision boundaries by { bi }, i = 0, 1, ..., M,
the reconstruction levels by { yi }, i = 1, ..., M,
and the quantization operation by Q(·).

Then

  Q(x) = yi   iff   bi-1 < x ≤ bi        (8.1)

The mean squared quantization error is then given by

  σq² = ∫_{-∞}^{∞} (x - Q(x))² fX(x) dx        (8.2)
      = ∑_{i=1}^{M} ∫_{bi-1}^{bi} (x - yi)² fX(x) dx        (8.3)

The difference between the quantizer input x and output y = Q(x), besides being referred to as the quantization error, is also called the quantizer distortion or quantization noise.

Figure 8.4 Additive noise model of a quantizer:
  Quantizer output = Quantizer input + Quantization noise
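As an illustration of Equation (8.3), the msqe can be evaluated numerically once a pdf is assumed; the zero-mean unit-variance Gaussian below is an arbitrary choice for this sketch, not something specified in the slides:

```python
import math

def gaussian_pdf(x, sigma=1.0):
    """Assumed input pdf fX(x): zero-mean Gaussian (for illustration only)."""
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def msqe(boundaries, levels, pdf, steps=5000):
    """Equation (8.3): sum over intervals of the integral of (x - yi)^2 fX(x) dx,
    computed with a simple midpoint rule on each interval."""
    total = 0.0
    for i, y in enumerate(levels):
        lo, hi = boundaries[i], boundaries[i + 1]
        width = (hi - lo) / steps
        for k in range(steps):
            x = lo + (k + 0.5) * width
            total += (x - y) ** 2 * pdf(x) * width
    return total

# 8-level quantizer of Figures 8.1/8.2; +/- 8 stands in for +/- infinity,
# since the Gaussian tail beyond that is negligible.
b = [-8.0, -3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 8.0]
y = [-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5]
print("msqe =", msqe(b, y, gaussian_pdf))
```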
8.3 The Quantization Problem

If we use fixed-length codewords to represent the quantizer output, then the size of the output alphabet immediately specifies the rate.
If the number of quantizer outputs is M, then the rate is given by

  R = ⌈ log2 M ⌉        (8.4)

For example, if M = 8, then R = 3.
In this case, we can pose the quantizer design problem as follows:
given an input pdf fX(x) and the number of levels M in the quantizer,
find the decision boundaries { bi } and the reconstruction levels { yi }
so as to minimize the mean squared quantization error given by Equation (8.3).

However, if we are allowed to use variable-length codes, such as Huffman codes or arithmetic codes, along with the size of the alphabet, the selection of the decision boundaries will also affect the rate of the quantizer.

Table 8.2 Codeword assignment for an eight-level quantizer.

  Output     y1     y2     y3    y4   y5   y6    y7     y8
  Codeword   1110   1100   100   00   01   101   1101   1111

The rate will depend on the probability of occurrence of the outputs.
If li is the length of the codeword corresponding to the output yi, and P(yi) is the probability of occurrence of yi, then the rate is given by

  R = ∑_{i=1}^{M} li P(yi)        (8.5)
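A small sketch contrasting Equations (8.4) and (8.5) for the codewords of Table 8.2; the output probabilities used here are made up purely for illustration:

```python
# Rate of the eight-level quantizer of Table 8.2 under Equation (8.5).
# The output probabilities below are hypothetical; the slides do not specify them.
codewords = ["1110", "1100", "100", "00", "01", "101", "1101", "1111"]
probs     = [0.02,   0.08,   0.18,  0.22, 0.22, 0.18,  0.08,   0.02]

rate = sum(len(c) * p for c, p in zip(codewords, probs))   # Equation (8.5)
print("variable-length rate:", rate, "bits/output")        # 2.76 < 3 for these probs
print("fixed-length rate   :", 3, "bits/output")           # ceil(log2(8)), Eq. (8.4)
```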
8.3 The Quantization Problem

For a given source input:
- The partition we select and the representation for these partitions will determine the distortion incurred during the quantization process.
- The partitions we select and the binary codes for the partitions will determine the rate for the quantizer.

However, the probabilities { P(yi) } depend on the decision boundaries { bi }.
For example, the probability of yi occurring is given by

  P(yi) = ∫_{bi-1}^{bi} fX(x) dx

Therefore, the rate is a function of the decision boundaries and is given by the expression

  R = ∑_{i=1}^{M} li ∫_{bi-1}^{bi} fX(x) dx        (8.6)

Thus, the problems of finding the optimum partitions, codes, and representation levels are all linked.
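A sketch of Equation (8.6): with an assumed Gaussian input pdf (an illustrative choice, not from the slides), the interval probabilities P(yi) are CDF differences between consecutive decision boundaries, and the rate follows directly:

```python
import math

def gaussian_cdf(x):
    """CDF of a zero-mean unit-variance Gaussian (assumed pdf, for illustration)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def rate(boundaries, lengths):
    """Equation (8.6): R = sum_i li * integral from b(i-1) to bi of fX(x) dx.
    For a Gaussian pdf the interval probabilities are CDF differences."""
    r = 0.0
    for i, l in enumerate(lengths):
        p_yi = gaussian_cdf(boundaries[i + 1]) - gaussian_cdf(boundaries[i])  # P(yi)
        r += l * p_yi
    return r

# Boundaries of Figure 8.1 (with +/- infinity) and codeword lengths of Table 8.2.
b = [-math.inf, -3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0, math.inf]
lengths = [4, 4, 3, 2, 2, 3, 4, 4]
print("rate =", rate(b, lengths), "bits/output")
```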
8.3 The Quantization Problem

In light of this information, we can restate our problem statement:

Given a distortion constraint

  σq² ≤ D*        (8.7)

find the decision boundaries, reconstruction levels, and binary codes that minimize the rate given by Equation (8.6), while satisfying Equation (8.7).

Or, given a rate constraint

  R ≤ R*        (8.8)

find the decision boundaries, reconstruction levels, and binary codes that minimize the distortion given by Equation (8.3), while satisfying Equation (8.8).
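One concrete (and deliberately simplified) reading of the rate-constrained formulation: with fixed-length codes, R ≤ R* fixes M = 2^R* levels, and for a uniform quantizer (introduced in the next section) the only remaining design choice is the step size ∆. The sketch below searches ∆ by Monte Carlo for an assumed Gaussian input; both the input model and the brute-force search are illustrative assumptions, not the design procedures developed later in the chapter:

```python
import math
import random

def uniform_midrise(x, delta, m):
    """M-level uniform midrise quantizer with step size delta:
    boundaries at multiples of delta, levels at odd multiples of delta/2."""
    y = (math.floor(x / delta) + 0.5) * delta
    y_max = (m / 2 - 0.5) * delta            # outermost reconstruction level
    return max(-y_max, min(y_max, y))

def mc_msqe(delta, m, n=50_000):
    """Monte Carlo estimate of the msqe for a unit-variance Gaussian input."""
    rng = random.Random(0)
    return sum((x - uniform_midrise(x, delta, m)) ** 2
               for x in (rng.gauss(0.0, 1.0) for _ in range(n))) / n

# Rate constraint R* = 3 bits with fixed-length codes -> M = 2**3 = 8 levels.
M = 8
candidates = [d / 100 for d in range(20, 121, 5)]   # step sizes 0.20 .. 1.20
best_delta = min(candidates, key=lambda d: mc_msqe(d, M))
print("best step size ~", best_delta, "msqe ~", round(mc_msqe(best_delta, M), 4))
```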
8.4 Uniform Quantization

The simplest type of quantizer is the uniform quantizer.
All intervals are the same size in the uniform quantizer, except possibly for the outer intervals (i.e., the decision boundaries are spaced evenly).
The reconstruction values are also spaced evenly, with the same spacing as the decision boundaries; in the inner intervals, they are the midpoints of the intervals.
The constant spacing is usually referred to as the step size and is denoted by ∆.
The quantizer shown in Figure 8.3 is a uniform quantizer with ∆ = 1.
8.4 Uniform Quantization

Midrise quantizer (Figure 8.3): does not have zero as one of its representation levels.
Midtread quantizer (Figure 8.5): has zero as one of its representation levels.
Usually, we use a midrise quantizer if the number of levels is even and a midtread quantizer if the number of levels is odd.

Figure 8.5 A midtread quantizer (decision boundaries at ±0.5, ±1.5, ±2.5, ...; reconstruction levels at 0, ±1.0, ±2.0, ±3.0).
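The two variants are often written with a floor function; a minimal sketch using the common textbook formulas, with ∆ = 1 to match Figures 8.3 and 8.5 (clamping to a finite number of levels is omitted for brevity):

```python
import math

def midrise(x, delta=1.0):
    """Midrise quantizer: reconstruction levels at odd multiples of delta/2,
    so zero is NOT an output level (Figure 8.3)."""
    return delta * (math.floor(x / delta) + 0.5)

def midtread(x, delta=1.0):
    """Midtread quantizer: reconstruction levels at integer multiples of delta,
    so zero IS an output level (Figure 8.5)."""
    return delta * math.floor(x / delta + 0.5)

for x in (-0.3, 0.2, 1.7):
    print(x, "->", midrise(x), "(midrise),", midtread(x), "(midtread)")
```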
8.4 Uniform Quantization

Uniform Quantization of a Uniformly Distributed Source

Suppose we want to design an M-level uniform quantizer for an input that is uniformly distributed in the interval [ -Xmax, Xmax ].
The step size ∆ is given by

  ∆ = 2 Xmax / M

The distortion becomes

  σq² = 2 ∑_{i=1}^{M/2} ∫_{(i-1)∆}^{i∆} ( x - (2i-1)∆/2 )² (1/(2Xmax)) dx = ∆²/12

Equivalently, the quantization error q = x - Q(x) sweeps over the interval [ -∆/2, ∆/2 ] as the input moves through each quantization interval (Figure 8.6), so the mean squared quantization error is the second moment of a random variable uniformly distributed in [ -∆/2, ∆/2 ]:

  σq² = (1/∆) ∫_{-∆/2}^{∆/2} q² dq
      = (1/∆)(1/3) [ q³ ]_{q=-∆/2}^{q=∆/2}
      = (1/(3∆)) ( ∆³/8 - (-∆³/8) )
      = (1/(3∆)) ( ∆³/4 )
      = ∆²/12

Figure 8.6 Quantization error for a uniform midrise quantizer with a uniformly distributed input: a sawtooth between -∆/2 and ∆/2 over input values from -4∆ to 4∆.
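A quick numerical check of the ∆²/12 result (a sketch; the choices of M, Xmax, and sample count are arbitrary):

```python
import math
import random

random.seed(1)
M, x_max = 8, 1.0
delta = 2 * x_max / M                        # step size = 2 Xmax / M

def quantize(x):
    """M-level uniform midrise quantizer matched to [-x_max, x_max]."""
    y = (math.floor(x / delta) + 0.5) * delta
    y_max = (M / 2 - 0.5) * delta
    return max(-y_max, min(y_max, y))

n = 200_000
samples = [random.uniform(-x_max, x_max) for _ in range(n)]
msqe = sum((x - quantize(x)) ** 2 for x in samples) / n
print("measured msqe :", round(msqe, 6))
print("delta^2 / 12  :", round(delta ** 2 / 12, 6))
```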
8.4 Uniform Quantization

Example 8.4.1 Image Compression

Figure 8.7 Left: original Sena image; Right: image quantized to 3 bits/pixel.
8.4 Uniform Quantization

Example 8.4.1 Image Compression (cont.)

Figure 8.7 (cont.) Left: image quantized to 2 bits/pixel; Right: image quantized to 1 bit/pixel.
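The Sena image itself is not reproduced here, so the sketch below uses an arbitrary 8-bit grayscale array in its place; it shows how an image can be uniformly quantized to a given number of bits per pixel, in the spirit of Example 8.4.1:

```python
import numpy as np

def quantize_image(img, bits):
    """Uniformly quantize an 8-bit grayscale image to `bits` bits/pixel.
    The step size is 256 / 2**bits; each pixel is reconstructed at the
    midpoint of its interval."""
    delta = 256 / (2 ** bits)
    return (np.floor(img / delta) + 0.5) * delta

# Any 8-bit grayscale array can stand in for the Sena image here.
img = np.random.default_rng(0).integers(0, 256, size=(256, 256)).astype(float)
for bits in (3, 2, 1):
    rec = quantize_image(img, bits)
    mse = np.mean((img - rec) ** 2)
    print(f"{bits} bits/pixel: msqe = {mse:.1f}")
```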