Chapter 9
Scalar Quantization
Introduction
Quantization
 The process of representing a large (possibly infinite) set of
values with a much smaller set (codewords).
» E.g., the infinitely many values in [−10, 10] can be represented by the 21 integers {−10, −9, …, 10}.
» The original value of the source output is then lost forever.
 One of the simplest & most general ideas in lossy compression.
 Depending on whether the inputs and outputs are scalars or vectors, we speak of scalar or vector quantizers.
 Quantizer design has a significant impact on the amount of
compression obtained and the loss incurred in a lossy compression scheme.
 Quantizer consists of an encoder mapping & a decoder mapping.
 Encoder divides the range of source values into a number of
intervals, each represented by a distinct codeword. Irreversible.
 If source is analog, the encoder is called an analog-to-digital
(A/D) converter & the decoder is called a D/A converter.
Quantization Problem
Ex. 3-bit quantizer: 3 bits to represent each value.
E.g. [1, 0) 011, [3, ) 111,
(, 3) 000.
Ex 9.3.1. 4cos(2t) was sampled every 0.05 second and quantized with the 3-bit quantizer.
[Figure: the sampled values and their quantized reconstructions, e.g., 1.7 → 1.5 and −0.3 → −0.5]
 Decide intervals (decision boundaries): part of encoder design.
 Select reconstruction levels (values): part of decoder design.
 Fidelity of the reconstruction depends on both the intervals and
the reconstruction values. → View them as a pair.
Quantization Problem (cont.)
 Discrete sources are often modeled with continuous distributions,
which simplify the design process considerably & perform well.
 Given an input modeled by a random variable X with pdf f_X(x),
quantizing the source with an M-interval quantizer requires specifying
M+1 endpoints for the intervals and M representative values.
» Endpoints (decision boundaries): {b_i}, 0 ≤ i ≤ M; reconstruction levels: {y_i}, 1 ≤ i ≤ M.
» Quantization: Q(x) = y_i if b_{i−1} < x ≤ b_i.
Goal: Find the lowest distortion for a given rate constraint R ≤ R*, or
find the lowest rate for a given distortion constraint σ_q² ≤ D*.
 Distortion: the average squared difference between the quantizer
input and output; σ_q² = E[(X − Q(X))²] is the mean squared quantization error (msqe).
 Quantization error (also called quantizer distortion or quantization noise):
the difference between the quantizer input x and the output y = Q(x).
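As a concrete illustration of the encoder/decoder pair just described, here is a minimal Python sketch; the 4-level boundaries and levels are hypothetical values chosen only for the example.

import bisect

# Hypothetical 4-level quantizer: M+1 = 5 endpoints b_0..b_4 and
# M = 4 reconstruction levels y_1..y_4 (one per interval (b_{i-1}, b_i]).
b = [float("-inf"), -1.0, 0.0, 1.0, float("inf")]
y = [-1.5, -0.5, 0.5, 1.5]

def encode(x):
    """Encoder mapping: index of the interval (b_{i-1}, b_i] containing x."""
    return bisect.bisect_left(b, x, 1, len(b) - 1) - 1

def decode(i):
    """Decoder mapping: reconstruction level for interval index i."""
    return y[i]

def Q(x):
    return decode(encode(x))

samples = [1.7, -0.3, 0.2, -2.4]
msqe = sum((x - Q(x)) ** 2 for x in samples) / len(samples)
print([Q(x) for x in samples], msqe)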
Quantization Problem (cont.)
 Rate of quantizer: the average number of bits required to
represent a single quantizer output.
 Using fixed-length codewords, the size M of the output alphabet
specifies the rate: R = ⌈log₂ M⌉. (If M = 8, then R = 3.)
 Using variable-length codes, the rate depends not only on the size of the
alphabet but also on the selection of the decision boundaries, because
the rate depends on the probability of occurrence of the outputs.
» l_i: the length of the codeword corresponding to the output y_i.
P(y_i): the probability of y_i, which depends on the decision boundaries {b_i}.
» R = Σ_{i=1}^{M} l_i P(y_i), where P(y_i) = ∫_{b_{i−1}}^{b_i} f_X(x) dx.
 The partitions selected and the binary codes for the partitions
will determine the rate for the quantizer.
 Goal: Given a distortion/rate constraint, find decision boundaries,
reconstruction levels, & binary codes to minimize rate/distortion.
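A small sketch of the rate computation above (the uniform pdf, boundaries, and codeword lengths are illustrative assumptions, not values from the slides):

# Rate of a variable-length-coded quantizer: R = sum_i l_i * P(y_i).
# Assumed setup: X ~ Uniform[-2, 2], four intervals, prefix-code lengths l_i.
b = [-2.0, -1.0, 0.0, 1.0, 2.0]     # decision boundaries
lengths = [3, 1, 2, 3]              # hypothetical codeword lengths (Kraft sum = 1)

def interval_prob(lo, hi):
    """P(b_{i-1} < X <= b_i) for X ~ Uniform[-2, 2]."""
    return (hi - lo) / 4.0

probs = [interval_prob(b[i], b[i + 1]) for i in range(len(b) - 1)]
rate = sum(l * p for l, p in zip(lengths, probs))
print(probs, rate)    # each interval has probability 0.25, so R = 2.25 bits/sample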
Uniform Quantization
 The simplest type of quantizer.
 All intervals are the same size, except for two outer intervals.
» Decision boundaries are spaced evenly.
 Reconstruction values are also spaced evenly, with the same spacing as the decision boundaries.
» Step size Δ: the constant spacing.
Midrise quantizer
 In the inner intervals, the reconstruction values are the midpoints of the intervals.
 It does not have zero as one of its representation levels; the number of levels is even.
Midtread quantizer
 The number of levels is odd, and zero is one of its output levels. Especially useful in
situations where it is important that the zero value be represented.
» E.g., to represent silence periods in audio coding schemes.
 With only 7 levels and a fixed-length 3-bit code, 1 codeword is left over.
Uniform Quantization (cont.)
 Assume that the input distribution is symmetric around the
origin and the midrise quantizer is also symmetric.
 To find the step size Δ that minimizes the distortion for a given
input process and number of decision levels.
Uniformly Distributed Source
 Design an M-level uniform quantizer for an input that is
uniformly distributed in the interval [−Xmax, Xmax].
 Step size: Δ = 2Xmax/M.
 Distortion: msqe σ_q² = Δ²/12 (since q is uniform over [−Δ/2, Δ/2]).
 Quantization error: q = x − Q(x).
 Signal variance: σ_s² = (2Xmax)²/12.
 SNR(dB) = 10 log₁₀(σ_s²/σ_q²) = 20 log₁₀ M = 6.02n dB for M = 2ⁿ levels:
every additional bit increases the SNR by 6.02 dB.
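A quick numeric check of these formulas (Xmax = 1 is an arbitrary illustrative choice):

import math

# Uniform M-level quantizer for a source uniform on [-Xmax, Xmax].
Xmax = 1.0
for n in (2, 3, 4):                  # bits per sample
    M = 2 ** n
    delta = 2 * Xmax / M             # step size
    msqe = delta ** 2 / 12           # quantization noise power
    sig_var = (2 * Xmax) ** 2 / 12   # signal variance of the uniform source
    snr_db = 10 * math.log10(sig_var / msqe)
    print(n, round(snr_db, 2))       # 12.04, 18.06, 24.08 -> 6.02 dB per extra bit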
Uniform Quantization (cont.)
Nonuniform Source
 When distribution is no longer uniform, it is not a good idea to
obtain step size by dividing the range of input by # of levels.
Ex 9.4.2. Suppose the input falls within [−1, 1] with probability 0.95,
and falls in the intervals [−100, −1), (1, 100] with probability 0.05.
Design an eight-level uniform quantizer.
 Let the step size be 25. Then [−1, 0) → −12.5 and [0, 1) → 12.5.
The maximum quantization error is 12.5. However, at least 95%
of the time, the error that will be incurred is at least 11.5.
 A much better approach uses a smaller step size, which results
in better representation of the values in [−1, 1], even if it means
a larger maximum error.
 Let the step size be 0.3. Then the maximum quantization error is 98.95.
However, at least 95% of the time the quantization error is at most 0.15. → Smaller msqe.
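A numerical check of the two designs in Ex 9.4.2 (using the values as reconstructed above):

# 8-level uniform midrise quantizer applied to inputs lying in [-100, 100].
def make_uniform_midrise(delta, M=8):
    half = M // 2
    def Q(x):
        k = min(int(abs(x) / delta), half - 1)   # clamp to the outermost level
        y = (k + 0.5) * delta
        return y if x >= 0 else -y
    return Q

for delta in (25.0, 0.3):
    Q = make_uniform_midrise(delta)
    print(delta,
          abs(1.0 - Q(1.0)),       # error at the edge of the high-probability region [-1, 1]
          abs(100.0 - Q(100.0)))   # worst-case (overload) error
# delta = 25.0 -> errors 11.5 and 12.5;  delta = 0.3 -> errors 0.05 and 98.95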
Uniform Quantization (cont.)
 If distribution is no longer uniform, it is not a good idea to obtain
step size by simply dividing the range of input by # of levels.
 It is totally impractical when we model sources with unbounded
distributions, such as the Gaussian distribution.
 Include the pdf of the source in the design process.
 Objective: find the step size Δ that minimizes the distortion for a given M.
 Method: write the distortion as a function of the step size and minimize it
(see the numerical sketch below).
[Figure: the two components of the quantization error, granular noise and overload noise]
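A minimal numerical sketch of this design method, assuming a unit-variance Gaussian source (an illustrative assumption, not stated on this slide): express the msqe of an M-level uniform midrise quantizer as a function of the step size and minimize it by a simple search.

import numpy as np

def msqe(delta, M=8, n=200_001, span=12.0):
    x = np.linspace(-span, span, n)                  # integration grid
    dx = x[1] - x[0]
    pdf = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)   # N(0, 1) density
    k = np.minimum((np.abs(x) / delta).astype(int), M // 2 - 1)
    y = np.where(x >= 0, 1.0, -1.0) * (k + 0.5) * delta   # Q(x)
    return np.sum((x - y) ** 2 * pdf) * dx           # E[(X - Q(X))^2]

deltas = np.arange(0.30, 1.00, 0.002)
best = deltas[np.argmin([msqe(d) for d in deltas])]
print(round(float(best), 3))   # should land near the tabulated optimum of about 0.586 for M = 8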
Uniform Quantization (cont.)
Quantization noise
 Nonuniform sources are often modeled by unbounded pdfs.
» A nonzero probability of getting an unbounded input.
 In practice there are no unbounded inputs, but unbounded models are very convenient.
» Measurement error is often modeled as a Gaussian
distribution, even when the measurement error is bounded.
 If input is unbounded, quantization error is no longer bounded.
 In the inner intervals the error is still bounded by Δ/2.
» called granular error or granular noise.
 In the outer intervals quantization error is unbounded.
» called overload error or overload noise.
 The probability that the input falls into the overload region is
called the overload probability.
 Figure: quantization error as a function of input.
Uniform Quantization (cont.)
 Nonuniform sources have pdfs that are generally peaked at zero
and decay as we move away from the origin.
» Overload probability is generally much smaller.
 An increase in the step size Δ → an increase in (M/2 − 1)Δ, the outermost
finite decision boundary → a decrease in the overload probability.
 An increase in the step size Δ → an increase in the granular noise.
 Selection of Δ is a balance between overload & granular errors.
Designing a uniform quantizer is a balancing of these two effects.
Loading factor f_l
 Defined as the ratio of the maximum value the input can take in
the granular region to the standard deviation of the input.
 An important parameter that describes the trade-off.
 A common value of the loading factor is 4; this is referred to as 4σ loading.
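As a quick numeric illustration (assuming a zero-mean Gaussian input, an assumption not stated on this slide), the overload probability for a given loading factor is just the Gaussian tail mass beyond f_l standard deviations:

import math

def overload_prob(fl):
    """P(|X| > fl * sigma) for zero-mean Gaussian X: twice the upper tail."""
    return math.erfc(fl / math.sqrt(2))

print(overload_prob(4.0))   # about 6.3e-05 for 4-sigma loading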
Uniform Quantization (cont.)
 For the uniform distribution, there is a 6.02 dB increase in SNR for
each additional bit. This is not true for the other distributions.
 The more peaked a distribution is, the more it seems to vary
from the 6.02dB rule.
 The Laplacian distribution has more of its probability mass away
from the origin (in its tails) than the Gaussian distribution.
 For the same step size & # of levels, there is a higher probability
of being in the overload region if the input has a Laplacian
distribution than if the input has a Gaussian distribution.
 In uniform distribution, the overload probability is zero.
 For a given number of levels, if step size increases, the size of
overload region is reduced at the expense of granular noise.
 Distributions with heavier tails tend to have larger optimum step sizes.
» For eight levels, the step size for the uniform distribution is 0.433,
for the Gaussian it is larger (0.586), and for the Laplacian larger still (0.7309).
Mismatch Effects
 Assume that the mean of the input distribution is zero.
Two types of mismatches
 When assumed distribution type matches the actual type, but
the variance of input is different from the assumed variance.
 The actual distribution type is different from the type assumed.
SNR is a ratio of the input variance and the msqe.
 Use a 4-bit Gaussian uniform quantizer with a Gaussian input.
 SNR is maximum when input variance matches the assumed.
 Asymmetry: the SNR is considerably worse when the input
variance is lower than the assumed variance (the left part of the SNR curve).
» When input variance is smaller than assumed variance, the
msqe actually drops because there is less overload noise.
» When the input variance is higher than the assumed
variance, the msqe increases substantially.
Mismatch Effects
 Second kind of mismatch: where the input distribution does not
match the distribution assumed when designing the quantizer.
Table: SNR when using different inputs & 8-level quantizers.
Each input distribution has unit variance.
 From left to right, the designed step size becomes progressively
larger than the "correct" step size for the input.
 Using a step size larger than the correct one corresponds to the case where
the input variance is smaller than the assumed variance; as in the first kind
of mismatch, this causes a greater drop in performance than when the quantizer
step size is smaller than its optimum value.
Adaptive Quantization
 Several things might change in the input relative to the assumed
statistics, including the mean, the variance, and the pdf.
 Method: Adapt the quantizer to the statistics of the input.
» If the mean of the input is changing with time, the best
strategy is to use some form of differential encoding (Ch11)
» For changes in the other statistics, the common approach is
to adapt the quantizer parameters to the input statistics.
Two main approaches
Offline or forward adaptive approach
 Divide source output into blocks of data. Each is analyzed
before quantization, & quantizer parameters are set accordingly.
 The settings are transmitted to the receiver as side information.
Online or backward adaptive approach.
 Adaptation is performed based on the quantizer output.
 Since the quantizer output is available to both transmitter and receiver, there is no need for side information.
Forward Adaptive Quantization
Adapt to changes in input variance
 Requires a delay at least long enough to buffer and process a block of data.
 Insertion of side information in the transmitted data stream may
also require the resolution of some synchronization problems.
Selection of block size
 Trade-off between the increase in side information required by
small block sizes & the loss of fidelity due to large block sizes
» If too large, may not capture the changes in input statistics.
» Large block sizes mean more delay, may not be tolerable.
» Small block sizes need to transmit side information more
often, & the amount of overhead per sample increases.
Variance estimation (assuming the mean is 0)
 At time n, use the next N samples to estimate the variance:
σ̂² = (1/N) Σ_{i=0}^{N−1} x_{n+i}².
 The variance information must itself be quantized before sending it to the receiver.
» The number of bits used to quantize the variance is significantly larger than
the number of bits used to quantize the sample values.
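A minimal sketch of forward adaptive quantization as described above. The block size, the 3-bit midrise quantizer, the 4σ loading, and the coarse stand-in for the 8-bit quantization of σ are illustrative assumptions:

import numpy as np

def midrise(x, delta, M=8):
    """Uniform midrise quantizer with M levels and step size delta."""
    k = np.minimum((np.abs(x) / delta).astype(int), M // 2 - 1)
    return np.where(x >= 0, 1.0, -1.0) * (k + 0.5) * delta

def forward_adaptive(x, block=128, bits=3):
    M = 2 ** bits
    delta_unit = 2 * 4.0 / M            # step size for a unit-variance input, 4-sigma loading
    recon = np.empty_like(x)
    side_info = []                      # quantized sigma, sent once per block
    for s in range(0, len(x), block):
        blk = x[s:s + block]
        sigma = np.sqrt(np.mean(blk ** 2))          # variance estimate (zero-mean assumption)
        sigma = max(round(float(sigma), 2), 0.01)   # stand-in for an 8-bit quantized sigma
        side_info.append(sigma)
        recon[s:s + block] = sigma * midrise(blk / sigma, delta_unit, M)
    return recon, side_info

# Example: a signal whose variance changes halfway through.
rng = np.random.default_rng(0)
x = np.concatenate([0.1 * rng.standard_normal(512), 2.0 * rng.standard_normal(512)])
xq, side = forward_adaptive(x)
print(len(side), float(np.mean((x - xq) ** 2)))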
Forward Adaptive Quantization (cont.)
 Quantize speech using a fixed 3-bit quantizer.
 The step size was adjusted based on the statistics of
the entire speech sequence.
 Input: 4000 samples of a male speaker saying test.
 The speech signal was sampled at 8000 samples
per second and digitized using a 16-bit A/D.
 Considerable loss in amplitude resolution: sample
values that are close together are quantized to the same value.
Quantized with a forward adaptive quantizer
 Divided input into blocks of 128 samples.
 Find the standard deviation of each block, quantize it with an
8-bit quantizer, and transmit it to the receiver as side information.
 Then normalize the samples in the block using the quantized standard deviation.
 The reconstruction follows the input much more closely.
 There is a large loss in the latter half of the displayed samples.
Forward Adaptive Quantization (cont.)
 Ex 9.4.1 used a uniform 3-bit quantizer with the assumption
that the input is uniformly distributed. (Middle image)
 Refine the source model a bit: the source is uniformly distributed,
but over different regions of the image the range of the input changes.
 (Bottom) Sena image quantized with a block size of 8×8
using 3-bit forward adaptive uniform quantization.
 Side information consists of the minimum and maximum
values in each block, which require 8 bits each.
 Overhead: 16/(8×8) = 0.25 bits per pixel.
 The overhead (0.25) is quite small compared to the number (3)
of bits per sample used by the quantizer.
 The resulting image using forward adaptive quantization is
hardly distinguishable from the original (Top).
 Forward adaptive quantization seems to perform very well at higher rates.
Backward Adaptive Quantization
Backward Adaptive Quantizer
 Only the past quantized samples are available for use.
 Only the encoder sees the input, so the input itself cannot be used to adapt the quantizer.
Q. How can we get information about a mismatch simply by
examining the quantizer output, without knowing what the input was?
 Study the output of the quantizer for a long period of time, and get
some idea about the mismatch from the distribution of output values.
 If the quantizer step size Δ is well matched to the input, the probabilities of
the different outputs would be consistent with the pdf assumed for the input.
 If Δ is smaller (larger) than what it should be, the input will fall in the
outer (inner) levels of the quantizer an excessive number of times.
 Therefore, observe the output of quantizer for a long time, then
expand (contract) quantizer step size if the input falls in the
outer (inner) levels an excessive number of times.
Backward Adaptive Quantization (cont.)
Jayant quantizer
» Named by Jayant "quantization with a one-word memory."
 Adjust the quantizer step size after observing a single output.
» No need to observe the quantizer output over a long period of time.
 If input falls in outer levels, expand the step size, & if input
falls in inner quantizer levels, reduce the step size.
 Once quantizer is matched to input, the product of expansions
and contractions is unity.
 Assign a multiplier M_k to each interval; adapt Δ_n = M_{l(n−1)} · Δ_{n−1}.
For the inner levels M_k < 1, and for the outer levels M_k > 1.
» l(n−1) is the quantization interval chosen at time n−1.
» The multipliers for symmetric intervals are identical.
 Note. Step size is modified based on previous quantizer output,
which is available to both sides.  No side information.
Backward Adaptive Quantization (cont.)
Jayant quantizer (cont.)
Ex 9.5.3. A 3-bit Jayant quantizer with multipliers M0 = M4 = 0.8,
M1 = M5 = 0.9, M2 = M6 = 1, and M3 = M7 = 1.2. The initial step size is
Δ0 = 0.5. The input is 0.1, 0.2, 0.2, 0.1, 0.3, 0.1, 0.2, 0.5, 0.9, 1.5, …
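A minimal sketch of the Jayant adaptation rule for a 3-bit midrise quantizer, using the multipliers and initial step size of Ex 9.5.3; the test input below is illustrative and need not match the example's exact sequence.

mults = [0.8, 0.9, 1.0, 1.2]           # M0..M3 (= M4..M7 for the negative levels)
delta = 0.5                            # initial step size

def jayant_step(x, delta):
    """Quantize one sample and return (reconstruction, next step size)."""
    k = min(int(abs(x) / delta), 3)    # inner levels k = 0..2, outer level k = 3
    y = (k + 0.5) * delta
    y = y if x >= 0 else -y
    return y, delta * mults[k]         # contract on inner levels, expand on the outer level

for x in [0.1, -0.2, 0.2, 0.1, -0.3, 0.1, 0.2, 0.5, 0.9, 1.5]:
    y, delta = jayant_step(x, delta)
    print(f"x={x:5.2f}  y={y:6.3f}  next delta={delta:.4f}")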
Backward Adaptive Quantization (cont.)
Jayant quantizer (cont.)
 If the input were changing rapidly (a high-frequency input), mismatches
would occur much more often and the quantizer would not function very well.
 If the input changes slowly, the quantizer can adapt to it.
» Most natural sources tend to be correlated (low-frequency).
 If the high-frequency components are removed, the residual generally varies
slowly enough for the Jayant quantizer to function quite effectively.
 If the step size continues to shrink for an extended period of time,
it will eventually become zero in a finite-precision system.
 Define a minimum value Δ_min and a maximum value Δ_max; the step size
is not allowed to go below Δ_min or above Δ_max.
» Otherwise the quantizer would not be able to adapt fast enough when the input becomes active again.
» E.g., after a silence period in speech-encoding systems or
a dark background in image-encoding systems.
Backward Adaptive Quantization (cont.)
Selection of Multiplier
 The further the multiplier values are from 1, the more adaptive
the quantizer. However, adapting too fast leads to instability.
 If the input is stationary, impose a stability criterion:
» Requirement: once the quantizer is matched to the input, the product of
the expansions and contractions equals 1: Π_k M_k^{n_k} = 1, i.e., Π_k M_k^{P_k} = 1.
» n_k: the number of times the input falls in the k-th interval (out of n inputs).
» P_k = n_k/n: the probability of being in quantizer interval k.
 Impose some structure on the multipliers: M_k = γ^{l_k},
» where γ > 1 and the l_k are integers.
» The stability requirement then becomes Σ_k l_k P_k = 0.
» The multiplier for an inner level has to be < 1 → l_k < 0 for inner levels.
Selection of γ
 The value of γ determines how fast the quantizer will respond
to changing statistics. A large γ will result in faster adaptation,
while a smaller value of γ will result in greater stability.
Backward Adaptive Quantization (cont.)
Ex 9.5.4. For a 2-bit quantizer and input probabilities P0 = 0.8 and
P1 = 0.2, choose l0 = −1 and l1 = 4. Then M0 = γ^{−1} < 1, M1 = γ^4 > 1,
and Σ_k l_k P_k = −1(0.8) + 4(0.2) = 0.
 Input: a square wave switching between 0 & 1 every 30 samples.
 When the input switches from 0 to 1, the input falls in the outer
level, and the step size increases until Δ is just greater than 1.
 If γ is close to 1, Δ increases quite slowly and has a value close to 1
just before Δ > 1, where the output (1.5Δ) is close to 1.5.
 Once Δ > 1, the input falls in the inner level, and the output (0.5Δ) suddenly
drops to about 0.5. The step size now decreases until Δ is just below 1.
 The process repeats, causing the ringing effect.
 As γ increases, the quantizer adapts more rapidly, and the magnitude
of the ringing effect decreases. (Just before Δ crosses 1, Δ is well below 1
and the output is well below 1.5; once Δ > 1, Δ has increased significantly,
so the output is well above 0.5.)
 It may be better to have two adaptive strategies, one for when the
input is changing rapidly and one for when the input is constant
(cf. the CCITT standard G.726).
Backward Adaptive Quantization (cont.)
 When selecting multipliers for a Jayant quantizer, the best
quantizers expand more rapidly than they contract.
» When the input falls into the outer levels, the overload error is
essentially unbounded and needs to be brought down with dispatch.
» When input falls in the inner levels, granular noise is
bounded and, therefore, may be more tolerable.
Robustness of Jayant quantizer
 Fig: Performance of the Jayant quantizer compared to the pdf-optimized
quantizer in the face of changing input statistics.
 Performance of Jayant quantizer is much better than nonadaptive
uniform quantizer over a wide range of input variances.
 Performance of nonadaptive quantizer is significantly better than
Jayant quantizer when input variance & design variance agree.
 If the input statistics are known and do not change over time, it is better
to design for those statistics than to design an adaptive system.
Nonuniform Quantization
 In lossless compression, in order to minimize the average rate,
assign shorter codewords to symbols with higher probability.
 In order to decrease average distortion, try to approximate the
input better in regions of high probability, perhaps at the cost of
worse approximations in regions of lower probability.
 Nonuniform quantizer: a quantizer with nonuniform quantization intervals.
 Make quantization intervals smaller in those regions that have
more probability mass. Quantizer error is smaller there.
» E.g., Have smaller intervals near the origin.
 While a nonuniform quantizer provides lower average
distortion, the design of nonuniform quantizers is also
somewhat more complex.
 Find the decision boundaries and reconstruction levels that
minimize the mean squared quantization error.
Nonuniform Quantization
pdf-optimized Quantization
 For a probability model of the source, find {b_i} and {y_i} that minimize
σ_q² = Σ_{i=1}^{M} ∫_{b_{i−1}}^{b_i} (x − y_i)² f_X(x) dx.
 Differentiate with respect to y_j and set to zero:
y_j = ∫_{b_{j−1}}^{b_j} x f_X(x) dx / ∫_{b_{j−1}}^{b_j} f_X(x) dx.
 The output point y_j for each quantization interval
is the centroid of the probability mass in that interval.
 Differentiate with respect to b_j and set to zero:
b_j = (y_j + y_{j+1}) / 2, i.e., each decision boundary is the midpoint of its
two neighboring reconstruction levels.
 Solving these two equations will give the values for the
reconstruction levels and decision boundaries that minimize the
mean squared quantization error.
 Unfortunately, to solve for y_j we need the values of b_j and b_{j−1}, and
to solve for b_j we need the values of y_{j+1} and y_j.
Lloyd-Max Quantizer
Lloyd-Max Algorithm
Design an M-level symmetric midrise quantizer (so b_0 = 0).
 Need to obtain the reconstruction levels {y_1, y_2, …, y_{M/2}} and the
decision boundaries {b_1, b_2, …, b_{M/2−1}} to design this quantizer.
» b_{M/2} is the largest value the input can take on (possibly ∞).
» The other y_i & b_i can be obtained through symmetry.
 Consider y_1. There are two unknowns, b_1 and y_1, in the centroid equation
y_1 = ∫_{0}^{b_1} x f_X(x) dx / ∫_{0}^{b_1} f_X(x) dx.
 Initially, make a guess at y_1, and later try to refine this guess.
 From the guess obtain b_1, then y_2 = 2b_1 − y_1, then b_2, then y_3, …,
up to b_{M/2−1} and y_{M/2}, alternating the centroid and midpoint equations.
» The accuracy of all the values depends on the initial estimate of y_1.
 Use b_{M/2} (known from the data) and the computed b_{M/2−1} in the centroid
equation to find ŷ_{M/2}, and compare it with the previously computed y_{M/2}.
 If the difference is less than some tolerance threshold, then stop.
Otherwise, adjust the estimate of y_1 in the direction indicated by
the sign of the difference and repeat the procedure.
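A minimal sketch of a Lloyd-Max design for a unit-variance Gaussian source (an illustrative assumption). Rather than the y_1 sweep described above, it iterates the two optimality conditions directly (boundaries at midpoints, levels at centroids), which converges to the same quantizer; the pdf is integrated on a finite grid, so the results are approximate.

import numpy as np

def lloyd_max(M=8, iters=100, span=8.0, n=200_001):
    x = np.linspace(-span, span, n)
    dx = x[1] - x[0]
    pdf = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)   # N(0, 1) density
    y = np.linspace(-2.0, 2.0, M)                    # initial guess for the levels
    for _ in range(iters):
        b = np.concatenate(([-np.inf], (y[:-1] + y[1:]) / 2, [np.inf]))   # midpoints
        for i in range(M):                           # y_i = centroid of (b_i, b_{i+1}]
            m = (x > b[i]) & (x <= b[i + 1])
            y[i] = np.sum(x[m] * pdf[m]) / np.sum(pdf[m])
    q = np.empty_like(x)
    for i in range(M):
        q[(x > b[i]) & (x <= b[i + 1])] = y[i]
    msqe = np.sum((x - q) ** 2 * pdf) * dx
    return b, y, msqe

b, y, msqe = lloyd_max()
print(np.round(y, 3))          # levels should come out near +/-0.245, 0.756, 1.344, 2.152
print(round(float(msqe), 4))   # about 0.0345, i.e., roughly 14.6 dB SNR for M = 8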
Lloyd-Max Quantizer (cont.)
 Distributions with heavier tails have larger outer step sizes;
those that are more heavily peaked have smaller inner step sizes.
 There is a significant improvement in SNR compared with the
pdf-optimized uniform quantizers, especially for distributions
further away from the uniform distribution.
Lloyd-Max Quantizer (cont.)
Mismatch Effects
 pdf-optimized nonuniform quantizers also have problems when
the assumptions underlying their design are violated.
 A serious problem because in most communication systems,
input variance can change considerably over time.
» In a telephone system, different people speak with differing
amounts of loudness. The quantizer needs to be quite
robust to the wide range of input variances in order to
provide satisfactory service.
 Sol 1: Use adaptive quantization to match the quantizer to the
changing input characteristics (similar to the uniform case).
 Sol 2: Use a nonlinear mapping to
flatten the performance curve.
→ Companded quantization.
Lloyd-Max Quantizer (cont.)
Entropy Coding
 To find the optimum quantizer for a given number of levels and
rate is a rather difficult task.
 An easier approach is to design a quantizer that minimizes the
msqe, then entropy-code its output.
 While the difference in rate for a small number of levels is relatively small,
for a larger number of levels there can be a substantial difference
between the fixed-rate and entropy-coded cases.
Lloyd-Max Quantizer (cont.)
Entropy Coding (cont.)
» The difference between the fixed rate and the output entropy of a uniform
quantizer is generally greater than the difference between the fixed rate
and the output entropy of a nonuniform quantizer.
 This is because nonuniform quantizers have smaller (larger) step sizes
in high- (low-) probability regions.
 The probability of an input falling into a low-probability interval and the
probability of an input falling into a high-probability interval are therefore
brought closer together.
 This raises the output entropy of the nonuniform quantizer with respect to
that of the uniform quantizer.
 The closer the distribution is to being uniform, the smaller the
difference in the rates.
 The difference in rates is much smaller for the quantizer designed for the
Gaussian source than for the quantizer designed for the Laplacian source.
pdf-optimized Quantization (cont.)
Properties for a given Lloyd-Max quantizer
 Property 1: The mean values of the input and output are equal.
 Property 2: The variance of the output is always ≤ the variance of the input.
 Property 3: The msqe is σ_q² = σ_x² − Σ_{j=1}^{M} y_j² P(b_{j−1} < X ≤ b_j).
» σ_x²: variance of the quantizer input; the second term is the
second moment of the output (or its variance if the input is zero mean).
 Property 4: E[X·N] = σ_q².
» N: the random variable corresponding to the quantization error.
 Property 5: The quantizer output and the quantization noise are
orthogonal: E[Q(X)·N | b_0, b_1, …, b_M] = 0.
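A short derivation (not on the slide, but following directly from the centroid condition y_j = E[X | b_{j−1} < X ≤ b_j]) shows why Properties 4 and 5 hold:

E[Q(X)·N] = Σ_{j=1}^{M} y_j ∫_{b_{j−1}}^{b_j} (x − y_j) f_X(x) dx
          = Σ_{j=1}^{M} y_j ( ∫_{b_{j−1}}^{b_j} x f_X(x) dx − y_j ∫_{b_{j−1}}^{b_j} f_X(x) dx ) = 0,

since each term in parentheses vanishes by the centroid condition (Property 5).
Writing X = Q(X) + N then gives E[X·N] = E[Q(X)·N] + E[N²] = 0 + σ_q² (Property 4).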
End of Chapter 9