Arithmetic Coding

This material is based on K. Sayood, "Introduction to Data Compression," San Francisco, CA: Morgan Kaufmann Publishers, 1996; 3rd edition, 2006.
Internet: mkp@mkp.com
web site: http://mkp.com
Arithmetic coding is useful for sources with small alphabets (e.g., binary: 0 or 1) and alphabets with highly skewed (unequal) probabilities (e.g., facsimile, text).
The rate for Huffman coding is within Pmax + 0.086 of the entropy, where Pmax is the probability of the most frequently occurring symbol. For a small alphabet with highly skewed probabilities, Pmax can be very large.
Example: A = (a1, a2, a3)

Huffman code:
Letter   Probability   Code
a1       .95           0
a2       .02           11
a3       .03           10

Entropy = -Σ (i = 1 to 3) Pi log2 Pi = 0.335 bits/symbol
Huffman code gives (.95)(1) + (.02)(2) + (.03)(2) = 1.05 bits/symbol
Redundancy = 1.05 - 0.335 = 0.715 bits/symbol
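These numbers can be checked with a short Python sketch; the probabilities and codeword lengths are taken from the table above:

```python
import math

# Probabilities and Huffman codeword lengths from the example above
probs = {"a1": 0.95, "a2": 0.02, "a3": 0.03}
code_len = {"a1": 1, "a2": 2, "a3": 2}   # codes 0, 11, 10

entropy = -sum(p * math.log2(p) for p in probs.values())
avg_len = sum(probs[s] * code_len[s] for s in probs)

print(round(entropy, 3))             # entropy ~0.335 bits/symbol
print(round(avg_len, 2))             # average length 1.05 bits/symbol
print(round(avg_len - entropy, 3))   # redundancy ~0.715 bits/symbol
```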
Group symbols {a1, a2, a3} in blocks of two:
{a1a1, a1a2, a1a3, a2a1, ..., a3a3}
Number of messages = 3^2 = 9
P.S: Note that Morgan Kaufmann is bought by Elsevier. http:// www.books.elsevier.com,
http://textbooks.elsevier.com
Huffman code for the blocked alphabet (codewords read off the Huffman tree; the tree figure is omitted):

Symbol   Probability   Code
a1a1     .9025         0
a1a3     .0285         100
a3a1     .0285         101
a1a2     .0190         110
a2a1     .0190         1110
a3a3     .0009         111100
a2a3     .0006         111101
a3a2     .0006         111110
a2a2     .0004         111111
Average rate = 1.222 bits/message (coding in blocks of 2 symbols)
Average rate = 0.611 bits/symbol of the original alphabet
P(a1) = 0.95, P(a2) = 0.02, P(a3) = 0.03
Average bit rate = 0.611 bits/symbol, entropy = 0.335 bits/symbol
Redundancy = 0.611 - 0.335 = 0.276 bits/symbol
Group symbols (a1, a2, a3) in blocks of three:
Alphabet size = 3^3 = 27, (a1a1a1, a1a1a2, ..., a3a3a2, a3a3a3)
The average bit rate of Huffman coding can be reduced further, but the alphabet size grows exponentially. Grouping symbols in blocks of four gives alphabet size 3^4 = 81.
To assign a code to a particular sequence (group of symbols) of length m, Huffman coding requires developing codewords for all possible sequences of length m. Arithmetic coding assigns a code to a particular sequence of length m without having to generate codes for all sequences of length m.
Procedure:
I. Generate a unique identifier or tag for the sequence of length m to be coded.
II. Assign a unique binary code to this tag.
Generating a tag:
Alphabet size = 3, A = (a1, a2, a3)
P(a1) = .7, P(a2) = .1, P(a3) = .2
Cumulative distribution function (CDF):
FX(i) = Σ (k = 1 to i) P(X = k)
Tag for the input sequence a1 a2 a3 a2. Each symbol narrows the tag interval (figure of nested intervals omitted; the endpoints are listed below):

[0, 1)  →  a1: [0, .7)  →  a2: [.49, .56)  →  a3: [.546, .56)  →  a2: [.5558, .5572)

Midpoint of the tag interval = .5565

The interval endpoints follow from cumulative probabilities:
a1a1 = 0.7*0.7 = 0.49
a1a2 = 0.7*0.1 = 0.07,   0.49 + 0.07 = 0.56
a1a3 = 0.7*0.2 = 0.14,   0.56 + 0.14 = 0.7
a1a2a1 = 0.07*0.7 = 0.049,   0.49 + 0.049 = 0.539
a1a2a2 = 0.07*0.1 = 0.007,   0.539 + 0.007 = 0.546
a1a2a3 = 0.07*0.2 = 0.014,   0.546 + 0.014 = 0.56
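The nested-interval computation above can be sketched in a few lines of Python; the CDF values follow from P(a1) = .7, P(a2) = .1, P(a3) = .2:

```python
# Nested tag-interval computation for the sequence a1 a2 a3 a2.
# F[k] is the CDF: F[0] = 0, F[1] = .7, F[2] = .8, F[3] = 1.
F = [0.0, 0.7, 0.8, 1.0]

def tag_interval(seq):
    low, high = 0.0, 1.0
    for x in seq:                     # x is the symbol index 1..3
        width = high - low
        low, high = low + width * F[x - 1], low + width * F[x]
    return low, high

low, high = tag_interval([1, 2, 3, 2])   # a1 a2 a3 a2
print(low, high)          # ~0.5558, ~0.5572
print((low + high) / 2)   # midpoint tag ~0.5565
```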
Tag for an input sequence (a1, a2, a3, ...):
Each new symbol restricts the tag to a subinterval that is disjoint from any other subinterval. For example, the tag interval for (a1, a2, a3, a1) is disjoint from the tag interval for (a1, a2, a3, a2).
Any member of the interval can be used to identify the tag:
1. Lower limit of the interval
2. Midpoint of the interval
3. Upper and lower limits of the interval
Use the midpoint of the interval to identify the tag. Let the alphabet be A = (a1, a2, ..., am). Then
TX(ai) = Σ (k = 1 to i-1) P(X = k) + (1/2) P(X = i)
       = FX(i-1) + (1/2) P(X = i)
Example: roll of a die (1, 2, ..., 6)
P(X = k) = 1/6, k = 1, 2, ..., 6
Assign a tag to a particular sequence xi:
TX(m)(xi) = Σ (y < xi) P(y) + (1/2) P(xi)
where y < x means y precedes x in the ordering, and m is the length of the sequence.
Roll of a die: the outcomes 1, 2, ..., 6 divide [0, 1) at 0, 1/6, 1/3, 1/2, 2/3, 5/6, 1.0.
Midpoint tags for a single roll:
Outcome:  1      2     3      4      5    6
Tag:      .0833  .25   .4166  .5833  .75  .9166
For sequences of two rolls, the first six sequences (11, 12, ..., 16) have cumulative probabilities 1/36, 2/36, 3/36, 4/36, 5/36, 1/6 and tags 1/72, 3/72, 5/72, 7/72, 9/72, 11/72.

TX(5) = Σ (k = 1 to 4) P(X = k) + (1/2) P(X = 5) = 2/3 + (1/2)(1/6) = .75
TX(2)(13) = P(X = 11) + P(X = 12) + (1/2) P(X = 13) = 1/36 + 1/36 + (1/2)(1/36) = 5/72
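Both die tags can be reproduced exactly with Python's fractions module:

```python
from fractions import Fraction

# Tag for an outcome of a fair die (the formula above):
# T(x) = sum over y < x of P(y) + P(x)/2, with P(y) = 1/6 for all y.
def die_tag(x):
    p = Fraction(1, 6)
    return (x - 1) * p + p / 2

print(die_tag(5))   # 3/4 (= .75, as computed above)

# Tag for the two-roll sequence "13": the sequences 11 and 12 precede it,
# each with probability 1/36.
p2 = Fraction(1, 36)
tag_13 = 2 * p2 + p2 / 2
print(tag_13)       # 5/72
```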
We would have to compute the probability of every sequence that precedes the sequence for which the tag is generated. This is prohibitive. However, the upper and lower limits of the tag interval can be computed recursively.
For a sequence x = (x1, x2, ..., xn):
l(n) = l(n-1) + [u(n-1) - l(n-1)] FX(xn - 1)
u(n) = l(n-1) + [u(n-1) - l(n-1)] FX(xn)
where u(n) is the upper limit and l(n) the lower limit of the tag interval for x, and FX(xn - 1) is the CDF evaluated at the symbol index xn minus one.
The midpoint of the interval gives the tag:
TX(n)(x) = (u(n) + l(n)) / 2
Example: A = {a1, a2, a3} (small alphabet size and highly skewed probabilities)
P(a1) = .8, P(a2) = .02, P(a3) = .18
Encode the sequence 1 3 2 1 (a1 a3 a2 a1).
Nested tag intervals for the sequence (figure omitted; endpoints listed):
[0, 1)  →  a1: [0, .8)  →  a3: [.656, .8)  →  a2: [.7712, .77408)  →  a1: [.7712, .773504)
(The subdivision points of [0, .8) are .64 and .656; those of [.656, .8) are .7712 and .77408.)
Tag midpoint = .772352
FX(1) = .8, FX(2) = .82, FX(3) = 1; FX(k) = 0 for k ≤ 0, FX(k) = 1 for k > 3
Sequence: 1, 3, 2, 1
P(a1) = 0.8, P(a2) = 0.02, P(a3) = 0.18
l(0) = 0, u(0) = 1
Recursion relations:
l(n) = l(n-1) + [u(n-1) - l(n-1)] FX(xn - 1)
u(n) = l(n-1) + [u(n-1) - l(n-1)] FX(xn)
First element: 1
l(1) = 0 + (1 - 0)(0) = 0
u(1) = 0 + (1 - 0)(.8) = .8
Second element: 3
l(2) = 0 + (.8 - 0) FX(2) = .656
u(2) = 0 + (.8 - 0) FX(3) = .8
Check: a1a1 = 0.8*0.8 = 0.64; a1a2 = 0.8*0.02 = 0.016, 0.64 + 0.016 = 0.656; a1a3 = 0.8*0.18 = 0.144, 0.656 + 0.144 = 0.8
Third element: 2
l(3) = .656 + (.8 - .656)(.8) = .7712
u(3) = .656 + (.8 - .656)(.82) = .77408
Last element: 1
l(4) = .7712 + (.77408 - .7712) FX(0) = .7712 + 0 = .7712
u(4) = .7712 + (.77408 - .7712) FX(1) = .7712 + (.00288)(.8) = .773504
Tag:
TX(4)(1321) = (.7712 + .773504) / 2 = .772352
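The recursions above can be run directly in Python and checked against this worked example:

```python
# Recursive update of the tag interval, checked against the example:
# P(a1) = .8, P(a2) = .02, P(a3) = .18, sequence 1 3 2 1.
F = {0: 0.0, 1: 0.8, 2: 0.82, 3: 1.0}   # F[k] = P(X <= k)

low, high = 0.0, 1.0                      # l(0) = 0, u(0) = 1
for x in [1, 3, 2, 1]:
    width = high - low
    low, high = low + width * F[x - 1], low + width * F[x]
    print(low, high)                      # l(n), u(n) after each symbol

tag = (low + high) / 2
print(tag)   # ~0.772352
```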
Generating a binary code
Alphabet A = (a1, a2, a3, a4)
P(a1) = 1/2 = 0.5, P(a2) = 1/4 = 0.25, P(a3) = P(a4) = 1/8 = 0.125
The symbols partition [0, 1) at 0, .5, .75, .875, 1.0, giving midpoint tags
TX(1) = .25, TX(2) = .625, TX(3) = .8125, TX(4) = .9375
Binary code for TX(x): represent TX(x) in binary and truncate to l(x) bits, where
l(x) = ⌈log2 (1/P(x))⌉ + 1
⌈x⌉ = smallest integer ≥ x

Symbol   P(x)   FX     TX      TX in binary   ⌈log2 1/P(x)⌉ + 1   Code
1        1/2    .5     .25     .010           2                   01
2        1/4    .75    .625    .101           3                   101
3        1/8    .875   .8125   .1101          4                   1101
4        1/8    1.0    .9375   .1111          4                   1111
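The truncation rule can be sketched in Python; `tag_code` is an illustrative helper, not from the text:

```python
import math

# Binary code for each symbol: write the midpoint tag in binary and
# keep ceil(log2 1/P(x)) + 1 bits (the rule above).
probs = [0.5, 0.25, 0.125, 0.125]

def tag_code(i):
    tag = sum(probs[:i]) + probs[i] / 2           # midpoint tag TX
    length = math.ceil(math.log2(1 / probs[i])) + 1
    bits = ""
    for _ in range(length):                       # binary expansion, truncated
        tag *= 2
        bits += str(int(tag))
        tag -= int(tag)
    return bits

print([tag_code(i) for i in range(4)])   # ['01', '101', '1101', '1111']
```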
This is a prefix code, i.e., no codeword is a prefix of another codeword.
The average length (bits/symbol) for coding groups of symbols of length m satisfies
H(X) ≤ lA < H(X) + 2/m   (arithmetic coding)
H(X) ≤ lH < H(X) + 1/m   (Huffman coding)
where H(X) is the entropy (blocking m symbols together), lA is the average bit length for arithmetic coding, and lH is the average bit length for Huffman coding. By increasing m, both Huffman and arithmetic coding can come close to the entropy.
For an alphabet of size K, the number of possible sequences of length m is K^m, so the codebook size is K^m.
Ex: K = 4 (a1, a2, a3, a4), m = 3: codebook size = 4^3 = 64 (a1a1a1, a1a1a2, ..., a4a4a3, a4a4a4); m = 4: codebook size = 4^4 = 256 (a1a1a1a1, ..., a4a4a4a4).
Huffman coding requires building the codes for the entire codebook. Arithmetic coding only requires the tag corresponding to the given sequence.
Synchronized rescaling
As n (the number of symbols in the group) gets larger, the lower and upper limits l(n) and u(n) of the tag interval get closer and closer. This is avoided by rescaling while still preserving the information being transmitted; this is called synchronized rescaling. In general the tag is confined to the lower half [0, .5) or the upper half [.5, 1.0) of the [0, 1) interval. If this does not hold, the algorithm is modified. The mappings are
E1: [0, .5) → [0, 1), E1(x) = 2x
E2: [.5, 1) → [0, 1), E2(x) = 2(x - .5)
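A minimal rescaling sketch (the `rescale` helper is illustrative, not from the text): whenever the interval lies entirely in [0, .5) or [.5, 1), the next tag bit is already known, so it can be emitted and the interval doubled with E1 or E2:

```python
# Interval rescaling: emit the known bit, then apply E1 or E2.
def rescale(low, high):
    bits = []
    while True:
        if high <= 0.5:            # E1: [0, .5) -> [0, 1), x -> 2x
            bits.append(0)
            low, high = 2 * low, 2 * high
        elif low >= 0.5:           # E2: [.5, 1) -> [0, 1), x -> 2(x - .5)
            bits.append(1)
            low, high = 2 * (low - 0.5), 2 * (high - 0.5)
        else:                      # interval straddles .5: no bit known yet
            return low, high, bits

# Interval [.7712, .77408) from the earlier worked example
low, high, bits = rescale(0.7712, 0.77408)
print(bits)          # leading tag bits, emitted incrementally
print(low, high)     # rescaled interval straddling 0.5
```

These emitted bits are exactly the leading bits of the binary expansion of the tag, which is what makes incremental transmission possible.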
Incremental Encoding
Generating and transmitting portions of the code as the sequence is being observed, rather than waiting until the end of the sequence.
Advantages of arithmetic coding
1. Advantageous for small alphabet sizes and highly skewed probabilities.
2. Easy to implement a system with multiple arithmetic codes. Once the computational machinery to implement one arithmetic code is developed, all that is needed to set up multiple arithmetic codes is the availability of more probability tables.
3. Easy to adapt arithmetic codes to changing input statistics. There is no need to generate a tree (as in Huffman coding) a priori; modeling and coding procedures can be separated.
QM Coder:
This is a modification of an adaptive binary arithmetic coder called the Q coder. The QM coder tracks the lower end of the tag interval l(n) and the size of the interval A(n):
A(n) = u(n) - l(n)
where u(n) is the upper end of the tag interval.
Applications of arithmetic coding in standards
1. JPEG: extended DCT-based process and lossless process.
2. JBIG: 1024 to 4096 'ACs'; QM coder. Also JBIG-2: context-based AC.
3. H.263 optional mode: syntax-based AC ('SBAC').
4. MPEG-4: context-based arithmetic coding in shape coding (MPEG-4 VM 8.0, July 1997).
5. MPEG-4 still-frame image coding (wavelet based): the lowest subband is DPCM coded; prediction errors are coded using an adaptive arithmetic coder; zerotree wavelet coding and quantized values are coded using adaptive arithmetic coding.
6. MPEG-2 AAC (also MPEG-4 T/F audio coder): scale factors are coded based on bit-sliced arithmetic coding. (AAC: advanced audio coding; T/F: time/frequency.)
7. JPEG-LS part 2: lossless and near-lossless compression of continuous-tone still images [51].
8. JPEG2000: context-dependent binary arithmetic coding; uses up to 9 contexts.
9. H.264/MPEG-4 Part 10: context-based adaptive binary arithmetic coding (CABAC) [63].
JPEG: Joint Photographic Experts Group [20]
JBIG: Joint Bi-level Image Experts Group [19]
H.263 (video coding < 64 kbps) [23]
MPEG: Moving Picture Experts Group
Adaptive arithmetic coding
Probabilities of source symbols are dynamically estimated based on the changing symbol statistics observed in the message to be encoded, i.e., probabilities are estimated on the fly (dynamic modeling).
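A minimal sketch of such dynamic modeling (the `AdaptiveModel` class is illustrative, not from any standard): probabilities are derived from running symbol counts that are updated after each coded symbol, so encoder and decoder stay in sync without a pre-built table:

```python
from collections import Counter

class AdaptiveModel:
    """Estimate symbol probabilities on the fly from counts seen so far."""

    def __init__(self, alphabet):
        # Start with count 1 per symbol so no probability is ever zero
        self.counts = Counter({s: 1 for s in alphabet})

    def prob(self, symbol):
        total = sum(self.counts.values())
        return self.counts[symbol] / total

    def update(self, symbol):
        # Called after coding each symbol, by encoder and decoder alike
        self.counts[symbol] += 1

model = AdaptiveModel("abc")
print(model.prob("a"))   # 1/3 before any data is seen
for s in "aab":
    model.update(s)
print(model.prob("a"))   # 3/6 = 0.5 after seeing 'a' twice and 'b' once
```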
References:
1. T. C. Bell, J. G. Cleary, and I. H. Witten, "Text Compression," Advanced Reference Series, Englewood Cliffs, NJ: Prentice Hall, 1990.
2. G. G. Langdon, Jr., "An introduction to arithmetic coding," IBM Journal of Research and Development, vol. 28, pp. 135-149, March 1984.
3. J. J. Rissanen and G. G. Langdon, "Universal modeling and coding," IEEE Trans. on Information Theory, vol. IT-27, no. 1, pp. 12-22, Jan. 1981.
4. G. G. Langdon and J. J. Rissanen, "Compression of black-white images with arithmetic coding," IEEE Trans. on Communications, vol. 29, no. 6, pp. 858-867, June 1981.
5. T. C. Bell, I. H. Witten, and J. G. Cleary, "Modeling for text compression," ACM Computing Surveys, vol. 21, pp. 557-591, Dec. 1989.
6. W. B. Pennebaker et al., "An overview of the basic principles of the Q-coder adaptive binary arithmetic coder," IBM Journal of Research and Development, vol. 32, pp. 717-726, Nov. 1988.
7. J. L. Mitchell and W. B. Pennebaker, "Optimal hardware and software arithmetic coding procedures for the Q-coder," IBM Journal of Research and Development, vol. 32, pp. 727-736, Nov. 1988.
8. W. B. Pennebaker and J. L. Mitchell, "Probability estimation for the Q-coder," IBM Journal of Research and Development, vol. 32, pp. 737-752, Nov. 1988.
9. I. H. Witten, R. Neal, and J. G. Cleary, "Arithmetic coding for data compression," Communications of the ACM, vol. 30, pp. 520-540, June 1987. (Software)
10. G. G. Langdon, Jr., and J. J. Rissanen, "A simple general binary source code," IEEE Trans. on Information Theory, vol. IT-28, pp. 800-803, Sept. 1982.
11. M. Nelson, "The Data Compression Book," New York: M&T Books, 1991.
12. K. Sayood, "Introduction to Data Compression," San Francisco, CA: Morgan Kaufmann Publishers, 1996; 3rd edition, 2006.
13. M. R. Nelson, "Arithmetic coding and statistical modeling," Dr. Dobb's Journal.
14. J. A. Storer, "Data Compression," Rockville, MD: Computer Science Press, 1988.
15. C. Chamzas and D. Duttweiler, "Probability estimation in arithmetic and adaptive Huffman entropy coders," IEEE Trans. Image Processing, vol. 4, pp. 237-246, March 1995.
16. J. M. Jou, "An on-line adaptive data compression chip using arithmetic codes," ISCAS 96, Atlanta, GA, May 1996.
17. R. M. Pelz and B. Jannet, "Error concealment for robust arithmetic decoding in mobile radio environments," Signal Processing: Image Communication, vol. 8, pp. 411-419, July 1996.
18. F. Mueller and K. Illgner, "Embedded Laplacian pyramid image coding using conditional arithmetic coding," IEEE ICIP-96, Lausanne, Switzerland, Sept. 1996.
19. H. M. Hang and J. W. Woods (Eds.), "Handbook of Visual Communications," Orlando, FL: Academic Press, 1995.
20. W. B. Pennebaker and J. L. Mitchell, "JPEG Still Image Data Compression Standard," New York, NY: Van Nostrand Reinhold, 1993.
21. W. Kou, "Digital Image Compression Algorithms and Standards," Norwell, MA: Kluwer Academic, 1995.
22. Z. Xiang, K. Ramachandran and M. T. Orchard, "Efficient arithmetic coding for wavelet image compression," SPIE/IS&T Symp. on Electronic Imaging, vol. 3024, San Jose, CA, Feb. 1997.
23. "Video coding for narrow telecommunication channels at < 64 kbit/s," Draft ITU-T Rec. H.263, April 1995.
24. ITU-LBC-97-094, Draft 10 of H.263+, H.263+ Video Group, Nice, France, Feb. 1997.
25. F. Golchin and K. Paliwal, "Quadtree based classification with arithmetic and trellis coded quantization for subband image coding," ICASSP 97, vol. 4, pp. 2921-2924, Munich, Germany, April 1997.
26. Web site: www.icspat.com (links → wavelet sites → arithmetic coding package; jmd@cs.dartmouth.edu).
27. I. H. Witten, R. M. Neal and J. G. Cleary, "Arithmetic coding for data compression," Commun. of the ACM, vol. 30, pp. 520-540, June 1987.
28. L. Stuiver and A. Moffat, "Piecewise integer mapping for arithmetic coding," IEEE DCC Conf., March 1998.
29. T. Bell and B. McKenzie, "Compression of sparse matrices by arithmetic coding," IEEE DCC Conf., March 1998.
30. I. Kozintsev, J. Chou and K. Ramachandran, "Image transmission using arithmetic coding based continuous error detection," IEEE DCC, March 1998.
31. F. Ling and W. Li, "Dimensional adaptive arithmetic coding for image compression," IEEE ISCAS, Monterey, CA, June 1998.
32. L-S. Wang, "Basics of arithmetic coding," http://dodger.ee.ntu.edu.tw/lswang/arith/adapt.htm, 1996. (www.iscas.nps.navy.mil)
33. L. Labelle and D. Lauzon, "Arithmetic coding of a lossless contour-based representation of label images," IEEE ICIP, pp. MA8-2, Chicago, IL, Oct. 1998.
34. I. Balasingham, J. M. Lervik and T. A. Ramstad, "Lossless image compression using integer coefficient filter banks and class-wise arithmetic coding," ICASSP 98, vol. III, pp. 1349-1352, Seattle, WA, May 1998.
35. Web site: http://www.cis.ohio-state.edu/hypertext/faq/usenet/compression-faq/part1/faq-doc-12.html
36. C. Caini, G. Calarco and A. V. Coralli, "A modified arithmetic coder for subband audio compression," IEEE ISPACS, pp. 621-626, Melbourne, Australia, Nov. 1998.
37. J. B. Lee, J.-S. Cho and A. Eleftheriadis, "Optimal shape coding under buffer constraints," IEEE ICIP, pp. MA8-8, Chicago, IL, Oct. 1998.
38. J. Ostermann, "Efficient encoding of binary shapes using MPEG-4," IEEE ICIP, pp. MA8-9, Chicago, IL, Oct. 1998.
39. N. Brady, F. Bossen and N. Murphy, "Context-based arithmetic coding of 2D shape sequences," special session on shape coding, IEEE ICIP, Santa Barbara, CA, Oct. 1997.
40. B. Martins and S. Forchhammer, "Lossless, near-lossless and refinement coding of bilevel images," IEEE Trans. IP, vol. 8, pp. 601-613, May 1999.
41. R. L. Joshi, V. J. Crump, and T. R. Fischer, "Image subband coding using arithmetic coded trellis coded quantization," IEEE Trans. CSVT, vol. 5, pp. 515-523, Dec. 1995.
42. W. K. Pratt et al., "Combined symbol matching facsimile data compression," Proc. IEEE, vol. 68, pp. 786-796, July 1980.
43. P. W. Moo and X. Wu, "Resynchronization properties of arithmetic coding," IEEE ICIP '99, Kobe, Japan, Oct. 1999.
44. MPEG-4 parametric coder: HVXC (speech) and HILN (music) LBR; bit-sliced arithmetic coding is applied to spectral coefficients, ISO/IEC JTC1/SC29/WG11, MPEG 99/N2946, Oct. 1999.
45. D. Tzovaras, N. V. Boulgouris and M. G. Strintzis, "Lossless image compression based on optimal prediction, adaptive lifting and conditional arithmetic coding," submitted to IEEE Trans. IP, Dec. 1998.
46. I. Sodagar, B.-B. Chai and J. Wus, "A new error resilience technique for image compression using arithmetic coding," IEEE ICASSP 2000, Istanbul, Turkey, June 2000.
47. A. Moffat, R. Neal and I. H. Witten, "Arithmetic coding revisited," DCC 1995, IEEE Data Compression Conf., pp. 202-211, Snowbird, UT, March 1995.
48. R. R. Osorio and J. D. Bruguera, "Architectures for arithmetic coding in image compression," EUSIPCO 2000, Tampere, Finland, Sept. 2000. http://eusipco2000.cs.tut.fi
49. Andra, "A multi-bit arithmetic coding technique," IEEE ICIP, Vancouver, Canada, Sept. 2000.
50. S. A. Martucci, "Reversible compression of HDTV images using median adaptive prediction and arithmetic coding," IEEE ISCAS, pp. 1310-1313, 1990.
51. M. J. Weinberger, G. Seroussi and G. Sapiro, "LOCO-A: an arithmetic coding extension of LOCO-I," ISO/IEC JTC1/SC29/WG1 document N342, June 1996.
52. D. Gong and Y. He, "An efficient architecture for real-time content-based arithmetic coding," IEEE ISCAS, Geneva, Switzerland, May 2000. http://iscas.epfl.ch
53. R. A. Freking and K. K. Parhi, "Highly parallel arithmetic coding," IEEE DSP Workshop, Hunt, TX, Oct. 2000.
54. E. Baum, V. Harr and J. Speidel, "Improvement of H.263 encoding by adaptive arithmetic coding," IEEE Trans. CSVT, vol. 10, pp. 797-800, Aug. 2000.
55. J-K. Kim, K. H. Yong and C. W. Lee, "Document image compression by nonlinear binary subband decomposition and concatenated arithmetic coding," IEEE Trans. CSVT, vol. 10, pp. 1059-1067, Oct. 2000.
56. D. LeGall and A. Tabatabai, "Subband coding of digital images using symmetric short kernel filters and arithmetic coding techniques," IEEE ICASSP, pp. 761-765, New York, NY, 1988.
57. Proposal of the arithmetic coder for JPEG2000, ISO/IEC JTC1/SC29/WG1 N762, March 1998.
58. D.-Y. Chan, J.-F. Yang, and S.-Y. Chen, "Efficient connected-index finite-length arithmetic codes," IEEE Trans. CSVT, vol. 11, pp. 581-593, May 2001.
59. Gonzales et al., "DCT coding of motion video storage using adaptive arithmetic coding," Signal Processing: Image Communication, vol. 2, pp. 145-154, Aug. 1990.
60. D. Mukherjee and S. K. Mitra, "Arithmetic coded vector SPIHT with classified tree-multistage VQ for color image coding."
61. M. Ghanbari, "Arithmetic coding with limited past memory," IEE Electronics Letters, vol. 23, no. 13, pp. 1157-1159, June 1991.
62. A. Abu-Hajar and R. Sankar, "Wavelet based lossless image compression using partial SPIHT and bit plane arithmetic coding," IEEE ICASSP, vol. 4, pp. 3497-3500, March 2002.
63. ftp://ftp.imtc-files.org — all documents related to JVT (H.264 & MPEG-4 Part 10).
64. T. Guionnet and C. Guillemot, "Robust decoding of arithmetic codes for image transmission over error-prone channels," IEEE ICIP, Barcelona, Spain, 2003.
65. D. Marpe and T. Wiegand, "A highly efficient multiplication-free binary arithmetic coder and its application in video coding," IEEE ICIP, Barcelona, Spain, 2003.
66. K. Sugimoto et al., "Generalized motion compensation and arithmetic coding for matching pursuit coder," IEEE ICIP, Barcelona, Spain, 2003.
67. M. Grangetto, G. Olmo and P. Cosman, "Error correction by means of arithmetic codes: an application to resilient image transmission," IEEE ICIP, Barcelona, Spain, 2003.
68. E. Pasero and A. Montuori, "Neural network based arithmetic coding for real-time audio transmission on the TMS320C6000 DSP platform," IEEE ICASSP, vol. II, 2003.
69. D. Marpe, H. Schwarz and T. Wiegand, "Context-based adaptive binary arithmetic coding in the H.264/AVC video compression standard," IEEE Trans. CSVT, vol. 13, pp. 620-636, July 2003.
70. D. Hong and V. van der Schaar, "Arithmetic coding with adaptive context-tree weighting for the H.264 video coders," SPIE, vol. 5308, Jan. 2004.
71. B. Valentine and O. Sohm, "Optimizing the JPEG2000 binary arithmetic encoder for VLIW architectures," IEEE ICASSP 2004, Montreal, Canada, May 2004.
72. T. Guionnet and C. Guillemot, "Soft and joint source-channel decoding of quasi-arithmetic codes," EURASIP J. on Applied Signal Processing, vol. 2004, March 2004.
73. M. Grangetto, E. Magli and G. Olmo, "Error resilient MQ coder and MAP JPEG 2000 decoding," IEEE ICIP 2004, Singapore, Oct. 2004.
74. S. Bouchoux et al., "Implementation of JPEG2000 arithmetic decoder using dynamic reconfiguration of FPGA," IEEE ICIP 2004, Singapore, Oct. 2004.
75. M. Dyer, D. Taubman and S. Nooshabadi, "Improved throughput arithmetic coder for JPEG2000," IEEE ICIP 2004, Singapore, Oct. 2004.
76. M. Grangetto, E. Magli and G. Olmo, "Reliable JPEG2000 wireless imaging by means of error-correcting MQ coder," IEEE ICME, Taipei, Taiwan, June 2004.
77. S.-M. Lei, "On the finite-precision implementation of arithmetic codes," Journal of Visual Communication and Image Representation, vol. 6, no. 1, pp. 80-88, March 1995.
78. K.-B. Lee, J.-Y. Lin, and C.-W. Jen, "A multisymbol context-based arithmetic coding architecture for MPEG-4 shape coding," IEEE Trans. CSVT, vol. 15, pp. 283-295, Feb. 2005.
79. K-K. Ong et al., "A high throughput context-based adaptive arithmetic codec for JPEG2000," IEEE ISCAS 2002, Scottsdale, AZ, May 2002.
80. H. Shojania and S. Sudharsanan, "A VLSI architecture for high performance CABAC encoding," SPIE VCIP 2005, Beijing, China, July 2005.
81. L. Zhang, R. Zhang and J. Zhou, "Algorithm of incorporating error detection into H.264 CABAC," SPIE VCIP 2005, Beijing, China, July 2005.
82. M. Grangetto, P. Cosman and G. Olmo, "Joint source/channel coding and MAP decoding of arithmetic coding," IEEE Trans. Commun., vol. 53, 2005.
83. A. Brezina, J. Polec and M. Hudak, "Context based arithmetic coding of segmented images," 5th EURASIP Conf., EC-SIP-M 2005, Smolenice, Slovakia, June-July 2005.
84. M. Dyer, S. Nooshabadi and D. Taubman, "Reduced latency arithmetic decoder for JPEG2000 block decoding," IEEE ISCAS 2005, Kobe, Japan, May 2005.
85. D. LeGall and A. Tabatabai, "Subband coding of digital images using short kernel filters and arithmetic coding techniques," IEEE ICASSP 1988, pp. 761-764, 1988.
86. T. M. Cover and J. A. Thomas, "Elements of Information Theory," New York: John Wiley & Sons, 1991.
87. E. N. Gilbert and E. F. Moore, "Variable-length binary encodings," Bell Syst. Tech. J., vol. 38, pp. 933-967, 1959.
88. J. J. Rissanen, "Generalized Kraft inequality and arithmetic coding," IBM J. Res. Dev., vol. 20, pp. 198-203, 1976.
89. J. J. Rissanen and G. G. Langdon, "Arithmetic coding," IBM J. Res. Dev., vol. 23, no. 2, pp. 149-162, 1979.
90. F. Rubin, "Arithmetic stream coding using fixed precision registers," IEEE Trans. Inform. Theory, vol. 25, no. 6, pp. 672-675, 1979.
91. R. Pasco, "Source coding algorithms for fast data compression," Ph.D. thesis, Stanford University, 1976.
92. J. Chen et al., "Efficient video coding using legacy algorithmic approaches," IEEE Trans. Multimedia, accepted.
93. G. Cote et al., "H.263+: video coding at low bit rates," IEEE Trans. CSVT, vol. 8, pp. 849-866, Nov. 1998.
94. K. N. Ngan, D. Chai and A. Mallik, "Very low rate video coding using H.263 codes," IEEE Trans. CSVT, vol. 6, pp. 308-312, June 1996.
95. Lin, Y. J. Wang and T. H. Fan, "Compaction of ordered dithered images using arithmetic coding," IEEE Trans. IP, vol. 10, pp. 797-802, May 2001.
96. ftp://ftp.ucalgary.edu/ [ftp site]
97. D. Marpe et al., "Context-based adaptive binary arithmetic coding in the H.264/AVC video compression standard," IEEE Trans. CSVT, vol. 13, no. 7, pp. 620-636, July 2003.
98. V. Sze and M. Budagavi, "High throughput CABAC entropy coding in HEVC," IEEE Trans. CSVT, vol. 22, no. 12, pp. 1778-1791, Dec. 2012.
99. Q. Yu et al., "High-throughput and low-complexity binary arithmetic decoder based on logarithmic domain," IEEE ICIP, Quebec City, Canada, Sept. 2015.
100. V. Sze and D. Marpe, "Entropy coding in HEVC," chapter 8 in V. Sze, M. Budagavi and G. J. Sullivan (Eds.), "High Efficiency Video Coding (HEVC): Algorithms and Architectures," Springer, 2014.
101. Wikipedia, "Truncated binary encoding," 2014. [Online]. Available: http://en.wikipedia.org/wiki/Truncated_binary_encoding
102. E. Belyaev et al., "An efficient adaptive binary arithmetic coder with low memory requirement," IEEE Journal of Selected Topics in Signal Processing, vol. 7, pp. 1053-1061, Dec. 2013.