International Journal of Application or Innovation in Engineering & Management (IJAIEM)
Web Site: www.ijaiem.org Email: editor@ijaiem.org, editorijaiem@gmail.com
Volume 2, Issue 12, December 2013
ISSN 2319 - 4847
Review of Image Compression and Comparison of its Algorithms

1 Nirbhay Kashyap, 2 Dr. Shailendra Narayan Singh

1 Assistant Professor, Department of Computer Science and Engg., Amity School of Engineering and Technology, Amity University Uttar Pradesh
2 Department of Computer Science and Engg., Amity School of Engineering and Technology, Amity University Uttar Pradesh
Abstract
Signal and image processing is a field that has been revolutionized by the application of computer and imaging technology. Uncompressed multimedia data (graphics, audio and video) have become very difficult to manage because they require considerable storage capacity and transmission bandwidth. Several techniques have been developed to address this issue. This paper gives an overview of popular image compression algorithms based on wavelet, JPEG/DCT, VQ, and fractal approaches. We review and discuss the advantages and disadvantages of these algorithms for compressing grayscale images. Among the available techniques, wavelet transforms have been found to be particularly useful.
Keywords: Image compression; DCT; JPEG; VQ; Fractal; Wavelet.
I. Introduction:
Wavelets offer a powerful set of tools for handling fundamental problems in imaging technology such as image compression, noise reduction, image enhancement, texture analysis, diagnosis of heart conditions, and speech recognition.
Wavelets, studied in mathematics, quantum physics, statistics, and signal processing, are basis functions that represent signals in different frequency bands, each with a resolution matching its scale. They have been successfully applied to image compression, enhancement, analysis, classification, and retrieval. A common characteristic of most images is that neighboring pixels are correlated and therefore contain redundant information. The foremost task is therefore to find a less correlated representation of the image. Two fundamental components of compression are redundancy reduction and irrelevancy reduction.
Redundancy reduction aims at removing duplication from the signal source (image/video). Irrelevancy reduction omits parts of the signal that will not be noticed by the signal receiver, namely the Human Visual System (HVS). In general, three types of redundancy can be identified:
A. Coding Redundancy
A code is a system of symbols (letters, numbers, bits, and the like) used to represent a body of information or a set of events. Each piece of information or event is assigned a sequence of code symbols, called a code word. The number of symbols in each code word is its length. The 8-bit codes used to represent the intensities in most 2-D intensity arrays contain more bits than are needed to represent those intensities [5].
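To make this concrete, the following minimal sketch (assuming NumPy, with a synthetic low-contrast image standing in for a real one) estimates the first-order entropy of an 8-bit grayscale image, i.e. the average number of bits per pixel a variable-length code would actually need, which is typically well below the 8 bits of a fixed-length code.

# Minimal sketch: estimate the average bits/pixel actually needed for an
# 8-bit grayscale image via its first-order entropy (assumes NumPy).
import numpy as np

def first_order_entropy(img: np.ndarray) -> float:
    """Shannon entropy of the pixel-intensity histogram, in bits per pixel."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]                      # ignore empty bins
    return float(-(p * np.log2(p)).sum())

# Example with a synthetic low-contrast image (real images behave similarly):
img = (128 + 10 * np.random.randn(256, 256)).clip(0, 255).astype(np.uint8)
print(f"fixed-length code: 8.00 bits/pixel, entropy: {first_order_entropy(img):.2f} bits/pixel")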
B. Spatial Redundancy and Temporal Redundancy
Because the pixels of most 2-D intensity arrays are correlated spatially, information is unnecessarily replicated in the
representations of the correlated pixels. In a video sequence, temporally correlated pixels also duplicate information.
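The spatial correlation referred to above can be checked directly. The short sketch below (assuming NumPy, and using a lightly smoothed synthetic image as a stand-in for a natural one) measures the correlation coefficient between horizontally adjacent pixels.

# Minimal sketch: measure how strongly horizontally adjacent pixels are
# correlated (assumes NumPy; any 2-D grayscale array could replace `img`).
import numpy as np

img = (128 + 10 * np.random.randn(256, 256)).clip(0, 255)
# Average adjacent columns so neighbours are correlated, as in natural images.
img = (img[:, :-1] + img[:, 1:]) / 2.0

left, right = img[:, :-1].ravel(), img[:, 1:].ravel()
r = np.corrcoef(left, right)[0, 1]
print(f"correlation of horizontally adjacent pixels: {r:.3f}")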
C. Irrelevant Information
Most 2-D intensity arrays contain information that is ignored by the human visual system and extraneous to the intended
use of the image. It is redundant in the sense that it is not used [4].
Image compression research aims at reducing the number of bits needed to represent an image by removing the spatial
and spectral redundancies as much as possible.
II. Types of Image compression.
Image compression can be lossy or lossless. Lossless image compression is preferred for archival purposes and often for
medical imaging, technical drawings, clip art, or comics. This is because lossy compression methods, especially when
used at low bit rates, introduce compression artifacts [1]. A much higher compression ratio can be obtained if some error, which is usually difficult to perceive, is allowed between the decompressed image and the original image. This is lossy compression. In many cases, it is not necessary or even desirable that there be error-free reproduction of the original
image. For example, if some noise is present, then the error due to that noise will usually be significantly reduced via
some denoising method. In such a case, the small amount of error introduced by lossy compression may be acceptable.
Another application where lossy compression is acceptable is the fast transmission of still images over the Internet. As an intuitive illustration of the limits of lossless compression, most people know that a compressed ZIP file is smaller than the original file, but repeatedly compressing the file will not reduce its size to nothing and will in fact usually increase it [10].
III. Various Methods of compression:
A. JPEG:
JPEG is a DCT-based image coding standard, and JPEG/DCT still-image compression has become the dominant standard for this task. JPEG is designed for compressing full-color or grayscale images of natural, real-world scenes.
In this method, an image is first partitioned into non-overlapping 8×8 blocks. A discrete cosine transform (DCT) is applied to each block to convert the gray levels of pixels in the spatial domain into coefficients in the frequency domain [3].
The coefficients are normalized by different scales according to the quantization table provided by the JPEG standard, which is derived from psychovisual evidence. The quantized coefficients are rearranged in a zig-zag order and further compressed by an efficient lossless coding strategy such as run-length coding, arithmetic coding, or Huffman coding. Information loss occurs only in the coefficient quantization step. The JPEG standard defines a single 8×8 quantization table for all images, which may not always be appropriate. To achieve better decoding quality on various images at the same compression with the DCT approach, an adaptive quantization table may be used instead of the standard one [2].
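As an illustration of the pipeline described above, the following sketch applies a 2-D DCT, quantization, and the inverse transform to a single 8×8 block. It assumes NumPy and SciPy, and the quantization matrix is an illustrative one, not the table defined by the JPEG standard.

# Minimal sketch of the JPEG-style pipeline on one 8x8 block: forward 2-D DCT,
# quantization, dequantization, inverse DCT (assumes NumPy and SciPy; the
# quantization matrix below is illustrative, not the standard JPEG table).
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

# Illustrative quantization matrix: coarser steps for higher frequencies.
i, j = np.meshgrid(np.arange(8), np.arange(8), indexing='ij')
Q = 16.0 + 4.0 * (i + j)

block = np.random.randint(0, 256, (8, 8)).astype(np.float64) - 128.0  # level shift
coeffs = dct2(block)
quantized = np.round(coeffs / Q)              # the only lossy step
reconstructed = idct2(quantized * Q) + 128.0
print("max reconstruction error:", np.abs(reconstructed - (block + 128.0)).max())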
B. Wavelet Transform:
Wavelets are functions defined over a finite interval and having an average value of zero. The basic idea of the wavelet transform is to represent an arbitrary function f(t) as a superposition of a set of such wavelets or basis functions. These basis functions, or baby wavelets, are obtained from a single prototype wavelet called the mother wavelet by dilations or contractions (scaling) and translations (shifts). The discrete wavelet transform of a finite-length signal x(n) having N components, for example, can be expressed by an N×N matrix.
Despite the advantages of DCT-based JPEG compression schemes, namely simplicity, satisfactory performance, and the availability of special-purpose hardware for implementation, they are not without shortcomings. Since the input image must be partitioned into blocks, correlation across the block boundaries is not eliminated. This results in noticeable and annoying "blocking artifacts", particularly at low bit rates [7].
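The sketch below illustrates the wavelet approach under stated assumptions (NumPy and the PyWavelets package, with a random array as a stand-in image): the image is decomposed with a multilevel 2-D DWT, the smallest coefficients are discarded, and the image is reconstructed from the remainder.

# Minimal sketch of wavelet-based compression: multilevel 2-D DWT, discard the
# smallest detail coefficients, reconstruct (assumes NumPy and PyWavelets).
import numpy as np
import pywt

img = np.random.rand(256, 256)              # stand-in for a grayscale image

coeffs = pywt.wavedec2(img, wavelet='haar', level=3)
arr, slices = pywt.coeffs_to_array(coeffs)  # flatten coefficients into one array

# Keep only the largest 10% of coefficients by magnitude (the lossy step).
threshold = np.quantile(np.abs(arr), 0.90)
arr[np.abs(arr) < threshold] = 0.0

coeffs_kept = pywt.array_to_coeffs(arr, slices, output_format='wavedec2')
approx = pywt.waverec2(coeffs_kept, wavelet='haar')

mse = np.mean((approx[:256, :256] - img) ** 2)
print("kept 10% of coefficients, MSE:", mse)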
C. VQ Compression:
Vector quantization (VQ) is a technique from signal processing that models probability density functions by the distribution of prototype vectors. It works by encoding values from a multidimensional vector space into a finite set of values from a discrete subspace of lower dimension. A lower-dimensional vector requires less storage space, so the data is compressed. Due to the density-matching property of vector quantization, the compressed data have errors that are inversely proportional to their density [6].
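A minimal sketch of block-based vector quantization is given below, assuming NumPy and SciPy's scipy.cluster.vq module; the codebook size and block size are illustrative choices, not values prescribed by any standard.

# Minimal sketch of vector quantization for image blocks: learn a small
# codebook with k-means and encode each 4x4 block by its nearest codeword
# (assumes NumPy and SciPy).
import numpy as np
from scipy.cluster.vq import kmeans, vq

img = np.random.rand(256, 256)                              # stand-in grayscale image
B = 4                                                        # block size
blocks = (img.reshape(256 // B, B, 256 // B, B)
             .swapaxes(1, 2)
             .reshape(-1, B * B))                            # one 16-D vector per block

codebook, _ = kmeans(blocks, 64)                             # 64 codewords
indices, _ = vq(blocks, codebook)                            # encode: one index per block

decoded_blocks = codebook[indices]                           # decode: simple table lookup
decoded = (decoded_blocks.reshape(256 // B, 256 // B, B, B)
                         .swapaxes(1, 2)
                         .reshape(256, 256))
print("bits per pixel:", np.log2(len(codebook)) / (B * B))   # 6 bits per 16 pixels
print("MSE:", np.mean((decoded - img) ** 2))

With a 64-word codebook and 4×4 blocks, each block is represented by a 6-bit index, i.e. 0.375 bpp before any entropy coding of the indices, which is why the decoder in the comparison table is described as simple.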
D. Fractal Compression:
Fractal compression is a lossy compression method for digital images based on fractals. The method is best suited for textures and natural images, relying on the fact that parts of an image often resemble other parts of the same image. Fractal algorithms convert these parts into mathematical data called "fractal codes", which are used to recreate the encoded image.
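The following is a highly simplified sketch of the block-matching step behind fractal encoding, assuming NumPy. It omits the rotations and flips of domain blocks and the iterative decoder, and it uses an exhaustive search, which is exactly why fractal encoding is slow.

# Minimal sketch of fractal encoding: for every 8x8 range block, find the
# 16x16 domain block whose downsampled, contrast/brightness-adjusted copy
# matches it best; only the mapping (domain index, scale, offset) would be
# stored (assumes NumPy; exhaustive search, so it is slow on real images).
import numpy as np

img = np.random.rand(64, 64)
R, D = 8, 16

def downsample(block):                        # 16x16 -> 8x8 by 2x2 averaging
    return block.reshape(R, 2, R, 2).mean(axis=(1, 3))

domains = [downsample(img[y:y+D, x:x+D])
           for y in range(0, 64 - D + 1, D) for x in range(0, 64 - D + 1, D)]

codes = []
for y in range(0, 64, R):
    for x in range(0, 64, R):
        rng = img[y:y+R, x:x+R]
        best = None
        for k, dom in enumerate(domains):
            # Least-squares fit rng ~ s * dom + o (contrast s, brightness o).
            s, o = np.polyfit(dom.ravel(), rng.ravel(), 1)
            err = np.mean((s * dom + o - rng) ** 2)
            if best is None or err < best[0]:
                best = (err, k, s, o)
        codes.append(best[1:])                # (domain index, s, o) per range block

print("stored", len(codes), "fractal codes for", (64 // R) ** 2, "range blocks")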
IV. Comparison of Various methods:
Method  | Advantages                                  | Disadvantages
Wavelet | High compression ratio                      | Coefficient quantization; bit allocation
JPEG    | Current standard                            | Coefficient (DCT) quantization; bit allocation
VQ      | Simple decoder; no coefficient quantization | Slow codebook generation; small bpp
Fractal | Good mathematical encoding frame            | Slow encoding
PSNR
Encoding time Decoding time
Wavelet
36.71
0.8 sec
0.7 sec
JPEG
34.27
0.2 sec
0.2 sec
VQ
28.26
6.0 sec
0.7 sec
Fractal
27.21
6.3 hrs
3.5 sec
Performance of Coding Algorithms on A 400×400 Fingerprint Image Of 0.5 bpp
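The PSNR values reported in this and the following tables are computed in the usual way for 8-bit images; a minimal sketch (assuming NumPy, with synthetic images standing in for the originals) is shown below.

# Minimal sketch: PSNR (in dB) between an original and a decoded 8-bit image,
# the quality metric reported in the tables (assumes NumPy).
import numpy as np

def psnr(original: np.ndarray, decoded: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((original.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

a = np.random.randint(0, 256, (400, 400)).astype(np.uint8)
b = np.clip(a + np.random.randint(-5, 6, a.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(a, b):.2f} dB")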
Algorithm | PSNR (dB) | Encoding time | Decoding time
Wavelet   | 32.47     | 0.7 sec       | 0.5 sec
JPEG      | 29.64     | 0.2 sec       | 0.2 sec
VQ        | N/A       | N/A           | N/A
Fractal   | N/A       | N/A           | N/A
Performance of coding algorithms on a 400×400 fingerprint image at 0.25 bpp
Algorithm | PSNR (dB) | Encoding CPU time | Decoding CPU time
Wavelet   | 34.66     | 0.35 sec          | 0.27 sec
JPEG      | 31.73     | 0.12 sec          | 0.12 sec
VQ        | 29.28     | 2.45 sec          | 0.18 sec
Fractal   | 29.04     | 5.65 hrs          | 1.35 sec
Performance of coding algorithms on a 256×256 Lena image
Method  | Compression ratio
Wavelet | >> 32
JPEG    | <= 50
VQ      | < 32
Fractal | >= 16
Performance on the basis of compression ratio of different coding algorithms
Fig. 1: Original image of Lena.
Fig. 3: Decoded image of Lena by (a) Wavelet, (b) JPEG, (c) VQ, and (d) Fractal algorithms.
Fig. 2: Original image of fingerprint.
Fig. 4: Decoded fingerprints by (a) Wavelet, (b) JPEG, (c) VQ, and (d) Fractal algorithms.
V. Wavelet-Based Progressive Compression:
An image can be represented by a linear combination of the wavelet basis functions, and compression can be performed on the wavelet coefficients [8]. Since wavelet transforms decompose images into several resolutions, the coefficients, in their own right, form a successive approximation of the original image. For this reason, wavelet transforms are naturally suited for progressive image compression algorithms.
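A minimal sketch of this successive-approximation property, assuming NumPy and PyWavelets with a synthetic stand-in image, is given below: the image is reconstructed first from the coarsest approximation alone and then with progressively more detail levels, and the error falls at each step.

# Minimal sketch of progressive reconstruction: rebuild the image from the
# coarse approximation alone, then keep adding finer detail levels
# (assumes NumPy and PyWavelets).
import numpy as np
import pywt

img = np.random.rand(256, 256)                       # stand-in grayscale image
coeffs = pywt.wavedec2(img, wavelet='haar', level=4)

for levels_used in range(len(coeffs)):
    partial = [coeffs[0]]                             # coarsest approximation
    for i, detail in enumerate(coeffs[1:], start=1):
        # Zero out the detail bands not yet "transmitted".
        partial.append(detail if i <= levels_used else
                       tuple(np.zeros_like(d) for d in detail))
    approx = pywt.waverec2(partial, wavelet='haar')
    mse = np.mean((approx - img) ** 2)
    print(f"detail levels used: {levels_used}, MSE: {mse:.5f}")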
• In 1992, DeVore et al. proposed an efficient image compression algorithm that preserves only the largest coefficients (which are scalar quantized) and their positions.
• In the same year, Lewis and Knowles published their image compression work using the 2-D wavelet transform and a tree-structured predictive algorithm to exploit the similarity across frequency bands of the same orientation [9].
• Many current progressive compression algorithms apply quantization to wavelet transform coefficients. This approach became more widely used after Shapiro's invention of the zero-tree structure, a method of grouping wavelet coefficients across different scales to take advantage of the hidden correlation among coefficients. A major breakthrough in performance was achieved, and much subsequent research has built on the zero-tree idea.
• A very significant improvement was made by Said and Pearlman, referred to as the S & P algorithm. This algorithm was applied to a large database of mammograms by a group of researchers at Stanford University and was shown to be highly efficient even by real statistical clinical-quality evaluations. An important advantage of the S & P algorithm and many other progressive compression algorithms is their low encoding and decoding complexity. No training is needed, since simple scalar quantization of the coefficients is applied in the compression process. However, by trading off complexity, the S & P algorithm has been improved by tuning the zero-tree structure to specific data [11].
• A progressive transmission algorithm with automatic security-filtering features for on-line medical image distribution using Daubechies' wavelets has been developed by Wang, Li, and Wiederhold of Stanford University.
• Other work in this area includes that of LoPresto et al. on image coding based on mixture modeling of wavelet coefficients and a fast estimation-quantization framework.
• Villasenor et al. evaluated wavelet filters for compression, and Rogers and Cosman studied fixed-length packetization of wavelet zero-trees.
• A study by a group of radiologists in Germany concluded that only wavelets provided an accurate review of low-contrast details at a compression ratio of 1:13, in a test that included wavelet, JPEG, and fractal techniques.
• A modified version of the LBG algorithm using Partial Search Partial Distortion has been presented for coding the wavelet coefficients to speed up codebook generation. The proposed scheme can save 70-80% of the vector quantization encoding time compared with a full search and reduces arithmetic complexity without sacrificing performance.
VI. Conclusion:
We have discussed the role of wavelets in imaging technology. The main types of redundancy and irrelevancy in images have been presented, and the different compression methods have been examined on the Lena and fingerprint images. All four approaches are satisfactory for the fingerprint image at 0.5 bpp; however, the wavelet method achieves the highest PSNR. At very low bit rates, for example 0.25 bpp or lower, the wavelet approach is superior to the other approaches. The VQ and fractal approaches are not appropriate for low-bit-rate compression.
References:
[1] Adams CN, Cosman PC, Fajardo L, Gray RM, Ikeda D, Olshen RA, Williams M. Evaluating quality and utility of
digital mammograms and lossy compressed digital mammograms. In: Proc Workshop on Digital Mammography.
Amsterdam: Elsevier, 1996:169-76.
[2] Antonini M, Barlaud M, Mathieu P, Daubechies I. Image coding using wavelet transform. IEEE Trans. on Image Processing. 1992; 1(2):205-20.
[3] Beylkin G, Coifman R, Rokhlin V. Fast wavelet transforms and numerical algorithms. Comm. Pure Appl. Math.
1991; 44:141-83.
[4] Bouman CA, Shapiro M. A multiscale random field model for Bayesian image segmentation. IEEE Trans. on Image
Processing. 1994; 3(2):162-77.
[5] Brammer MJ. Multidimensional wavelet analysis of functional magnetic resonance images. Hum Brain Mapp.
1998;6(5-6):378-82.
[6] Cohen A, Daubechies I, Vial P. Wavelets on the interval and fast wavelet transforms. Appl. Comput. Harm. Anal.
1993;1:54-82.
[7] Crouse MS, Nowak RD, Baraniuk RG. Wavelet based statistical signal processing using hidden Markov models.
IEEE Trans. on Signal Processing. 1998;46(4):886-902.
[8] Crowe JA, Gibson NM, Woolfson MS, Somekh MG. Wavelet transform as a potential tool for ECG analysis and
compression. J Biomed Eng. 1992 May;14(3):268-72.
[9] Daubechies I. Orthonormal bases of compactly supported wavelets. Comm. Pure and Appl. Math. 1988;41:909-96.
[10] DeVore RA, Jawerth B, Lucier BJ. Image compression through wavelet transform coding. IEEE Trans. on
Information Theory. 1992;38:719-46.
[11] Donoho DL. Smooth wavelet decomposition with blocky coefficient kernels. In: Recent Advances in Wavelet Analysis. Boston: Academic Press, 1994:1-43.
AUTHORS
Dr. Shailendra Narayan Singh works at Amity University Uttar Pradesh and has experience in teaching and administration. He received his B.Tech., M.Tech., and Ph.D. in Computer Science and Engineering. His research area is software reusability for aspect-oriented and object-oriented software.
Dr. Shailendra Narayan Singh has more than fifteen years of teaching experience in computer science and engineering at Indian and foreign universities. Dr. S. N. Singh is the author of four books: Operating System, Fundamentals of Internet, and Software Project Management Part-I and Part-II. Teaching is his first love, and he participates in various national and international conferences and seminars.
Mr. Nirbhay Kashyap works at Amity University Uttar Pradesh and has experience in teaching and administration. He received his B.Tech. and M.Tech. in Computer Science and Engineering. His research areas are digital image processing, soft computing, and image compression. He is also a CCNA-certified professional.