
International Journal of Engineering Trends and Technology (IJETT) – Volume 13 Issue 3- July 2014

Comparative Analysis of JPEG2000 Image Compression with Other Image Compression Standards using Discrete Wavelet Transforms Technique

Pallavi B #1, Sunil M P #2

#1 Department of Electronics and Communication Engineering, Jain University
#2 Assistant Professor, Department of Electronics and Communication Engineering, Jain University
Bangalore, India

Abstract— With the increasing use of multimedia technologies, image compression requires higher performance as well as new features. Compression is a process that creates a compact data representation for storage and transmission purposes. Media compression usually involves special compression tools because media data differs from generic data: a generic data file, such as a computer executable program or a Word document, must be compressed losslessly, since even a single bit error may render the data useless. To address these needs a new standard has been developed, JPEG2000. It provides rate-distortion and subjective image quality performance superior to existing standards, and it offers features and functionalities that current standards either cannot address efficiently or, in many cases, cannot address at all. JPEG2000 is a new format for image compression, developed to replace the popular JPEG format, with advantages such as higher compression ratios, a lossless mode, progressive downloads and error resilience. In this paper we analyze the performance of the JPEG2000 compression standard and compare it with other compression standards, namely JPEG (Joint Photographic Experts Group) and L-JPEG (Lossless JPEG). We investigate the PSNR values of images compressed using all three standards; the results show that the "best" standard is the one achieving the highest PSNR value.

Keywords— Compression, JPEG2000, JPEG, L-JPEG

I. INTRODUCTION

Compressing an image is significantly different from compressing raw binary data. General-purpose compression programs can of course be used to compress images, but the result is less than optimal, because images have statistical properties that can be exploited by encoders specifically designed for them. Moreover, some of the finer details in the image can be sacrificed to save a little more bandwidth or storage space, which means that lossy compression techniques can be used in this area.

Uncompressed multimedia (graphics, audio and video) data requires considerable storage capacity and transmission bandwidth. Despite rapid progress in mass-storage density, processor speeds and digital communication system performance, demand for data storage capacity and data transmission bandwidth continues to outstrip the capabilities of available technologies. The recent growth of data-intensive, multimedia-based web applications has not only sustained the need for more efficient ways to encode signals and images but has also made compression of such signals central to storage and communication technology. For still image compression, the Joint Photographic Experts Group (JPEG) standard has been established by the ISO (International Organization for Standardization) and the IEC (International Electrotechnical Commission). The performance of these coders generally degrades at low bit-rates, mainly because of the underlying block-based Discrete Cosine Transform (DCT) scheme. The lossless mode is based on a completely different algorithm, which uses a predictive scheme: the prediction is based on the nearest three causal neighbors, and seven different predictors are defined (the same one is used for all samples). The prediction error is entropy coded with Huffman coding. Hereafter we refer to this mode as L-JPEG; a short illustrative sketch of its prediction step is given below.
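As an illustration of this predictive scheme, the following Python/NumPy sketch computes the prediction-error image for one of the seven standard predictors. It is only an illustrative approximation (boundary samples are handled simplistically and the Huffman entropy coding stage is omitted), not the normative L-JPEG algorithm; the function name and defaults are our own.

```python
import numpy as np

def ljpeg_residuals(img, predictor=4):
    """Prediction stage of lossless JPEG (L-JPEG), sketched for illustration.

    img       : 2-D grayscale image (integer array)
    predictor : index 1..7 selecting one of the seven standard predictors
    Returns the prediction-error image that would then be Huffman coded.
    Boundary pixels simply use 0 for missing neighbours here; the real
    standard defines special cases for the first row and column.
    """
    x = img.astype(np.int32)
    a = np.zeros_like(x); a[:, 1:] = x[:, :-1]     # left neighbour
    b = np.zeros_like(x); b[1:, :] = x[:-1, :]     # neighbour above
    c = np.zeros_like(x); c[1:, 1:] = x[:-1, :-1]  # upper-left neighbour

    prediction = {
        1: a,
        2: b,
        3: c,
        4: a + b - c,
        5: a + (b - c) // 2,
        6: b + (a - c) // 2,
        7: (a + b) // 2,
    }[predictor]
    return x - prediction  # residuals; the decoder adds them back sample by sample
```

Because the decoder can rebuild each sample exactly from previously decoded neighbours and the decoded residual, the scheme is lossless.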

The progressive and hierarchical modes of JPEG are both lossy and differ from the baseline mode only in the way the DCT coefficients are coded or computed, respectively. They allow reconstruction of a lower-quality or lower-resolution version of the image, respectively, by partial decoding of the compressed bit stream. The progressive mode encodes the quantized coefficients by a mixture of spectral selection and successive approximation, while the hierarchical mode uses a pyramidal approach to computing the DCT coefficients in a multi-resolution way. More recently, the wavelet transform has emerged as a cutting-edge technology within the field of image compression. Wavelet-based coding provides substantial improvements in picture quality at higher compression ratios, and over the past few years a variety of powerful and sophisticated wavelet-based schemes for image compression have been developed and implemented. Because of these many advantages, the top contenders for the JPEG2000 standard were all wavelet-based compression algorithms.

This paper is organized as follows. Section II presents the literature survey. Section III describes the block diagram of the proposed scheme and its encoding and decoding stages. The experimental results are discussed in Section IV, and Section V concludes the paper.

II. LITERATURE SURVEY

Since the mid-80s, members from both the International Telecommunication Union (ITU) and the International Organization for Standardization (ISO) have been working together to establish a joint international standard for the compression of grayscale and color still images. This effort has been known as JPEG, the Joint Photographic Experts Group (the "joint" in JPEG refers to the collaboration between ITU and ISO). Officially, JPEG corresponds to the ISO/IEC international standard 10918-1, digital compression and coding of continuous-tone (multilevel) still images, or to the ITU-T Recommendation T.81; the text in both the ISO and ITU-T documents is identical. After evaluating a number of coding schemes, the JPEG members selected a DCT-based method in 1988. From 1988 to 1990, the JPEG group continued its work by simulating, testing and documenting the algorithm. JPEG became a Draft International Standard (DIS) in 1991 and an International Standard (IS) in 1992 [1-3]. With the continual expansion of multimedia and Internet applications, the needs and requirements of the technologies used grew and evolved. In March 1997 a new call for contributions was launched for the development of a new standard for the compression of still images, JPEG2000 [4,5]. This project, JTC 1.29.14 (15444), was intended to create a new image coding system for different types of still images (bi-level, gray-level, color, multi-component), with different characteristics (natural images, scientific, medical, remote sensing, text, rendered graphics, etc.), allowing different imaging models (client/server, real-time transmission, image library archival, limited buffer and bandwidth resources, etc.), preferably within a unified system.

This coding system should provide low bit-rate operation with rate-distortion and subjective image quality performance superior to existing standards, without sacrificing performance at other points in the rate-distortion spectrum, while incorporating many interesting features. The standard is intended to complement, not to replace, the current JPEG standards.

One of the aims of the standardization committee has been the development of Part I, which could be used on a royalty- and fee-free basis. This is important for the standard to become widely accepted, in the same manner as the original JPEG with Huffman coding is now. The standardization process, coordinated by JTC1/SC29/WG1 of ISO/IEC, had already (as of August 2000) produced the Final Draft International Standard (FDIS), and the International Standard (IS) was scheduled for December 2000 [9]. Only editorial changes were expected at that stage, and therefore there were to be no more technical or functional changes in Part I of the standard. The JPEG2000 standard provides a set of features that are of importance to many high-end and emerging applications by taking advantage of new technologies. It addresses areas where current standards fail to produce the best quality or performance, and provides capabilities to markets that currently do not use compression. The markets and applications better served by the JPEG2000 standard are the Internet, color facsimile, printing, scanning (consumer and pre-press), digital photography, remote sensing, mobile, medical imagery, digital libraries/archives and e-commerce. Each application area imposes some requirements that the standard should fulfill.

Some of the most important features that this standard should possess are the following [4-5]:

· Superior low bit-rate performance

· Lossless and lossy compression

· Progressive transmission by pixel accuracy and resolution

· Region-of-Interest Coding

· Random code stream access and processing

· Robustness to bit-errors

· Open architecture

· Content-based description [6]

· Side channel spatial information (transparency)

· Protective image security

· Continuous-tone and bi-level compression

III. BLOCK DIAGRAM

Fig. 3.1. JPEG encoder

Fig. 3.2. JPEG decoder

The JPEG2000 compression process involves an encoder [7][8] and a decoder, and is divided into five stages. In the first stage the input image is pre-processed by dividing it into non-overlapping rectangular tiles. The input image here is a grayscale image; uncompressed image formats are large, so compression is needed to reduce storage and transmission bandwidth. The unsigned samples are then reduced by a constant (DC level shift) to make them symmetric around zero, and finally a multi-component transform is performed. In the second stage, the discrete wavelet transform (DWT) is applied. The purpose served by the wavelet transform is that it produces a large number of coefficients with zero, or near-zero, magnitudes. In the third stage quantization is performed: after the wavelet transform, the coefficients are scalar-quantized to reduce the number of bits needed to represent them, at the expense of quality. The output is a set of integers which have to be encoded bit-by-bit. The parameter that controls the final quality is the quantization step: the greater the step, the greater the compression and the loss of quality. With a quantization step equal to 1, no quantization is performed.

Multiple levels of the DWT [9] give a multi-resolution representation of the image: the lowest resolution contains the low-pass image, while the higher resolutions contain the high-pass detail. These resolutions are further divided into smaller blocks known as code-blocks, and each code-block is encoded independently. In the fourth stage, the quantized DWT coefficients are divided into bit planes and coded through multiple passes using embedded block coding with optimized truncation (EBCOT) [10][11] to give the compressed byte stream. In the fifth and final stage, the compressed byte stream is arranged into packets based on resolution, precincts, components and layers. At the decoder, the compressed byte stream is entropy decoded, the coefficients are dequantized, the inverse DWT (IDWT) is applied to the dequantized coefficients, and the original image is finally reconstructed. A minimal sketch of these stages is given below.
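As a rough illustration of the DWT and quantization stages described above, the following Python sketch uses NumPy and the PyWavelets library ("bior4.4" corresponds to the CDF 9/7 filter pair used by lossy JPEG2000). It is a simplified sketch under those assumptions: the tile is assumed to have been extracted already, and EBCOT entropy coding and code-stream packaging are not modelled.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def compress_decompress(tile, levels=3, step=8.0, wavelet="bior4.4"):
    """Illustrative sketch of the lossy pipeline stages described above:
    DC level shift -> multi-level DWT -> uniform scalar quantization ->
    dequantization -> inverse DWT -> inverse level shift.
    """
    shifted = tile.astype(np.float64) - 128.0             # level shift around zero
    coeffs = pywt.wavedec2(shifted, wavelet, level=levels)

    # Uniform scalar quantization of every subband (step = 1 keeps full precision)
    q_coeffs = [np.round(coeffs[0] / step)]
    q_coeffs += [tuple(np.round(band / step) for band in detail)
                 for detail in coeffs[1:]]

    # ---- decoder side: dequantize, inverse DWT, undo the level shift ----
    d_coeffs = [q_coeffs[0] * step]
    d_coeffs += [tuple(band * step for band in detail) for detail in q_coeffs[1:]]
    recon = pywt.waverec2(d_coeffs, wavelet) + 128.0
    return np.clip(recon, 0, 255)[:tile.shape[0], :tile.shape[1]]
```

Decreasing `step` towards 1 reduces the quantization loss, matching the behaviour described above; increasing it raises the compression at the cost of quality.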

IV. RESULTS AND DISCUSSION

Fig. 4.1: Barbara image: (a) JPEG compressed (61.87 dB), (b) JPEG2000 compressed (76.09 dB), (c) L-JPEG compressed (66.24 dB)

Fig. 4.2: Lena image: (a) JPEG compressed (60.23 dB), (b) JPEG2000 compressed (101.90 dB), (c) L-JPEG compressed (69.81 dB)
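How such a comparison can be reproduced is illustrated by the following Python sketch, which encodes a grayscale test image with JPEG and JPEG2000 using the Pillow library and reports the achieved compression ratios. The file names and quality settings are illustrative assumptions, Pillow must be built with OpenJPEG for .jp2 support, and lossless JPEG (L-JPEG) is not provided by Pillow, so it is omitted. PSNR between the original and each decoded image can then be computed with the formula given in Section V.

```python
import os
from PIL import Image  # Pillow; .jp2 support requires an OpenJPEG-enabled build

# Hypothetical file name: any 512x512 grayscale test image (e.g. Lena) will do.
img = Image.open("lena_512.png").convert("L")
raw_bytes = img.width * img.height          # 1 byte per pixel, uncompressed

img.save("lena.jpg", quality=75)                                  # baseline JPEG
img.save("lena.jp2", quality_mode="rates", quality_layers=[20])   # JPEG2000, ~20:1 target

for name in ("lena.jpg", "lena.jp2"):
    size = os.path.getsize(name)
    print("%s: %d bytes, compression ratio %.1f:1" % (name, size, raw_bytes / size))
```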

V. CONCLUSION

In this paper we have analyzed the JPEG2000 compression standard, which is based on the discrete wavelet transform (DWT). JPEG2000 is a new format for image compression, developed to replace the popular JPEG format, and it offers advantages such as higher compression ratios, a lossless mode, progressive downloads and error resilience. We compared JPEG2000 with the JPEG (Joint Photographic Experts Group) and L-JPEG (Lossless JPEG) standards by investigating the PSNR values of images compressed with all three.

Experiments have been carried out on two grayscale images of size 512×512 (Barbara and Lena). The PSNR of each image compressed under the JPEG, JPEG2000 and L-JPEG standards has been evaluated; Table I shows the resulting values. From these results, the JPEG2000 standard provides a higher PSNR for the compressed image than JPEG and L-JPEG, and thus provides both a higher compression ratio and higher image quality. PSNR is calculated as

PSNR = 10 * log10(255^2 / MSE)

where the mean square error (MSE) is given by

MSE = (1/(m*n)) * sum_i sum_j (I(i,j) - Iw(i,j))^2

where I is the original m×n image and Iw is the reconstructed (decompressed) image.
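These formulas translate directly into a few lines of NumPy; the following is a minimal sketch, with function names of our own choosing.

```python
import numpy as np

def mse(original, reconstructed):
    """Mean square error between the original image I and the reconstructed image Iw."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return np.mean(diff ** 2)        # equals (1/(m*n)) * sum of squared errors

def psnr(original, reconstructed):
    """PSNR in dB for 8-bit images, as defined above."""
    m = mse(original, reconstructed)
    return float("inf") if m == 0 else 10.0 * np.log10(255.0 ** 2 / m)

# Example usage: psnr(original_array, decoded_array) for two images of equal size.
```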

TABLE I. PSNR (dB) values for the various compression standards

Image (512×512)       Barbara    Lena      Average PSNR (dB)
JPEG                  61.87      60.23     61.05
JPEG2000              76.09      101.90    88.99
L-JPEG                66.24      69.81     68.02

From the results, the JPEG2000 standard (88.99 dB on average) provides a higher PSNR for the compressed images than JPEG (61.05 dB) and L-JPEG (68.02 dB); thus JPEG2000 provides both a higher compression ratio and higher image quality.

REFERENCES


[1] G. K. Wallace, "The JPEG Still Picture Compression Standard", IEEE Trans. Consumer Electronics, Vol. 38, No. 1, Feb. 1992.

[2] W. B. Pennebaker and J. L. Mitchell, "JPEG: Still Image Data Compression Standard", Van Nostrand Reinhold, 1993.

[3] V. Bhaskaran and K. Konstantinides, "Image and Video Compression Standards: Algorithms and Applications", 2nd Ed., Kluwer Academic Publishers, 1997.

[4] ISO/IEC JTC1/SC29/WG1 N505, "Call for contributions for JPEG 2000 (JTC 1.29.14, 15444): Image Coding System", March 1997.

[5] ISO/IEC JTC1/SC29/WG1 N390R, "New work item: JPEG 2000 image coding system", March 1997.


[6] ISO/IEC JTC1/SC29/WG11 N3464, "MPEG-7 Multimedia Description Schemes XM (version 4.0)", August 2000.

[7] C. Christopoulos (editor), "JPEG2000 Verification Model 8.0 (technical description)", ISO/IEC JTC1/SC29/WG1 N1822, July 21, 2000.

[8] M. Boliek, C. Christopoulos and E. Majani (editors), "JPEG2000 Part I Final Draft International Standard" (ISO/IEC FDIS 15444-1), ISO/IEC JTC1/SC29/WG1 N1855, August 18, 2000.

[9] M. Antonini, M. Barlaud, P. Mathieu and I. Daubechies, "Image Coding Using the Wavelet Transform", IEEE Trans. Image Processing, pp. 205-220, April 1992.

[10] D. Taubman, "High Performance Scalable Image Compression With EBCOT", Proc. IEEE Int. Conference on Image Processing, Vol. III, pp. 344-348, Kobe, Japan, October 1999.

[11] D. Taubman, "High Performance Scalable Image Compression With EBCOT", IEEE Trans. Image Processing, Vol. 9, No. 7, pp. 1158-1170, July 2000.

AUTHOR INFORMATION:

Ms. Pallavi B is a student in the Department of Electronics and Communication Engineering, School of Engineering, Jain University, Bangalore. She obtained her Bachelor degree in Electronics and Communication Engineering from SBMJCE, Bangalore (Visvesvaraya Technological University, Belgaum) in 2012, and she is pursuing an M.Tech (SP and VLSI) in Electronics and Communication Engineering at Jain University, Bangalore. Her research interests include VLSI and DSP.

Mr. Sunil M P is currently working as an Assistant Professor in the Department of Electronics & Communication Engineering, School of Engineering and Technology, Jain University, Karnataka, India. He received his Bachelor degree in Electronics and Communication from VTU and his M.Tech degree in Electronics Design and Technology from the National Institute of Technology, Calicut, Kerala, in 2011. His research interests include embedded system design, analog and mixed-signal VLSI design, ultrathin gate insulators for VLSI technologies, RF VLSI design, image processing, microelectronics system packaging, micro/nano sensor technology, and high-speed CMOS analog/RF integrated circuits and systems.
