Performance Improvisation of Medical Image Compression Based on Discrete Wavelet Transform

International Journal of Engineering Trends and Technology (IJETT) – Volume 13 Number 7 – Jul 2014
Ramendra Kumar Singh (1), Prof. Mahesh Prasad Parsai (2)
(1) PG student, Dept. of Electronics and Communication Engg., Jabalpur Engg. College, Jabalpur, M.P., India
(2) Asst. Prof., Electronics & Instrumentation Engg., Jabalpur Engg. College, Jabalpur, M.P., India
Abstract: Medical images are very important in the field of medicine. Every year, terabytes of medical image data are generated through advanced imaging modalities such as magnetic resonance imaging (MRI), ultrasonography (US), computed tomography (CT), digital subtraction angiography (DSA), digital fluorography (DF), positron emission tomography (PET), X-rays and many more recent medical imaging techniques. Storing and transferring such voluminous data can be a tedious job, yet all these medical images need to be stored for future reference of the patients and their hospital findings. To reduce transmission time and storage costs, economical image compression schemes that do not degrade image quality are needed. Many recent techniques have been developed to compress these massive images adequately. Generally, compression algorithms fall into two main categories: lossless and lossy. This paper explores some of the medical image compression techniques that exist today. A comparison of these image compression techniques has been made on the basis of performance parameters such as compression ratio, MSE and PSNR for various medical images.
Keywords: DWT (discrete wavelet transform), MSE (mean square error), PSNR (peak signal to noise ratio), CR (compression ratio).
INTRODUCTION
Image compression has become one of the most important disciplines in digital electronics because of the ever-growing popularity and usage of the internet and multimedia systems, combined with high bandwidth and storage requirements [1-2]. The increasing volume of data generated by medical imaging modalities justifies the use of compression techniques to decrease the storage space and to improve the efficiency of transferring images over the network for access to electronic patient records.
In medical image compression, diagnosis is effective only when
compression techniques preserve all the relevant information
needed, without any appreciable loss of information [1,2,5-6]. This is the case with lossless compression, while lossy compression techniques are more efficient in terms of storage and transmission needs because of their higher compression ratios. In lossy compression, image characteristics are usually preserved in the coefficients of the domain space into which the original image is transformed. The quality of the image after compression is very important and must stay within tolerable limits, which vary from image to image and from method to method; hence compression becomes more interesting as part of a qualitative analysis of different types of medical image compression techniques [7-9]. There are mainly two categories of compression: lossless (reversible) compression and lossy (irreversible) compression.
DISCRETE WAVELET TRANSFORM AND JPEG 2000
Wavelet transform has become an important method for image compression. Wavelet-based coding provides a substantial improvement in picture quality at high compression ratios, mainly due to the better energy-compaction property of wavelet transforms [1,6,9]. With the increasing use of multimedia technologies, image compression requires higher performance as well as new features.
The current JPEG standard provides excellent performance at rates above 0.25 bits per pixel. However, at lower rates there is a sharp degradation in the quality of the reconstructed image. To correct this and other shortcomings, the JPEG committee initiated work on another standard, commonly known as JPEG 2000, which is based on wavelet decomposition.
Fig 2 Schematic diagram of JPEG2000 compression
ALGORITHM FOR JPEG 2000 COMPRESSION
1. The Discrete Wavelet Transform is first applied to the source image data.
2. The transform coefficients are then quantized.
3. Finally, an entropy encoding technique is used to generate the output stream. Huffman encoding is generally used for this purpose.
Orthogonal or biorthogonal wavelet transforms have often been used in many image processing applications because they make multiresolution analysis possible and do not yield redundant information.
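For illustration, the three-stage pipeline above can be sketched in a few lines of Python. This is a minimal sketch, not the authors' implementation: it assumes the NumPy and PyWavelets (pywt) packages, and the wavelet name "bior4.4", the uniform quantization step and the function names are illustrative choices only; the entropy-coding stage is stood in for by a zero-order entropy estimate of the symbol stream.

import numpy as np
import pywt


def dwt_compress(image, wavelet="bior4.4", level=3, q_step=8.0):
    """Illustrative DWT -> quantization -> entropy-coding pipeline."""
    # Step 1: apply the 2-D discrete wavelet transform to the source image.
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    flat, slices = pywt.coeffs_to_array(coeffs)

    # Step 2: quantize the transform coefficients (simple uniform quantizer).
    quantized = np.round(flat / q_step).astype(np.int32)

    # Step 3: entropy-code the quantized symbols.  A real codec would use
    # Huffman or arithmetic coding; here we only estimate the bit budget
    # from the zero-order entropy of the symbol stream.
    symbols, counts = np.unique(quantized, return_counts=True)
    probs = counts / counts.sum()
    entropy_bits = -(probs * np.log2(probs)).sum() * quantized.size
    return quantized, slices, entropy_bits


def dwt_decompress(quantized, slices, wavelet="bior4.4", q_step=8.0):
    """Inverse of the sketch above: dequantize and invert the DWT."""
    flat = quantized.astype(float) * q_step
    coeffs = pywt.array_to_coeffs(flat, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)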
WAVELET SELECTION
The ability of the wavelet to pack information into a small number of transform coefficients determines its compression and reconstruction performance. The wavelets chosen as the basis of the forward and inverse transforms affect all aspects of wavelet coding system design and performance. They directly impact the computational complexity of the transforms and the system's ability to compress and reconstruct images with acceptable error. The most widely used expansion functions for wavelet-based compression are the Daubechies wavelets and biorthogonal wavelets.
Image compression research aims to reduce the number of bits required to represent an image by removing the spatial and spectral redundancies as much as possible.
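One way to make the wavelet-selection argument concrete is to measure how much of the image energy a fixed fraction of the largest coefficients captures for different candidate wavelets. The sketch below is illustrative only (not from the paper); it assumes NumPy and pywt, and the wavelet names and the 5% retention figure are arbitrary choices.

import numpy as np
import pywt


def energy_compaction(image, wavelet, level=3, keep_fraction=0.05):
    """Fraction of total energy captured by the largest keep_fraction of coefficients."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    flat, _ = pywt.coeffs_to_array(coeffs)
    mags = np.sort(np.abs(flat).ravel())[::-1]   # largest magnitudes first
    k = max(1, int(keep_fraction * mags.size))
    return (mags[:k] ** 2).sum() / (mags ** 2).sum()


# Example: compare a Daubechies wavelet with a biorthogonal wavelet.
# image = ...  # 2-D numpy array of pixel intensities
# for w in ("db4", "bior4.4"):
#     print(w, energy_compaction(image, w))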
DECOMPOSITION LEVEL SELECTION
Another factor affecting wavelet coding computational complexity and reconstruction error is the number of transform decomposition levels. The number of operations in the computation of the forward and inverse transforms increases with the number of decomposition levels. In applications such as searching image databases or transmitting images for progressive reconstruction, the resolution of the stored or transmitted images and the scale of the lowest useful approximations normally determine the number of transform levels.
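As a hedged sketch of how the level count interacts with image size and filter length (assuming pywt; the wavelet name and the 512 x 512 image size are assumptions, not values from the paper):

import numpy as np
import pywt

# The deepest useful decomposition depends on the image size and the
# length of the analysis filter; beyond it the approximation band becomes
# smaller than the filter itself.
wavelet = pywt.Wavelet("bior4.4")          # illustrative choice
rows, cols = 512, 512                      # assumed image size
max_level = min(pywt.dwt_max_level(rows, wavelet.dec_len),
                pywt.dwt_max_level(cols, wavelet.dec_len))
print("maximum useful decomposition level:", max_level)

# Each extra level adds one more set of (horizontal, vertical, diagonal)
# detail sub-bands and roughly halves the approximation band per axis.
for level in (1, 2, 3):
    coeffs = pywt.wavedec2(np.zeros((rows, cols)), wavelet, level=level)
    print(level, "-> approximation shape", coeffs[0].shape,
          "+", len(coeffs) - 1, "detail levels")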
HUFFMAN CODING
The most popular technique for removing coding redundancy is due to David Huffman. It creates variable-length codes that contain an integral number of bits. Huffman codes have the unique-prefix property, which means that they can be correctly decoded despite being of variable length. Huffman codes are based on two observations regarding optimum prefix codes: (1) in an optimum code, symbols that occur more frequently (have a higher probability of occurrence) will have shorter code words than symbols that occur less frequently, and (2) in an optimum code, the two symbols that occur least frequently will have the same length. It is easy to see that the first observation is correct: if symbols that occur more frequently had code words longer than the code words for symbols that occur less often, the average number of bits per symbol would be larger than if the condition were reversed. Therefore, a code that assigns longer code words to symbols that occur more frequently cannot be optimum [1, 5].
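A minimal Python sketch of Huffman code construction (illustrative only, not the paper's encoder) makes the first observation visible: after repeatedly merging the two least frequent subtrees, the most frequent symbols end up with the shortest code words.

import heapq
from collections import Counter


def huffman_codes(symbols):
    """Build a Huffman code table; frequent symbols receive shorter codes."""
    freq = Counter(symbols)
    # Heap items: (frequency, tie-breaker, {symbol: code-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate single-symbol case
        _, _, table = heap[0]
        return {s: "0" for s in table}
    counter = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)      # two least frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]


# Example: 'a' is most frequent, so it receives the shortest code word.
print(huffman_codes("aaaaabbbc"))   # {'c': '00', 'b': '01', 'a': '1'}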
PERFORMANCE PARAMETERS

1) MSE and PSNR
Techniques commonly employed for image compression result in some degradation of the reconstructed image. A widely used measure of reconstructed-image fidelity for an N x M image f(x, y) with reconstruction f'(x, y) is the mean square error (MSE), given by

MSE = \frac{1}{NM} \sum_{x=1}^{N} \sum_{y=1}^{M} \left[ f(x,y) - f'(x,y) \right]^2        (1)

and the corresponding peak signal-to-noise ratio, with R the maximum pixel value (255 for 8-bit images), is

PSNR = 10 \log_{10} \left( \frac{R^2}{MSE} \right)        (2)

2) COMPRESSION RATIO
Data redundancy is the central issue in digital image compression. If n1 and n2 denote the number of information-carrying units in the original and encoded images respectively, then the compression ratio CR can be defined as

CR = \frac{n_1}{n_2}        (3)

and the data redundancy of the original image can be defined as

R_D = 1 - \frac{1}{CR}        (4)
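These performance parameters translate directly into code. The following is a small sketch (assuming NumPy and 8-bit images, so the peak value is 255; n1_bits and n2_bits stand for the original and compressed sizes in bits, as in Eq. (3)):

import numpy as np


def mse(original, reconstructed):
    """Mean square error, Eq. (1)."""
    diff = original.astype(float) - reconstructed.astype(float)
    return np.mean(diff ** 2)


def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB, Eq. (2); peak=255 assumes 8-bit pixels."""
    err = mse(original, reconstructed)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)


def compression_ratio(n1_bits, n2_bits):
    """CR = n1 / n2, Eq. (3)."""
    return n1_bits / n2_bits


def redundancy(cr):
    """Relative data redundancy RD = 1 - 1/CR, Eq. (4)."""
    return 1.0 - 1.0 / cr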
PROPOSED STRATEGIES
With the use of different compression techniques, our purpose is to obtain a good compression ratio with sufficient PSNR. The commonly used JPEG2000 is good enough for compression, but we may obtain a considerable improvement in its performance by changing the encoding technique.
In our work, the schematic methodology that we followed is the same, but we made some changes in the coding technique, thresholding, and quantization.
In the above two techniques, Huffman encoding is generally used. We have used run-length encoding followed by Huffman encoding, and the significant coefficients are encoded separately using Huffman encoding. This encoding technique provides a measurable improvement in the compression ratio without changing the PSNR.
In the quantization stage, we have limited the levels to 2^n, where n is the number of bits corresponding to each pixel of the compressed image. We have chosen n = 4, 5, 6 in our measurements.
For the thresholding we used a parameter alpha, which can be selected manually from 0 to 1. The threshold level is decided by the formula

T = \alpha \, (MaxI - MinI) + MinI

where:
MaxI = maximum intensity value present in the image;
MinI = minimum intensity value present in the image.
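Putting the proposed pieces together, the sketch below is one possible reading of the scheme, not the authors' code: it assumes NumPy and pywt, interprets MaxI/MinI as the maximum and minimum magnitudes of the wavelet coefficients (the paper leaves this open), and stops at the run-length stage, after which the run lengths and the significant symbols would each be Huffman-coded.

import numpy as np
import pywt


def proposed_compress(image, alpha=0.2, n_bits=4, wavelet="bior4.4", level=3):
    """Illustrative reading of the proposed scheme: threshold the DWT
    coefficients, quantize with n_bits bits per symbol, then run-length
    encode the zero runs before entropy coding."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    flat, slices = pywt.coeffs_to_array(coeffs)

    # Threshold T = alpha * (MaxI - MinI) + MinI, computed here from the
    # coefficient magnitudes (one possible interpretation of the paper).
    mags = np.abs(flat)
    t = alpha * (mags.max() - mags.min()) + mags.min()
    flat[mags < t] = 0.0

    # Uniformly quantize the surviving coefficients using n_bits bits per
    # symbol (at most 2**n_bits levels).
    half_range = 2 ** (n_bits - 1) - 1
    peak = float(np.abs(flat).max())
    if peak == 0.0:
        peak = 1.0
    q = np.round(flat / peak * half_range).astype(np.int32)

    # Run-length encode the (mostly zero) symbol stream; in the paper the
    # run lengths and the significant symbols are then Huffman-coded.
    runs, vals, run = [], [], 0
    for s in q.ravel():
        if s == 0:
            run += 1
        else:
            runs.append(run)     # zeros preceding this significant symbol
            vals.append(int(s))
            run = 0
    runs.append(run)             # trailing zero run
    return runs, vals, slices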
RESULT AND ANALYSIS

The results are shown below for the two methods, for the different images, in tabular form.
Table 2 Results of image compression using DWT for medical image 1

Quantization   Alpha    PSNR after          PSNR of the              Compression
bits (Tbits)            thresholding (dB)   reconstructed image (dB) ratio
4              0.1600   47.4532             42.7999                  20.8971
4              0.1800   46.1962             42.7999                  20.8971
4              0.2000   45.5381             42.6265                  22.3215
4              0.2200   45.0259             42.3598                  22.9880
4              0.2400   44.1852             41.9176                  23.1699
4              0.2600   43.2092             41.3288                  23.3775
4              0.2800   42.0635             40.5548                  23.6539
4              0.3000   40.9962             39.7930                  23.8378
5              0.1600   47.4532             45.5307                  13.3407
5              0.1800   46.1962             45.149                   16.5354
5              0.2000   45.5381             44.6472                  17.8724
5              0.2200   45.0259             44.2297                  18.2032
5              0.2400   44.1852             44.1852                  18.4440
5              0.2600   43.2092             42.6808                  18.6540
5              0.2800   42.0635             41.6566                  18.8803
5              0.3000   40.9962             40.6766                  19.0263
6              0.1600   47.4532             46.9337                  8.5528
6              0.1800   46.1962             45.9280                  13.8767
6              0.2000   45.5381             45.3128                  14.8743
6              0.2200   45.0259             44.8278                  15.1288
6              0.2400   44.1852             44.0230                  15.3363
6              0.2600   43.2092             43.0807                  15.5028
6              0.2800   42.0635             41.9655                  15.6850
6              0.3000   40.9962             40.9203                  15.8156
Table 3 Results of image compression using DWT for medical image 2 (mri.tif)

Quantization   Alpha    PSNR after          PSNR of the              Compression
bits (Tbits)            thresholding (dB)   reconstructed image (dB) ratio
4              0.1000   38.1088             32.4307                  10.3166
4              0.1200   35.8070             32.2950                  11.0899
4              0.1400   34.8013             31.8504                  11.6737
4              0.1600   33.6891             31.2639                  11.8221
4              0.1800   32.0394             30.3185                  12.0046
4              0.2000   30.1489             28.9763                  12.1069
4              0.2200   28.1017             27.3420                  12.3487
4              0.2400   25.9951             25.5563                  12.8448
4              0.2600   24.0763             23.7986                  12.8799
5              0.1000   38.1088             36.2610                  6.7710
5              0.1200   35.8070             34.6861                  8.4853
5              0.1400   34.8013             33.9190                  8.9335
5              0.1600   33.6891             33.0075                  9.0879
5              0.1800   32.0394             31.5674                  9.2340
5              0.2000   30.1489             29.8472                  9.4071
5              0.2200   28.1017             27.9214                  9.6096
5              0.2400   25.9951             25.8859                  9.9487
5              0.2600   24.0763             24.009                   10.2568
6              0.1000   38.1088             37.5326                  5.6044
6              0.1200   35.8070             35.5114                  6.9137
6              0.1400   34.8013             34.5700                  7.2241
6              0.1600   33.6891             33.5127                  7.3560
6              0.1800   32.0394             31.9212                  7.4822
6              0.2000   30.1489             30.0742                  7.6636
6              0.2200   28.1017             28.0573                  7.8511
6              0.2400   25.9951             25.9685                  8.1363
6              0.2600   24.0763             24.0601                  8.4895
Fig. 3 (a) Original image (medical image 1, brain image); (b) compressed image at alpha = 0.18, Tbits = 4; (c) compressed image at alpha = 0.20, Tbits = 5; (d) compressed image at alpha = 0.20, Tbits = 6.
Fig. 4 (a) Original image (medical image 2, mri.tif); (b) compressed image at alpha = 0.18, Tbits = 4; (c) compressed image at alpha = 0.30, Tbits = 5; (d) compressed image at alpha = 0.20, Tbits = 6.
From the above results we observe that the compression ratio increases and the PSNR decreases as the value of alpha increases. Alpha is the parameter that controls the number of non-zero (significant) coefficients retained in the transformed image.
Another governing factor is the number of bits corresponding to each pixel of the compressed image (Tbits). For higher values of Tbits the number of levels in the compressed image is higher, so the PSNR is higher but the compression ratio is lower.
CONCLUSION
We have reviewed the different compression methods that are most commonly and frequently used for image compression. The JPEG compression scheme is a standard technique that uses the DCT, whereas JPEG2000 uses the DWT and provides good results. For both JPEG and JPEG2000, an improved encoding technique may be used to obtain better results. We have used run-length encoding followed by Huffman encoding, and the significant coefficients are encoded separately using Huffman encoding. With a good encoding technique the PSNR remains unaffected while the compression ratio increases.
Further, we have examined the results for different numbers of significant coefficients by changing the parameter alpha. With increasing values of alpha, the compression ratio increases and the PSNR decreases. We have also examined the results for different numbers of bits corresponding to each pixel of the compressed image (Tbits). With lower values of Tbits the PSNR decreases but the compression ratio increases. Thus we may select the desired level of compression with the corresponding PSNR. Our implemented strategies work sufficiently well to provide an improvement in the compression ratio at the desired PSNR.
REFERENCES
[1] Smitha Joyce Pinto, Jayanand P. Gawande, "Performance analysis of medical image compression techniques", IEEE, 2012.
[2] M. A. Ansari, R. S. Anand, "Performance Analysis of Medical Image Compression Techniques with respect to the Quality of Compression", IET-UK International Conference on Information and Communication Technology in Electrical Sciences (ICTES 2007), pp. 743-750, Dec. 2007.
[3] M. Antonini, M. Barlaud, "Image coding using wavelet transform", IEEE Transactions on Image Processing, vol. 1, no. 2, April 1992.
[4] Sumathi Poobal, G. Ravindran, "The performance of fractal image compression on different imaging modalities using objective quality measures", IJEST, vol. 3, no. 1, Jan. 2011.
[5] M. A. Ansari, R. S. Anand, "Comparative Analysis of Medical Image Compression Techniques and their Performance Evaluation for Telemedicine", Proceedings of the International Conference on Cognition and Recognition, pp. 670-677.
[6] Charilaos Christopoulos, Athanassios Skodras, "The JPEG2000 still image coding system: an overview", IEEE Transactions on Consumer Electronics, vol. 46, no. 4, pp. 1103-1127, November 2000.
[7] N. Ahmed, T. Natarajan, K. R. Rao, "Discrete cosine transform", IEEE Transactions on Computers, vol. C-23, pp. 90-93, 1974.
[8] G. Wallace, "The JPEG still picture compression standard", Communications of the ACM, vol. 34, no. 4, pp. 30-44, 1991.
[9] Charilaos Christopoulos, "The JPEG 2000 still image coding system: an overview", IEEE Transactions on Consumer Electronics, vol. 46, no. 4, pp. 1103-1127, Nov. 2000.