International Journal of Application or Innovation in Engineering & Management (IJAIEM)
Web Site: www.ijaiem.org Email: editor@ijaiem.org, editorijaiem@gmail.com
Volume 2, Issue 1, January 2013
ISSN 2319 - 4847
IMPROVED INTELLIGENT IMAGE COMPRESSION WITH ADAPTIVE WAVELET THRESHOLD FOR IMAGE DENOISING
1Jibanananda Mishra, 2Ranjan Kumar Jena
1Orissa Engineering College, BPUT, Bhubaneswar
2College of Engineering and Technology, BPUT, Bhubaneswar
ABSTRACT
In the first part of this paper an adaptive, data-driven threshold for image denoising via wavelet soft-thresholding is assessed. The threshold is derived in a Bayesian framework, and the prior used on the wavelet coefficients is the generalized Gaussian distribution (GGD) widely used in image processing applications. The threshold is simple and adaptive to each sub band because it depends on data-driven estimates of the parameters. Simulation results show that this method, called BayesShrink, achieves state-of-the-art image denoising performance at low computational complexity. In the second part of the paper an intelligent image compression scheme using a neural network is proposed. Objective picture quality measures such as the signal-to-noise ratio (SNR) and the mean square error (MSE) are used to assess picture quality. The performance measurements show that with such denoising and compression the MSE is typically within 5%.
Key Words: Image denoising, Adaptive wavelet threshold, Bayes shrink, Mean square error
1. INTRODUCTION
Digital images are often degraded by noise in the acquisition and/or transmission phase. The goal of image denoising is
to recover the true/original image from such a distorted/noisy copy. Traditionally, this is achieved by linear processing
such as Wiener filtering. A vast literature has emerged recently on signal denoising using nonlinear techniques in the
setting of additive white Gaussian noise. Since the early 1990s, wavelet theory has been well developed and widely introduced into many fields of application, including statistical estimation, density estimation, the solution of partial differential equations, and image compression. In 1992, Donoho and Johnstone presented a method named wavelet shrinkage and demonstrated its effectiveness for signal denoising and for solving inverse problems [1]. In this method a discrete wavelet transform (DWT) is first performed on the noisy signal. Then, with a preset threshold, coefficients with magnitude smaller than the threshold are set to zero, while those with larger magnitude are kept and used to estimate the noiseless coefficients. Finally, an inverse DWT (IDWT) reconstructs the signal from the estimated coefficients. Later, in their subsequent paper, Donoho and Johnstone proved that wavelet shrinkage is nearly optimal (in a minimax sense) over a wide range of function classes and error criteria and always provides an estimated function with a smoothness not less than that of the original [2]. However, the universal threshold T = σ√(2 ln N) (where σ is the standard deviation of the noise and N the length of the noisy data), which was first given for wavelet shrinkage, was found to tend to set all the detail coefficients to zero, especially as N approaches infinity [3]. In fact, Donoho and
Johnstone pointed out that the universal threshold has a good mean square error (MSE) performance when the sample size is in the tens or hundreds [1]. Since the threshold plays a key role in this appealing method, various methods appeared later for setting an appropriate threshold [4]-[5]. Donoho and Johnstone found that, under the criterion for which the universal threshold is nearly optimal, the smoothness restriction is probably the cause of over-erasing wavelet coefficients. Therefore, they removed the restriction and gave what is known as the Stein's Unbiased Risk Estimate
(SURE) threshold, which minimizes SURE, an estimate of the MSE risk [6]. Assuming that wavelet coefficients obey the generalized Gaussian distribution (GGD), Chang et al. presented a new threshold, T_B = σ²/σ_X (where σ² is the noise variance and σ_X the standard deviation of the noiseless coefficients in a sub band), which was claimed to often provide a better denoising result than the SURE threshold [7]; the universal threshold, in contrast, can be unwarrantedly large due to its dependence on the number of samples, which is large for a typical test image. The formulation is grounded in the empirical observation
that the wavelet coefficients in a sub band of a natural image can be summarized adequately by a generalized Gaussian distribution (GGD). This observation is well accepted in the image processing community (for example, see [8], [9], [10], [11], [12]) and is used in state-of-the-art image coders [8], [9], [12]. It follows from this observation that the
average MSE (in a sub band) can be approximated by the corresponding Bayesian squared error risk with the GGD as the prior applied to each coefficient in an independent and identically distributed (i.i.d.) fashion. That is, a sum is approximated by an integral. We emphasize that this is an analytical approximation and that the framework is broader than assuming the wavelet coefficients are i.i.d. draws from a GGD. The goal is to find the soft-threshold that minimizes this Bayesian risk, and we call the method BayesShrink. The proposed Bayesian risk minimization is sub band-dependent. Given that the signal is generalized Gaussian distributed and the noise is Gaussian, a nearly optimal threshold for soft-thresholding is found via numerical calculation.
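To make the basic shrinkage recipe concrete (forward DWT, thresholding of the detail coefficients against the universal threshold T = σ√(2 ln N), inverse DWT), the following is a minimal 1-D sketch using the PyWavelets package; the wavelet choice, the function name, and the median-based noise estimate are illustrative assumptions of ours, not taken from the cited papers.

```python
import numpy as np
import pywt

def universal_shrink(noisy, wavelet="db4", level=3):
    """Denoise a 1-D signal by soft-thresholding its detail coefficients
    against the universal threshold T = sigma * sqrt(2 ln N)."""
    coeffs = pywt.wavedec(noisy, wavelet, level=level)
    # Robust noise estimate from the finest detail sub band.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    t = sigma * np.sqrt(2.0 * np.log(len(noisy)))
    # Keep the approximation band; soft-threshold every detail band.
    denoised = [coeffs[0]] + [pywt.threshold(c, t, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)
```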
2. WAVELET SHRINKAGE
Consider the following additive noise model in the time domain:
x(i) = s(i) + e(i) -----------------------------------------(1)
where the signal s is corrupted by the zero-mean white Gaussian noise process e ~ N(0, σ²), resulting in the noisy signal x. Throughout this text, A(i) is often used to stand for the i-th element of a given sequence A. By the linearity of the wavelet transform, there is a corresponding regression model in the wavelet domain:
X(i) = S(i) + E(i) ---------------------------------- (2)
where X is the wavelet transform of x, while S and E are those corresponding to s and e. The elements of X, S and E will be termed data coefficients (noisy coefficients), signal coefficients (noiseless coefficients), and noise coefficients, respectively. The term denoised coefficients refers to the estimates of the noiseless coefficients obtained from the corresponding noisy coefficients. When the transformation is orthogonal, which is the only case considered here, the noise in each wavelet sub band remains a zero-mean white Gaussian process with the same variance as in the time domain. Given a threshold T, the two popular thresholding rules in wavelet shrinkage, hard thresholding and soft thresholding, which correspond to the strategies "kill or keep" and "kill or shrink" respectively, can be expressed as the following functions of the noisy coefficients, whose values are the denoised coefficients:
Ŝ(i) = X(i) · I(|X(i)| > T) -----------------------------------(3)
Ŝ(i) = sgn(X(i)) · (|X(i)| − T) · I(|X(i)| > T) ----------------------------------(4)
Herein, I(·) denotes the indicator function. In a wavelet shrinkage scheme, no modification is performed on the sub band at the coarsest scale, in the belief that the coefficients there have high enough signal-to-noise ratios (SNR). To choose between the two thresholding rules, Bruce and Gao carried out a thorough study and drew several important conclusions. For a given threshold, soft thresholding has smaller variance but higher bias than hard thresholding, especially for very large wavelet coefficients. If the coefficients are densely distributed close to the threshold, hard thresholding shows large variance and bias. For soft thresholding, smaller errors often occur when the coefficient is close to zero. In general, soft thresholding is chosen for smoothness while hard thresholding is chosen for lower error. The soft-thresholding rule is preferred over hard thresholding here for several reasons. First, soft thresholding has been shown to achieve a near-optimal minimax rate over a large range of Besov spaces. Second, for the generalized Gaussian prior assumed in this work, the optimal soft-thresholding estimator yields a smaller risk than the optimal hard-thresholding estimator. Lastly, in practice, soft thresholding yields more visually pleasant images than hard thresholding because the latter is discontinuous and yields abrupt artifacts in the recovered images, especially when the noise energy is significant. Statistical image modeling in the wavelet domain has applications in image compression, estimation, and segmentation. It can be roughly categorized into three groups: interscale models, intrascale models, and hybrid inter- and intrascale models [13]-[15]. In the statistical Bayesian literature, many works have concentrated on deriving the best threshold (or shrinkage factor) based on priors such as the Laplacian and a mixture of Gaussians ([16], [17]). With an integral approximation to the pixelwise MSE distortion measure as discussed earlier, the formulation here is also Bayesian, seeking the best soft-thresholding rule under the generalized Gaussian prior. The GGD has been used in many sub band or wavelet-based image processing applications [16], [18]. In [16], it was observed that a GGD with the shape parameter ranging from 0.5 to 1 can adequately describe the wavelet coefficients of a large set of natural images.
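As an aside, the two rules in (3) and (4) are straightforward to express in code; the sketch below is a minimal NumPy rendering, with function names that are ours rather than from the paper.

```python
import numpy as np

def hard_threshold(x, t):
    """Keep-or-kill rule of Eq. (3): zero every coefficient with |x| <= t."""
    return x * (np.abs(x) > t)

def soft_threshold(x, t):
    """Shrink-or-kill rule of Eq. (4): shrink magnitudes toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
```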
3. ADAPTIVE THRESHOLD FOR BAYES SHRINK
The GGD is
GG_{σ_X,β}(x) = C(σ_X, β) exp{−[α(σ_X, β)·|x|]^β},  −∞ < x < ∞,  σ_X > 0,  β > 0 ------(5)
where α(σ_X, β) = σ_X^(−1)·[Γ(3/β)/Γ(1/β)]^(1/2)
and C(σ_X, β) = β·α(σ_X, β)/(2Γ(1/β)), with Γ(t) = ∫₀^∞ e^(−u) u^(t−1) du the gamma function. The parameter σ_X is the standard deviation and β is the shape parameter.
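For reference, the density in (5) can be evaluated directly from σ_X and β; the following is a small sketch under the parameterization reconstructed above, with a helper name (ggd_pdf) of our own.

```python
import numpy as np
from scipy.special import gamma

def ggd_pdf(x, sigma_x, beta):
    """Generalized Gaussian density of Eq. (5) with standard deviation sigma_x and shape beta."""
    alpha = np.sqrt(gamma(3.0 / beta) / gamma(1.0 / beta)) / sigma_x
    c = beta * alpha / (2.0 * gamma(1.0 / beta))
    return c * np.exp(-(alpha * np.abs(x)) ** beta)
```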
For a given set of parameters, the objective is to find a soft-threshold T which minimizes the Bayes risk.
Fig. 1 Flow chart of the image denoising algorithm using the wavelet transform
The Bayes risk is
r(T) = E_X E_{Y|X} (X̂ − X)² ----------------- (6)
where X̂ = η_T(Y), Y|X ~ N(X, σ²) and X ~ GG_{σ_X,β}. Denote the optimal threshold by T*(σ_X, β),
T*(σ_X, β) = arg min_T r(T) ------------------------(7)
which is a function of the parameters σ_X and β. Referring to [7], we can obtain
T_B(σ_X) = σ²/σ_X ------------------------------------------- (8)
This threshold is not only nearly optimal but also has an intuitive appeal. The normalized threshold T_B/σ is inversely proportional to σ_X, the standard deviation of the signal, and proportional to σ, the noise standard deviation. When the signal is much stronger than the noise (σ_X >> σ), the normalized threshold is chosen to be small in order to preserve most of the signal and remove some of the noise; when the noise dominates (σ_X << σ), the normalized threshold is chosen to be large to remove the noise which has overwhelmed the signal [19]. Thus, this threshold choice adapts to both the signal and noise characteristics. Fig. 1 shows the flow chart of the image denoising algorithm using the wavelet transform.
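The sub band-adaptive rule of (8) can be sketched in a few lines of code. The noise standard deviation σ is commonly estimated from the finest diagonal sub band by the robust median estimator σ̂ = median(|HH1|)/0.6745, and σ_X by σ̂_X = sqrt(max(σ̂_Y² − σ̂², 0)); these estimators and the PyWavelets-based sketch below are stated as assumptions of ours, not the authors' WaveLab/MATLAB implementation.

```python
import numpy as np
import pywt

def bayes_shrink_threshold(subband, sigma_noise):
    """T_B = sigma^2 / sigma_X for one detail sub band (Eq. (8))."""
    sigma_y2 = np.mean(subband ** 2)                      # energy of the noisy coefficients
    sigma_x = np.sqrt(max(sigma_y2 - sigma_noise ** 2, 1e-12))
    return sigma_noise ** 2 / sigma_x

def bayes_shrink_denoise(noisy, wavelet="haar", level=2):
    """Soft-threshold every detail sub band with its own BayesShrink threshold."""
    coeffs = pywt.wavedec2(noisy, wavelet, level=level)
    # Robust noise estimate from the finest diagonal sub band HH1.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    denoised = [coeffs[0]]                                # approximation band left untouched
    for (cH, cV, cD) in coeffs[1:]:
        denoised.append(tuple(
            pywt.threshold(c, bayes_shrink_threshold(c, sigma), mode="soft")
            for c in (cH, cV, cD)))
    return pywt.waverec2(denoised, wavelet)
```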
4. ALGORITHM OF INTELLIGENT COMPRESSION
A multilayer neural network can be employed to achieve intelligent image compression [20]. The network parameters will be adjusted using different learning rules for comparison purposes. The input pixels are used as target values so that the assigned mean square error can be obtained, and the hidden-layer output is then the compressed image. Artificial neural networks have found increasing application in image compression due to their noise suppression and learning capabilities. A number of different neural network models, based on different learning approaches, have been proposed. Models can be categorized as either linear or nonlinear neural nets according to their activation
function. The network will be trained by back propagation, using different learning algorithms; mainly Newton's method, gradient descent, and adaptive gradient descent learning algorithms will be used for this purpose. The neural network structure is illustrated in Fig. 2. Three layers are used: one input layer, one output layer, and one hidden layer. Both the input layer and the output layer are fully connected to the hidden layer. Compression is achieved by making the number of neurons in the hidden layer smaller than the number of neurons in the input and output layers. The log-sigmoid function, given in equation (9), is one of the most common activation functions employed in neural network problems.
f(z) = 1/(1 + exp(−z)) ---------------------------------(9)
It has been shown that nonlinear activation functions have a greater capability than linear ones for learning both linear and nonlinear problems.
Fig.2 Back Propagation Neural Network
The output h_j of the j-th neuron in the hidden layer is given by
h_j = f_1( Σ_i w_ji·x_i + b_j ), j = 1, …, N --------------------------- (10)
Also, the output y_k of the k-th neuron in the output layer is given by
y_k = f_2( Σ_j v_kj·h_j + c_k ), k = 1, …, M --------------------------- (11)
where f_1 and f_2 are the activation functions of the hidden layer and the output layer respectively, w_ji is the synaptic weight connecting the i-th input node to the j-th neuron of the hidden layer, and b_j is the bias of the j-th neuron of the hidden layer. N is the number of neurons in the hidden layer and M is the number of neurons in the output
layer. Training the network is an important step in obtaining the optimal values of the weights and biases after they are initialized randomly. The training process requires a set of prototypes and targets so that the network can learn the proper behaviour. During training, the weights and biases of the network are iteratively adjusted to minimize the network performance function, which for feed-forward networks is the mean square error, calculated as the average squared error between the network outputs and the targets. In the basic back propagation training algorithm, the weights are moved in the direction of the negative gradient, which is the direction in which the performance function decreases most rapidly.
One iteration of this algorithm can be written as
x_{k+1} = x_k − a_k·g_k ------------------------------------(12)
where x_k is the vector of current weights and biases, g_k is the current gradient, and a_k is the learning rate.
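In code, one iteration of (12) is a single vector update; the function name below is only illustrative.

```python
import numpy as np

def gradient_step(x_k, g_k, a_k=0.01):
    """Eq. (12): move the weight/bias vector x_k along the negative gradient g_k."""
    return np.asarray(x_k) - a_k * np.asarray(g_k)
```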
Here all training algorithms have been developed using MATLAB, and it is worth explaining the steps that were carried out. For our purpose a three-layer feed-forward neural network was used: an input layer, a hidden layer with 16 neurons, and an output layer with 64 neurons. Back propagation algorithms were employed for the training process. For training, input prototypes and target values must be supplied to the network; the idea behind supplying target values is that it enables us to calculate the difference between the output and target values and hence the performance function, which is the criterion of our training. For training the network, the 512x512-pixel Barbara image was employed.
Procedure for Image Compression
The image is split into non-overlapping sub-images; for example, a 256 x 256 pixel image may be split into sub-images of 4 x 4, 8 x 8, or 16 x 16 pixels. The normalized pixel values of each sub-image are the inputs to the nodes. The three-layer back propagation learning network [21] is trained on each sub-image. The number of neurons in the hidden layer is chosen for the desired compression. The number of neurons in the output layer is the same as that in the input
layer. The input layer and output layer are fully connected to the hidden layer. The weights of the synapses connecting input neurons to hidden neurons and of the synapses connecting hidden neurons to output neurons are initialized to small random values, say from -1 to +1. The output of the input layer is evaluated using a linear activation function. The input to the hidden layer is computed by multiplying by the corresponding synaptic weights. The hidden-layer units evaluate their outputs using the sigmoid function. The input to the output layer is computed by multiplying by the corresponding synaptic weights. The output-layer neurons evaluate their outputs using the sigmoid function. The mean square error of the difference between the network output and the desired output is calculated. This error is back-propagated and the synaptic weights of the output and input neurons are adjusted. With the updated weights the error is calculated again, and iterations are carried out until the error is less than the tolerance. The compression performance is assessed in terms of compression ratio, PSNR, and execution time [22].
Steps of the algorithm
1. Divide the original image into 8x8 pixel blocks and reshape each one into a 64x1 column vector.
2. Arrange the column vectors into a matrix of size 64x1024.
3. Let the target matrix be equal to the matrix in step 2.
4. Choose a suitable learning algorithm, and parameters to start training.
5. Simulate the network with the input matrix and the target matrix.
6. Obtain the output matrices of the hidden layer and the output layer.
7. Post-process them to obtain the compressed image and the reconstructed image, respectively (a code sketch of these steps follows the list).
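A compact NumPy sketch of steps 1-7 is given below. It assumes a grayscale image already loaded as a float array scaled to [0, 1]; the 64-16-64 layer sizes and the target-equals-input training follow the text, while the plain gradient-descent loop (Eq. (12)) and all names (image_to_blocks, train_autoencoder) are our own illustration, not the authors' MATLAB code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def image_to_blocks(img, block=8):
    """Steps 1-2: cut the image into 8x8 blocks and stack them as 64 x (number of blocks) columns."""
    h, w = img.shape
    cols = [img[r:r + block, c:c + block].reshape(-1, 1)
            for r in range(0, h, block) for c in range(0, w, block)]
    return np.hstack(cols)

def train_autoencoder(x, hidden=16, lr=0.5, epochs=500):
    """Steps 3-6: train a 64-16-64 network whose target matrix equals the input matrix."""
    n_in, m = x.shape
    rng = np.random.default_rng(0)
    w1 = rng.uniform(-1, 1, (hidden, n_in)); b1 = np.zeros((hidden, 1))
    w2 = rng.uniform(-1, 1, (n_in, hidden)); b2 = np.zeros((n_in, 1))
    for _ in range(epochs):
        h = sigmoid(w1 @ x + b1)           # hidden-layer output = compressed representation
        y = sigmoid(w2 @ h + b2)           # output-layer output = reconstructed blocks
        err = y - x                        # targets are the inputs themselves
        # Back propagation of the mean square error, with updates as in Eq. (12).
        d2 = err * y * (1.0 - y)
        d1 = (w2.T @ d2) * h * (1.0 - h)
        w2 -= lr * (d2 @ h.T) / m; b2 -= lr * d2.mean(axis=1, keepdims=True)
        w1 -= lr * (d1 @ x.T) / m; b1 -= lr * d1.mean(axis=1, keepdims=True)
    return w1, b1, w2, b2
```

For a 256 x 256 image this produces the 64x1024 input matrix of step 2; the hidden-layer activations form the compressed image, and the output-layer activations, rearranged back into 8x8 blocks, give the reconstructed image of step 7.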
Picture quality measures
Image quality measures (IQM) are figures of merit used for the evaluation of imaging systems or of coding/processing techniques. Some objective quality measures available in the literature and used in this work are presented here for a better understanding of the work ahead. Let I(x, y) denote the samples of the original image and I'(x, y) the samples of the compressed image, and let M and N be the numbers of pixels in the row and column directions, respectively. In this paper we have used MSE, PSNR, PQS, SSIM, and MD as the five objective quality measures, which are calculated as follows; MSE and PSNR are the most common picture quality measures. The mean square error is given by
MSE = (1/(MN)) Σ_{y=1}^{N} Σ_{x=1}^{M} [I(x, y) − I'(x, y)]² --------------------(13)
The peak signal-to-noise ratio is given by
PSNR = 20 log10[255/sqrt(MSE)] ----------------------- (14)
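Both measures can be computed directly from the two images; a small sketch for 8-bit data (function names are ours) follows.

```python
import numpy as np

def mse(original, reconstructed):
    """Eq. (13): mean squared pixel difference over all M x N pixels."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(original, reconstructed):
    """Eq. (14): peak signal-to-noise ratio in dB, assuming a peak value of 255."""
    return 20.0 * np.log10(255.0 / np.sqrt(mse(original, reconstructed)))
```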
5. SIMULATION RESULTS & DISCUSSION
The grayscale image "Barbara" is used as the test image. The Haar wavelet transform gives better results at decomposition level 2, and in most cases a minimum noise variance of 0.01 is taken for better results. To assess the performance of BayesShrink, it is compared with the hard and soft thresholds. The 1-D implementation of BayesShrink can be obtained from the WaveLab toolkit [23], and the 2-D extension is straightforward. The MSEs from the various methods are compared in Table I.
Table I
Image     Wavelet   Threshold    Noise           PSNR      MSE
Barbara   Haar      Hard         Gaussian        34.7711   0.0739
                                 Salt & pepper   30.3822   0.1602
                                 Poisson         39.9538   0.0564
                                 Speckle         38.5009   0.0582
                    Soft         Gaussian        34.5299   0.0751
                                 Salt & pepper   31.5224   0.0955
                                 Poisson         37.7077   0.0647
                                 Speckle         36.9704   0.0658
                    Bayes soft   Gaussian        28.1056   0.1036
                                 Salt & pepper   30.0358   0.1699
                                 Poisson         43.1812   0.0332
                                 Speckle         41.7077   0.0389
Fig. 3 Denoising with the Bayes soft threshold, where PSNR is maximum (43.1812) and MSE is minimum (0.0332)
Fig. 4 Performance measure of image compression with the back propagation neural network
6. CONCLUSION
In this paper at first an adaptive threshold for wavelet thresholding images was proposed, based on the GGD modeling
of sub band coefficients, and simulation results showed excellent performance. It was demonstrated that spatially
adaptive thresholds greatly improves the denoising performance over uniform thresholds. That is, the threshold value
changes for each coefficient. Secondly, back propagation neural network was designed specifically for compression.
Although the setting in this paper was in the wavelet domain, the idea extended to other domains such as back
propagation neural network to achieve good compression.
REFERENCES
[1] D. L. Donoho and I. M. Johnstone, Ideal spatial adaptation via wavelet shrinkage, Biometrika, vol. 81, pp. 425–
455, 1994.
[2] D. L. Donoho, I. M. Johnstone, G. Kerkyacharian, and D. Picard, Wavelet shrinkage: asymptopia?, J. R. Statist. Soc. B, vol. 57, pp. 301–369, 1995.
[3] J. Fan, P. Hall, M. Martin, and P. Patil, Adaptation to high spatial inhomogeneity based on wavelets and on local linear smoothing, Australian National Univ., Canberra, Australia, Tech. Rep. CMA-SR18-93.
[4] L. K. Shark and C. Yu, Denoising by optimal fuzzy thresholding in wavelet domain, Electron. Lett., vol.36, pp.
581–582, Mar. 2000.
[5] P. Moulin and J. Liu, Analysis of multiresolution image denoising schemes using generalized Gaussian and
complexity priors, IEEE Trans. Inform. Theory, vol. 45, pp. 909–919, Apr. 1999.
[6] D. L. Donoho and I. M. Johnstone, Adapting to unknown smoothness via wavelet shrinkage, J. Amer. Statist.
Assoc., vol. 90, pp. 1200–1224, 1995.
[7] S. G. Chang, B. Yu, and M. Vetterli, Adaptive wavelet thresholding for image denoising and compression, IEEE
Trans. Image Processing, vol.9, pp. 1532–1546, Sept. 2000.
[8] C. M. Stein, “Estimation of the mean of a multivariate normal distribution,” Ann. Statist., vol. 9, no. 6, pp.
1135–1151, 1981.
[9] R. L. Joshi, V. J. Crump, and T. R. Fisher, “Image subband coding using arithmetic and trellis coded
quantization,” IEEE Trans. Circuits Syst. Video Technol., vol. 5, pp. 515–523, Dec. 1995.
[10] S. M. LoPresto, K. Ramchandran, and M. T. Orchard, “Image coding based on mixture modeling of wavelet coefficients and a fast estimation-quantization framework,” in Proc. Data Compression Conf., Snowbird, UT, Mar. 1997, pp. 221–230.
[11] S. Mallat, “A theory for multiresolution signal decomposition: The wavelet representation,” IEEE Trans.
Pattern Anal. Machine Intell., vol. 11, pp. 674–693, July 1989.
[12] E. Simoncelli and E. Adelson, “Noise removal via Bayesian wavelet coring,” Proc. IEEE Int. Conf. Image
Processing, vol. 1, pp. 379–382, Sept. 1996.
[13] P. H. Westerink, J. Biemond, and D. E. Boekee, “An optimal bit allocation algorithm for sub-band coding,” in
Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing, Dallas, TX, Apr. 1987, pp. 1378–1381.
[14] J. K. Romberg, H. Choi, and R. Baraniuk, Bayesian tree-structured image modeling using wavelet-domain
hidden Markov model, in Proc.SPIE, vol. 3816, Denver, CO, July 1999, pp. 31–44.
[15] G. Fan and X.-G. Xia, Image denoising using a local contextual hidden Markov model in the wavelet domain, IEEE Signal Processing Letters, vol. 8, no. 5, pp. 125–128, May 2001.
[16] E.Simoncelli and E. Adelson, Noise removal via Bayesian wavelet coring, Proc. IEEE Int. Conf. Image
Processing, vol. 1, pp. 379–382,Sept. 1996.
[17] N. Weyrich and G. T. Warhola, De-noising using wavelets and crossvalidation, Dept. of Mathematics and
Statistics, Air Force Inst. Of Tech., AFIT/ENC, OH, Tech. Rep. AFIT/EN/TR/94-01, 1994.
[18] Y. Yoo, A. Ortega, and B. Yu, Image subband coding using context based classification and adaptive
quantization, IEEE Trans. Image Processing, vol. 8, pp. 1702–1715, Dec. 1999
[19] C. J. F. ter Braak, “Bayesian sigmoid shrinkage with improper variance priors and an application to wavelet denoising,” Computational Statistics & Data Analysis, vol. 51, pp. 1232–1242, 2006.
[20] B. Verma, B. Blumenstein, and S. Kulkarni, Griffith University, Australia, “A new compression technique using an artificial neural network”.
[21] Hahn-Ming Lee, Chih-Ming Chen, and Tzong-Ching Huang, “Learning improvement of back propagation algorithm by error saturation prevention method,” Neurocomputing, November 2001.
[22] M. Miyahara, K. Kotani, and V.R. Algazi, “Objective Picture Quality Scale (PQS) for Image Coding”, IEEE
Transactions on Communications, vol. 46, no. 9, pp. 1215- 1226, 1998.
[23] J. Buckheit, S. Chen, D. Donoho, I. Johnstone, and J. Scargle, “WaveLab Toolkit,” http://www-stat.stanford.edu/~wavelab/.