Comparison of Image Reconstruction Using
Adaptive Non-Linearing and Adaptive
Regularization Techniques
MOORA RENUKUMAR1, GARAGA SRILAKSHMI2
M.Tech [Scholar], Pragathi Engineering College, Surampalem, Peddapuram, A.P, India 1
Assistant Professor, Pragathi Engineering College, Surampalem, Peddapuram, AP, India 2
Abstract- Image analysis in terms of blurring and de-blurring of compressed images is an important part of image processing. It is essentially involved in the pre-processing stage of image analysis and computer vision. It generally detects the contour of an image and thus provides important details about it, reducing the content to be processed by high-level tasks such as object recognition and image segmentation. Compressed sensing is a new paradigm for signal sampling and recovery. It states that a relatively small number of linear measurements of a sparse signal can contain most of its salient information, and that the signal can be exactly reconstructed from these highly incomplete observations. The major challenge in practical applications of compressed sensing is providing efficient, stable and fast recovery algorithms which, in a few seconds, evaluate a good approximation of a compressible image from highly incomplete and noisy samples. In this paper, we approach the compressed sensing image recovery problem using adaptive non-linear filtering strategies in an iterative framework, and at the same time we analyze the data with an autoregressive method in adaptive regularization. Image blurring and de-blurring are compared, and we compare the peak signal-to-noise ratio (PSNR) values obtained by the adaptive non-linear and adaptive regularization techniques to determine which technique yields better PSNR values.
Keywords: blurring, compression, PSNR, non-linear filtering.
I. INTRODUCTION
Digital image processing is an area characterized by the
need for extensive experimental work to establish the
viability of proposed solutions to a given problem. An
important characteristic underlying the design of image
processing systems is the significant level of testing &
experimentation that normally is required before arriving
at an acceptable solution. This characteristic implies that
the ability to formulate approaches & quickly prototype
candidate solutions generally plays a major role in
reducing the cost & time required to arrive at a viable
system implementation. An image may be defined as a
two-dimensional function f(x, y), where x & y are spatial
coordinates, & the amplitude of f
at any pair of
coordinates (x, y) is called the intensity or gray level of the
image at that point. When x, y & the amplitude values of
f are all finite discrete quantities, we call the image a
digital image. The field of DIP refers to processing digital
image by means of digital computer. Digital image is
composed of a finite number of elements, each of which
has a particular location & value. The elements are called
pixels.
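As a minimal illustration of this definition (the gray-level values below are arbitrary), a digital image can be held as a 2-D array of finite, discrete intensities indexed by row and column:

```python
# A tiny 3x4 digital image: a 2-D array of finite, discrete gray levels.
# Row index x runs from 0 to M-1, column index y from 0 to N-1.
f = [
    [ 10,  20,  30,  40],   # row 0
    [ 50,  60,  70,  80],   # row 1
    [ 90, 100, 110, 120],   # row 2
]

M, N = len(f), len(f[0])    # image size: M rows x N columns

# f[x][y] is the intensity (gray level) of the pixel at (x, y):
print(M, N)        # 3 4
print(f[0][0])     # 10  -- the origin pixel
print(f[2][3])     # 120 -- bottom-right pixel
```

Each entry of the array is one pixel; the whole image is nothing more than this finite grid of values.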
Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the EM spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma rays to radio waves. They can also operate on images generated by sources that humans are not accustomed to associating with images.
There is no general agreement among authors regarding where image processing stops & other related areas such as image analysis & computer vision start. Sometimes a distinction is made by defining image processing as a discipline in which both the input & output of a process are images. This is a limiting & somewhat artificial boundary. The area of image analysis (image
understanding) is in between image processing &
computer vision. There are no clear-cut boundaries in the
continuum from image processing at one end to complete
vision at the other. However, one useful paradigm is to
consider three types of computerized processes in this
continuum: low-, mid-, & high-level processes. Low-level
process involves primitive operations such as image
processing to reduce noise, contrast enhancement & image
sharpening. A low-level process is characterized by the fact that both its inputs & outputs are images. A mid-level process on images involves tasks such as segmentation, description of those objects to reduce them to a form suitable for computer processing, & classification of individual objects. A mid-level process is characterized by the fact that its inputs generally are images but its outputs are attributes extracted from those images. Finally, higher-level processing involves "making sense" of an ensemble
of recognized objects, as in image analysis & at the far end
of the continuum performing the cognitive functions
normally associated with human vision.
Digital image processing, as already defined, is used successfully in a broad range of areas of exceptional social & economic value.

II. REPRESENTATION OF IMAGES

An image is represented as a two-dimensional function f(x, y), where x and y are spatial coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity of the image at that point.

Gray scale image:
A grayscale image is a function I(x, y) of the two spatial coordinates of the image plane: I(x, y) is the intensity of the image at the point (x, y) on the image plane. I(x, y) takes non-negative values; assuming the image is bounded by a rectangle [0, a] × [0, b], we have I: [0, a] × [0, b] → [0, ∞).

Color image:
It can be represented by three functions, R(x, y) for red, G(x, y) for green, and B(x, y) for blue. An image may be continuous with respect to the x and y coordinates and also in amplitude. Converting such an image to digital form requires that the coordinates as well as the amplitude be digitized. Digitizing the coordinate values is called sampling. Digitizing the amplitude values is called quantization.

Coordinate convention:
The result of sampling and quantization is a matrix of real numbers. We use two principal ways to represent digital images. Assume that an image f(x, y) is sampled so that the resulting image has M rows and N columns. We say that the image is of size M × N. The values of the coordinates (x, y) are discrete quantities. For notational clarity and convenience, we use integer values for these discrete coordinates. In many image processing books, the image origin is defined to be at (x, y) = (0, 0). The next coordinate values along the first row of the image are (x, y) = (0, 1). It is important to keep in mind that the notation (0, 1) is used to signify the second sample along the first row. It does not mean that these are the actual values of physical coordinates when the image was sampled. Note that x ranges from 0 to M-1 and y from 0 to N-1 in integer increments.

The coordinate convention used in the toolbox to denote arrays differs from the preceding paragraph in two minor ways. First, instead of using (x, y), the toolbox uses the notation (r, c) to indicate rows and columns. Note, however, that the order of coordinates is the same as the order discussed in the previous paragraph, in the sense that the first element of a coordinate tuple, (a, b), refers to a row and the second to a column. The other difference is that the origin of the coordinate system is at (r, c) = (1, 1); thus, r ranges from 1 to M and c from 1 to N in integer increments. IPT documentation refers to these as pixel coordinates. Less frequently, the toolbox also employs another coordinate convention called spatial coordinates, which uses x to refer to columns and y to refer to rows. This is the opposite of our use of the variables x and y.

Image as matrices:
The preceding discussion leads to the following representation for a digitized image function:

f(x, y) = [ f(0,0)     f(0,1)     ...  f(0,N-1)
            f(1,0)     f(1,1)     ...  f(1,N-1)
            ...
            f(M-1,0)   f(M-1,1)   ...  f(M-1,N-1) ]

The right side of this equation is a digital image by definition. Each element of this array is called an image element, picture element, pixel or pel. The terms image and pixel are used throughout the rest of our discussion to denote a digital image and its elements.

A digital image can be represented naturally as a MATLAB matrix:

f = [ f(1,1)   f(1,2)   ...  f(1,N)
      f(2,1)   f(2,2)   ...  f(2,N)
      ...
      f(M,1)   f(M,2)   ...  f(M,N) ]

where f(1,1) = f(0,0) (note the use of a monospace font to denote MATLAB quantities). Clearly the two representations are identical, except for the shift in origin. The notation f(p, q) denotes the element located in row p and column q. For example, f(6,2) is the element in the sixth row and second column of the matrix f. Typically we use the letters M and N, respectively, to denote the number of rows and columns in a matrix. A 1 × N matrix is called a row vector, whereas an M × 1 matrix is called a column vector. A 1 × 1 matrix is a scalar.

Matrices in MATLAB are stored in variables with names such as A, a, RGB, real_array and so on. Variables must begin with a letter and contain only letters, numerals and underscores. As noted in the previous paragraph, all MATLAB quantities are written using monospace characters. We use conventional Roman, italic notation, such as f(x, y), for mathematical expressions.

III. NON LINEAR FILTERING

Non-linear filtering follows this basic prescription. The median filter is normally used to reduce noise in an image, somewhat like the mean filter. However, it often does a better job than the mean filter of preserving useful detail in the image. The median filter belongs to the class of edge-preserving smoothing filters, which are non-linear filters. This means that for two images a(x) and b(x):

filter{a(x) + b(x)} ≠ filter{a(x)} + filter{b(x)}

These filters smooth the data while keeping the small and sharp details. The median is just the middle value of all the values of the pixels in the neighborhood. Note that this is not the same as the average (or mean); instead, the median has half the values in the neighborhood larger and half smaller. The median is a stronger "central indicator" than the average. In particular, the median is hardly affected by a small number of discrepant values among the pixels in the neighborhood. Consequently, median filtering is very effective at removing various kinds of noise. Figure 1 illustrates an example of median filtering.
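The median computation can be sketched in a few lines of Python. The 3×3 neighborhood values below are assumed for illustration (a central outlier of 150 whose neighborhood median is 124), and the non-linearity of the filter is checked directly:

```python
import statistics

def median_of_neighborhood(values):
    """Median of a pixel neighborhood.

    statistics.median averages the two middle values when the count
    is even, matching the rule for even-sized neighborhoods.
    """
    return statistics.median(values)

# Illustrative 3x3 neighborhood with an unrepresentative central pixel (150).
neighborhood = [
    123, 126, 127,
    125, 150, 124,
    119, 115, 120,
]
print(median_of_neighborhood(neighborhood))   # -> 124

# Non-linearity: the median of a sum is not the sum of the medians.
a = [1, 2, 100]
b = [5, 0, 0]
lhs = statistics.median([x + y for x, y in zip(a, b)])   # median([6, 2, 100]) = 6
rhs = statistics.median(a) + statistics.median(b)        # 2 + 0 = 2
print(lhs != rhs)   # True
```

Sorting the nine neighborhood values gives 115, 119, 120, 123, 124, 125, 126, 127, 150; the middle value, 124, replaces the outlying 150.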
Fig. 1: Representation of image.

Like the mean filter, the median filter considers each pixel in the image in turn and looks at its nearby neighbors to decide whether or not it is representative of its surroundings. Instead of simply replacing the pixel value with the mean of neighboring pixel values, it replaces it with the median of those values. The median is calculated by first sorting all the pixel values from the surrounding neighborhood into numerical order and then replacing the pixel being considered with the middle pixel value. (If the neighborhood under consideration contains an even number of pixels, the average of the two middle pixel values is used.) Figure 2 illustrates an example calculation.

Fig. 2: Calculating the median value of a pixel neighborhood. As can be seen, the central pixel value of 150 is rather unrepresentative of the surrounding pixels and is replaced with the median value: 124. A 3×3 square neighborhood is used here; larger neighborhoods will produce more severe smoothing.

Noise
Noise is any undesirable signal. Noise is everywhere, and thus we have to learn to live with it. Noise gets introduced into the data via any electrical system used for storage, transmission and/or processing. In addition, nature will always play a "noisy" trick or two with the data under observation. When encountering an image corrupted with noise, you will want to improve its appearance for a specific application. The techniques applied are application-oriented. Also, the different procedures are related to the types of noise introduced to the image. Some examples of noise are: Gaussian or white, Rayleigh, shot or impulse, periodic, sinusoidal or coherent, uncorrelated, and granular.

Noise Models
Noise can be characterized by its:
Probability density function (pdf): Gaussian, uniform, Poisson, etc.
Spatial properties: correlation
Frequency properties: white noise vs. pink noise

Fig. 3: Original image.
Fig. 4: Images and histograms resulting from adding Gaussian, Rayleigh and Gamma noise to the original image.
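As a minimal sketch of the Gaussian noise model above (the function name, parameters and test image are our own), noise drawn from a Gaussian pdf can be added pixel-wise; swapping the distribution gives the other models:

```python
import random

def add_gaussian_noise(image, mean=0.0, sigma=10.0, seed=0):
    """Add i.i.d. Gaussian noise to each pixel, clipping to [0, 255].

    Replacing random.gauss with, e.g., a Rayleigh or impulse draw
    would yield the other noise models listed above.
    """
    rng = random.Random(seed)   # fixed seed for reproducibility
    return [[min(255.0, max(0.0, px + rng.gauss(mean, sigma))) for px in row]
            for row in image]

clean = [[100.0] * 4 for _ in range(4)]   # flat gray 4x4 test image
noisy = add_gaussian_noise(clean)
```

Because the noise is independent per pixel, a flat gray patch comes out with fluctuating intensities whose spread is governed by sigma.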
IV. SYSTEM STUDY
Image blurring and de-blurring is an important part of image processing and is beneficial for many research areas of computer vision and image analysis. Image de-blurring provides important details for high-level processing tasks such as feature detection. Figures 5 through 9 show the images on which the performance analysis is done using MATLAB tools. Table 1 gives the comparative analysis of the techniques.
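The blur/de-blur pipeline analyzed here can be illustrated with a much-simplified 1-D stand-in. This is not the paper's adaptive method: it is a plain Tikhonov-regularized inverse filter, with all names and values our own, sketched to show how a regularization term stabilizes the inversion of a blur:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (fine for tiny signals)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def circular_blur(x, kernel):
    """Blur by circular convolution with a small kernel."""
    N = len(x)
    return [sum(kernel[m] * x[(n - m) % N] for m in range(len(kernel)))
            for n in range(N)]

def regularized_deblur(y, kernel, lam=1e-4):
    """Tikhonov-regularized inverse filter: X = conj(H) * Y / (|H|^2 + lam).

    The small lam keeps the division stable where the blur kernel's
    frequency response H is weak (a crude form of regularization).
    """
    N = len(y)
    h = list(kernel) + [0.0] * (N - len(kernel))
    H, Y = dft(h), dft(y)
    X = [Hk.conjugate() * Yk / (abs(Hk) ** 2 + lam) for Hk, Yk in zip(H, Y)]
    return idft(X)

signal = [0.0, 0.0, 10.0, 10.0, 10.0, 0.0, 0.0, 0.0]  # a simple edge signal
kernel = [1 / 3, 1 / 3, 1 / 3]                          # 3-tap moving-average blur
blurred = circular_blur(signal, kernel)
restored = regularized_deblur(blurred, kernel)
```

Since the blurred spectrum is Y = H·X, the regularized estimate attenuates each frequency by |H|²/(|H|² + lam), which approaches an exact inverse where the blur preserves energy and stays bounded where it does not.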
Fig. 5: Original parrots image.

Fig. 6: MATLAB simulation result of de-blurring using Adaptive Non-Linear filtering: (a) original image, (b) blurred and noisy image, (c) de-blurred image.

Fig. 7: MATLAB simulation result of de-blurring using the Adaptive Regularization filter: (a) original image, (b) blurred and noisy image, (c) de-blurred image.

De-blurring of medical images:

Fig. 8: MATLAB simulation result of de-blurring using Adaptive Non-Linear filtering: (a) original image, (b) blurred and noisy image, (c) de-blurred image.

Fig. 9: MATLAB simulation result of de-blurring using the Adaptive Regularization filter: (a) original image, (b) blurred and noisy image, (c) de-blurred image.
Comparison of performance metrics:

Table 1: Comparison of the two techniques using image PSNR values.

Name of     ANL blurred   ANL de-blurred   AR blurred    AR de-blurred   AR SSIM
the image   PSNR          PSNR             PSNR          PSNR            values
---------   -----------   --------------   -----------   -------------   --------
Parrot      12.46         21.85            23.87         26.43           0.827927
Flower      12.49         21.24            23.54         26.12           0.717007
Eye         12.07         22.88            23.74         28.78           0.852935
Lena        12.38         21.41            23.56         26.64           0.760090
Boats       12.25         21.15            22.27         26.53           0.739262
Barbara     12.10         20.33            22.42         25.29           0.706009
Hat         12.49         21.85            25.73         28.45           0.789969
Straw       11.72         18.54            18.69         20.36           0.429532
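The PSNR values reported in Table 1 follow the standard definition PSNR = 10·log10(peak² / MSE); a minimal sketch (the two tiny test images are our own) is:

```python
import math

def psnr(original, processed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-sized images.

    PSNR = 10 * log10(peak^2 / MSE); higher is better, and identical
    images give infinite PSNR.
    """
    flat_o = [px for row in original for px in row]
    flat_p = [px for row in processed for px in row]
    mse = sum((o - p) ** 2 for o, p in zip(flat_o, flat_p)) / len(flat_o)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(peak ** 2 / mse)

a = [[100, 100], [100, 100]]
b = [[110, 100], [100, 100]]   # one pixel off by 10 -> MSE = 25
print(round(psnr(a, b), 2))    # 10*log10(255^2 / 25) = 34.15 dB
```

A de-blurred image that tracks the original closely has a small MSE and hence a high PSNR, which is why the de-blurred columns of Table 1 exceed the blurred ones.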
V. CONCLUSION
The work proposed in this paper has considerable potential for further research in the area of image blurring and de-blurring using different paradigms, making the work more versatile and flexible. The research can be extended to accept noisy images directly as input to the methodology presented in this work. The proposed work can also be studied further by observing different parameter variations and by including a dynamic problem-sensing feature that adjusts the parameter values to those optimal for the specific situation. The comparative analysis in this paper is obtained using adaptive sparse domain selection and adaptive regularization (ASAR) with autoregressive (AR) models, and fast robust de-blurring techniques (FRT), evaluated with peak signal-to-noise ratio (PSNR) values. Adaptive sparse domain selection and adaptive regularization (ASAR) is able to obtain better PSNR values than the fast robust de-blurring techniques (FRT). This can be further advanced with new techniques for image de-blurring that achieve better PSNR values.