International Journal of Engineering Trends and Technology (IJETT) – Volume 18, Number 1 – Dec 2014
Statistical analysis of Image restoration using a Geometric Transform approach
Ankita Sharma #1, Uday Bhan Singh #2, Ankur Chourasia #3
#1 M.Tech Scholar, IASSCOM Fortune Institute of Technology, Bhopal, India
#2,3 Assistant Professor, IASSCOM Fortune Institute of Technology, Bhopal, India
Abstract— While digital imaging systems are widely used for many applications, including consumer photography, microscopy, aerial photography and astronomical imaging, their output images and videos often suffer from spatially varying blur caused by the lens, the transmission medium, post-processing algorithms, and camera or object motion. Measuring the amount of blur, both globally and locally, is therefore an important problem: it can help in removing the spatially varying blur and in enhancing the visual quality of the imaging system's output.

In this paper we study the blur measurement problem for different scenarios and apply a Geometric Transformation algorithm for restoration of the blurred image. We first apply it to an image with only spatial variation, in terms of coordinate geometry, while the orientation of neighbouring pixels is kept constant. We then apply the algorithm to the case where the geometric variation is both spatial and local. In both cases we estimate the PSNR and MSE values. Geometric Transformation methods provide an easy way to restore the image because only matched feature points are involved in the process, whereas local probability estimation has to operate on every pixel, which makes the computation complex and increases the cost of the system.
Keywords— Geometric transformation, Image, Noise, PSNR, MSE, Restoration
I. INTRODUCTION
When we want to convey something to others, how do we do it? In face-to-face communication we rely on both oral and non-verbal communication. Oral communication, that is, spoken verbal communication, typically relies on words; non-verbal communication, meaning wordless communication, relies on gesture, facial expression and so on. When we want to communicate across time and space, what should we do? The old-fashioned way follows the same types of communication: folk stories and songs passed on by word of mouth fall into this category. However, the reprise of such a style varies the terms and the expression of the contents, so part of the contents may be lost as a result. Using the telephone and video chat such as Skype, indirect communication can be achieved across space but not across time. For communication across time, the better way is written communication. Ancient people used pictographs, resemblances of objects, for this purpose. They painted images onto walls or incised them into stone using mineral pigments. Figure 1 shows a cave painting of a horse drawn by Cro-Magnon people. Even though we have no idea what they wanted to tell us with such a pictograph, it can still convey information about its era. Printing technology further encouraged this type of communication, especially for text.
Figure 1: Written communication in ancient times: cave painting of a horse at Lascaux drawn by Cro-Magnon people.
A big leap in the technology occurred with the printing press, invented in the 15th century. Printing-press devices enabled rapid and precise copying of text documents, which is why the invention and spread of the technology are regarded as among the revolutionizing events of the second millennium. With printing technology, contents can be preserved semi-permanently. The idiom "seeing is believing" means that physical or concrete evidence is convincing.
This indicates that, in conveying a thing, showing the thing is preferable to telling about it. Thus, it is natural that we have developed devices for taking a photograph, an image formed by projecting the light of a scene. Before the first photographs, the principle of the pinhole camera was described by Mo Di, a Chinese philosopher, and by Aristotle, a Greek philosopher, in the fifth and fourth centuries BC. The camera obscura, consisting of a box or a room with a hole in one side, is the concrete realization of the pinhole camera. Light from a scene outside the camera passes through the hole and then reaches a surface inside. The image of the scene is projected onto the surface and can then be manually traced to produce a picture of the scene. Joseph Nicéphore Niépce, a French inventor, invented a revolutionizing camera, analogous to printing-press technology for text [2]. His key idea was to omit manual drawing from the imaging process by relying on photochemical action, so that a photograph can be obtained automatically. Before Niépce, photographs were not permanent: the images could not be secured against fading. Gorman mentioned that his camera was designed based on the heliograph [3].
Figure 2: The earliest surviving photograph of a scene from nature taken with a camera obscura: View from the Window at Le Gras.

Figure 2 shows the earliest surviving photograph taken by Niépce. The big limitation of Niépce's camera was its exposure time: it took about eight hours for the camera to complete the photochemical action. His followers therefore focused on achieving shorter exposure times. Through the 19th century, many advances in photographic glass plates and printing were made. George Eastman replaced photographic plates with photographic film. This replacement spread through the late 19th century and resulted in the technology of today's film camera. Nowadays, the digital camera is one of the most popular devices for photo shooting.
The difference between a digital camera and a film camera is the memory medium. As the memory medium, a film camera uses photographic film, while a digital camera uses memory devices such as a memory card, converting the received light to a digital data format via an electronic image sensor. A digital camera takes two steps to provide the observation of the photograph, as shown in Figure 3, while Niépce's camera generated a photo of the scene directly in one process. The first process is image acquisition. This process receives light from the scene and converts the received light into a latent image. For this process, a film camera uses photographic film or a plate, while a digital camera uses an imaging sensor, e.g., a Charge Coupled Device (CCD) image sensor or a Complementary Metal-Oxide-Semiconductor (CMOS) sensor.

Figure 3: Imaging processes of a digital camera: the image acquisition process converts the energy of the light coming from the target scene into measurable values, and the image display process shows the digital image using a display device.

Next is the image display process. This step transforms the latent image into a visible image. For images saved on film, we follow photographic processing, the chemical way to produce a negative or positive image. A digital image, on the other hand, can be displayed in various ways: one may use a printer to make the photo permanent, while another may use a display device to view the photo temporarily.
A typical display device is a computer monitor, such as a Cathode Ray Tube (CRT) display or a Liquid Crystal Display (LCD). Thanks to the recent development of display technologies, bigger and brighter displays are available at lower cost. In contrast to such monitors, projectors have only light-emitting devices; to form an image, a projector requires a display surface onto which it emits the light.
I.I Problem definition: What is image denoising?
Image denoising is the problem of finding a clean image,
given a noisy one. In most cases, it is assumed that the noisy
image is the sum of an underlying clean image and a noise
component.
Figure 3: A noisy image is assumed to be the sum of an underlying clean image and noise.
In image denoising methods, the characteristics of the degrading system and of the noise are assumed to be known beforehand. The image s(x,y) is blurred by a linear operation and noise n(x,y) is added to form the degraded image w(x,y). This is convolved with the restoration procedure g(x,y) to produce the restored image z(x,y). The "linear operation" shown in Figure 3 is the addition or multiplication of the noise n(x,y) to the signal s(x,y) [4].
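Writing the blur as convolution with a kernel h(x,y) (the kernel is left implicit in the description above, so this is only a sketch of the usual linear model), the degradation and restoration steps read

    w(x,y) = h(x,y) * s(x,y) + n(x,y)
    z(x,y) = g(x,y) * w(x,y)

where * denotes two-dimensional convolution.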
Figure 4: Degradation model
The literature on image restoration explores various ways to restore an image, and many algorithms have been suggested and implemented for denoising and defocus removal. The prime objective of this work is to restore the image using a Geometric Transform estimation algorithm. The objective can be characterized as: image restoration using the Geometric Transform Estimation technique (feature matching) and performance analysis based on PSNR and MSE evaluation.
II. DESIGN METHODOLOGY
The algorithm finds feature values such as edges, corners, inlier points and outlier points from the original and transformed images, and based on this information it restores the image. MATLAB software is used to implement this work. The proposed algorithm for the research methodology can be illustrated as follows.
As discussed, the methodology applies the Geometrical Estimation algorithm to find the deblurred image when there is a uniform variation of the pixels throughout the image at a particular angle; here it is important to consider the image size. Several techniques have been proposed for denoising an image when the user has a priori knowledge of the image features in terms of the PSF, but when the PSF is unknown, retrieval of the image becomes complex.

The first step is image acquisition, which includes capturing, resizing and refining the image data to be processed.
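A minimal MATLAB sketch of this acquisition step is shown below; the file name and the working size are illustrative assumptions rather than values fixed by the method.

    % Image acquisition: read the image, convert to grayscale, resize.
    original = imread('reference1.png');       % placeholder file name
    if size(original, 3) == 3
        original = rgb2gray(original);         % SURF detection expects grayscale
    end
    original = imresize(original, [512 512]);  % assumed working size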
II.I Steps for Deblurring using the Geometrical Transform Estimation Algorithm:
Figure 5: Original Image gathered from Image acquisition
The original image is treated with a transformation in which each pixel of the image is rotated by 31 degrees. This is called uniform geometrical displacement: the relative distance between the pixels is unchanged but their geometrical positions have changed. To obtain this type of image we use a simple rotation of the image, because under rotation the distance between pixels remains constant while their geometrical locations change.
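In MATLAB this uniform displacement can be produced with a plain rotation, for example:

    % Rotate every pixel by 31 degrees; relative pixel distances are
    % preserved, only the spatial coordinates change.
    distorted = imrotate(original, 31);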
Table No. 1: SURFPoints (Properties):
SURFPoints is an object for storing SURF interest points. The main purpose of this class is to pass data between the detectSURFFeatures and extractFeatures functions; it can also be used to manipulate and plot the data returned by these functions. Using the class to fill in the points interactively is considered an advanced manoeuvre; it is useful in situations where you might want to mix a non-SURF interest point detector with the SURF descriptor.
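As a sketch of that advanced use, one could wrap the locations returned by a non-SURF detector in a SURFPoints object and then compute SURF descriptors for them (the choice of the Harris detector here is only illustrative):

    corners = detectHarrisFeatures(original);                   % non-SURF interest points
    surfPts = SURFPoints(corners.Location);                     % fill SURFPoints manually
    [features, validPts] = extractFeatures(original, surfPts);  % SURF descriptors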
Property           Size and class
Count              428
Location           428x2 single
Scale              428x1 single
Metric             428x1 single
SignOfLaplacian    428x1 int8
Orientation        428x1 single

'Orientation' is specified as an angle, in radians, measured from the X-axis with the origin at 'Location'. 'Orientation' should not be set manually; rely on the call to extractFeatures to fill in this value. 'Orientation' is mainly useful for visualization purposes.
Figure 6: Transformed Image
After the transformation, the task is to detect and extract the features of the original image and of the transformed image. To detect robust features of the grayscale image, from which we can infer the exact angle of the distortion, we use the detectSURFFeatures function, which detects Speeded-Up Robust Features (SURF) in a grayscale image. The procedure also has to deal with outliers. Outliers are data values that deviate from the mean by more than three standard deviations; when parameters are estimated from data containing outliers, the results may not be accurate.
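A sketch of this detection and extraction step, using the variable names introduced earlier for the original and transformed images:

    % Detect SURF interest points and extract their descriptors
    ptsOriginal  = detectSURFFeatures(original);
    ptsDistorted = detectSURFFeatures(distorted);
    [featOriginal,  validPtsOriginal]  = extractFeatures(original,  ptsOriginal);
    [featDistorted, validPtsDistorted] = extractFeatures(distorted, ptsDistorted);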
'SignOfLaplacian' is a property unique to the SURF detector. Blobs with identical metric values but different signs of the Laplacian differ in their intensity values: a white blob on a black background versus a black blob on a white background. This value can be used to quickly eliminate blobs that do not match in this sense. For non-SURF detectors this value is not relevant, although it should be set consistently so as not to affect the matching process; for example, for corner features you can simply use the default value of 0.
Note that SURFPoints is always a scalar object which may hold many points. Therefore, numel(surfPoints) always returns 1. This may be different from length(surfPoints), which returns the true number of points held by the object.
Using the Speeded-Up Robust Features (SURF) algorithm we found blob features in the previous step. These features are very important for matching: the algorithm extracts feature values that are found in both images. The next step is to find the index pairs between the two images.
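The index pairs can be obtained as sketched below; at this stage the matches still include outliers (compare Figure 7):

    % Match descriptors and keep the corresponding points
    indexPairs       = matchFeatures(featOriginal, featDistorted);
    matchedOriginal  = validPtsOriginal(indexPairs(:, 1));
    matchedDistorted = validPtsDistorted(indexPairs(:, 2));
    figure; showMatchedFeatures(original, distorted, matchedOriginal, matchedDistorted);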
Table No 2: Matched points (input image and output image):

Property           Size and class
Count              92
Location           [92x2 single]
Scale              [92x1 single]
Metric             [92x1 single]
SignOfLaplacian    [92x1 int8]
Orientation        [92x1 single]
Figure 7: Matched SURF points, including outliers
Figure 8: Matching inliers
The image is now recovered with the help of the matching inliers.
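A sketch of this recovery step: the geometric transform is estimated from the matched points (the robust estimation discards the outliers and returns the inliers), and the distorted image is warped back onto the coordinate frame of the original.

    % Estimate the transform mapping the distorted image to the original,
    % then warp the distorted image back to recover it.
    [tform, inlierDistorted, inlierOriginal] = estimateGeometricTransform( ...
        matchedDistorted, matchedOriginal, 'similarity');
    outputView = imref2d(size(original));
    recovered  = imwarp(distorted, tform, 'OutputView', outputView);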
Figure 9: Recovered image

Figure 10: The same methodology (Geometric Transformation) applied to Reference Image-2

Figure 11: The same methodology (Geometric Transformation) applied to Reference Image-3

Figure 12: The same methodology (Geometric Transformation) applied to Reference Image-4

Figure 13: The same methodology (Geometric Transformation) applied to real image data
III. RESULT ANALYSIS
In the Geometrical Transformation method, the feature points in the original image and in the transformed image are calculated. By working only with feature points we avoid a heavy calculation on each pixel, consequently reducing the time, effort and complexity of the system. We analysed all the matching and extraction steps of this algorithm and at the end calculated the PSNR and MSE values. The calculated PSNR values are up to the mark, which is desirable.
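The PSNR and MSE values can be computed directly from the original and recovered images, for example as below (assuming 8-bit images, so the peak value is 255):

    % Mean squared error and peak signal-to-noise ratio (in dB)
    err     = double(original) - double(recovered);
    mseVal  = mean(err(:).^2);
    psnrVal = 10 * log10(255^2 / mseVal);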
Now consider another case. If the image is first blurred and then subjected to geometrical rotation, reconstruction of the image becomes more challenging, because the spatial coordinates of the pixels are also affected. How can this be resolved?
In practice, by applying the geometrical method iteratively we found that the blur is removed and the desired PSNR and MSE values are achieved. The Geometrical Transformation gives us the ability to extract the feature points in the image; those feature points are simply the pixels that are found in common in both images. The study was carried out by applying the same method to the reference and real images.
III.I Quality Measurements
In order to evaluate the quality of the restored image, the peak signal-to-noise ratio (PSNR) and the mean squared error (MSE) are used:
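For an M x N image f and its restored estimate f_r, the standard definitions are (assuming 8-bit images, so the peak value is 255):

    MSE  = (1 / (M N)) Σx Σy [ f(x,y) - f_r(x,y) ]^2
    PSNR = 10 log10( 255^2 / MSE )   [dB]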
Table No 4: PSNR and MSE values for various images

Image                 PSNR Value (dB)    MSE Value
Reference Image I     44.2836            0.0108
Reference Image II    46.6766            0.0094
Reference Image III   45.2553            0.0106
Reference Image IV    39.4302            0.0148
Real Image            43.4253            0.0121
Result for the Geometric Transformation Algorithm (when the distance between pixels is constant but the spatial location has been changed)

The PSNR and MSE values calculated by applying the Geometric Transform algorithm to the various reference and real images are tabulated in Table No 3.
Table No 3: PSNR & MSE value estimation

Image                 PSNR Value (dB)    MSE Value
Reference Image I     42.3769            0.0144
Reference Image II    40.8362            0.0165
Reference Image III   43.8668            0.0124
Reference Image IV    38.0147            0.0171
Real Image            45.4133            0.0107
From this first study we see that when all the pixel locations are rotated by a particular angle while the distance between neighbouring pixels is kept constant, the real image shows the best PSNR value with the least MSE.
Figure 14: PSNR vs. MSE for the different images (when the distance between pixels is constant but the spatial location has been changed)
Result for the Geometric Transformation Algorithm (when both the distance between pixels and the spatial location have been changed)
Figure 15: PSNR vs. MSE for the different images (when both the distance between pixels and the spatial location have been changed)

• By applying the Geometric Transformation method we observe that it is very easy to implement, takes less time and has lower complexity than the other algorithms considered, and it gives desirable PSNR and MSE values.

• It is useful for, and compatible with, grayscale images.
IV. CONCLUSION
Estimating the amount of blur in a given image is important
for computer vision applications. More specifically, the
spatially varying defocus point-spread-functions (PSFs) over
an image reveal geometric information of the scene, and their
estimate can also be used to recover an all-in-focus image. A
PSF for a defocus blur can be specified by a single parameter
indicating its scale. Most existing algorithms can only select
an optimal blur from a finite set of candidate PSFs for each
pixel. Some of those methods require a coded aperture filter
inserted in the camera.
In this paper we used the Geometric Transform algorithm to estimate feature points from the original and blurred images. These feature points are then used for the restoration of the blurred image, or of an image whose pixel coordinates have been changed by a transformation or by blur.
REFERENCES
[1] D. Kundur and D. Hatzinakos, "A novel blind deconvolution scheme for image restoration using recursive filtering," IEEE Transactions on Signal Processing, vol. 46, no. 2, February 1998.
[2] X. Kang, Q. Peng, G. Thomas and Yu, "Blind image restoration using the cepstrum method," IEEE CCECE/CCGEI, Ottawa, May 2006.
[3] P. Patil and R. B. Wagh, "Implementation of restoration of blurred image using blind deconvolution algorithm," IEEE, 2013.
[4] X. Zhu, S. Cohen, S. Schiller and P. Milanfar, "Estimating spatially varying defocus blur from a single image," IEEE Transactions on Image Processing, vol. 22, no. 12, December 2013.
[5] A. Levin, Y. Weiss, F. Durand and W. T. Freeman, "Understanding and evaluating blind deconvolution algorithms," in Proc. IEEE CVPR, Aug. 2009, pp. 1964–1971.
[6] L. Xu and J. Jia, "Two-phase kernel estimation for robust motion deblurring," in Proc. ECCV, 2010, pp. 157–170.
[7] O. Whyte, J. Sivic, A. Zisserman and J. Ponce, "Non-uniform deblurring for shaken images," in Proc. IEEE CVPR, Jun. 2010, pp. 491–498.
[8] A. Chakrabarti, T. Zickler and W. T. Freeman, "Analyzing spatially varying blur," in Proc. IEEE CVPR, Jun. 2010, pp. 2512–2519.
[9] S. Bae and F. Durand, "Defocus magnification," Comput. Graph. Forum, vol. 26, no. 3, pp. 571–579, 2007.
[10] D. Zoran and Y. Weiss, "Scale invariance and noise in natural images," in Proc. IEEE 12th Int. Conf. Comput. Vis., Oct. 2009, pp. 2209–2216.
[11] M. Aharon, M. Elad and A. Bruckstein, "K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation," IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311–4322, 2006.
[12] R. Amiri, M. Alaee, H. Rahmani and M. Firoozmand, "Chirplet based denoising of reflected radar signals," in Proc. Third Asia International Conference on Modelling & Simulation (AMS '09), pp. 304–308, IEEE, 2009.
[13] D. F. Andrews and C. L. Mallows, "Scale mixtures of normal distributions," Journal of the Royal Statistical Society, Series B (Methodological), vol. 36, no. 1, pp. 99–102, 1974.
[14] E. Arias-Castro and D. L. Donoho, "Does median filtering truly preserve edges better than linear filtering?" The Annals of Statistics, vol. 37, no. 3, pp. 1172–1206, 2009.
[15] V. Aurich and J. Weule, "Non-linear Gaussian filters performing edge preserving diffusion," in Mustererkennung 1995, 17. DAGM-Symposium, pp. 538–545, Springer-Verlag, 1995.
[16] Y. Bengio, "Learning deep architectures for AI," Foundations and Trends in Machine Learning, vol. 2, no. 1, pp. 1–127, 2009.
[17] Y. Bengio and X. Glorot, "Understanding the difficulty of training deep feedforward neural networks," in Proc. 13th International Conference on Artificial Intelligence and Statistics (AISTATS), vol. 13, pp. 249–256, 2010.
[18] Y. Bengio, P. Lamblin, D. Popovici and H. Larochelle, "Greedy layer-wise training of deep networks," Advances in Neural Information Processing Systems (NIPS), 2006.
[19] A. Buades, B. Coll and J. M. Morel, "A non-local algorithm for image denoising," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2005.
[20] A. Buades, B. Coll and J. M. Morel, "A review of image denoising algorithms, with a new one," Multiscale Modeling and Simulation, vol. 4, no. 2, pp. 490–530, 2005.
Author's Profile

Ankita Sharma is a research scholar at IASSCOM Fortune Institute of Technology, affiliated to Rajiv Gandhi Proudyogiki Vishwavidyalaya, Bhopal. She is pursuing her M.Tech in Digital Communication and has a keen interest in image processing.

Uday Bhan Singh received his degree in Electronics & Communication Engineering and his Master's in Nano Technology. He is presently working as HOD, Department of Electronics & Communication, at IASSCOM Fortune Institute of Technology, affiliated to Rajiv Gandhi Proudyogiki Vishwavidyalaya, Bhopal.

Ankur Chourasia received his degree in Electronics & Communication Engineering and his Master's in Digital Communication. He is presently working as an Assistant Professor in the Department of Electronics & Communication Engineering at IASSCOM Fortune Institute of Technology, affiliated to Rajiv Gandhi Proudyogiki Vishwavidyalaya, Bhopal.