Rotation Invariant Content-Based Image Retrieval System
P. Vijaya Bharati, A. Rama Krishna
Assistant Professors, Department of Computer Science & Engineering,
Vignan’s Institute of Engineering for Women, AP, India
Abstract—The emergence of multimedia technology, the rapid growth in the number and type of multimedia assets controlled by different entities, and the increasing range of image and video documents appearing on the Internet have attracted significant research efforts toward tools for the effective retrieval and management of visual data. Hence the need for image retrieval systems arose. Among the many existing systems, the Rotation Invariant Content-Based Image Retrieval System is the most efficient and accurate one.
An effective texture feature is an essential component of any CBIR system. In the past, spectral features such as Gabor and wavelet features have shown better retrieval performance than most statistical and structural alternatives. Recent research on multi-resolution analysis has found that the curvelet transform captures texture properties such as curves, lines, and edges more accurately than Gabor filters. However, the texture feature extracted using the curvelet transform is not rotation invariant, which can degrade retrieval performance considerably, particularly when there are many similar images with different orientations. We analyse the curvelet transform and derive a practical approach for extracting rotation invariant curvelet features. The new system uses the curvelet transform to extract texture features and incorporates rotation invariance.
Keywords: Texture features, Color features, Shape features, Rotation Invariant, Gabor Filters, Wavelets
I. INTRODUCTION
Databases of art works, satellite and medical
imagery have been attracting more and more users in
various professional fields — for example, medicine,
geography, architecture, advertising, design, fashion,
and publishing. Effectively and efficiently retrieving
relevant images from large and varied image
databases is now a necessity. Many retrieval systems have been developed; all are based either on text (the image name) or on the content of the image [7].
In the Text-Based Image Retrieval System, we retrieve images using keywords: we give an image name as input and, based on this name, images with similar names are retrieved. For example, suppose we want to search for all images named Roses in a large database; we give Rose.jpg as input. But if the database also contains other images (not roses) with the same name as Rose.jpg, those irrelevant images are returned as well. Content-Based Image Retrieval Systems were developed to improve the efficiency of these existing systems.
In the Content-Based Image Retrieval System, we retrieve images based on the content of the image, i.e. its texture, shape, and color features [1]. We therefore have to build a database containing the features of all images.
Fig. 1.1 Representation of a Digital Image.
The notation used to represent the complete M×N digital image in matrix form is shown in Figure 1.2.
Fig. 1.2 Matrix Representation of a Digital Image
1.1.2 Digital Image Processing
The field of digital image processing refers to processing digital images by means of a computer. A digital image consists of a finite number of elements, each with a particular location and value. These elements are referred to as picture elements, image elements, pels, or pixels [7]. Figure 1.3 shows the overall model of a digital image processing system.
Fig. 1.3 Model of an Image Processing System
Figure 1.3 clearly shows that, in a digital image
processing system, both input and output are digital
images.
1.2 Image Analysis
Visual feature extraction is the basis of any content-based image retrieval technique. Widely used features include color, texture, shape, and spatial relationships. Because of the subjectivity of perception and the complex composition of visual information, no single best representation exists for any given visual feature. Multiple approaches have been introduced for each feature, and each of them characterizes the feature from a different perspective [5].
1.2.1 Color Features
Color is one of the most widely used visual features in content-based image retrieval. It is robust and easy to represent. Numerous studies of color perception and color spaces have been carried out in order to find color-based techniques that are more closely aligned with the way humans perceive color [6]. Figure 1.4 (a) shows a sample image and Figure 1.4 (b) shows its corresponding histogram.
Fig. 1.4 (a) Sample Image
Fig 1.4 (b) Corresponding Histogram
The color histogram is the most commonly used representation technique. It statistically describes the probabilistic properties of the various color channels (such as the red (R), green (G), and blue (B) channels) by capturing the number of pixels having specific properties [6].
For example, a color histogram might describe the
number of pixels of each red channel value in the
range [0, 255]. Typically the particular channel
values are shown along the x-axis, the numbers of
pixels are shown along the y-axis, and the particular
color channel used is indicated in each histogram [6].
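For concreteness, a color histogram of this kind can be computed in a few lines; the following Python/NumPy sketch is illustrative only (the function name and the random stand-in image are our assumptions, not part of the system described here).

```python
import numpy as np

def channel_histogram(image, channel=0, levels=256):
    """Color histogram of one channel of an H x W x 3 uint8 image:
    counts[v] = number of pixels whose channel value equals v."""
    values = image[:, :, channel].ravel()
    counts, _ = np.histogram(values, bins=levels, range=(0, levels))
    return counts

# Example with a random stand-in image (a real image would be loaded instead).
img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
red_hist = channel_histogram(img, channel=0)   # x-axis: value, y-axis: count
```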
Disadvantage: It is well known that histograms lose
information related to the spatial distribution of
colors and that two different images can have similar
histograms.
Two approaches are used to overcome this disadvantage: correlograms and anglograms.
 Correlograms capture the distribution of colors of pixels in particular areas around pixels of particular colors. They are easy to compute and more stable than the color histogram.
 Anglograms capture a particular signature of the spatial arrangement of areas (single pixels or blocks of pixels) having common properties, such as similar colors. They can also be used for extracting texture and shape features.
Different color spaces that are used for extracting the color features of an image are:
i. NTSC color space
ii. YCbCr color space
iii. HSV color space
i. NTSC Color Space
This color space is used in television. Its main advantage is that grey-scale information is separated from color data, so the same signal can be used for both color and monochrome television sets. NTSC is the color space with the best separation between the luminance and the chrominance data. It is useful for enhancing the interpretability of geophysical images in a simple way, is easy to implement in software, and is computationally inexpensive.
In NTSC format, image data consists of three components:
a. Luminance (Y)
b. Hue (I)
c. Saturation (Q)
 The luminance component represents the grey-scale data.
 The hue and saturation components carry the color data of a TV signal.
The following standard relation can be used to transform an RGB image into an NTSC image, and its inverse converts back:
Y = 0.299 R + 0.587 G + 0.114 B
I = 0.596 R − 0.274 G − 0.322 B
Q = 0.211 R − 0.523 G + 0.312 B
We can thus easily calculate the Y, I, and Q values from the RGB values of an image.
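For illustration, this conversion can be sketched in Python/NumPy using the standard NTSC matrix given above; the function name and the stand-in image are assumptions for the example.

```python
import numpy as np

# Standard NTSC RGB -> YIQ transform matrix (coefficients quoted above).
RGB_TO_YIQ = np.array([[0.299,  0.587,  0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523,  0.312]])

def rgb_to_yiq(rgb_image):
    """Convert an H x W x 3 RGB image (values in [0, 1]) to YIQ.

    The inverse conversion uses the inverse matrix: np.linalg.inv(RGB_TO_YIQ).
    """
    return rgb_image @ RGB_TO_YIQ.T

img = np.random.rand(4, 4, 3)   # stand-in for a real image
yiq = rgb_to_yiq(img)
```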
ii. YCbCr Color Space
YCbCr is sometimes abbreviated to YCC. It is used in digital video. Here, luminance information is represented by a single component Y, and color information is stored as two color-difference components, Cb and Cr.
 Cb is the difference between the blue component and a reference value.
 Cr is the difference between the red component and a reference value.
The following relations can be used to transform an RGB image into a YCbCr image and vice versa:
Y = Kry · R + Kgy · G + Kby · B
Cb = B − Y, Cr = R − Y, with Kry + Kgy + Kby = 1
Expanding the differences gives
Y = Kry · R + Kgy · G + Kby · B
Cb = Kru · R + Kgu · G + Kbu · B
Cr = Krv · R + Kgv · G + Kbv · B
where Kru = −Kry, Kgu = −Kgy, Kbu = 1 − Kby, and Krv = 1 − Kry, Kgv = −Kgy, Kbv = −Kby.
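A minimal sketch of these relations follows. The BT.601 luma weights Kry = 0.299, Kgy = 0.587, Kby = 0.114 are assumed as defaults, since the text only constrains the weights to sum to 1.

```python
def rgb_to_ycbcr(r, g, b, kry=0.299, kgy=0.587, kby=0.114):
    """Analog-form YCbCr: Y = Kry*R + Kgy*G + Kby*B, Cb = B - Y, Cr = R - Y.

    The default weights are the BT.601 luma coefficients (an assumption;
    the text leaves Kry, Kgy, Kby unspecified beyond summing to 1).
    """
    y = kry * r + kgy * g + kby * b
    return y, b - y, r - y            # (Y, Cb, Cr)

print(rgb_to_ycbcr(1.0, 0.0, 0.0))   # pure red: Y = 0.299, Cb = -0.299, Cr = 0.701
```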
iii. HSV Color Space
This color space is much closer than the RGB system to the way in which humans experience and describe color sensations, making it straightforward for a human to grasp. It is widely used to generate high-quality computer graphics; in simple terms, it is used to select the various colors needed for a particular picture. It gives color according to human perception. Figure 1.5 shows the HSV color space model.
Fig. 1.5 Model of the HSV color space
 Hue is expressed as an angle around a color hexagon, typically using the red axis as the 0-degree axis.
 Value is measured along the axis of the cone: V = 0 at the apex of the axis is black, and V = 1 at the other end of the axis (the center of the hexagon) is white.
 Saturation is measured as the distance from the axis.
The following relations can be used to transform an RGB image into an HSV image and vice versa. Let
Max = maximum {R, G, B}
Min = minimum {R, G, B}
VALUE (i.e., V in HSV) is easy to describe: it is simply the largest of the R, G, B components:
Value = Max(R, G, B)
SATURATION (S in HSV) is also easy to compute. It is defined as:
Saturation = (Max − Min) / Value
HUE is the trickiest to compute. It is defined piecewise, depending on which of the red, green, and blue
components of the color is the greatest. When green
is the greatest, Hue will fall between 60 and 180, and
when blue is the greatest, Hue will fall between 180
and 300. When red is the greatest, Hue will be an
angle falling either between 300 and 360 or between
0 and 60 [5].
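A minimal Python sketch of this piecewise definition, with hue in degrees and R, G, B assumed to lie in [0, 1] (the function name is ours):

```python
def rgb_to_hsv(r, g, b):
    """RGB in [0, 1] -> (hue in degrees, saturation, value),
    following the piecewise definition above."""
    mx, mn = max(r, g, b), min(r, g, b)
    value = mx
    saturation = 0.0 if mx == 0 else (mx - mn) / mx
    if mx == mn:
        hue = 0.0                                  # achromatic: hue undefined
    elif mx == r:
        hue = (60 * (g - b) / (mx - mn)) % 360     # red greatest: 300-360 or 0-60
    elif mx == g:
        hue = 60 * (b - r) / (mx - mn) + 120       # green greatest: 60-180
    else:
        hue = 60 * (r - g) / (mx - mn) + 240       # blue greatest: 180-300
    return hue, saturation, value

print(rgb_to_hsv(1.0, 0.5, 0.0))   # an orange: hue = 30 degrees
```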
1.2.2 Texture Features
Texture refers to the patterns in an image that
present the properties of homogeneity that do not
result from the presence of a single color or intensity
value. It is a powerful discriminating feature, present
almost everywhere in nature. However, it is almost
impossible to describe texture in words, because it is
virtually a statistical and structural property. There
are two major categories of texture-based techniques,
namely:
1. Statistical/Spatial Techniques
2. Spectral Techniques
1. Statistical/Spatial Techniques
These methods treat texture patterns as samples of
certain random fields and extract texture features
from these properties.
These are based on statistical moments. They are:
a. Mean
b. Variance
c. Smoothness
d. Third Moment
e. Uniformity
f. Entropy
» But these are sensitive to rotation, scaling and
translation. To overcome these problems, we use
spectral techniques [3].
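For illustration, these six descriptors can be computed from the normalized grey-level histogram of a region. The sketch below assumes the standard histogram-moment definitions (e.g. smoothness R = 1 − 1/(1 + σ²)), since the list above names the descriptors without spelling out formulas.

```python
import numpy as np

def statistical_texture_features(gray, levels=256):
    """Mean, variance, smoothness, third moment, uniformity, and entropy,
    all computed from the normalized grey-level histogram p(z) of a region."""
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))
    p = hist / hist.sum()                     # normalized histogram p(z)
    z = np.arange(levels, dtype=float)
    mean = (z * p).sum()
    variance = ((z - mean) ** 2 * p).sum()
    smoothness = 1 - 1 / (1 + variance)       # R = 1 - 1/(1 + sigma^2)
    third_moment = ((z - mean) ** 3 * p).sum()
    uniformity = (p ** 2).sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return mean, variance, smoothness, third_moment, uniformity, entropy

gray = np.random.randint(0, 256, (64, 64))    # stand-in grey-level region
print(statistical_texture_features(gray))
```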
2. Spectral Techniques
Spectral approaches involve the sub-band
decomposition of images into different channels, and
the analysis of spatial frequency content in each of
these sub-bands in order to extract texture features.
These are based on Fourier Spectrum, suited for
describing the directionality of periodic or almost
periodic 2-D patterns in an image. Interpretation of spectrum features is simplified by expressing the spectrum in polar coordinates to yield a function S(r, θ), where r and θ are the variables of the polar coordinate system.
Types of Spectral Techniques
a. Gabor wavelets
b. Ridgelets
c. Curvelets
a. Gabor Wavelets
Wavelets generalize the Fourier transform by using a basis that represents both location and spatial frequency [3]. The following formula can be used to calculate the wavelet coefficients:
W(s, λ) = ∫ x(t) (1/√s) h((t − λ)/s) dt
where s is the scale, λ is the translation, and the integral runs over t from −∞ to ∞.
Advantage: It is efficient in detecting points.
Disadvantage: It is not efficient in detecting lines and edges.
» To overcome this disadvantage, ridgelets were developed.
b. Ridgelets
A ridgelet is a wavelet-type function that is constant along lines; it is much sharper than a sinusoidal wavelet. Ridgelet coefficients can be calculated by the following formula:
R(a, b, θ) = ∫∫ ψ_(a,b,θ)(x, y) f(x, y) dx dy
where a is the scale, b is the shift, and θ is the rotation.
Advantage: It can capture lines and edges more accurately.
Disadvantage: The frequency spectrum covered by ridgelets is not complete. To overcome this, we use curvelets.
c. Curvelets
The curvelet transform was originally proposed for image denoising applications and has shown promising results in character recognition and image retrieval. The concept of the curvelet transform has been extended from the 2-D ridgelet transform. The curvelet transform, like the wavelet transform, is a multiscale transform with frame elements indexed by scale and location parameters. Unlike the wavelet transform, it also has directional parameters, and the curvelet pyramid contains elements with a very high degree of directional specificity [4].
Advantage: The frequency spectrum covered by curvelets is complete.
» Using the curvelet coefficients, we can compare the input query image with the database images.
» Each image is decomposed into 4 or 5 levels of scale using the curvelet transform.
The following formula can be used to calculate the curvelet coefficients:
CT_D(a, b, θ) = Σ_x Σ_y f(x, y) φ_(a,b,θ)(x, y)
The curvelet coefficients obtained from the above are rotation variant, because the feature vector changes significantly when the image is rotated. So the idea is to rearrange the feature values based on the dominant orientation.
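A minimal sketch of this rearrangement idea, under the assumption that the texture feature vector holds one energy value per curvelet orientation at a given scale (the exact feature layout may differ): rotating the image approximately cycles these values, so anchoring the vector at its dominant orientation removes the dependence on orientation.

```python
import numpy as np

def rotation_invariant_features(orientation_energies):
    """Cyclically shift per-orientation curvelet energies so the dominant
    (highest-energy) orientation becomes the first entry, making the
    feature vector approximately rotation invariant."""
    e = np.asarray(orientation_energies, dtype=float)
    dominant = int(np.argmax(e))       # index of the dominant orientation
    return np.roll(e, -dominant)       # rearranged feature vector

# Example: the same texture at two orientations yields the same features.
f1 = [0.2, 0.9, 0.4, 0.1, 0.3, 0.5]
f2 = np.roll(f1, 2)                    # simulated rotated version
assert np.allclose(rotation_invariant_features(f1),
                   rotation_invariant_features(f2))
```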
1.2.3 Shape Features
Shape representation is normally required to be
invariant to translation, rotation, and scaling. In
general, shape representations can be categorized as
either boundary-based or region-based. A boundary-based representation uses only the outer boundary
characteristics of the entities, while a region-based
representation uses the entire region. Shape features
may also be local or global. A shape feature is local if
it is derived from some proper subpart of an object,
while it is global if it is derived from the entire object
[6].
It is also important to distinguish images with different shapes. There are several methods for extracting the shape features of an image:
1. Moment Invariants
2. Fourier Descriptors
1. Moment Invariants
The moment invariants of an image are calculated to extract its shape features; they are insensitive to translation, scaling, mirroring, and rotation.
2. Fourier Descriptors
The Fourier descriptors of an image are calculated by tracing the boundary of an object starting at an arbitrary point (x0, y0). The coordinate pairs (x0, y0), (x1, y1), (x2, y2), …, (xK−1, yK−1) are encountered in traversing the boundary, say, in the counter-clockwise direction. These coordinates can be expressed as x(k) = xk and y(k) = yk [7]. The boundary itself can then be represented as this sequence of coordinates.
The steps performed in calculating Fourier descriptors are:
1. Calculate the boundary points of the objects in the image.
2. Represent each point as a complex number:
s(k) = x(k) + j y(k)
3. Calculate the discrete Fourier transform of s(k) as:
a(u) = Σ_(k=0..K−1) s(k) exp(−j2πuk/K), u = 0, 1, …, K−1
The complex coefficients a(u) are called the Fourier descriptors of the boundary [6].
Advantage: The coefficients can easily be calculated with a small MATLAB program, so complexity decreases.
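The text mentions a small MATLAB program; an equivalent Python/NumPy sketch is given below, assuming the boundary points have already been extracted.

```python
import numpy as np

def fourier_descriptors(boundary_xy):
    """Fourier descriptors of a closed boundary.

    boundary_xy: sequence of (x, y) pairs traversed along the boundary.
    Each point becomes the complex number s(k) = x(k) + j*y(k); the
    descriptors a(u) are the DFT of that sequence.
    """
    pts = np.asarray(boundary_xy, dtype=float)
    s = pts[:, 0] + 1j * pts[:, 1]     # s(k) = x(k) + j y(k)
    return np.fft.fft(s)               # a(u), u = 0..K-1

# Example: descriptors of a small square-ish boundary.
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
a = fourier_descriptors(square)
```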
II. RETRIEVAL TECHNIQUES
Image retrieval techniques integrate both low-level visual features, addressing the more detailed perceptual aspects, and high-level semantic features, underlying the more general conceptual aspects of visual data [1]. Image retrieval relies on the availability of image content descriptors, which may be features such as color, texture, shape, and spatial relationships, or semantic primitives.
Conventional information retrieval is based exclusively on text, and these approaches to textual data retrieval have been transplanted into image retrieval in a variety of ways, including the representation of an image as a vector of feature values. However, "an image is worth a thousand words": image content is far more versatile than text, and the amount of visual information is growing rapidly. Hoping to address these special characteristics of visual information, content-based image retrieval methods have been introduced.
It is widely recognized that image retrieval techniques should integrate both low-level visual features, addressing the more detailed perceptual aspects, and high-level semantic features, underlying the more general conceptual aspects of visual data. Neither of these two types of features alone is enough to retrieve or manage visual data in an efficient manner. Although efforts have been dedicated to combining the two aspects of visual information, the gap between them remains a barrier for researchers. Intuitive and heuristic approaches do not give satisfactory performance. Therefore, there is an immediate need to compute and manage the latent correlation between low-level features and high-level concepts.
In general, image retrieval can be categorized into
the following types:
• Exact Matching
This category is applicable only to static
environments or the environments in which features
of the images do not evolve over an extended period
of time. Databases containing industrial and
architectural drawings or electronics schematics are
examples of such environments.
• Low-Level Similarity-Based Searching
In most cases, it is difficult to determine which
images best satisfy the query. Different users may
have different desires. Even the same user might have
varied preferences under different circumstances.
Thus, it is desirable to return the top several similar
images based on the similarity measure, so as to give
users a decent sampling. The similarity measure is
generally based on simple feature matching and it is
quite common for the user to interact with the system
so as to indicate to it the quality of each of the
returned matches, which helps the system adapt to the
user’s interest.
• High-Level Semantic-Based Searching
In this case, the notion of similarity is not based
on simple feature matching and usually results from
extended user interaction with the system.
For any type of retrieval, the dynamic and versatile characteristics of image content require expensive computations and sophisticated methodologies in the areas of computer vision, image processing, information visualization, indexing, and similarity calculation. Typically, each of these schemes is built independently. Symbolic images are then employed in conjunction with various index structures as proxies for image comparisons to reduce the search scope [9]. The high-dimensional visual information is usually reduced to a lower-dimensional subspace so that it is easier to index and manage the visual content. Once the similarity measure has been calculated, the indexes of the corresponding images are located in the image space and those images are retrieved from the database. Due to the lack of any unified framework for image representation and retrieval, certain methods tend to offer better results than others for different queries. Therefore, these schemes and retrieval techniques have to be integrated and adjusted to facilitate image data management.
2.1 Existing Systems:
2.1.1 Text Based Image Retrieval System:
Fig. 2.1: Basic model of a Text-based image retrieval system
In these systems, images are retrieved based on keywords: we give an image name as input and, based on this name, images with similar names are retrieved. For example, suppose we want to search for all images named Roses in a large database; we give Rose.jpg as input. But if the database also contains other images (not roses) with the same name as Rose.jpg, those irrelevant images are returned as well. Content-Based Image Retrieval Systems have come into existence to improve the efficiency of these existing systems.
2.1.2 Content-Based Image Retrieval (CBIR) System
Figure 2.2 shows the basic model of a content-based image retrieval system.
Fig. 2.2 Basic model of a Content-based Image retrieval system
There are several excellent surveys of content-based image retrieval systems. We mention here some of the more notable systems.
The first, QBIC (Query-by-Image-Content), was one of the first prototype systems; it allows queries by color, texture, and shape, and introduced a sophisticated similarity function. As this similarity function has quadratic time complexity, dimensional reduction was used in order to reduce the computation time. Another notable property of QBIC was its use of multidimensional indexing to speed up searches [1].
The Chabot system brings text and images together into the search task, allowing the user to define concepts in terms of various feature values, and uses a post-relational database management system [1].
The MARS system allows sophisticated relevance feedback from the user.
In all these systems, we retrieve images based on the content of an image, i.e. its texture, shape, and color features, so we have to build a database containing the features of all images. For instance, suppose we want to search for all images similar to Horse.jpg in a large database; we give Horse.jpg as input. The system then extracts the features of the given image, compares these features with the image features stored in the database, and retrieves all images with similar content. But if the database contains images that have a different orientation, these systems cannot retrieve those images.
2.3 PROPOSED SYSTEM
Rotation Invariant Content-Based Image Retrieval (RICBIR) System
Figure 2.3 shows the basic model of a Rotation Invariant Content-Based Image Retrieval system.
Fig. 2.3 Basic model of a RICBIR system
In this system too, images are retrieved based on visual features such as color, texture, and shape, but now also across different orientations. The reason for its development is that, in many large image databases, the traditional text-based and content-based methods have proven insufficient, laborious, and extremely time-consuming. RICBIR can be used to overcome these drawbacks. It involves two steps:
Feature Extraction: The first step in the process is extracting image features to a distinguishable extent.
Matching: The second step involves matching these features to yield a result that is visually similar.
The sole purpose of the system is to provide an easy way of finding similar images within a large set of images. In this system, the user provides a query image and the system finds similar images. In this paper, to demonstrate image retrieval, we used a sample Corel dataset of 1000 images. To use memory efficiently, we used cell arrays to store the images, their properties, and all clustering values.
III. RICBIR SYSTEM DESIGN
For a simple and efficient design, we divided our system into two modules.
Module-1 (Constructing a Database): For each image, we extract its texture, shape, and color features and store them in a database using cell arrays. Figure 3.1 shows the overall process in Module-1.
Fig. 3.1 Module-1 (Constructing a Database)
Steps that are performed in Module-1 are:
i. Extracting Texture Features
In this system, we choose curvelets for extracting the texture features of an image. The concept of the curvelet transform has been extended from the 2-D ridgelet transform; it uses fewer coefficients than traditional transforms. The curvelet coefficients are calculated using the formula given earlier:
CT_D(a, b, θ) = Σ_x Σ_y f(x, y) φ_(a,b,θ)(x, y)
The curvelet features obtained in this way are rotation variant, because the feature vector changes significantly when the image is rotated. So the idea is to rearrange the feature values based on the dominant orientation.
ii. Extracting Color Features
In this system, we choose the HSV color space for extracting the color features of an image. HSV is well suited to describing colors in terms that are practical for human interpretation, and it is very well suited to describing the color of an object.
 Hue is an attribute that describes a pure color (e.g. pure orange, red, or yellow).
 Saturation gives a measure of the degree to which a pure color is diluted by white light.
 Value embodies the achromatic notion of intensity and is subjective.
Calculation of Hue, Saturation, and Value: using the formulae given in Section 1.2.1, we calculate the color features of an image.
iii. Extracting Shape Features
In this system, we choose moment invariants for extracting the shape features of an image. The seven moment invariants are calculated from the normalized central moments ηpq as follows:
Ф1 = η20 + η02
Ф2 = (η20 − η02)² + 4η11²
Ф3 = (η30 − 3η12)² + (3η21 − η03)²
Ф4 = (η30 + η12)² + (η21 + η03)²
Ф5 = (η30 − 3η12)(η30 + η12)[(η30 + η12)² − 3(η21 + η03)²] + (3η21 − η03)(η21 + η03)[3(η30 + η12)² − (η21 + η03)²]
Ф6 = (η20 − η02)[(η30 + η12)² − (η21 + η03)²] + 4η11(η30 + η12)(η21 + η03)
Ф7 = (3η21 − η03)(η30 + η12)[(η30 + η12)² − 3(η21 + η03)²] − (η30 − 3η12)(η21 + η03)[3(η30 + η12)² − (η21 + η03)²]
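For illustration, the seven invariants can be computed directly from the normalized central moments; the following self-contained Python/NumPy sketch uses helper names of our own choosing.

```python
import numpy as np

def hu_moments(img):
    """The seven moment invariants of a 2-D grayscale image,
    computed from normalized central moments eta(p, q)."""
    img = np.asarray(img, dtype=float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00

    def eta(p, q):
        # normalized central moment of order (p, q)
        mu = ((x - xc) ** p * (y - yc) ** q * img).sum()
        return mu / m00 ** (1 + (p + q) / 2)

    e20, e02, e11 = eta(2, 0), eta(0, 2), eta(1, 1)
    e30, e03, e21, e12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    phi1 = e20 + e02
    phi2 = (e20 - e02) ** 2 + 4 * e11 ** 2
    phi3 = (e30 - 3 * e12) ** 2 + (3 * e21 - e03) ** 2
    phi4 = (e30 + e12) ** 2 + (e21 + e03) ** 2
    phi5 = ((e30 - 3 * e12) * (e30 + e12)
            * ((e30 + e12) ** 2 - 3 * (e21 + e03) ** 2)
            + (3 * e21 - e03) * (e21 + e03)
            * (3 * (e30 + e12) ** 2 - (e21 + e03) ** 2))
    phi6 = ((e20 - e02) * ((e30 + e12) ** 2 - (e21 + e03) ** 2)
            + 4 * e11 * (e30 + e12) * (e21 + e03))
    phi7 = ((3 * e21 - e03) * (e30 + e12)
            * ((e30 + e12) ** 2 - 3 * (e21 + e03) ** 2)
            - (e30 - 3 * e12) * (e21 + e03)
            * (3 * (e30 + e12) ** 2 - (e21 + e03) ** 2))
    return np.array([phi1, phi2, phi3, phi4, phi5, phi6, phi7])
```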
iv. Feature Vector Formation
In the vector-based representation, the feature vector of image i of the database is Vi = [w1 w2 … wd], and that of the query q is Vq = [q1 q2 … qd]. Let the color features (hue, saturation, and intensity) be C1, C2, and C3; the texture features be T1, T2, and T3; and the shape features be S1, S2, and S3. The final feature vector is [C1, C2, C3, T1, T2, T3, S1, S2, S3].
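Forming the final vector is then a simple concatenation; the numbers in the sketch below are made up, standing in for computed features.

```python
import numpy as np

def build_feature_vector(color, texture, shape):
    """Concatenate color [C1..C3], texture [T1..T3], and shape [S1..S3]
    features into the final 9-dimensional vector used for matching."""
    return np.concatenate([color, texture, shape])

v = build_feature_vector([0.4, 0.7, 0.5],          # hue, saturation, intensity
                         [1.2, 0.8, 0.3],          # curvelet texture features
                         [0.01, 0.002, 0.0005])    # moment invariants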
Module-2 (Compare image features and display similar images): Given a sample query image, we extract its features, compare them with the features stored in the database, and display the relevant images. Finally, we calculate the efficiency of the system using precision and recall.
Fig 3.2: Module-2 (Comparing and displaying similar images)
IV. RESULTS AND COMPARISON
4.1 SIMILARITY MEASURE
With the vector-based feature representation Vi = [w1 w2 … wd] of a database image i and Vq = [q1 q2 … qd] of the query q, the matching can be computed as a quantification of some similarity measure between Vi and Vq.
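The text does not specify the similarity measure; Euclidean distance between the two feature vectors is one common choice, sketched below together with a top-k search over the database.

```python
import numpy as np

def euclidean_distance(v_i, v_q):
    """Dissimilarity between a database feature vector v_i and the
    query feature vector v_q; smaller means more similar."""
    v_i, v_q = np.asarray(v_i, float), np.asarray(v_q, float)
    return np.sqrt(((v_i - v_q) ** 2).sum())

def top_matches(database, v_q, k=5):
    """Indices of the k database vectors closest to the query."""
    d = [euclidean_distance(v, v_q) for v in database]
    return np.argsort(d)[:k]
```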
4.2 EFFICIENCY CALCULATION
We calculate the efficiency of the system using precision and recall.
i. Precision
In the field of information retrieval, precision can be seen as a measure of exactness or fidelity: precision is the fraction of the retrieved documents that are relevant to the search. It is defined as the number of relevant images retrieved by a search divided by the total number of images retrieved by that search.
ii. Recall
In the field of information retrieval, recall is a measure of completeness: recall is the fraction of the documents relevant to the query that are successfully retrieved. It is defined as the number of relevant images retrieved by a search divided by the total number of existing relevant images.
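Both measures are straightforward to compute from a retrieval run; in the sketch below, the retrieved and relevant images are assumed to be given as sets of image identifiers.

```python
def precision_recall(retrieved, relevant):
    """Precision = |retrieved & relevant| / |retrieved|;
    Recall    = |retrieved & relevant| / |relevant|."""
    hits = len(set(retrieved) & set(relevant))
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Second search reported below: 3 images retrieved, 2 of them relevant.
print(precision_recall(["a", "b", "c"], ["a", "b"]))   # (0.666..., 1.0)
```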
4.3 EXPERIMENTAL RESULTS
Fig 4.1: Figure to input an image and retrieve the same image
Fig 4.2: Values in the match table
Fig 4.3: Comparison using precision & recall
Precision and recall calculation: In the above search, retrieved images = 1 and relevant images = 1, so precision = 1/1 = 1 and recall = 1/1 = 1.
Fig 4.4: Figure to input an image and retrieve the similar images
Fig 4.5: Values in the match table
Fig 4.6: Comparison using precision & recall
Precision and recall calculation: In the above search, retrieved images = 3 and relevant images = 2, so precision = 2/3 ≈ 0.66 and recall = 2/2 = 1.
V. CONCLUSION
With the vast increase in image database sizes, as well as their widespread use in various applications, the need for effective and efficient retrieval systems has arisen. The development of these systems started with retrieving images using textual keywords; image retrieval based on content was introduced later and came to be known as Content-Based Image Retrieval. These systems, however, do not retrieve images that have a different orientation. So in this paper we introduced a new system, called the Rotation Invariant Content-Based Image Retrieval System, with which we can also retrieve images that have a different orientation.
REFERENCES
1. F. Long, et al., "Fundamentals of Content-based Image Retrieval," in Multimedia Information Retrieval and Management, D. Feng, Eds., Springer, 2003.
2. Gajanand Gupta, "Algorithm for Image Processing Using Improved Median Filter and Comparison of Mean, Median and Improved Median Filter," International Journal of Soft Computing and Engineering (IJSCE), ISSN: 2231-2307, Volume-1, Issue-5, November 2011.
3. S. Bhagavathy and K. Chhabra, "A Wavelet-based Image Retrieval System," Technical Report ECE278A, Vision Research Laboratory, University of California, Santa Barbara, 2007.
4. J. Starck, et al., "The Curvelet Transform for Image Denoising," IEEE Trans. on Image Processing, 11(6), 670-684, 2002.
5. Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, Addison Wesley.
6. Gonzalez and Woods, Digital Image Processing using MATLAB.
7. Anil K. Jain, Fundamentals of Digital Image Processing, Pearson.
8. B. Chanda and D. Dutta Majumder, Digital Image Processing and Analysis, Pearson.
9. Barbeau Jerome, Vignes-Lebbe Regine, and Stamon Georges, "A Signature based on Delaunay Graph and Co-occurrence Matrix," Laboratoire Informatique et Systematique, University of Paris, Paris, France, July 2002. Available: http://www.math-info.univ-paris5.fr/siplab/barbeau\barbeau.pdf