Architecture of Database Index for
Content-Based Image Retrieval Systems
Rafal Grycuk1, Patryk Najgebauer1, Rafal Scherer1(B), and Agnieszka Siwocha2,3

1 Computer Vision and Data Mining Lab, Institute of Computational Intelligence, Czȩstochowa University of Technology, Al. Armii Krajowej 36, 42-200 Czȩstochowa, Poland
{rafal.grycuk,patryk.najgebauer,rafal.scherer}@iisi.pcz.pl
2 Information Technology Institute, University of Social Sciences, 90-113 Lodz, Poland
3 Clark University, Worcester, MA 01610, USA
http://iisi.pcz.pl
Abstract. In this paper, we present a novel database index architecture for retrieving images. Effective storing, browsing and searching of image collections is one of the most important challenges of computer science. The design of an architecture for storing such data requires a set of tools and frameworks, such as relational database management systems. We create a database index as a DLL library and deploy it on MS SQL Server. The CEDD algorithm is used for image description. The index is composed of new user-defined types and a user-defined function. The presented index is tested on an image dataset and its effectiveness is proved. The proposed solution can also be ported to other database management systems.
Keywords: Content-based image retrieval · Image indexing

1 Introduction
The emergence of content-based image retrieval (CBIR) in the 1990s enabled automatic retrieval of images and made it possible to move away from searching image collections by keywords and meta tags, or by manually browsing them. Content-based image retrieval is a group of technologies whose general purpose is to organize digital images by their visual content. Many methods, algorithms and technologies fall under this definition. CBIR occupies a unique place within the scientific community. This challenging field of study involves scholars from various fields [4], such as computer vision (CV), machine learning, information retrieval, human-computer interaction, databases, web mining, data mining, information theory and statistics. Bridging these fields has proved very effective, providing interesting results and practical implementations, and thus creating new fields of research [32]. The current CBIR state of the art allows
c Springer International Publishing AG, part of Springer Nature 2018
L. Rutkowski et al. (Eds.): ICAISC 2018, LNAI 10842, pp. 36–47, 2018.
https://doi.org/10.1007/978-3-319-91262-2_4
using its methods in real-world applications used by millions of people globally (e.g. Google image search, Microsoft image search, Yahoo, Facebook, Instagram, Flickr and many others). The databases of these applications contain millions of images; thus, effective storing and retrieval of images is extremely challenging. Images are created every day in tremendous amounts, and there is ongoing research to make it possible to efficiently search these vast collections by their content. Recognizing images and objects in images relies on suitable feature extraction, which can be divided into several groups, i.e. based on color representation [19], textures [29], shape [17], edge detectors [14] or local invariant features [7,9,18,21], e.g. SURF [1], SIFT [25], neural networks [16], bag of features [6,23,30] or image segmentation [10,12].
A process associated with retrieving images from databases is query formulation (similar to the 'SELECT' statement in SQL). In the literature, it is possible to find many algorithms which operate on one of three levels [24]:
1. Level 1: Retrieval based on primary features like color, texture and shape. A
typical query is “search for a similar image”.
2. Level 2: Retrieval of a certain object which is identified by extracted features,
e.g. “search for a flower image”.
3. Level 3: Retrieval of abstract attributes, including a vast number of determiners about the presented objects and scenes. Here, it is possible to find names
of events and emotions. An example query is: “search for satisfied people”.
Such methods require the use of algorithms from many different areas such as
computational intelligence, mathematics and image processing. There are many
content-based image processing systems developed so far, e.g. [8,11,13]. A good
review of such systems is provided in [31]. To the best of our knowledge, no other
system uses a similar set of tools to the system proposed in the paper.
2 Color and Edge Directivity Descriptor
In this section, we briefly describe the Color and Edge Directivity Descriptor (CEDD) [3,20,22]. CEDD is a global feature descriptor in the form of a histogram obtained by so-called fuzzy-linking. The algorithm uses a two-stage fuzzy [2,27,28] system in order to generate the histogram. The term fuzzy-linking means that the output histogram is composed of more than one histogram. In the first stage, image blocks in the HSV colour space channels are used to compute a ten-bin histogram. The input channels are described by fuzzy sets as follows [20]:
– the hue (H) channel is divided into 8 fuzzy areas,
– the saturation (S) channel is divided into 2 fuzzy regions,
– the value (V) channel is divided into 3 fuzzy areas.
The membership functions are presented in Fig. 1. The output of the fuzzy system is obtained by a set of twenty rules and provides a crisp value in [0, 1], used to produce the ten-bin histogram. The histogram bins represent ten preset colours:

Fig. 1. Representations of fuzzy membership functions for the channels in the HSV color space, respectively: H (a), S (b), V (c) [20].

black, grey, white, red, etc. In the second stage of the fuzzy-linking system, a
brightness value of seven colours is computed (without black, grey and white). Similarly to the previous step, the S and V channels of image blocks are inputs of the fuzzy system. The output of the second stage is a three-bin histogram of crisp values, which describes the brightness of the colour (dark, normal and light). Both histogram outputs (from the first and the second stage) are combined, which allows producing the final 24-bin histogram. Each bin corresponds with
color [20]: (0) Black, (1) Grey, (2) White, (3) Dark Red, (4) Red, (5) Light Red,
(6) Dark Orange, (7) Orange, (8) Light Orange, (9) Dark Yellow, (10) Yellow,
(11) Light Yellow, (12) Dark Green, (13) Green, (14) Light Green, (15) Dark
Cyan, (16) Cyan, (17) Light Cyan, (18) Dark Blue, (19) Blue, (20) Light Blue,
(21) Dark Magenta, (22) Magenta, (23) Light Magenta. In parallel to the Colour Unit, a Texture Unit of the image block is computed, whose general schema is presented in Fig. 2.
Fig. 2. A general schema of computing the CEDD descriptor [20].
In the first step of the Texture Unit, an image block is converted to the YIQ
colour space. In order to extract texture information, MPEG-7 digital filters are
used. One of these filters is the Edge Histogram Descriptor, which represents five edge types: vertical, horizontal, 45° diagonal, 135° diagonal, and isotropic (Fig. 3).
Fig. 3. Edge filters used to compute the texture descriptor [20].
The output of the Texture Unit is a six-bin histogram. When both histograms
are computed, we obtain a 144-bin vector for every image block. Then, the vector
is normalized and quantized into 8 predefined levels. This is the final step of
computing the CEDD descriptor and now it can be used as a representation of
the visual content of the image.
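The final normalization and quantization step described above can be sketched as follows (an illustrative Python sketch, not the authors' C# implementation; CEDD uses predefined quantization thresholds which the paper does not list, so a uniform quantization relative to the largest bin is assumed here):

```python
def quantize_cedd(histogram, levels=8):
    """Normalize a raw CEDD histogram (e.g. 144 bins) and quantize
    each bin into `levels` discrete levels (0 .. levels-1)."""
    total = sum(histogram)
    norm = [v / total for v in histogram] if total else [0.0] * len(histogram)
    peak = max(norm) or 1.0
    # Uniform quantization relative to the largest bin; the real CEDD
    # uses predefined (non-uniform) thresholds, assumed away here.
    return [int(v / peak * (levels - 1) + 0.5) for v in norm]
```

The quantized vector is what gets serialized and stored, so each bin fits in three bits regardless of image size.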
3 Database Index for Content-Based Image Retrieval System
In this section, we present a novel database architecture for image indexing. The presented approach has several advantages over existing ones:
– it is embedded into the database management system (DBMS),
– it uses all the benefits of SQL and object-relational database management systems (ORDBMSs),
– it does not require any external program in order to manipulate data; a user of our index operates on T-SQL only, using Data Manipulation Language (DML) statements such as INSERT, UPDATE and DELETE,
– it provides a new type for the database, which allows storing images along with the CEDD descriptor,
– it operates on binary data (vectors are converted to binary); thus, data processing is much faster, as no JOIN clause is used.
Our image database index is designed for Microsoft SQL Server, but it can also be ported to other platforms. A schema of the proposed system is presented in Fig. 4. The index is embedded in the CLR (Common Language Runtime), which is a part of the database engine. After compilation, our solution is a .NET
library, which is executed on the CLR in SQL Server.

Fig. 4. The location of the presented image database index in Microsoft SQL Server.

The complex calculations of the CEDD descriptor cannot be easily implemented in T-SQL; thus, we decided to use C# on the CLR, which allows implementing many complex mathematical transformations.
In our solution we use two tools:
– SQL C# User-Defined Types - a project type for creating user-defined types, which can be deployed on the SQL Server and used as new types,
– SQL C# Function - it allows creating an SQL function in the form of C# code; it can also be deployed on the SQL Server and used as a regular T-SQL function. It should be noted that we use table-valued functions instead of scalar-valued functions.
First, we needed to create a new user-defined type for storing binary data along with the CEDD descriptor. During this stage we encountered many issues, which we eventually resolved. The most important ones are described below:
– The Parse method cannot take the SqlBinary type as a parameter; only SqlString is allowed. This method is used during the INSERT statement. We resolved this by encoding the binary data to a string and passing it to the Parse method. In the body of the method we decode the string back to binary and use it to obtain the descriptor.
– Another interesting problem is the registration of external libraries. By default, the System.Drawing library is not included. In order to include it, we need to execute an SQL script.
– We cannot use reference types as fields or properties; we resolved this issue by implementing the IBinarySerialize interface.
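The string-encoding workaround for the first limitation can be illustrated as follows (a Python sketch of the idea only; the actual implementation is the C# BinaryToString/StringToBinary pair, and the use of Base64 as the text encoding is an assumption, since the paper does not name the encoding):

```python
import base64

def binary_to_string(data: bytes) -> str:
    # Encode image bytes as text so they can pass through a
    # string-only Parse(SqlString) entry point.
    return base64.b64encode(data).decode("ascii")

def string_to_binary(text: str) -> bytes:
    # Recover the original image bytes inside the Parse method.
    return base64.b64decode(text.encode("ascii"))
```

The round trip is lossless, so the descriptor computed inside Parse sees exactly the bytes that were loaded from the image file.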
We designed three classes, CeddDescriptor, UserDefinedFunctions and QueryResult, and one static class, Extensions (Fig. 5).

Fig. 5. Class diagram of the proposed database visual index.

The CeddDescriptor class implements two interfaces: INullable and IBinarySerialize. It also contains one field, null, of type bool. The class further contains three properties and five methods. The IsNull and Null properties are required by user-defined types and are mostly generated. The Descriptor property allows setting or getting the CEDD descriptor value in the form of a double array. The GetDescriptorAsBytes method provides the descriptor in the form of a byte array. Another very important method is Parse. It is invoked automatically when the T-SQL CAST method is called (Listing 1.2). Due to the restrictions implemented in UDTs, we cannot pass a parameter of type SqlBinary; it must be SqlString. In order to resolve this nuisance, we encode the byte array to a string using the BinaryToString method from the UserDefinedFunctions class. In the body of the Parse method we decode the string to a byte array, then we create a bitmap based on the previously obtained byte array. Next, the CEDD descriptor value is computed. Afterwards, the obtained descriptor is set as a property. The pseudo-code of this method is presented in Algorithm 1. The Read and Write methods are implemented in order to use reference types as fields and properties. They are responsible for writing to and reading from a data stream. The last method, ToString, represents the CeddDescriptor as a string. Each element of the descriptor is displayed with a separator; this method allows displaying the descriptor value via the SELECT clause.
INPUT: EncodedString
OUTPUT: CeddDescriptor
if EncodedString = NULL then
    return NULL;
end
ImageBinary := DecodeStringToBinary(EncodedString);
ImageBitmap := CreateBitmap(ImageBinary);
CeddDescriptor := CalculateCeddDescriptor(ImageBitmap);
SetAsPropertyDescriptor(CeddDescriptor);
Algorithm 1. Steps of the Parse method.
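The steps above can be mirrored in a few lines of illustrative Python (the helper names follow the pseudo-code; decode_string_to_binary, create_bitmap and calculate_cedd_descriptor stand in for the C# routines and are injected as callables, which is an assumption made for the sake of a self-contained sketch):

```python
def parse(encoded_string, decode_string_to_binary, create_bitmap,
          calculate_cedd_descriptor):
    """Steps of the Parse method from Algorithm 1, with the helper
    routines injected as callables. Returns the descriptor instead of
    setting it as a property, to keep the sketch side-effect free."""
    if encoded_string is None:
        return None
    image_binary = decode_string_to_binary(encoded_string)
    image_bitmap = create_bitmap(image_binary)
    return calculate_cedd_descriptor(image_bitmap)
```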
Another very important class is UserDefinedFunctions; it is composed of three methods. The QueryImage method performs an image query on the previously inserted images and retrieves the most similar images with respect to the threshold parameter. The method has three parameters: image, threshold and tableDbName. The first one is the query image in the form of a binary array, the second one determines the threshold distance between the query image and the retrieved images. The last parameter determines the table to execute the query on (it is possible that many image tables exist in the system). The method takes the image parameter and calculates the CeddDescriptor. Then, it compares it with the descriptors already stored in the database. In the next step, similar images are retrieved. The method allows filtering the retrieved images by comparing their distance with the threshold. The two remaining methods, BinaryToString and StringToBinary, allow encoding and decoding images as string or binary. The QueryResult class is used for presenting the query results to the user. All its properties are self-describing (see Fig. 5). The static Extensions class contains two methods which extend the double array and byte array types, allowing conversion of a byte array to a double array and vice versa.
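The behaviour of QueryImage can be sketched as follows (illustrative Python; the table of stored descriptors is modelled as an in-memory list, and the Euclidean distance is an assumption, since the paper does not state which distance measure is used to compare CEDD descriptors):

```python
import math

def query_image(query_descriptor, stored, threshold):
    """Return (image_id, distance) pairs for stored images whose
    descriptor lies within `threshold` of the query descriptor.
    `stored` is a list of (image_id, descriptor) pairs."""
    results = []
    for image_id, descriptor in stored:
        dist = math.sqrt(sum((a - b) ** 2
                             for a, b in zip(query_descriptor, descriptor)))
        if dist <= threshold:
            results.append((image_id, dist))
    # Most similar images first, as in the retrieval scenario.
    return sorted(results, key=lambda r: r[1])
```

In the real index this comparison runs inside the table-valued function, over descriptors deserialized from their binary column representation.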
4 Simulation Environment
The presented visual index was built and deployed on Microsoft SQL Server as a CLR DLL library written in C#. Thus, we needed to enable CLR integration on the server. Afterwards, we also needed to add the System.Drawing and index assemblies as trusted. Then, we published the index and created a table with our new CeddDescriptor type. The table creation is presented in Listing 1.1. As can be seen, we created the CeddDescriptor column and other columns for the image meta-data (such as ImageName, Extension and Tag). The binary form of the image is stored in the ImageBinaryContent column.
Listing 1.1. Creating a table with the CeddDescriptor column.
CREATE TABLE CbirBow.dbo.CeddCorelImages
(
    Id int primary key identity(1,1),
    CeddDescriptor CeddDescriptor not null,
    ImageName varchar(max) not null,
    Extension varchar(10) not null,
    Tag varchar(max) not null,
    ImageBinaryContent varbinary(max) not null
);
Now we can insert data into the table, which requires binary data to be loaded into a variable and passed as a parameter. This process is presented in Listing 1.2.
Listing 1.2. Inserting data into a table with the CeddDescriptor.
DECLARE @filedata AS varbinary(max);
SET @filedata = (SELECT *
    FROM OPENROWSET(BULK N'{path to file}',
        SINGLE_BLOB) as BinaryData)
INSERT INTO dbo.CeddCorelImages
    (CeddDescriptor, ImageName, Extension, Tag, ImageBinaryContent)
VALUES (
    CONVERT(CeddDescriptor, dbo.BinaryToString(@filedata)),
    '644010.jpg', '.jpg', 'art_dino', @filedata);
A table prepared in this way can be used to insert images from any visual dataset, e.g. Corel, Pascal, ImageNet, etc. Afterwards, we can execute queries with the QueryImage method and retrieve images. For experimental purposes, we used the PASCAL Visual Object Classes (VOC) dataset [5]. We split the image set of each class into a training set used for image description and indexing (90%) and an evaluation set, i.e. query images for testing (10%). In Table 1 we present the retrieval measures for multi-query experiments. As can be seen, the results are satisfying, which allows us to conclude that our method is effective and useful in CBIR techniques. For the performance evaluation we used two well-known measures: precision and recall [26]. These measures are widely used in CBIR evaluation. A representation of the measures is presented in Fig. 6.
– AI - appropriate images which should be returned,
– RI - images returned by the system,
– rai - properly returned images (intersection of AI and RI),
– iri - improperly returned images,
– anr - properly not returned images,
– inr - improperly not returned images.
These measures allow defining precision and recall by the following formulas [26]:

precision = |rai| / (|rai| + |iri|),  (1)

recall = |rai| / (|rai| + |anr|).  (2)
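For instance, the first row of Table 1 can be reproduced with a short Python sketch of Eqs. (1) and (2) (percentages rounded to integers, as in the table):

```python
def precision(rai, iri):
    # Eq. (1): properly returned / all returned images.
    return rai / (rai + iri)

def recall(rai, anr):
    # Eq. (2): properly returned / all appropriate images.
    return rai / (rai + anr)

# Image 598 (pyramid) from Table 1: rai = 33, iri = 17, anr = 14.
p = round(100 * precision(33, 17))   # 66, matching the table
r = round(100 * recall(33, 14))      # 70, matching the table
```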
Table 1. Simulation results (MultiQuery). Due to limited space only a small part of the query results is presented.

Image Id         RI  AI  rai  iri  anr  Precision (%)  Recall (%)
598 (pyramid)    50  47  33   17   14   66             70
599 (pyramid)    51  47  31   20   16   61             66
600 (revolver)   73  67  43   30   24   59             64
601 (revolver)   72  67  41   31   26   57             61
602 (revolver)   73  67  40   33   27   55             60
603 (revolver)   73  67  42   31   25   58             63
604 (revolver)   73  67  44   29   23   60             66
605 (revolver)   71  67  40   31   27   56             60
606 (revolver)   73  67  40   33   27   55             60
607 (rhino)      53  49  39   14   10   74             80
608 (rhino)      53  49  42   11   7    79             86
609 (rhino)      53  49  42   11   7    79             86
610 (rhino)      52  49  38   14   11   73             78
611 (rhino)      52  49  39   13   10   75             80
612 (rooster)    43  41  36   7    5    84             88
613 (rooster)    43  41  33   10   8    77             80
614 (rooster)    43  41  34   9    7    79             83
615 (rooster)    44  41  35   9    6    80             85
616 (saxophone)  36  33  26   10   7    72             79
617 (saxophone)  36  33  26   10   7    72             79
618 (saxophone)  35  33  26   9    7    74             79
619 (schooner)   56  52  37   19   15   66             71
620 (schooner)   56  52  37   19   15   66             71
621 (schooner)   56  52  39   17   13   70             75
622 (schooner)   55  52  37   18   15   67             71
623 (schooner)   56  52  35   21   17   62             67
624 (scissors)   35  33  22   13   11   63             67
625 (scissors)   36  33  22   14   11   61             67
626 (scissors)   36  33  20   16   13   56             61
627 (scorpion)   75  69  59   16   10   79             86
628 (scorpion)   73  69  57   16   12   78             83
629 (scorpion)   73  69  58   15   11   79             84
630 (scorpion)   73  69  59   14   10   81             86

Table 2. Example query results. The image with the border is the query image.
Fig. 6. Performance measures diagram [15].

Table 2 shows the visualization of experimental results from a single image query. As can be seen, most images were correctly retrieved. Some of them were improperly recognized because they have similar features, such as shape or background colour. The image with the red border is the query image. The AveragePrecision value for the entire dataset equals 71 and the AverageRecall 76.
5 Conclusion
The presented system is a novel architecture of a database index for content-based image retrieval. We used Microsoft SQL Server as the core of our architecture. The approach has several advantages: it is embedded into an RDBMS, it benefits from SQL commands and thus does not require external applications to manipulate data, and, finally, it provides a new type for DBMSs. The proposed architecture can be ported to other DBMSs (or ORDBMSs). It is dedicated to being used as a database with CBIR features. The performed experiments proved the effectiveness of our architecture. The proposed solution uses the CEDD descriptor, but it is open to modifications and can be relatively easily extended to other types of visual feature descriptors.
References
1. Bay, H., Ess, A., Tuytelaars, T., Van Gool, L.: Speeded-up robust features (SURF).
Comput. Vis. Image Underst. 110(3), 346–359 (2008)
2. Beg, I., Rashid, T.: Modelling uncertainties in multi-criteria decision making using
distance measure and topsis for hesitant fuzzy sets. J. Artif. Intell. Soft Comput.
Res. 7(2), 103–109 (2017)
3. Chatzichristofis, S.A., Boutalis, Y.S.: CEDD: color and edge directivity descriptor:
a compact descriptor for image indexing and retrieval. In: Gasteratos, A., Vincze,
M., Tsotsos, J.K. (eds.) ICVS 2008. LNCS, vol. 5008, pp. 312–322. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-79547-6_30
4. Datta, R., Joshi, D., Li, J., Wang, J.Z.: Image retrieval: ideas, influences, and
trends of the new age. ACM Comput. Surv. (CSUR) 40(2), 5 (2008)
5. Everingham, M., Van Gool, L., Williams, C.K.I., Winn, J., Zisserman, A.: The
pascal visual object classes (VOC) challenge. Int. J. Comput. Vis. 88(2), 303–338
(2010)
6. Gabryel, M.: The bag-of-words methods with pareto-fronts for similar image
retrieval. In: Damaševičius, R., Mikašytė, V. (eds.) ICIST 2017. CCIS, vol. 756, pp.
374–384. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-67642-5_31
7. Gabryel, M., Damaševičius, R.: The image classification with different types of
image features. In: Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R.,
Zadeh, L.A., Zurada, J.M. (eds.) ICAISC 2017. LNCS (LNAI), vol. 10245, pp.
497–506. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-59063-9_44
8. Gabryel, M., Grycuk, R., Korytkowski, M., Holotyak, T.: Image indexing and
retrieval using GSOM algorithm. In: Rutkowski, L., Korytkowski, M., Scherer, R.,
Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds.) ICAISC 2015. LNCS (LNAI),
vol. 9119, pp. 706–714. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-19324-3_63
9. Grycuk, R.: Novel visual object descriptor using surf and clustering algorithms. J.
Appl. Math. Comput. Mech. 15(3), 37–46 (2016)
10. Grycuk, R., Gabryel, M., Korytkowski, M., Romanowski, J., Scherer, R.: Improved
digital image segmentation based on stereo vision and mean shift algorithm. In:
Wyrzykowski, R., Dongarra, J., Karczewski, K., Waśniewski, J. (eds.) PPAM 2013.
LNCS, vol. 8384, pp. 433–443. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-642-55224-3_41
11. Grycuk, R., Gabryel, M., Korytkowski, M., Scherer, R.: Content-based image
indexing by data clustering and inverse document frequency. In: Kozielski, S.,
Mrozek, D., Kasprowski, P., Malysiak-Mrozek, B., Kostrzewa, D. (eds.) BDAS
2014. CCIS, vol. 424, pp. 374–383. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-06932-6_36
12. Grycuk, R., Gabryel, M., Korytkowski, M., Scherer, R., Voloshynovskiy, S.: From
single image to list of objects based on edge and blob detection. In: Rutkowski,
L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M.
(eds.) ICAISC 2014. LNCS (LNAI), vol. 8468, pp. 605–615. Springer, Cham (2014).
https://doi.org/10.1007/978-3-319-07176-3_53
13. Grycuk, R., Gabryel, M., Nowicki, R., Scherer, R.: Content-based image retrieval
optimization by differential evolution. In: 2016 IEEE Congress on Evolutionary
Computation (CEC), pp. 86–93. IEEE (2016)
14. Grycuk, R., Gabryel, M., Scherer, M., Voloshynovskiy, S.: Image descriptor based
on edge detection and crawler algorithm. In: Rutkowski, L., Korytkowski, M.,
Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds.) ICAISC 2016.
LNCS (LNAI), vol. 9693, pp. 647–659. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-39384-1_57
15. Grycuk, R., Gabryel, M., Scherer, R., Voloshynovskiy, S.: Multi-layer architecture for storing visual data based on WCF and Microsoft SQL Server database. In: Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds.) ICAISC 2015. LNCS (LNAI), vol. 9119, pp. 715–726. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-19324-3_64
16. Grycuk, R., Knop, M.: Neural video compression based on SURF scene change
detection algorithm. In: Choraś, R.S. (ed.) Image Processing and Communications
Challenges 7. AISC, vol. 389, pp. 105–112. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-23814-2_13
17. Grycuk, R., Scherer, M., Voloshynovskiy, S.: Local keypoint-based image detector with object detection. In: Rutkowski, L., Korytkowski, M., Scherer, R.,
Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds.) ICAISC 2017. LNCS (LNAI),
vol. 10245, pp. 507–517. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-59063-9_45
18. Grycuk, R., Scherer, R., Gabryel, M.: New image descriptor from edge detector
and blob extractor. J. Appl. Math. Comput. Mech. 14(4), 31–39 (2015)
19. Huang, J., Kumar, S., Mitra, M., Zhu, W.J., Zabih, R.: Image indexing using
color correlograms. In: Proceedings of 1997 IEEE Computer Society Conference
on Computer Vision and Pattern Recognition, pp. 762–768, June 1997
20. Iakovidou, C., Bampis, L., Chatzichristofis, S.A., Boutalis, Y.S., Amanatiadis, A.:
Color and edge directivity descriptor on GPGPU. In: 2015 23rd Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP),
pp. 301–308. IEEE (2015)
21. Karczmarek, P., Kiersztyn, A., Pedrycz, W., Dolecki, M.: An application of chain
code-based local descriptor and its extension to face recognition. Pattern Recogn.
65, 26–34 (2017)
22. Kumar, P.P., Aparna, D.K., Rao, K.V.: Compact descriptors for accurate image
indexing and retrieval: FCTH and CEDD. Int. J. Eng. Res. Technol. (IJERT) 1
(2012). ISSN 2278–0181
23. Lavoué, G.: Combination of bag-of-words descriptors for robust partial shape
retrieval. Vis. Comput. 28(9), 931–942 (2012)
24. Liu, Y., Zhang, D., Lu, G., Ma, W.Y.: A survey of content-based image retrieval
with high-level semantics. Pattern Recogn. 40(1), 262–282 (2007)
25. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60(2), 91–110 (2004)
26. Meskaldji, K., Boucherkha, S., Chikhi, S.: Color quantization and its impact on
color histogram based image retrieval accuracy. In: First International Conference
on Networked Digital Technologies, NDT 2009, pp. 515–517, July 2009
27. Riid, A., Preden, J.S.: Design of fuzzy rule-based classifiers through granulation
and consolidation. J. Artif. Intell. Soft Comput. Res. 7(2), 137–147 (2017)
28. Sadiqbatcha, S., Jafarzadeh, S., Ampatzidis, Y.: Particle swarm optimization for
solving a class of type-1 and type-2 fuzzy nonlinear equations. J. Artif. Intell. Soft
Comput. Res. 8(2), 103–110 (2018)
29. Śmietański, J., Tadeusiewicz, R., Łuczyńska, E.: Texture analysis in perfusion images of prostate cancer - a case study. Int. J. Appl. Math. Comput. Sci. 20(1), 149–156 (2010)
30. Valle, E., Cord, M.: Advanced techniques in CBIR: local descriptors, visual dictionaries and bags of features. In: 2009 Tutorials of the XXII Brazilian Symposium on
Computer Graphics and Image Processing (SIBGRAPI TUTORIALS), pp. 72–78.
IEEE (2009)
31. Veltkamp, R.C., Tanase, M.: Content-based image retrieval systems: a survey, pp.
1–62. Utrecht University, Department of Computing Science (2002)
32. Wang, J.Z., Boujemaa, N., Del Bimbo, A., Geman, D., Hauptmann, A.G., Tesić,
J.: Diversity in multimedia information retrieval research. In: Proceedings of the
8th ACM International Workshop on Multimedia Information Retrieval, pp. 5–12.
ACM (2006)