2020
DOI: 10.1007/s11042-020-09292-9
Convolutional neural networks for relevance feedback in content based image retrieval

Abstract: Given the great success of Convolutional Neural Networks (CNN) for image representation and classification tasks, we argue that Content-Based Image Retrieval (CBIR) systems could also leverage CNN capabilities, particularly when Relevance Feedback (RF) mechanisms are employed. On the one hand, to improve the performance of CBIR systems, which is strictly related to the effectiveness of the descriptors used to represent an image, as they aim at providing the user with images similar to an initial query image. On the othe…

Cited by 30 publications (17 citation statements)
References 46 publications (75 reference statements)
“…No medical knowledge was used in this process; hence, there exists a domain gap if we want to apply the systems to the medical domain. This loss of information can be reduced by incorporating prior knowledge and other sources of knowledge [78]. Table 5 lists the overview of datasets used in the image retrieval articles included in this review.…”
Section: Results
Confidence: 99%
“…We extracted the features from the last fully connected layer for a total of 23 features. CNNs are known to have sufficient representational power and generalisation ability to perform different visual recognition tasks [42]. Nevertheless, we fine-tuned the above CNNs on both data sets before the feature extraction in order to produce a fairer comparison to the standard machine learning classifiers trained with handcrafted features.…”
Section: Materials and Methods
Confidence: 99%
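The quoted passage describes using the output of a fine-tuned CNN's last fully connected layer as a compact image descriptor. A minimal sketch of that idea, assuming the last layer can be modelled as a single affine transform (all names and dimensions here are illustrative, not from the cited paper):

```python
import numpy as np

# Assumed setup: a 512-dimensional penultimate activation is mapped by
# the last fully connected layer (W, b) to a 23-dimensional descriptor,
# matching the "23 features" mentioned in the quoted passage.
rng = np.random.default_rng(0)
PENULTIMATE_DIM, FEATURE_DIM = 512, 23

W = rng.standard_normal((FEATURE_DIM, PENULTIMATE_DIM)) * 0.01
b = np.zeros(FEATURE_DIM)

def extract_features(penultimate: np.ndarray) -> np.ndarray:
    """Return the last-FC-layer output, used here as the image descriptor."""
    return penultimate @ W.T + b

# Placeholder activation standing in for a real CNN forward pass.
query_descriptor = extract_features(rng.standard_normal(PENULTIMATE_DIM))
print(query_descriptor.shape)  # (23,)
```

In a real pipeline the placeholder activation would come from the fine-tuned network's penultimate layer; only the final affine map is modelled explicitly here.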
“…By comparing these baseline methods and our method with them, we confirm that our re-ranking method can improve the initial retrieval performance. Also, basic re-ranking methods [12], [13], [31], [40], [41], [42], [43], [44] were utilized as comparative methods, respectively. By using these various types of re-ranking methods, we confirm that our re-ranking (f_E and f_E_n) via the encoders M_L(·) and M_V(·) improves each initial cross-modal retrieval method.…”
Section: Microsoft Common Objects in Context (MSCOCO) [33]
Confidence: 99%
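The re-ranking scheme quoted above first produces an initial retrieval list and then re-scores the top candidates with learned encoders. A hedged sketch of that two-stage pattern, where the encoder names M_L and M_V follow the quoted notation but their bodies are placeholder linear maps, not the authors' actual models:

```python
import numpy as np

rng = np.random.default_rng(1)
D_IN, D_EMB, GALLERY_SIZE, TOP_K = 64, 16, 10, 5

# Placeholder encoders projecting into a shared embedding space.
W_L = rng.standard_normal((D_EMB, D_IN))
W_V = rng.standard_normal((D_EMB, D_IN))

def M_L(x: np.ndarray) -> np.ndarray:   # language-side encoder (placeholder)
    return W_L @ x

def M_V(x: np.ndarray) -> np.ndarray:   # vision-side encoder (placeholder)
    return W_V @ x

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

query = rng.standard_normal(D_IN)
gallery = [rng.standard_normal(D_IN) for _ in range(GALLERY_SIZE)]

# Stage 1: initial retrieval on raw features, keep the top-k candidates.
initial = sorted(range(GALLERY_SIZE),
                 key=lambda i: -cosine(query, gallery[i]))[:TOP_K]

# Stage 2: re-rank those candidates by similarity in the shared
# embedding space produced by the two encoders.
q_emb = M_L(query)
reranked = sorted(initial, key=lambda i: -cosine(q_emb, M_V(gallery[i])))
print(reranked)
```

The candidate set is unchanged by stage 2; only the ordering of the top-k list is revised, which is the essential property of re-ranking.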