2019
DOI: 10.1109/access.2019.2951471
Feature Fusion for Image Retrieval With Adaptive Bitrate Allocation and Hard Negative Mining

Abstract: By combining Convolutional Neural Network (CNN) descriptor and Compact Descriptors for Visual Search (CDVS), the visual search performance can be boosted. However, some redundancies still exist in the CDVS representation and the hard negative mining is not very accurate when training CNN embeddings. In this paper, we propose a high performance image retrieval scheme based on descriptor fusion. In detail, we first propose a more compact CDVS descriptor database building scheme through bitrate allocation, which …

Cited by 3 publications (3 citation statements)
References 26 publications
“…where i is the index of the i-th query and N is the total number of queries [3]. M_i^relevant is the number of relevant images corresponding to the i-th query, r is the rank, and P(r) is the precision at the cut-off rank r. The employed Recall@K metric is determined by the existence of at least one correct retrieved sample among the K nearest neighbors [26].…”
Section: Discussion
confidence: 99%
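The statement above describes the standard mean Average Precision (mAP) and Recall@K retrieval metrics. A minimal sketch of both, assuming relevance is given as a set of relevant IDs per query (function names are illustrative, not from the paper):

```python
def average_precision(relevant, ranked_ids):
    """AP for one query: average of precision values at the rank of each
    relevant hit, normalized by the number of relevant images M_i^relevant."""
    hits = 0
    precisions = []
    for r, doc in enumerate(ranked_ids, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / r)  # P(r) at cut-off rank r
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(queries):
    """queries: list of (relevant_set, ranked_ids) pairs; mean AP over N queries."""
    return sum(average_precision(rel, ranked) for rel, ranked in queries) / len(queries)

def recall_at_k(relevant, ranked_ids, k):
    """1 if at least one relevant item appears among the K nearest neighbors."""
    return int(any(doc in relevant for doc in ranked_ids[:k]))
```

For example, with relevant set {1, 2} and ranking [1, 3, 2], the hits occur at ranks 1 and 3, so AP = (1/1 + 2/3) / 2 = 5/6.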
“…Deep metric learning aims to learn an embedding that decreases the distance between matching pairs and increases the distance between non-matching pairs [3,28]. To learn such an embedding, many methods have been proposed, such as the traditional contrastive loss [12,29] under a Siamese network and the triplet loss [13,25].…”
Section: Preliminaries
confidence: 99%
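The contrastive and triplet losses referenced in the statement can be sketched as follows. This is a minimal NumPy version of the standard formulations (margin values are illustrative defaults, not taken from the paper):

```python
import numpy as np

def contrastive_loss(x1, x2, is_match, margin=1.0):
    """Siamese-style pair loss: pull matching pairs together (squared distance),
    push non-matching pairs apart until they exceed the margin."""
    d = np.linalg.norm(x1 - x2)
    return d ** 2 if is_match else max(0.0, margin - d) ** 2

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge on the gap between anchor-positive and anchor-negative distances:
    zero once the negative is farther than the positive by at least the margin."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)
```

Hard negative mining, which the cited paper aims to improve, amounts to choosing the negative in each triplet so that d_neg is small, i.e. the loss above is non-zero and informative.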