2018
DOI: 10.48550/arxiv.1812.01584
Preprint
Detect-to-Retrieve: Efficient Regional Aggregation for Image Search

Cited by 6 publications (9 citation statements)
References 32 publications
“…In this work, T_i is considered, as in DELF, as a block containing W×H D-dimensional descriptors. The recent work of Teichmann et al. [24] builds on the same idea: they describe selected candidate regions in the query with VLAD, and then propose a regional version of ASMK [25] (another aggregation method) to aggregate these descriptors into a global descriptor.…”
Section: Local Methods
confidence: 99%
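The passage above describes VLAD: each local descriptor is assigned to its nearest visual word, and the residuals to each word's centroid are accumulated into one global vector. A minimal NumPy sketch of this aggregation step, assuming a precomputed codebook of centroids (the function name and normalization choices here are illustrative, not the paper's exact pipeline):

```python
import numpy as np

def vlad(descriptors, codebook):
    """Aggregate local descriptors into a single VLAD vector.

    descriptors: (N, D) array of local descriptors from one region.
    codebook:    (K, D) array of visual-word centroids.
    Returns an L2-normalized (K*D,) global descriptor.
    """
    # Assign each descriptor to its nearest centroid.
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    assign = np.argmin(dists, axis=1)

    K, D = codebook.shape
    v = np.zeros((K, D))
    for k in range(K):
        members = descriptors[assign == k]
        if len(members):
            # Accumulate residuals to the assigned centroid.
            v[k] = (members - codebook[k]).sum(axis=0)

    v = v.ravel()
    # Signed square-root (power-law) normalization, then L2 normalization.
    v = np.sign(v) * np.sqrt(np.abs(v))
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v
```

A regional variant, as in the quoted work, would run this per detected region and then combine the regional vectors rather than pooling all descriptors at once.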
“…In [31], the authors utilized deep correspondence matching based on image patches in aerial images for the retrieval task. In [32], the authors utilized a novel regional aggregated selective match kernel to combine the information from the detected regions into a holistic description. In [33], the authors augmented a regional-attention network with Regional Maximum Activation of Convolutions (R-MAC) to improve retrieval performance.…”
Section: B. Deep Features Based Retrieval
confidence: 99%
“…This allows us to extract local features from input images and then perform geometric verification with RANSAC [5]. According to the experiments described in [19], DELF and its extension [24] are the state-of-the-art approaches.…”
Section: Related Work
confidence: 99%
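Geometric verification with RANSAC, as referenced above, fits a spatial transform to putative local-feature matches and scores a candidate image by its inlier count. A minimal sketch using a 2D affine model fit from random 3-point samples (the function name, model choice, and thresholds are illustrative assumptions, not the cited implementation):

```python
import numpy as np

def ransac_verify(src, dst, iters=200, thresh=3.0, seed=0):
    """Geometric verification of putative matches with RANSAC.

    src, dst: (N, 2) arrays of matched keypoint coordinates.
    Fits a 2D affine transform from random 3-point samples and
    returns the largest inlier count found.
    """
    rng = np.random.default_rng(seed)
    n = len(src)
    best = 0
    for _ in range(iters):
        idx = rng.choice(n, size=3, replace=False)
        # Solve [src | 1] @ M = dst for the 3 sampled correspondences.
        a = np.hstack([src[idx], np.ones((3, 1))])            # (3, 3)
        M, *_ = np.linalg.lstsq(a, dst[idx], rcond=None)      # (3, 2)
        pred = np.hstack([src, np.ones((n, 1))]) @ M
        # Count matches whose reprojection error is below the threshold.
        inliers = np.sum(np.linalg.norm(pred - dst, axis=1) < thresh)
        best = max(best, int(inliers))
    return best
```

In a retrieval pipeline, candidates whose best inlier count falls below a cutoff would be rejected or demoted in the ranking.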
“…Results on RPar with and without distractors in medium and hard modes are presented in Tables 7 and 8. We also added results of the methods from [19] and [24] in their best settings to these tables.…”
Section: Revisited Paris Dataset
confidence: 99%