2013 IEEE International Conference on Computer Vision
DOI: 10.1109/iccv.2013.177
To Aggregate or Not to aggregate: Selective Match Kernels for Image Search

Abstract: This paper considers a family of metrics to compare images based on their local descriptors. It encompasses the VLAD descriptor and matching techniques such as Hamming Embedding. Making the bridge between these approaches leads us to propose a match kernel that takes the best of existing techniques by combining an aggregation procedure with a selective match kernel. Finally, the representation underpinning this kernel is approximated, providing large-scale image search that is both precise and scalable, as shown by …
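The combination described in the abstract, aggregating local descriptors per visual word and then scoring only sufficiently strong word-level matches with a non-linear weighting, can be sketched in a few lines. The snippet below is a minimal illustration under assumed conventions, not the paper's implementation; the function names and the default values of `alpha` and `tau` are placeholders.

```python
import numpy as np

def aggregate_residuals(descriptors, assignments, centroids):
    """Aggregate local descriptors per visual word: sum the residuals
    (descriptor minus its centroid), then L2-normalize each sum."""
    agg = {}
    for d, w in zip(descriptors, assignments):
        agg.setdefault(w, np.zeros(len(d), dtype=np.float64))
        agg[w] += np.asarray(d, dtype=np.float64) - centroids[w]
    for w, v in agg.items():
        n = np.linalg.norm(v)
        if n > 0:
            agg[w] = v / n
    return agg

def selective_match_kernel(agg_x, agg_y, alpha=3.0, tau=0.0):
    """Image similarity: over visual words present in both images, apply a
    selectivity function to the dot product of the aggregated residuals."""
    score = 0.0
    for w, vx in agg_x.items():
        if w in agg_y:
            u = float(np.dot(vx, agg_y[w]))
            if u > tau:              # discard weak word-level matches
                score += u ** alpha  # emphasize strong ones
    return score
```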

Cited by 190 publications (164 citation statements). References 29 publications.
“…Given the sparsity of BoW and the low discriminative ability of visual words, the inverted index and binary signatures are used [13]. The trade-off between accuracy and efficiency is a major influencing factor [20].…”
Section: Categorization Methodology
confidence: 99%
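As a rough illustration of the inverted-index-plus-binary-signature scheme this statement refers to, the sketch below stores, for each visual word, (image id, binary signature) postings and filters candidates by a Hamming threshold. The class name, the `ht` threshold, and the signature length are assumptions made for illustration, not the cited papers' code.

```python
import numpy as np

def binary_signature(descriptor, word_median, bits=64):
    """Threshold descriptor dimensions against per-word medians to obtain a
    short binary signature (Hamming Embedding style)."""
    return (np.asarray(descriptor)[:bits] > np.asarray(word_median)[:bits]).astype(np.uint8)

class InvertedIndex:
    """Inverted index: each visual word maps to (image_id, signature) postings."""
    def __init__(self, ht=24):
        self.posting = {}   # visual word -> list of (image_id, signature)
        self.ht = ht        # Hamming threshold for accepting a match

    def add(self, image_id, word, signature):
        self.posting.setdefault(word, []).append((image_id, signature))

    def query(self, word, signature):
        """Return ids of indexed images whose signature for this word is
        within the Hamming threshold of the query signature."""
        matches = []
        for image_id, sig in self.posting.get(word, []):
            if np.count_nonzero(sig != signature) <= self.ht:
                matches.append(image_id)
        return matches
```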
“…Considering the relatively small computational cost compared with large codebooks (Section 3.4.1), flat k-means can be adopted for codebook generation [85], [20]. It is also shown in [31], [86] that using AKM [12] for clustering yields very competitive retrieval accuracy.…”
Section: Codebook Generation and Quantization
confidence: 99%
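A minimal sketch of flat k-means codebook generation followed by hard quantization, assuming SciPy's `scipy.cluster.vq` routines; the codebook size, training-sample size, and function names are illustrative choices rather than the settings used in the cited works.

```python
import numpy as np
from scipy.cluster.vq import kmeans, vq

def build_codebook(descriptors, k=1000, sample_size=100_000, seed=0):
    """Flat k-means codebook over a random sample of local descriptors.
    descriptors: (N, D) float array (e.g., SIFT vectors)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(descriptors), size=min(len(descriptors), sample_size), replace=False)
    codebook, _ = kmeans(descriptors[idx].astype(np.float32), k)
    return codebook

def quantize(descriptors, codebook):
    """Assign each descriptor to its nearest visual word (hard assignment)."""
    words, _ = vq(descriptors.astype(np.float32), codebook)
    return words
```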
“…Second, during matching verification, the Hamming distance between two binary features can be efficiently calculated via xor operations, while the Euclidean distance between floating-point vectors is expensive to compute. Previous work along this line includes Hamming Embedding (HE) [1] and its variants [10], [11], which use binary SIFT features for verification. Meanwhile, binary features can also encode spatial context [12] and heterogeneous cues such as color [13].…”
confidence: 99%
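The efficiency argument above (XOR plus popcount versus floating-point Euclidean distance) can be made concrete with a short sketch; packing the binary codes into `uint64` words is an assumption made for illustration.

```python
import numpy as np

def hamming_distance(a, b):
    """Hamming distance between two binary codes packed into uint64 words:
    XOR the words, then count the set bits (popcount)."""
    x = np.bitwise_xor(a, b)
    # np.unpackbits operates on uint8, so view the uint64 words as bytes first
    return int(np.unpackbits(x.view(np.uint8)).sum())

# Example: two 128-bit codes stored as two uint64 words each
a = np.array([0x0F0F0F0F0F0F0F0F, 0x1234567812345678], dtype=np.uint64)
b = np.array([0x00FF00FF00FF00FF, 0x1234567812345678], dtype=np.uint64)
print(hamming_distance(a, b))  # number of differing bits
```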