2017
DOI: 10.1109/tmm.2017.2697824
Compact Hash Codes for Efficient Visual Descriptors Retrieval in Large Scale Databases

Abstract: In this paper, we present an efficient method for visual descriptors retrieval based on compact hash codes computed using a multiple k-means assignment. The method has been applied to the problem of approximate nearest neighbor (ANN) search of local and global visual content descriptors, and it has been tested on different datasets: three large-scale standard datasets of engineered features of up to one billion descriptors (BIGANN) and, supported by recent progress in convolutional neural networks (CN…

Cited by 41 publications (13 citation statements)
References 52 publications (157 reference statements)
“…Sparse Product Quantization (SPQ) [39] encodes high-dimensional feature vectors into a sparse representation by decomposing the feature space into a Cartesian product of low-dimensional subspaces and quantizing each subspace via k-means clustering; the sparse representations are then optimized by minimizing their quantization errors. [40] proposes to learn compact hash codes by computing a kind of soft assignment within the k-means framework, called "multi-k-means", to avoid expensive memory and computing requirements. Latent Semantic Minimal Hashing (LSMH) [41] refines the latent semantic feature embedding of the image feature via matrix decomposition, and combines a minimum-encoding loss with the latent semantic feature learning process to obtain discriminative binary codes.…”
Section: Related Work (mentioning)
confidence: 99%
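The multi-k-means soft assignment described in the statement above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the centroid count, the use of the geometric mean of distances as the assignment threshold, and the function name are assumptions for the sketch.

```python
import numpy as np

def multi_kmeans_hash(x, centroids):
    """One bit per centroid: set to 1 when the centroid is closer than the
    geometric mean of all centroid distances, i.e. a soft multiple assignment
    instead of a single nearest-centroid assignment."""
    d = np.linalg.norm(centroids - x, axis=1)       # distance to each centroid
    threshold = np.exp(np.mean(np.log(d + 1e-12)))  # geometric mean of distances
    return (d <= threshold).astype(np.uint8)        # k-bit hash code

rng = np.random.default_rng(0)
centroids = rng.normal(size=(32, 128))  # 32 centroids -> 32-bit codes
x = rng.normal(size=128)
code = multi_kmeans_hash(x, centroids)
```

Because the geometric mean is never smaller than the minimum distance, at least one bit is always set, so every vector receives a non-empty code.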
“…Ercoli et al. [32] compute hash codes using a multi-k-means technique and use them for retrieval of visual descriptors. Krichevsky [16] uses a 2-layer Convolutional Deep Belief Network (CDBN) on the CIFAR-10 dataset.…”
Section: Related Work (mentioning)
confidence: 99%
“…Ercoli et al. [11] propose a quantization process based on the distance of a feature vector to precomputed cluster centroids obtained by k-means. As threshold measures they evaluate the geometric mean, the arithmetic mean, and the n-th-nearest distance.…”
Section: Related Work (mentioning)
confidence: 99%
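The three threshold measures mentioned in the statement above can be sketched as a single helper. This is an illustrative sketch, not code from the paper: the function name and the parameter `n` are assumptions.

```python
import numpy as np

def assignment_threshold(distances, mode="geometric", n=4):
    """Threshold below which a centroid counts as 'assigned' to a vector,
    for the three measures compared as thresholds: geometric mean,
    arithmetic mean, or the distance to the n-th nearest centroid."""
    d = np.asarray(distances, dtype=float)
    if mode == "geometric":
        return float(np.exp(np.mean(np.log(d))))  # geometric mean
    if mode == "arithmetic":
        return float(np.mean(d))                  # arithmetic mean
    if mode == "nth":
        return float(np.sort(d)[n - 1])           # n-th smallest distance
    raise ValueError(f"unknown mode: {mode}")

d = [1.0, 2.0, 4.0, 8.0]
geo = assignment_threshold(d, "geometric")    # (1*2*4*8)**(1/4) ~ 2.83
ari = assignment_threshold(d, "arithmetic")   # 3.75
nth = assignment_threshold(d, "nth", n=2)     # 2.0
```

A lower threshold (here the geometric mean, which never exceeds the arithmetic mean) yields sparser codes, while the n-th-nearest variant fixes the number of assigned centroids exactly.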