2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2019
DOI: 10.1109/cvpr.2019.01196
Deep Spherical Quantization for Image Search

Abstract: Hashing methods, which encode high-dimensional images with compact discrete codes, have been widely applied to enhance large-scale image retrieval. In this paper, we put forward Deep Spherical Quantization (DSQ), a novel method to make deep convolutional neural networks generate supervised and compact binary codes for efficient image search. Our approach simultaneously learns a mapping that transforms the input images into a low-dimensional discriminative space, and quantizes the transformed data points using …

Cited by 28 publications (14 citation statements) · References 36 publications
“…Besides, we use the pre-trained Word2Vec [17] model to embed each tag as a 300-dimensional vector. Following [10,11,28], we adopt K = 256 codewords for each codebook, so the binary quantization code for each image across all M codebooks requires B = M log₂ K = 8M bits (i.e., M bytes).…”
Section: Methods
confidence: 99%
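The code-length arithmetic in the quoted methods passage can be sketched concretely. The snippet below uses the same setup (M codebooks, K = 256 codewords each, one codeword index per codebook), so each image's code costs M · log₂ 256 = 8M bits, i.e. M bytes. The codebooks here are random stand-ins, not the learned ones from the cited work, and the per-codebook independent nearest-neighbor assignment is a simplification.

```python
# Sketch of product-quantization-style encoding under the quoted setup:
# M codebooks with K = 256 codewords each -> M bytes per image.
# Codebooks are random placeholders, not learned ones.
import numpy as np

M, K, D = 4, 256, 64          # codebooks, codewords per codebook, feature dim
rng = np.random.default_rng(0)
codebooks = rng.normal(size=(M, K, D))   # one D-dim codeword per entry

def encode(x, codebooks):
    """Quantize x with M codebooks; return one uint8 index per codebook."""
    # Each codebook quantizes x independently here (residual/additive
    # refinements are omitted for brevity).
    dists = np.linalg.norm(codebooks - x, axis=2)   # (M, K) distances
    return dists.argmin(axis=1).astype(np.uint8)    # M indices in [0, 255]

x = rng.normal(size=D)
code = encode(x, codebooks)
bits = code.size * np.log2(K)
print(code.shape, bits)   # (4,) 32.0 -> B = 8M bits = M bytes
```

Since each index fits in one byte (K = 256), storing a database of N images costs exactly N·M bytes, which is the compactness the quoted passage relies on.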
“…For the class of m-rectifiable random variables [40, Definition 11], Proposition 3.1 recovers [40, Theorem 55]. In this case, h_µ(X) is the m-dimensional entropy [40, Definition 18] of the m-rectifiable random variable X.…”
confidence: 90%
“…In this section, we particularize our results on R-D theory and quantization to random variables taking values in compact manifolds, specifically hyperspheres and Grassmannians. Hyperspheres are prevalent in many areas of data science including spherical quantization [58,48,18], hypersphere learning [34, Section 4], and directional statistics [47]. Grassmannians find application in, e.g., code design [65,30,13,51], computer vision [59], deep neural network theory [32], and the completion of low-rank matrices [7].…”
Section: Measurable Space and Consider the Distortion Function
confidence: 99%
“…Consequently, it is worth developing a new federated learning scheme that accounts for the trade-off between communication-resource utilization and convergence rate during the training phase. Numerous quantization methods could be applied to federated learning across IoT networks, including hyper-sphere quantization [78], low-precision quantizers [79,80], and universal vector quantization [81].…”
Section: Sparsification-empowered Federated Learning
confidence: 99%
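Of the quantization families named in the quoted passage, a low-precision (b-bit uniform) quantizer is the simplest to illustrate. The sketch below is a generic illustration under assumed names (`quantize`, `dequantize`); it is not the scheme of any single cited paper, but shows how a client's model update can be compressed from 64-bit floats to a few bits per coordinate with bounded reconstruction error.

```python
# Minimal sketch of a b-bit uniform ("low-precision") quantizer of the kind
# cited for compressing federated-learning model updates; names and scheme
# are illustrative, not taken from the cited papers.
import numpy as np

def quantize(v, bits=4):
    """Uniformly quantize vector v to 2**bits levels over its own range."""
    levels = 2 ** bits - 1
    lo, hi = v.min(), v.max()
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((v - lo) / scale).astype(np.uint8)  # indices in [0, levels]
    return q, lo, scale

def dequantize(q, lo, scale):
    """Map quantization indices back to approximate real values."""
    return lo + q.astype(np.float64) * scale

rng = np.random.default_rng(1)
update = rng.normal(size=1000)           # a client's model update
q, lo, scale = quantize(update, bits=4)
recovered = dequantize(q, lo, scale)
# 4 bits/coordinate instead of 64: a 16x compression; rounding error is
# at most half a quantization step.
print(np.max(np.abs(recovered - update)) <= scale / 2 + 1e-12)
```

In a federated setting the client would transmit `(q, lo, scale)` instead of `update`, and the server would dequantize before aggregation; the bit width trades bandwidth against the quantization noise injected into each round.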