Revisiting Additive Quantization
2016
DOI: 10.1007/978-3-319-46475-6_9

Cited by 41 publications (38 citation statements)
References 24 publications
“…Thanks to both high compression quality and computational efficiency, PQ-based methods are currently the top choice for compact representations of large datasets. PQ gave rise to active research on high-dimensional vector compression in the computer vision and machine learning communities [11,12,13,14,15,16,17,18,19]. IVFADC [1] is one of the first retrieval systems capable of dealing with billion-scale datasets efficiently.…”
Section: Related Work
confidence: 99%
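The product quantization (PQ) scheme this excerpt refers to can be illustrated with a minimal sketch: split each vector into subvectors, learn a small k-means codebook per subspace, and store only codeword indices. This is a toy illustration assuming NumPy; the subspace count M, codebook size K, and random data are illustrative choices, not parameters from the cited systems.

```python
import numpy as np

def train_pq(X, M=4, K=16, iters=10, seed=0):
    """Split D-dim vectors into M subvectors; run k-means per subspace (toy Lloyd loop)."""
    rng = np.random.default_rng(seed)
    codebooks = []
    for S in np.split(X, M, axis=1):                      # each block: (N, D/M)
        C = S[rng.choice(len(S), K, replace=False)]       # init centroids from data points
        for _ in range(iters):
            d = ((S[:, None, :] - C[None]) ** 2).sum(-1)  # (N, K) squared distances
            a = d.argmin(1)                               # nearest-centroid assignment
            for k in range(K):
                if (a == k).any():
                    C[k] = S[a == k].mean(0)              # centroid update
        codebooks.append(C)
    return codebooks

def encode(X, codebooks):
    """Compress each vector to M small codeword indices."""
    subs = np.split(X, len(codebooks), axis=1)
    return np.stack([((S[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)
                     for S, C in zip(subs, codebooks)], axis=1)

def decode(codes, codebooks):
    """Approximate reconstruction: concatenate the selected codewords."""
    return np.hstack([C[codes[:, m]] for m, C in enumerate(codebooks)])

X = np.random.default_rng(1).normal(size=(200, 32)).astype(np.float32)
cbs = train_pq(X)
codes = encode(X, cbs)    # (200, 4) indices in [0, 16): 4 x 4 bits per vector
Xhat = decode(codes, cbs)
```

With M = 4 codebooks of K = 16 words, each 32-dimensional vector is stored in 16 bits; additive quantization generalizes this by letting codewords from all codebooks span the full dimensionality and summing them.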
“…To the best of our knowledge, sparse DSQ is the first attempt to explore supervised sparse multicodebook quantization for semantic similarity search. Nevertheless, we compare our technique with two unsupervised sparse quantization techniques, SCQ [47] and SLSQ [29] applied to the deep features of the fc7 layer of the deep model in [10].…”
Section: Results
confidence: 99%
“…Since the former criterion imposes a harder sparsity constraint on the codebooks, we would naturally expect to achieve lower search accuracy but better query time. We compare against SCQ1 and SCQ2 from [47] and SLSQ1 and SLSQ2 from [29]. Figure 3 shows the performance of different techniques against three different datasets.…”
Section: Results
confidence: 99%
“…
Method          VOC2007   Caltech-101   ImageNet
PQ [24]         0.4965    0.3089        0.1650
CKM [38]        0.4995    0.3179        0.1737
LSQ [37]        0.4993    0.3372        0.1882
DSH-64 [33]     0.4914    0.2852        0.1665
SUBIC 2-layer   0.5600    0.3923        0.2543
SUBIC 3-layer   0.5588    0.4033        0.2810

Table 3: Cross-domain category retrieval. Performance (mAP) using 64-bit encoders across three different datasets, using VGG-128 as the base feature extractor.…”
Section: Methods
confidence: 99%