DOI: 10.1007/978-3-540-85891-1_16
Experiments on Selection of Codebooks for Local Image Feature Histograms

Cited by 20 publications (20 citation statements)
References 7 publications
“…We determine the histogram bins by clustering a sample of the interest point SIFT descriptors (20 per image) with the Linde-Buzo-Gray (LBG) algorithm. In our earlier experiments [8] we have found such codebooks to perform reasonably well while the computational cost associated with the clustering still remains manageable. The LBG algorithm produces codebooks with sizes in powers of two.…”
Section: Baseline System
confidence: 86%
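The statement above describes building histogram bins by clustering sampled SIFT descriptors with the Linde-Buzo-Gray (LBG) algorithm, which grows the codebook by repeated splitting and therefore yields sizes in powers of two. A minimal sketch of that splitting scheme is below; it is an illustration of the general LBG procedure, not the cited authors' implementation, and the perturbation rule and iteration counts are assumptions.

```python
import numpy as np

def lbg_codebook(data, target_size, eps=0.01, n_iter=20, seed=0):
    """Linde-Buzo-Gray codebook training (illustrative sketch).

    Starts from the global centroid, then repeatedly splits every
    code vector in two and refines with Lloyd (k-means) iterations,
    so the codebook size is always a power of two.
    """
    rng = np.random.default_rng(seed)
    codebook = data.mean(axis=0, keepdims=True)  # size-1 codebook
    while len(codebook) < target_size:
        # Split: perturb each code vector in opposite directions.
        noise = eps * rng.standard_normal(codebook.shape)
        codebook = np.vstack([codebook + noise, codebook - noise])
        # Refine: standard Lloyd iterations on the doubled codebook.
        for _ in range(n_iter):
            # Assign each sample to its nearest code vector.
            d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            labels = d.argmin(axis=1)
            # Move each code vector to the mean of its assigned samples.
            for k in range(len(codebook)):
                pts = data[labels == k]
                if len(pts):
                    codebook[k] = pts.mean(axis=0)
    return codebook

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(200, 8))  # stand-in for SIFT descriptors
cb = lbg_codebook(descriptors, target_size=16)
print(cb.shape)  # a 16-entry codebook: (16, 8)
```

Because the codebook doubles at every split, requesting a non-power-of-two size would overshoot; the cited text notes exactly this property of LBG.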
“…Earlier [8] we have performed category detection experiments where we have compared ways to select a codebook for a single histogram representation, with varying histogram sizes. For the experiments we used the images and category detection tasks of the publicly available VOC2007 benchmark.…”
Section: Introduction
confidence: 99%
“…Finally, we emphasize that although the idea of vector quantization by randomly sampled centers was already discussed in [7,24], to the best of our knowledge, this is the first work that presents its statistical consistency analysis.…”
Section: Empirically Estimated Density Function for Q_i(x)
confidence: 95%
“…Note that since both vector quantization algorithms do not rely on clustering algorithms to identify visual words, they are in general computationally more efficient. In addition, both algorithms have error bounds that decrease at the rate of O(1/√m) when the number of key points n is large, indicating that they are robust to the number of visual words m. We emphasize that although similar random algorithms for vector quantization have been discussed in [5,17,22,14,24], the purpose of this empirical study is to verify that
• simple random algorithms deliver object recognition performance similar to that of the clustering-based algorithm, and
• the random algorithms are robust to the number of visual words, as predicted by the statistical consistency analysis.…”
Section: Empirical Study
confidence: 96%
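The statement above contrasts clustering-based codebooks with vector quantization by randomly sampled centers: visual words are simply drawn at random from the descriptor pool, skipping clustering entirely. A hedged sketch of that idea, with an illustrative bag-of-words histogram built by nearest-neighbor assignment (function names and sizes are this sketch's own, not from the cited work):

```python
import numpy as np

def random_codebook(descriptors, m, seed=0):
    """Vector quantization by randomly sampled centers: pick m
    descriptors uniformly at random to serve as the visual words,
    with no clustering step at all."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(descriptors), size=m, replace=False)
    return descriptors[idx]

def bow_histogram(descriptors, codebook):
    """Assign each descriptor to its nearest visual word and
    return the normalized count histogram."""
    d = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    labels = d.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(1)
feats = rng.normal(size=(500, 8))   # stand-in for local descriptors
cb = random_codebook(feats, m=32)
h = bow_histogram(feats, cb)
print(h.shape)  # one histogram bin per visual word: (32,)
```

The appeal noted in the citation statement is that this construction costs essentially nothing beyond the nearest-neighbor assignment, while its error reportedly concentrates as the number of visual words grows.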