CVPR 2011
DOI: 10.1109/cvpr.2011.5995518
Compact hashing with joint optimization of search accuracy and time


Cited by 97 publications (63 citation statements)
References 17 publications
“…To some extent, the hash bits of each local feature can also be viewed as a visual word index; the advantage of using hash bits, however, is the following. In the "bag of words" representation, the distance between the "word indices" of local features is meaningless: word index 4 is not "meaningfully" closer to word index 5 than to word index 200, since word indices are just clustering labels. In contrast, the Hamming distance between hash bits is meaningful when we use similarity-preserving hash functions such as PCA hashing, SPICA hashing [9], or LSH [5]. The Hamming distance among hash bits is designed to approximate the original feature distance, and hence helps match local features much more accurately.…”
Section: Motivations and Contributions (mentioning)
confidence: 99%
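The point in the statement above, that Hamming distance between similarity-preserving codes tracks feature similarity while raw word indices do not, is easy to see in a small sketch. The snippet below uses random-projection (cosine) LSH as a stand-in for a similarity-preserving hash; PCA or SPICA hashing would learn the projection directions instead. All data, dimensions, and bit counts are synthetic assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two nearby local-feature descriptors and one unrelated descriptor
# (synthetic 64-d vectors, purely illustrative).
a = rng.normal(size=64)
b = a + 0.05 * rng.normal(size=64)   # a slightly perturbed copy of a
c = rng.normal(size=64)              # unrelated to a

# Similarity-preserving hash: sign of random projections (cosine LSH).
# PCA or SPICA hashing would replace W with learned directions.
W = rng.normal(size=(32, 64))        # 32 hash bits

def hash_bits(x):
    """Binarize the 32 projections of x into a bit vector."""
    return (W @ x > 0).astype(np.uint8)

def hamming(h1, h2):
    """Number of differing bits between two codes."""
    return int(np.sum(h1 != h2))

# Small distance for the near pair, large for the unrelated pair:
# Hamming distance tracks feature similarity, unlike cluster labels.
print(hamming(hash_bits(a), hash_bits(b)))
print(hamming(hash_bits(a), hash_bits(c)))
```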
“…Use compact hashing (e.g., PCA hashing (PCH) or SPICA hashing [9]) instead of random hash functions like Locality Sensitive Hashing (LSH) [5].…”
Section: Motivations and Contributions (mentioning)
confidence: 99%
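As a rough illustration of the contrast drawn in the statement above, the sketch below binarizes data two ways: with projections learned from the data covariance (a generic PCA-hashing-style construction, not the exact PCH or SPICA formulation of the cited papers) and with data-independent random directions as in LSH. The dataset, dimensionality, and bit count are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 64))       # toy set of 64-d descriptors
k = 16                                # number of hash bits

# PCA-hashing-style codes: mean-center, project onto the top-k
# principal directions, threshold at zero.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
P = Vt[:k]                            # (k, 64) learned projections

def pch_code(x):
    return ((P @ (x - mean)) > 0).astype(np.uint8)

# Random-projection LSH for comparison: same binarization, but the
# directions ignore the data distribution entirely.
R = rng.normal(size=(k, 64))

def lsh_code(x):
    return ((R @ x) > 0).astype(np.uint8)

print(pch_code(X[0]))
print(lsh_code(X[0]))
```

The design difference is that the learned projections concentrate code variance along directions where the data actually spread, which is why compact (learned) hashing typically needs fewer bits than random LSH for the same accuracy.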
“…Spectral hashing (SH) [7] and anchor graph hashing (AGH) [8], which are formulated as graph partitioning problems, are regarded as unsupervised learning methods. Other unsupervised methods have been proposed, such as [9], [10], which construct binary hash functions that assign the data to each binary pattern as uniformly as possible. On the other hand, there are supervised methods that use similar/dissimilar labels on pairs of data.…”
Section: Construction of the Binary Hash Function (mentioning)
confidence: 99%
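The "uniform assignment" idea mentioned in the statement above can be sketched with median thresholding: setting each bit's threshold at the median of its projection makes every bit split the data in half. This is only a toy rendering of the balancing constraint under assumed random projections, not the actual objectives of the cited methods [9], [10].

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 64)) + 3.0   # toy data, deliberately off-center
k = 8
W = rng.normal(size=(k, 64))
Z = X @ W.T                             # (1000, k) projections

# Balance each bit: thresholding a projection at its median guarantees
# the bit is 1 for half of the data, so points are spread across the
# binary patterns as uniformly as per-bit constraints allow.
t = np.median(Z, axis=0)
B = (Z > t).astype(np.uint8)

print(B.mean(axis=0))                   # each column is ~0.5
```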
“…For example, Torralba et al. [2008] showed that both the stacked restricted Boltzmann machine (stacked-RBM) method and the similarity sensitive coding (SSC) method work significantly better than LSH-based methods when applied to real applications containing tens of millions of data points. He et al. [2011] developed a new hashing algorithm that explicitly optimizes search accuracy as well as search time. Liu et al. [2011] proposed a novel anchor graph hashing method that automatically discovers the neighborhood structure inherent in the data, aiming to learn appropriate compact codes.…”
Section: Related Work (mentioning)
confidence: 99%