Proceedings of the British Machine Vision Conference 2015
DOI: 10.5244/c.29.172
Latent Structure Preserving Hashing

Abstract: Aiming at efficient similarity search, hash functions are designed to embed high-dimensional feature descriptors into low-dimensional binary codes, such that similar descriptors map to binary codes with a short Hamming distance. It is critical that a hashing algorithm effectively maintain the intrinsic structure and preserve the original information of the data. In this paper, we propose a novel hashing algorithm called Latent Structure Preserving Hashing (LSPH), with the target of finding a well-…
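The abstract describes the general mechanism: descriptors are embedded into short binary codes so that similarity can be compared cheaply in Hamming space. A minimal random-projection (LSH-style) sketch of that general idea is below; this is an illustration only, not the LSPH algorithm, and all names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_hash(dim, n_bits):
    # Random hyperplanes: the sign of each projection gives one bit.
    W = rng.standard_normal((dim, n_bits))
    return lambda x: (x @ W > 0).astype(np.uint8)

def hamming(a, b):
    # Number of differing bits between two binary codes.
    return int(np.count_nonzero(a != b))

h = make_hash(dim=128, n_bits=32)
x = rng.standard_normal(128)
similar = x + 0.05 * rng.standard_normal(128)   # a slightly perturbed descriptor
unrelated = rng.standard_normal(128)            # an independent descriptor

# Similar descriptors should yield codes with a smaller Hamming distance
# than unrelated ones.
print(hamming(h(x), h(similar)), hamming(h(x), h(unrelated)))
```

Structure-preserving methods such as LSPH learn the projection rather than drawing it at random, so that the binary codes also retain the latent structure of the data.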

Cited by 9 publications (9 citation statements)
References 4 publications
“…Because basic ECE and RBPL-ECE are both inspired by GP, which is initialized randomly, all experiments with our methods were repeated ten times, and the final results reported are the averages of the ten runs together with a degree of uncertainty. All of the above methods in our experiments are evaluated at six code lengths (16, 32, 48, 64, 80, and 96 bits). MAP of all the algorithms on the SIFT 1M and GIST 1M data sets.…”
Section: Experiments and Results
confidence: 99%
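The excerpt above reports retrieval quality as MAP (mean average precision) over several code lengths. As a reminder of what that metric computes, here is a minimal sketch; the helper names are illustrative, not from the cited paper:

```python
def average_precision(ranked_relevance):
    # AP for one query: ranked_relevance is a list of 0/1 flags in
    # retrieval order (1 = relevant item at that rank).
    hits, precisions = 0, []
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(hits, 1)

def mean_average_precision(all_queries):
    # MAP is the mean of the per-query average precisions.
    return sum(average_precision(q) for q in all_queries) / len(all_queries)

# Two toy queries: relevant items at ranks 1 and 3, and at rank 2.
print(mean_average_precision([[1, 0, 1], [0, 1, 0]]))
```

Evaluating at multiple code lengths, as in the excerpt, simply repeats this computation for the rankings induced by 16-, 32-, …, 96-bit codes.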
“…self-taught hashing (STH) [15], latent structure preserving hashing [16], spherical hashing (SpherH) [5], iterative quantization (ITQ) [4], compressed hashing (CH) [17], and others have also been effectively applied to large-scale data retrieval tasks.…”
Section: Introduction
confidence: 99%
“…In [2], Akata et al. propose an embedding-based framework that regards all of the defined attributes as a whole representation. Many recent approaches adopt this embedding strategy and achieve promising results [13,4,33,15,7,19,39,8,23]. Besides, similarity-based frameworks also adopt the embedding approach [24,40,41,34,8,25].…”
Section: Related Work
confidence: 99%
“…[3,17,31] aim to remove the visual-semantic ambiguity through an intermediate embedding space. [35,4] propose bilinear joint embeddings to mitigate the distribution difference between visual and semantic spaces. In [5], classifiers of unseen classes are directly estimated by aligning the manifolds of seen classes.…”
Section: Related Work
confidence: 99%