2014
DOI: 10.1109/tip.2014.2332764

A Sparse Embedding and Least Variance Encoding Approach to Hashing

Abstract: Hashing is becoming increasingly important in large-scale image retrieval for fast approximate similarity search and efficient data storage. Many popular hashing methods aim to preserve the kNN graph of high-dimensional data points in the low-dimensional manifold space, which is, however, difficult to achieve when the number of samples is large. In this paper, we propose an effective and efficient hashing approach by sparsely embedding a sample in the training sample space and encoding the sparse embeddin…
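
To make the abstract's idea concrete, here is a minimal Python sketch of anchor-based sparse embedding followed by thresholding into binary codes. The anchor set, the choice of the s nearest anchors, the Gaussian kernel weights, and the mean-threshold binarization are all illustrative assumptions; the paper's actual sparse embedding and least variance encoding steps are not recoverable from the truncated abstract above.

```python
import numpy as np

def sparse_embed(X, anchors, s=3):
    """Embed each sample sparsely over its s nearest anchor points.

    Illustrative assumption: anchors could come from k-means on the
    training set; weights use a simple Gaussian kernel on distances.
    """
    # Squared Euclidean distances from every sample to every anchor.
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(axis=-1)
    Z = np.zeros_like(d2)
    nearest = np.argsort(d2, axis=1)[:, :s]   # indices of s nearest anchors
    for i, nn in enumerate(nearest):
        w = np.exp(-d2[i, nn])                # kernel weights on distances
        Z[i, nn] = w / w.sum()                # normalized sparse code
    return Z

def binarize(Z):
    """Threshold each embedding dimension at its mean to obtain bits."""
    return (Z > Z.mean(axis=0)).astype(np.uint8)

# Toy usage: 100 random samples, 8 anchors -> 8-bit codes per sample.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))
anchors = X[rng.choice(100, size=8, replace=False)]
codes = binarize(sparse_embed(X, anchors))
```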



Cited by 216 publications (64 citation statements)
References 37 publications
“…We systematically compare our method with eight state-of-the-art non-deep methods: ITQ [22], PCAH [20], LSH [13], DSH [55], SpH [44], SH [19], AGH [21], and SELVE [56], and with three deep methods: DH [39], Deepbit [40], and UH-BDNN [41], on the retrieval task; all eleven methods are unsupervised. All of the non-deep methods and UH-BDNN [41] in our experiments use the same VGG [58] fc7 features as our method, while DH [39] and Deepbit [40] follow the same settings as in their original papers.…”
Section: B. Experimental Settings, 1) Baselines (mentioning)
confidence: 99%
“…Since NUS-WIDE is a multi-label dataset and the number of latent classes is fixed to 5 (the reason for choosing 5 latent classes is analysed in the following subsection), longer codes make the intermediate representation non-compact and result in redundant dimensions, as is also the case for PCAH [20]. In addition, this phenomenon of mAP decreasing as the code length grows also appears in AGH [21] and SELVE [56].…”
Section: B. Experimental Settings, 1) Baselines (mentioning)
confidence: 99%
“…Moreover, high-dimensional data are often drawn from multiple low-dimensional subspaces [7], as with face images, the point trajectories of moving objects [8], and the texture features of image pixels [9][10][11]. Recently, subspace clustering [12], which processes such data according to their underlying subspaces, has attracted increasing attention.…”
Section: Introduction (mentioning)
confidence: 99%
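
Since the statement above only names subspace clustering in passing, a minimal sketch of one standard instance, sparse subspace clustering via self-expressiveness, may help; the Lasso penalty alpha, the symmetrized affinity, and the use of scikit-learn's SpectralClustering are illustrative choices, not necessarily the method of the cited reference [12].

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def sparse_subspace_clustering(X, n_clusters, alpha=0.01):
    """Cluster rows of X by their underlying linear subspaces.

    Each point is expressed as a sparse combination of the other
    points (self-expressiveness); the coefficient magnitudes define
    an affinity graph that spectral clustering then partitions.
    """
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        lasso = Lasso(alpha=alpha, max_iter=5000)
        lasso.fit(X[others].T, X[i])          # x_i ~ sum_j c_j * x_j
        C[i, others] = lasso.coef_
    A = np.abs(C) + np.abs(C).T               # symmetric affinity matrix
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed").fit_predict(A)
    return labels
```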
“…For example, many companies encounter difficulties in handling tremendous amounts of complex information, and organizing and analyzing these data is essential for them [23,25,28,29]. Data warehousing and online analytical processing (OLAP) applications, as popular technologies, have contributed effectively to decision support systems (DSS).…”
Section: Introduction (mentioning)
confidence: 99%