2018
DOI: 10.1007/978-3-319-91458-9_37
Index and Retrieve Multimedia Data: Cross-Modal Hashing by Learning Subspace Relation

Cited by 9 publications (7 citation statements); References 28 publications.
“…Our proposed DCMHGMS method is compared with eight state-of-the-art cross-modal hashing methods: SePH [10], DCH [11], FSH [7], CCQ [4], SRLCH [17], SCRATCH [36], SMFH [19], DCMH [23]. FSH is an unsupervised method, and the other seven are supervised.…”
Section: B. Baseline Methods
confidence: 99%
“…Semantics-Preserving Hashing (SePH) [10] transforms the semantic affinities of the training data into a probability distribution and approximates it with the to-be-learnt hash codes in Hamming space by minimizing the Kullback-Leibler divergence. Subspace Relation Learning for Cross-modal Hashing (SRLCH) [17] exploits correlation information in the semantic labels and preserves the nonlinear structure. Supervised Matrix Factorization Hashing for Cross-Modal Retrieval (SMFH) [19] is a supervised cross-modal hashing method based on collective matrix factorization, which considers both the label consistency across different modalities and the local geometric consistency within each modality. Fast Discrete Cross-Modal Hashing (FDCH) [15] regresses from semantic labels to exploit supervision for better retrieval performance, which enhances the discriminative capability of the hash codes.…”
Section: Related Work
confidence: 99%
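The SePH idea quoted above — converting semantic affinities into a probability distribution and matching it against a distribution derived from hash-code Hamming distances via KL divergence — can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual formulation: the affinity rule (shared label = similar), the similarity kernel over Hamming distance, and all function names are assumptions made for the sketch.

```python
import numpy as np

def affinity_distribution(labels):
    # Semantic affinity: 1 if two items share any label, else 0,
    # normalized into a probability distribution over item pairs.
    A = (labels @ labels.T > 0).astype(float)
    np.fill_diagonal(A, 0.0)
    return A / A.sum()

def code_distribution(codes):
    # Pairwise similarity from the Hamming distance between +/-1 codes,
    # likewise normalized over off-diagonal item pairs.
    nbits = codes.shape[1]
    hamming = (nbits - codes @ codes.T) / 2.0
    S = 1.0 / (1.0 + hamming)
    np.fill_diagonal(S, 0.0)
    return S / S.sum()

def kl_divergence(p, q, eps=1e-12):
    # KL(P || Q); pairs with p == 0 contribute nothing, so they are masked.
    mask = p > 0
    return float(np.sum(p[mask] * np.log((p[mask] + eps) / (q[mask] + eps))))

rng = np.random.default_rng(0)
labels = np.array([[1, 0], [1, 0], [0, 1]])       # three items, two classes
codes = np.sign(rng.standard_normal((3, 8)))      # random 8-bit +/-1 codes
loss = kl_divergence(affinity_distribution(labels), code_distribution(codes))
```

In the actual method, `loss` would be minimized over the hash codes (or over a learned hash function), driving the code-induced pair distribution toward the semantic one.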
“…However, one common limitation is that they relax the discrete constraints during optimization, resulting in suboptimal binary codes. Therefore, a number of cross-modal hashing methods based on discrete optimization have been studied [7], [8], [21]. In [22], Locally Linear Embedding is used to extract manifold information as a similarity matrix for learning unified hash codes, where the binary codes are learned directly without relaxation.…”
Section: Related Work
confidence: 99%
“…It is worth noting that the above methods generate binary codes by relaxing the discrete constraints, which leads to a large quantization error. To address this issue, many studies propose to learn binary codes with discrete optimization [7], [20], [21]. For instance, Discrete Cross-modal Hashing (DCH) [5] learns binary codes directly without relaxation and uses label information, through linear classifiers, to enhance the discrimination of the binary codes.…”
Section: Introduction
confidence: 99%
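The DCH-style scheme described above — learning binary codes directly, with a linear classifier tying codes to labels — can be sketched, under heavily simplified assumptions, as alternating between a ridge-regression classifier step and a closed-form sign step that keeps the codes strictly in {-1, +1}. The loss terms, the update order, and every variable name here are assumptions for illustration, not the published algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, nbits, nclasses = 100, 16, 4
labels = np.eye(nclasses)[rng.integers(0, nclasses, n)]  # one-hot labels, n x c
B = np.sign(rng.standard_normal((n, nbits)))             # initial +/-1 codes
lam = 1.0                                                # ridge regularizer

for _ in range(10):
    # Classifier step: ridge regression from codes B to labels Y,
    # W = (B^T B + lam I)^{-1} B^T Y.
    W = np.linalg.solve(B.T @ B + lam * np.eye(nbits), B.T @ labels)
    # Discrete code step: the sign of the label back-projection keeps B
    # binary, with no continuous relaxation at any point.
    B = np.sign(labels @ W.T)
    B[B == 0] = 1.0                                      # break rare ties
```

The key contrast with relaxation-based methods is that `B` never leaves {-1, +1}, so there is no quantization step at the end and hence no quantization error of the kind the excerpt criticizes.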