Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 2012
DOI: 10.1145/2339530.2339678

A probabilistic model for multimodal hash function learning

Abstract: In recent years, both hashing-based similarity search and multimodal similarity search have aroused much research interest in the data mining and other communities. While hashing-based similarity search seeks to address the scalability issue, multimodal similarity search deals with applications in which data of multiple modalities are available. In this paper, our goal is to address both issues simultaneously. We propose a probabilistic model, called multimodal latent binary embedding (MLBE), to learn hash fun…
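The hashing-based similarity search the abstract refers to can be illustrated with a minimal random-hyperplane sketch (an illustrative construction, not the MLBE model proposed in the paper; all names here are hypothetical):

```python
# Minimal sketch of hashing-based similarity search: random-hyperplane
# hashing maps real-valued vectors to short binary codes, so that
# nearby vectors tend to receive codes with small Hamming distance.
# Illustrative only; this is NOT the MLBE model from the paper.
import random

def make_hash(dim, n_bits, seed=0):
    rng = random.Random(seed)
    # One random hyperplane per output bit.
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]
    def h(x):
        # Each bit records which side of a hyperplane the vector falls on.
        return tuple(1 if sum(w * xi for w, xi in zip(p, x)) >= 0 else 0
                     for p in planes)
    return h

def hamming(a, b):
    return sum(ai != bi for ai, bi in zip(a, b))

h = make_hash(dim=4, n_bits=16)
a = [1.0, 0.2, 0.0, -0.5]
b = [0.9, 0.25, 0.1, -0.45]   # nearly parallel to a
c = [-1.0, 2.0, -3.0, 0.8]    # points in a very different direction
d_ab = hamming(h(a), h(b))
d_ac = hamming(h(a), h(c))
# With enough bits, d_ab is typically much smaller than d_ac.
```

Search then reduces to comparing short binary codes (or probing hash buckets) rather than computing distances in the original feature space, which is what gives hashing its scalability.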

Cited by 177 publications (105 citation statements)
References 31 publications
“…In our experiments, since few works focus on the local feature representation-based hashing scheme for cross-modal retrieval, we can only systematically compare the proposed BSE method with seven prevailing global hashing methods for cross-modal retrieval tasks: CVH [11], MLBE [22], IMH [15], CMSSH [21], CHMIS [14], CMFH [16], and QCH [39]. For fair comparison, all the methods are implemented on the same SIFT features and word vectors in the image and text domains, respectively.…”
Section: B. Compared Methods and Experimental Settings
confidence: 99%
“…Extending SpH [18], Kumar and Udupa [11] proposed cross-view hashing (CVH), which generates binary codes for each modality via canonical correlation analysis (CCA). Multimodal latent binary embedding (MLBE) [22] is another cross-modal hashing method that models both inter-modal and intra-modal similarity via a probabilistic model. To learn hash functions with good generalization, co-regularized hashing [13] was proposed to project data far from zero.…”
confidence: 99%
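The CCA construction mentioned for CVH can be sketched as follows (a simplified NumPy illustration under assumed details, not the authors' implementation): learn maximally correlated projections for the two modalities, then threshold at zero to obtain binary codes comparable across modalities.

```python
# Rough sketch of CCA-based cross-view hashing in the spirit of CVH.
# Assumed simplification, not the published algorithm: solve the CCA
# generalized eigenproblem for each modality's projections, then
# binarize the projected data by sign.
import numpy as np

def cca_hash(X, Y, n_bits, reg=1e-6):
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])   # regularized covariances
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Eigenproblem Cxx^{-1} Cxy Cyy^{-1} Cyx w = rho^2 w for X-side projections.
    M = np.linalg.solve(Cxx, Cxy) @ np.linalg.solve(Cyy, Cxy.T)
    vals, vecs = np.linalg.eig(M)
    order = np.argsort(-vals.real)[:n_bits]        # top-correlation directions
    Wx = vecs[:, order].real
    Wy = np.linalg.solve(Cyy, Cxy.T) @ Wx          # matched Y-side projections
    code_x = (X @ Wx >= 0).astype(int)             # sign-threshold to bits
    code_y = (Y @ Wy >= 0).astype(int)
    return code_x, code_y

rng = np.random.default_rng(0)
Z = rng.standard_normal((100, 4))                  # shared latent factor
X = Z @ rng.standard_normal((4, 8)) + 0.1 * rng.standard_normal((100, 8))
Y = Z @ rng.standard_normal((4, 6)) + 0.1 * rng.standard_normal((100, 6))
cx, cy = cca_hash(X, Y, n_bits=3)
# Because both views are driven by the shared factor Z, the learned
# projections are correlated and the two code sets tend to agree.
```

Cross-modal retrieval then amounts to comparing `code_x` against `code_y` in Hamming space, since both modalities are mapped into the same binary code space.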
“…Other recent works include CMSSH [1], MLBE [25] and LSCMR [12]. CMSSH uses a boosting method to learn the projection function for each dimension of the latent space.…”
Section: Related Work
confidence: 99%
“…There has been a long stream of research on multi-modal retrieval [27,1,15,9,25,12,26,20]. These works share a similar query-processing strategy that consists of two major steps.…”
Section: Introduction
confidence: 99%