2021
DOI: 10.1016/j.cosrev.2020.100336
Comparative analysis on cross-modal information retrieval: A review

Cited by 50 publications (19 citation statements)
References 135 publications (209 reference statements)
“…As said in Section 1, there are two challenges in CS-CBRSIR, i.e., the heterogeneity gap issue [22] and the data shift problem [23]. Overall, the above two challenges can be regarded as the modality discrepancy problem.…”
Section: The Overview Of The Framework
confidence: 98%
“…Thus, cross-source CBRSIR (CS-CBRSIR) is proposed. CS-CBRSIR can be seen as a member of the cross-modal family [17][18][19][20][21], and it is confronted with the challenge of heterogeneity gaps [22] when measuring the resemblance between different types of HRRS data. Another challenge in CS-CBRSIR is the data shift problem [23] where the data distributions are different as the source and target images are acquired with various sensors.…”
Section: Introduction
confidence: 99%
“…The information overload triggered by the big data era has motivated researchers and practitioners to develop numerous automated information retrieval methods using different yet often complementary approaches [25,18,32]. Such methods have been widely used in fields such as digital libraries [16], information filtering and recommender systems [6,15], media search [26] and search engines [2].…”
Section: Related Work
confidence: 99%
“…Since the heterogeneity gap between modalities limits the similarity computation of pairings, existing cross-modal learning methods [25,34,53] aim to learn a common space that generates modality-gap-free representations, so that the transformed data can be compared using distance metrics such as cosine similarity and Euclidean distance. Many models [4] have been proposed to learn such a shared space, and it is difficult to summarize these approaches with well-defined categories.…”
Section: Related Work
confidence: 99%
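The last excerpt describes comparing cross-modal embeddings in a learned common space via cosine similarity or Euclidean distance. A minimal sketch of those two metrics, assuming hypothetical image and text embeddings already projected into a shared space (the vectors below are illustrative only, not from any cited model):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def euclidean_distance(u, v):
    """Euclidean (L2) distance between two embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Hypothetical shared-space embeddings of an image and a caption.
image_emb = [0.2, 0.8, 0.1]
text_emb = [0.25, 0.75, 0.05]

print(cosine_similarity(image_emb, text_emb))
print(euclidean_distance(image_emb, text_emb))
```

In practice the projection into the shared space is what the surveyed methods differ on; once representations live in one space, either metric can rank cross-modal retrieval candidates.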