2021
DOI: 10.3390/app112210803

TDCMR: Triplet-Based Deep Cross-Modal Retrieval for Geo-Multimedia Data

Abstract: Massive amounts of multimedia data with geographical information (geo-multimedia) are collected and stored on the Internet owing to the wide application of location-based services (LBS). Finding the high-level semantic relationships between geo-multimedia data and constructing an efficient index are crucial for large-scale geo-multimedia retrieval. To address this challenge, the paper proposes a deep cross-modal hashing framework for geo-multimedia retrieval, termed Triplet-based Deep Cross-Modal Retrieval (TDCMR), which utiliz…
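
The abstract's description of the triplet mechanism is truncated above, so the following is only a minimal sketch of a standard cross-modal triplet margin loss for hash learning, not the paper's actual formulation; the function name, the tanh relaxation, and the index arguments are all illustrative assumptions.

# Minimal sketch of a cross-modal triplet loss for hash learning,
# assuming the common formulation: an image anchor with a matching
# (positive) and non-matching (negative) text item. TDCMR's "improved"
# triplet constraint may differ from this baseline.
import torch
import torch.nn.functional as F

def cross_modal_triplet_loss(img_codes, txt_codes, pos_idx, neg_idx, margin=1.0):
    # img_codes: (N, K) real-valued relaxations of K-bit image hash codes
    # txt_codes: (M, K) hash codes from the text modality
    # pos_idx / neg_idx: for each image anchor, the index of a semantically
    # matching and a non-matching text item
    anchor = torch.tanh(img_codes)            # relax {-1, +1} codes to (-1, 1)
    pos = torch.tanh(txt_codes[pos_idx])
    neg = torch.tanh(txt_codes[neg_idx])
    d_pos = (anchor - pos).pow(2).sum(dim=1)  # squared Euclidean on relaxed codes
    d_neg = (anchor - neg).pow(2).sum(dim=1)
    # pull matching pairs closer than non-matching pairs by at least `margin`
    return F.relu(d_pos - d_neg + margin).mean()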

Cited by 1 publication (1 citation statement)
References 51 publications
“…Deep multiscale fusion hashing for cross-modal retrieval (DMFH) [31] extracts convolution features at multiple scales for each image to represent it more accurately. Triplet-based deep cross-modal retrieval for geo-multimedia data (TDCMR) [32] applies an improved triplet constraint to generate more accurate hash codes. Semantics-preserving hashing based on multi-scale fusion for cross-modal retrieval (SPHMF) [33] constructs the pairwise loss and inter-modal loss of a tag-generation network to guide hash code learning.…”
Section: Related Work
Confidence: 99%
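
As a hypothetical illustration (not taken from any of the cited papers) of the retrieval step these hashing methods share, the sketch below binarizes learned codes and ranks database items by Hamming distance; all names and shapes are assumed for the example.

# Assumed retrieval step common to cross-modal hashing: binarize the
# real-valued codes, then rank database items by Hamming distance.
import numpy as np

def hamming_rank(query_code, db_codes):
    # query_code: (K,) real-valued code; db_codes: (N, K)
    q = np.where(query_code >= 0, 1, -1)   # binarize to {-1, +1}
    db = np.where(db_codes >= 0, 1, -1)
    dists = (q != db).sum(axis=1)          # Hamming distance per database item
    return np.argsort(dists)               # indices of database items, nearest first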