Proceedings of the 28th ACM International Conference on Multimedia 2020
DOI: 10.1145/3394171.3413736

Multi-modal Multi-relational Feature Aggregation Network for Medical Knowledge Representation Learning

Abstract: Representation learning of medical Knowledge Graph (KG) is an important task and forms the fundamental process for intelligent medical applications such as disease diagnosis and healthcare question answering. Therefore, many embedding models have been proposed to learn vector representations for entities and relations, but they ignore three important properties of medical KG: multi-modal, unbalanced, and heterogeneous. Entities in the medical KG can carry unstructured multi-modal content, such as image and text. At…
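To make concrete what "learning vector representations for entities and relations" means here, the following is a minimal sketch of a classic translational KG embedding (TransE-style scoring), not the paper's own model; all sizes and names are illustrative assumptions.

```python
# Minimal sketch (not the paper's model): a TransE-style scorer that rates a
# (head, relation, tail) triple as -||h + r - t||, i.e. plausible triples place
# the tail embedding near head + relation in vector space.
import torch
import torch.nn as nn

class TransEScorer(nn.Module):
    def __init__(self, num_entities: int, num_relations: int, dim: int = 128):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)
        nn.init.xavier_uniform_(self.ent.weight)
        nn.init.xavier_uniform_(self.rel.weight)

    def forward(self, head_idx, rel_idx, tail_idx):
        h = self.ent(head_idx)   # (batch, dim)
        r = self.rel(rel_idx)    # (batch, dim)
        t = self.ent(tail_idx)   # (batch, dim)
        # Higher score = more plausible triple.
        return -torch.norm(h + r - t, p=2, dim=-1)

# Usage: score a hypothetical (disease, has_symptom, symptom) triple.
scorer = TransEScorer(num_entities=1000, num_relations=20)
score = scorer(torch.tensor([3]), torch.tensor([1]), torch.tensor([42]))
```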

Cited by 13 publications (8 citation statements)
References 32 publications (26 reference statements)
“…MKHAN [51] proposes a hierarchical attention network on multi-modal medical knowledge graphs to handle the explainable medical question answering task. MMRFAN [50] proposes an adversarial feature learning model that maps the textual and image information of an entity into the same vector space and captures the multi-modal information with a multi-relational feature aggregation network. Sun et al. [36] propose a multi-modal graph attention network to optimize the recommendation system with a multi-modal knowledge graph.…”
Section: Multi-modal Knowledge Graph
confidence: 99%
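The citation above describes mapping textual and image features of an entity into one shared space via adversarial feature learning. Below is a hedged sketch of that general idea, not MMRFAN's actual implementation: modality-specific encoders project text and image features into a shared space while a discriminator tries to tell the modalities apart, and training the encoders to fool it encourages modality-invariant entity features. All module names and dimensions are illustrative assumptions.

```python
# Hedged sketch of adversarial text/image feature alignment (illustrative only).
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Projects one modality's features into a shared vector space."""
    def __init__(self, in_dim: int, shared_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, shared_dim), nn.ReLU(),
                                 nn.Linear(shared_dim, shared_dim))

    def forward(self, x):
        return self.net(x)

class ModalityDiscriminator(nn.Module):
    """Predicts whether a shared-space vector came from text (label 0) or image (label 1)."""
    def __init__(self, shared_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(shared_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1))

    def forward(self, z):
        return self.net(z)

text_enc, img_enc = ModalityEncoder(768), ModalityEncoder(2048)  # assumed feature sizes
disc = ModalityDiscriminator()
bce = nn.BCEWithLogitsLoss()

text_feat, img_feat = torch.randn(8, 768), torch.randn(8, 2048)
zt, zi = text_enc(text_feat), img_enc(img_feat)

# Discriminator loss: classify the source modality correctly.
d_loss = bce(disc(zt.detach()), torch.zeros(8, 1)) + \
         bce(disc(zi.detach()), torch.ones(8, 1))

# Encoder ("generator") loss: fool the discriminator so the two spaces align.
g_loss = bce(disc(zt), torch.ones(8, 1)) + bce(disc(zi), torch.zeros(8, 1))
```

In practice the two losses are optimized alternately, as in standard adversarial training.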
“…• MMRFAN [50]: MMRFAN is a GNN-based model that learns multi-modal information for entities with an adversarial feature learning module, and then uses a relational graph convolution operation for multi-modal knowledge graph embedding.…”
Section: Baselines
confidence: 99%
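The baseline description above mentions a relational graph convolution operation over the multi-modal KG. Below is a hedged, simplified sketch of such a layer (R-GCN-style): each relation type gets its own weight matrix, and an entity's new embedding aggregates per-relation messages from its neighbors plus a self-loop term. It omits basis decomposition and other refinements and is not MMRFAN's exact operator; all names are assumptions.

```python
# Hedged sketch of a relational graph convolution layer (illustrative only).
import torch
import torch.nn as nn

class RelationalGraphConv(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, num_relations: int):
        super().__init__()
        # One projection matrix per relation type, plus a self-loop projection.
        self.rel_weight = nn.Parameter(torch.empty(num_relations, in_dim, out_dim))
        self.self_weight = nn.Linear(in_dim, out_dim, bias=False)
        nn.init.xavier_uniform_(self.rel_weight)

    def forward(self, x, edge_index, edge_type):
        # x: (num_nodes, in_dim); edge_index: (2, num_edges) as src -> dst;
        # edge_type: (num_edges,) relation id per edge.
        out = self.self_weight(x)
        src, dst = edge_index
        # Relation-specific message from each source node.
        msg = torch.bmm(x[src].unsqueeze(1), self.rel_weight[edge_type]).squeeze(1)
        # Mean-aggregate incoming messages per destination node.
        deg = torch.zeros(x.size(0), 1).index_add_(0, dst, torch.ones(dst.size(0), 1))
        agg = torch.zeros(x.size(0), msg.size(1)).index_add_(0, dst, msg)
        return torch.relu(out + agg / deg.clamp(min=1))

# Usage on a toy graph: 5 entities, 2 relation types, 3 edges.
x = torch.randn(5, 16)
edge_index = torch.tensor([[0, 1, 2], [1, 2, 0]])
edge_type = torch.tensor([0, 1, 0])
layer = RelationalGraphConv(16, 32, num_relations=2)
h = layer(x, edge_index, edge_type)  # (5, 32) updated entity embeddings
```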