2021
DOI: 10.1016/j.inffus.2021.05.015

Pay attention to doctor–patient dialogues: Multi-modal knowledge graph attention image-text embedding for COVID-19 diagnosis

Abstract: The sudden increase in coronavirus disease 2019 (COVID-19) cases puts high pressure on healthcare services worldwide. At this stage, fast, accurate, and early clinical assessment of the disease severity is vital. In general, there are two issues to overcome: (1) Current deep learning-based works suffer from multimodal data adequacy issues; (2) In this scenario, multimodal (e.g., text, image) information should be taken into account together to make accurate inferences. To address these challenges, we propose a…

Cited by 41 publications (18 citation statements)
References 88 publications (92 reference statements)
“…Since 2020, knowledge graphs have also been explored in COVID-19–related research and shown noticeable performance improvements. Zheng et al pointed out that current deep learning methods suffered from data adequacy issues and that multimodal information should be considered together to make accurate inferences [ 51 ]. To solve this, they proposed a multimodal graph attention embedding mechanism to assist diagnosing COVID-19.…”
Section: Results
Citation type: mentioning; confidence: 99%
“…To solve this, they proposed a multimodal graph attention embedding mechanism to assist diagnosing COVID-19. Their method learned the relational embeddings in a constituted knowledge graph and, at the same time, improved the classifier through the medical knowledge attention mechanism [ 51 ]. According to Mudiyanselage et al, the poor performance for unseen data in COVID-19 classification can result from the limited correlation between the pretrained model and a specific imaging domain (e.g., X-ray) and the possibility of overfitting [ 52 ].…”
Section: Results
Citation type: mentioning; confidence: 99%
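To make the mechanism in this statement concrete, below is a minimal sketch of attention over knowledge-graph entity embeddings, in the spirit of learning relational embeddings that are then refined by an attention step. It is an illustrative reconstruction under assumptions, not the authors' implementation: the class name EntityGraphAttention, the dimensions, and the fixed-size neighbor batching are all invented here.

```python
# Minimal sketch: attention over knowledge-graph neighbor embeddings (PyTorch).
# EntityGraphAttention and all hyperparameters are hypothetical, for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EntityGraphAttention(nn.Module):
    def __init__(self, num_entities: int, dim: int):
        super().__init__()
        self.embed = nn.Embedding(num_entities, dim)  # relational entity embeddings
        self.W = nn.Linear(dim, dim, bias=False)      # shared projection
        self.a = nn.Linear(2 * dim, 1, bias=False)    # attention scorer

    def forward(self, node_ids, neighbor_ids):
        # node_ids: (B,); neighbor_ids: (B, K), K neighbors per node
        h = self.W(self.embed(node_ids))              # (B, D)
        n = self.W(self.embed(neighbor_ids))          # (B, K, D)
        pair = torch.cat([h.unsqueeze(1).expand_as(n), n], dim=-1)
        alpha = torch.softmax(F.leaky_relu(self.a(pair)), dim=1)  # weights over neighbors
        return (alpha * n).sum(dim=1)                 # attended node embedding (B, D)
```

The attended embedding could then feed a downstream classifier, which is roughly where a "medical knowledge attention" signal would enter in the cited design.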
“…Combining both modalities (LUS and clinical information), they achieved an accuracy of 0.75 for four-level severity scoring and 0.88 for the binary severe/non-severe classification. On the other hand, Zheng et al built multimodal knowledge graphs from fused CT, X-ray, ultrasound, and text modalities, reaching a classification accuracy of 0.98 [ 99 ]. A multimodal channel and receptive field attention network combined with ResNeXt was proposed to process multicenter and multimodal data and achieved 0.94 accuracy [ 100 ].…”
Section: Machine Learning in COVID-19 LUS
Citation type: mentioning; confidence: 99%
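The modality combination these studies report can be illustrated by a simple late-fusion classifier that concatenates per-modality feature vectors before a shared head. This is a generic sketch under assumed feature dimensions, not the fusion scheme of any cited paper; LateFusionClassifier and its layer sizes are hypothetical.

```python
# Generic late fusion of per-modality features (PyTorch); names are illustrative.
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Concatenate per-modality embeddings, then classify jointly."""
    def __init__(self, modality_dims, num_classes):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(sum(modality_dims), 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, feats):
        # feats: list of (B, dim_i) tensors, one per modality (e.g., LUS, clinical text)
        return self.head(torch.cat(feats, dim=-1))

# e.g., binary severe/non-severe from a 512-d image feature and a 64-d clinical feature
model = LateFusionClassifier([512, 64], num_classes=2)
logits = model([torch.randn(8, 512), torch.randn(8, 64)])
```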
“…The model allowed the user to gain a better understanding of the drug properties from a drug similarity perspective and insights that were not easily observed in individual drugs. Zheng et al [ 32 ] took advantage of 4 kinds of modality data (X-ray images, computed tomography [CT] images, ultrasound images, and text descriptions of diagnoses) to construct a KG. The model leveraged multimodal KG attention embedding for diagnosis of COVID-19.…”
Section: Introduction
Citation type: mentioning; confidence: 99%
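For reference, a multimodal KG of the kind this statement describes can be thought of as a set of typed triples linking a patient to artifacts from each modality. The entity and relation names below are invented for illustration only and do not come from the paper.

```python
# Hypothetical (head, relation, tail) triples spanning four modality types.
triples = [
    ("patient_001", "has_xray", "xray_042"),
    ("patient_001", "has_ct", "ct_017"),
    ("patient_001", "has_ultrasound", "us_009"),
    ("patient_001", "has_report", "report_113"),
    ("report_113", "mentions", "ground_glass_opacity"),
    ("ct_017", "shows", "ground_glass_opacity"),
]
# Entities of every modality share one embedding space, so attention can
# propagate evidence across modalities during diagnosis.
```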