Published: 2022
DOI: 10.1109/tcsvt.2021.3063952
Cohesive Multi-Modality Feature Learning and Fusion for COVID-19 Patient Severity Prediction

Abstract: The outbreak of coronavirus disease has been a nightmare for citizens, hospitals, healthcare practitioners, and the economy in 2020. The overwhelming number of confirmed and suspected cases posed an unprecedented challenge to hospitals' capacity for management and medical resource distribution. To reduce the possibility of cross-infection and attend to a patient according to his severity level, expert diagnosis and sophisticated medical examinations are often required but hard to fulfil during a p…

Cited by 19 publications
(11 citation statements)
References 56 publications
“…The same model behavior was observed in the attention ResNet by Zhou et al. that registered a 69.1% R² score [41]. Compared to these SC-attention modules, the proposed Squeeze-Channel attention layers form a fusion of various feature scales, resulting in a global encoded representation.…”
Section: Results (supporting, confidence: 74%)
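The squeeze-channel attention mentioned in this statement follows the familiar squeeze-and-excitation pattern: pool each channel to a scalar, pass the pooled vector through a small gating network, and reweight the channels. The following NumPy sketch illustrates that pattern only; the shapes, reduction ratio, and random gating weights are illustrative assumptions, not the cited architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_channel_attention(feat, w1, w2):
    """SE-style channel attention on a (C, H, W) feature map.

    Squeeze: global average pooling over the spatial dims -> (C,).
    Excite: two-layer gating producing per-channel weights in (0, 1).
    Scale: reweight each channel of the input map by its gate.
    """
    squeezed = feat.mean(axis=(1, 2))         # (C,)
    hidden = np.maximum(w1 @ squeezed, 0.0)   # ReLU bottleneck, (C // r,)
    gate = sigmoid(w2 @ hidden)               # per-channel weights, (C,)
    return feat * gate[:, None, None]

# Toy usage with a hypothetical 8-channel feature map and reduction ratio r = 4.
rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((2, 8)) * 0.1
w2 = rng.standard_normal((8, 2)) * 0.1
out = squeeze_channel_attention(feat, w1, w2)
print(out.shape)  # (8, 4, 4)
```

Because the gate lies in (0, 1), the layer can only attenuate channels, which is what lets it emphasize informative feature scales relative to the rest.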
“…[37] Encoder-Decoder CNN: 0.636, 0.457, 0.209, 0.206
5. Naeem et al. [30], CNN-LSTM autoencoder on SIFT and GIST features: 0.684, 0.441, 0.195, 0.190
6. Zhou et al. [41], spatial-channel attention residual network: 0.691, 0.437, 0.191, 0.188
7. Mohammed et al. [42], spatial-channel attention CNN-LSTM: 0.720, 0.427, 0.183, 0.182
8. Chatzitofis et al.…”
Section: Results (mentioning, confidence: 99%)
“…The proposed methods outperformed the conventional CCA methods by learning the non-linear features and the supervised correlated space. Zhou et al. [39] designed two similarity losses to enforce the learning of modality-shared information. Specifically, a cosine similarity loss was used to supervise the features learned from the two modalities, and a hetero-center distance loss was designed to penalize the distance between the center of clinical features and the center of CT features belonging to each class.…”
Section: Subspace-based Fusion Methods (mentioning, confidence: 99%)
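The two losses described in this statement can be sketched in a few lines: one aligns paired CT and clinical feature vectors by cosine similarity, the other pulls the per-class centers of the two modalities together. This NumPy sketch shows the general idea only; the feature dimensions, data, and exact weighting are illustrative assumptions, not the formulation from [39].

```python
import numpy as np

def cosine_similarity_loss(f_ct, f_clin):
    """Penalize angular disagreement between paired CT and clinical features.

    Per-pair loss is 1 - cos(f_ct, f_clin), averaged over the batch;
    it is ~0 when the paired features point in the same direction.
    """
    num = np.sum(f_ct * f_clin, axis=1)
    den = np.linalg.norm(f_ct, axis=1) * np.linalg.norm(f_clin, axis=1) + 1e-8
    return np.mean(1.0 - num / den)

def hetero_center_loss(f_ct, f_clin, labels):
    """Squared distance between the per-class centers of the two modalities."""
    classes = np.unique(labels)
    loss = 0.0
    for c in classes:
        center_ct = f_ct[labels == c].mean(axis=0)
        center_clin = f_clin[labels == c].mean(axis=0)
        loss += np.sum((center_ct - center_clin) ** 2)
    return loss / len(classes)

# Toy usage: 6 paired samples, 4-dim features, 3 severity classes (hypothetical).
rng = np.random.default_rng(1)
f_ct = rng.standard_normal((6, 4))
f_clin = f_ct + 0.01 * rng.standard_normal((6, 4))  # nearly aligned pairs
labels = np.array([0, 0, 1, 1, 2, 2])
print(cosine_similarity_loss(f_ct, f_clin))
print(hetero_center_loss(f_ct, f_clin, labels))
```

Both terms go to zero when the two modalities agree, which is the sense in which they enforce modality-shared information.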
“…Radiology imaging supports medical decisions by providing visible image contrasts inside the human body with radiant energy, including MRI, CT, positron emission tomography (PET), fMRI, X-ray, etc. To embed the intensity-standardized 2D or 3D radiology images into feature representations with learning-based encoders [16,24,[34][35][36]87], conventional radiomics methods [24,34,35], or both [34,35], some reviewed works first applied skull-stripping [38], affine registration [38], foreground extraction [39], or lesion segmentation [20,34,35,38] to define the ROIs. The images were then resized or cropped to a smaller size for feature extraction.…”
Section: Radiology Images (mentioning, confidence: 99%)
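The final resize-or-crop step mentioned above is simple enough to sketch directly. The NumPy snippet below shows a center crop to an ROI followed by a nearest-neighbour resize; the image, ROI size, and output resolution are hypothetical, and real pipelines would typically use a library resampler with interpolation.

```python
import numpy as np

def center_crop(img, size):
    """Crop a square ROI of side `size` from the center of a 2D image."""
    h, w = img.shape
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize, a minimal stand-in for the resizing step."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows[:, None], cols[None, :]]

# Toy usage: a synthetic 64x64 slice cropped to a 32x32 ROI, then resized to 16x16.
slice_2d = np.arange(64 * 64, dtype=float).reshape(64, 64)
roi = center_crop(slice_2d, 32)
small = resize_nearest(roi, 16, 16)
print(roi.shape, small.shape)  # (32, 32) (16, 16)
```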