2018
DOI: 10.1007/978-981-13-1132-1_5
Feature Learning Using Stacked Autoencoder for Shared and Multimodal Fusion of Medical Images

Cited by 4 publications (3 citation statements)
References 25 publications
“…6 Structure of a bimodal AE across modalities. For example, the authors of [210][211][212] designed multimodal systems based on SAEs, where the encoder side of the architecture represents and compresses each unimodal feature separately, and the decoder side constructs the latent (shared) representation of the inputs in an unsupervised manner. Figure 6 shows the coupling mechanism of two separate AEs (bimodal AE) for both modalities (audio and video) into a jointly shared representation hierarchy where the encoder and decoder components are independent of each other.…”
Section: Shared Representation (mentioning)
confidence: 99%
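The bimodal-AE coupling described above can be sketched as a forward pass: each modality gets its own encoder, the two hidden codes are concatenated into one shared latent layer, and a mirrored decoder reconstructs both inputs. This is a minimal NumPy sketch with untrained weights; all dimensions and layer names are illustrative assumptions, not taken from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(in_dim, out_dim):
    # Xavier-style initialisation for one fully connected layer
    w = rng.normal(0.0, np.sqrt(2.0 / (in_dim + out_dim)), (in_dim, out_dim))
    b = np.zeros(out_dim)
    return w, b

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BimodalAE:
    """Two unimodal encoders feeding one shared latent code,
    mirrored by decoders for both modalities (forward pass only)."""

    def __init__(self, dim_a, dim_b, hid, shared):
        self.enc_a = dense(dim_a, hid)       # e.g. audio-side encoder
        self.enc_b = dense(dim_b, hid)       # e.g. video-side encoder
        self.joint = dense(2 * hid, shared)  # jointly shared representation
        self.dec = dense(shared, 2 * hid)
        self.out_a = dense(2 * hid, dim_a)
        self.out_b = dense(2 * hid, dim_b)

    def forward(self, xa, xb):
        # Each modality is compressed separately ...
        ha = sigmoid(xa @ self.enc_a[0] + self.enc_a[1])
        hb = sigmoid(xb @ self.enc_b[0] + self.enc_b[1])
        # ... then coupled into one shared latent code
        z = sigmoid(np.concatenate([ha, hb], axis=1) @ self.joint[0] + self.joint[1])
        h = sigmoid(z @ self.dec[0] + self.dec[1])
        # Reconstruct both inputs from the shared code
        ra = sigmoid(h @ self.out_a[0] + self.out_a[1])
        rb = sigmoid(h @ self.out_b[0] + self.out_b[1])
        return z, ra, rb

ae = BimodalAE(dim_a=20, dim_b=30, hid=16, shared=8)
xa = rng.random((4, 20))
xb = rng.random((4, 30))
z, ra, rb = ae.forward(xa, xb)
```

Training such a network on the joint reconstruction loss of both modalities is what makes the shared code unsupervised; the sketch only shows the architecture's data flow.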
“…The building block of deep learning models for multimodality fusion is their ability to extract features relevant to image fusion. Recently, researchers have applied deep learning to biomedical imaging problems involving MRI, CT, and PET images [83]. Kaur and Singh proposed a deep belief network for image fusion, in which the proposed technique evaluates the fusion dataset acquired from the initial feature-extraction procedure [84].…”
Section: ECS Transactions 107 (1) 3649-3673 (2022) (mentioning)
confidence: 99%
“…The stacked sparse auto-encoder [10] is a neural network that learns features from unlabeled data. It comprises several layers of sparse auto-encoders, where the outputs of each layer are connected to the inputs of the following layer.…”
Section: Stacked Sparse Auto-encoders (mentioning)
confidence: 99%
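The layer-chaining described in the last statement can be sketched directly: each sparse auto-encoder layer feeds its hidden activations to the next layer, and a KL-divergence penalty pushes mean activations toward a sparsity target. This is a minimal NumPy sketch with random, untrained weights; the layer sizes and the target sparsity `rho` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def kl_sparsity(h, rho=0.05):
    # KL-divergence sparsity penalty: compares the target sparsity rho
    # with each hidden unit's mean activation rho_hat over the batch
    rho_hat = np.clip(h.mean(axis=0), 1e-6, 1 - 1e-6)
    return float(np.sum(rho * np.log(rho / rho_hat)
                        + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))))

def encode_stack(x, layer_dims):
    """Chain sparse auto-encoder layers: the outputs of each layer
    become the inputs of the following layer."""
    h, penalties = x, []
    for d in layer_dims:
        w = rng.normal(0.0, 0.1, (h.shape[1], d))  # untrained encoder weights
        h = sigmoid(h @ w)
        penalties.append(kl_sparsity(h))
    return h, penalties

x = rng.random((8, 64))          # unlabeled input batch
h, pens = encode_stack(x, [32, 16, 8])
```

In practice each layer's weights are learned greedily by minimising its own reconstruction loss plus the sparsity penalty before the next layer is stacked on top; the sketch shows only the stacking and the penalty computation.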