2016 IEEE Symposium Series on Computational Intelligence (SSCI) 2016
DOI: 10.1109/ssci.2016.7849978
X-CNN: Cross-modal convolutional neural networks for sparse datasets

Abstract: In this paper we propose cross-modal convolutional neural networks (X-CNNs), a novel biologically inspired type of CNN architecture, treating gradient descent-specialised CNNs as individual units of processing in a larger-scale network topology, while allowing for unconstrained information flow and/or weight sharing between analogous hidden layers of the network, thus generalising the already well-established concept of neural network ensembles (where information typically may flow only between the output layer…
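The core idea in the abstract — modality-specific sub-networks whose analogous hidden layers exchange information through cross-connections — can be illustrated with a minimal numpy sketch. This is not the authors' implementation: dense layers stand in for convolutions, and all shapes, stream names, and weight initialisations are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w):
    # one hidden layer of a modality stream, with ReLU activation
    return np.maximum(x @ w, 0.0)

# two modality inputs (hypothetical sizes), flattened to feature vectors
x_a = rng.standard_normal((4, 32))  # stream A: batch of 4, 32 features
x_b = rng.standard_normal((4, 16))  # stream B: batch of 4, 16 features

# first hidden layer of each stream (independent weights)
w_a1 = rng.standard_normal((32, 8)) * 0.1
w_b1 = rng.standard_normal((16, 8)) * 0.1
h_a = dense(x_a, w_a1)
h_b = dense(x_b, w_b1)

# cross-connection: each stream's next layer sees BOTH streams'
# hidden feature maps, not just its own
cross_a = np.concatenate([h_a, h_b], axis=1)  # (4, 16)
cross_b = np.concatenate([h_b, h_a], axis=1)  # (4, 16)

w_a2 = rng.standard_normal((16, 4)) * 0.1
w_b2 = rng.standard_normal((16, 4)) * 0.1

# fused representation that a classifier head would consume
out = np.concatenate([dense(cross_a, w_a2), dense(cross_b, w_b2)], axis=1)
print(out.shape)  # (4, 8)
```

Contrast this with a classic ensemble, where the two streams would only be combined at their output layers; here the exchange happens between hidden layers.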

Cited by 22 publications (29 citation statements); references 18 publications (18 reference statements).
“…In this paper, we developed a feature extractor sub-network (referred to as the multi-modal feature extractor in fig. 2), inspired by the parameter-efficient separable and grouped convolutional layers presented in AlexNet (Krizhevsky et al., 2012) and Xception (Chollet, 2017; Veličković et al., 2016). In detail, the layers of the feature extractor are shared between two tasks: MCI-to-AD conversion prediction and AD/HC classification (see fig.…”
Section: Architecture Overview (mentioning)
confidence: 99%
“…From a computational perspective, novel solutions must be devised to combine multi-modal imaging data [79]. In the case of DNNs, a topology explicitly designed for information exchange between sub-networks (each processing the data from a single modality) through cross-connections, such as in the case of cross-modal CNNs (X-CNNs) [80], might be suitable for combining multi-modal imaging data.…”
Section: Discussion (mentioning)
confidence: 99%
“…Experiments have shown that a fully-connected graph allows learning appropriate connections between each pair of modalities and sharing information among all of them. This leads to better performance than a more restricted variant used in XKerasNet and XFitNet [4].…”
Section: Connectivity (Step 7) (mentioning)
confidence: 99%
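The fully-connected connectivity variant described above — every modality stream receiving features from every other stream — can be sketched in numpy. This is a hypothetical illustration, not code from the cited work: dense layers replace convolutions, and the three stream names and all sizes are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

# hidden feature maps of three modality streams (hypothetical names/sizes)
hidden = {m: rng.standard_normal((2, 8)) for m in ("Y", "U", "V")}

def cross_inputs(hidden, modality):
    # fully-connected cross-modality graph: a stream's next layer sees
    # its own hidden features concatenated with those of ALL other streams
    own = hidden[modality]
    others = [hidden[m] for m in hidden if m != modality]
    return np.concatenate([own] + others, axis=1)  # (2, 24)

# next-layer weights for each stream (24 = 3 streams x 8 features)
weights = {m: rng.standard_normal((24, 4)) * 0.1 for m in hidden}
next_layer = {m: relu(cross_inputs(hidden, m) @ weights[m]) for m in hidden}
print({m: v.shape for m, v in next_layer.items()})
```

A restricted variant would simply limit which entries appear in `others`, i.e. prune edges of this modality graph, which is the design difference the quoted experiment evaluates.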