2016 IEEE Symposium Series on Computational Intelligence (SSCI)
DOI: 10.1109/ssci.2016.7849863
Dimensionality reduction of mass spectrometry imaging data using autoencoders

Cited by 47 publications (35 citation statements)
References 42 publications
“…Last, we discuss ways to interpret the learned model and analyze whether biologically plausible effects are visible, a crucial step for applying it to tumor diagnostics. Deep learning had been applied to IMS data prior to this work, but with a focus on unsupervised dimension reduction methods; see Thomas et al. (2016), where autoencoders were used to reduce rat brain IMS data. Moreover, Inglese et al. (2017) introduced a neural-network-based dimension reduction to find metabolic regions within tumors.…”
Section: Introduction
confidence: 99%
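The unsupervised dimension reduction described in this excerpt can be sketched in a few lines of NumPy. This is a minimal, illustrative single-layer, tied-weight autoencoder: the layer sizes, learning rate, and synthetic "spectra" are assumptions for the sketch, not the architecture used in the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 64 m/z channels per pixel, compressed to 2 features.
n_channels, n_latent = 64, 2
X = rng.random((500, n_channels))      # stand-in for per-pixel spectra

# Single-layer, tied-weight autoencoder (a simplification for brevity)
W = rng.normal(0.0, 0.1, (n_channels, n_latent))
b_enc = np.zeros(n_latent)
b_dec = np.zeros(n_channels)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, losses = 0.5, []
for _ in range(200):
    H = sigmoid(X @ W + b_enc)         # encode: spectrum -> latent code
    X_hat = H @ W.T + b_dec            # decode: latent code -> spectrum
    err = X_hat - X
    losses.append(np.mean(err ** 2))
    # Gradients of the mean-squared reconstruction loss (tied weights,
    # so W receives both the encoder- and decoder-path gradients)
    dH = (err @ W) * H * (1.0 - H)
    W -= lr * (X.T @ dH + err.T @ H) / len(X)
    b_enc -= lr * dH.mean(axis=0)
    b_dec -= lr * err.mean(axis=0)

Z = sigmoid(X @ W + b_enc)             # low-dimensional embedding per pixel
```

After training, `Z` plays the role of the reduced representation; real IMS pipelines would use deeper encoders and far more m/z channels, but the reconstruction objective is the same.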
“…119 Autoencoders have also been useful for unsupervised nonlinear dimensionality reduction of imaging data, reducing each pixel one at a time to its core features. 120 Once the size of the data has been reduced, it can be processed more easily in subsequent steps of the pipeline.…”
Section: Results
confidence: 99%
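The per-pixel reduction described above amounts to flattening the imaging cube into one spectrum per pixel, encoding each spectrum, and reshaping the codes back into image form. The sketch below uses an SVD-based linear projection as a stand-in for whatever trained encoder produces the features; all shapes and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
h, w, n_channels, n_latent = 8, 8, 64, 3

cube = rng.random((h, w, n_channels))        # (rows, cols, m/z channels)
pixels = cube.reshape(-1, n_channels)        # one spectrum per row

# Linear projection onto the top principal directions (encoder stand-in)
mean = pixels.mean(axis=0)
_, _, Vt = np.linalg.svd(pixels - mean, full_matrices=False)

def encode(spectra):
    """Project centered spectra onto the top n_latent components."""
    return (spectra - mean) @ Vt[:n_latent].T

# Each pixel is reduced independently, then reassembled into an image cube
reduced = encode(pixels).reshape(h, w, n_latent)
```

Downstream steps (segmentation, clustering, visualization) then operate on the small `(h, w, n_latent)` cube instead of the full channel dimension.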
“…Activation functions such as ReLU, σ(z) = max(0, z), are popular in convolutional neural networks due to their success in image analysis problems [16]. Sigmoid activation functions, σ(z) = (1 + e^(−z))^(−1), have also been shown to be useful for reduction and visualization of complex, high-dimensional data [22], and for identification of low-dimensional patterns in the data for segmentation and classification tasks [16,18]. Here we compare both ReLU and sigmoid activation functions for reduction of the CMR data.…”
Section: Deep Autoencoder
confidence: 99%
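The two activation functions named in this excerpt behave quite differently on the same inputs, which is what motivates comparing them. A minimal sketch (the input values are arbitrary):

```python
import numpy as np

def relu(z):
    """ReLU: negatives are clipped to 0, positives pass through unchanged."""
    return np.maximum(0.0, z)

def sigmoid(z):
    """Sigmoid: every input is squashed into the open interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-2.0, 0.0, 2.0])
r = relu(z)      # array([0., 0., 2.])
s = sigmoid(z)   # values in (0, 1), symmetric about sigmoid(0) = 0.5
```

ReLU's unbounded positive range helps gradients flow in deep convolutional stacks, while the sigmoid's bounded output is convenient when the reduced features should live in a fixed range for visualization.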