2020
DOI: 10.1101/2020.06.16.155937
Preprint

Representation Learning of Resting State fMRI with Variational Autoencoder

Abstract: Resting state functional magnetic resonance imaging (rs-fMRI) data exhibits complex but structured patterns. However, the underlying origins are unclear and entangled in rs-fMRI data. Here we establish a variational auto-encoder, as a generative model trainable with unsupervised learning, to disentangle the unknown sources of rs-fMRI activity. After being trained with large data from the Human Connectome Project, the model has learned to represent and generate patterns of cortical activity and…
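To make the model described in the abstract concrete, here is a minimal VAE sketch in PyTorch: an encoder maps an activity pattern to the mean and log-variance of a Gaussian over latent variables, a reparameterized sample is drawn, and a decoder regenerates the pattern; training minimizes reconstruction error plus a KL term. The layer sizes, the flattened-input assumption, and all names are illustrative placeholders, not the paper's actual architecture.

```python
# Minimal VAE sketch in PyTorch. Layer sizes, the flattened-input assumption,
# and all names are illustrative; this is not the architecture of the paper.
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, n_voxels=4096, n_latent=256):
        super().__init__()
        # Encoder: maps an activity pattern to the mean and log-variance
        # of a Gaussian over the latent variables.
        self.encoder = nn.Sequential(nn.Linear(n_voxels, 1024), nn.ReLU())
        self.mu = nn.Linear(1024, n_latent)
        self.logvar = nn.Linear(1024, n_latent)
        # Decoder: generates an activity pattern from a latent sample.
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 1024), nn.ReLU(), nn.Linear(1024, n_voxels)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps keeps the
        # sampling step differentiable.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    # Negative ELBO: reconstruction error plus KL divergence from the
    # unit-Gaussian prior, which pressures the latents toward disentanglement.
    recon = ((x - x_hat) ** 2).sum(dim=1)
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1)
    return (recon + kl).mean()
```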


Citations: Cited by 5 publications (11 citation statements)
References: 101 publications (156 reference statements)
“…From the literature, it is observed that statistical models [6] and traditional machine learning models like K-NN [7] and SVM perform well for small datasets [8] and successfully extract the region of interest; but when the number of experiments or fMRI scans increases, the amount of multi-subject fMRI data becomes relatively large, which results in model overfitting and increased classification errors. Even existing deep learning models like VAE [9,10], transfer learning techniques, LSTM [11], and reconstructed fc7 layers [12] require more training time, which increases computational cost. To overcome this, we will use a denser convolutional neural network to learn high-level features.…”
Section: Introduction (mentioning)
confidence: 99%
“…Our VAE model was able to learn a non-linear feature set (or "latent space") effectively using large-scale adult rsfMRI data. Our preliminary results in adults [29] demonstrated that the fully trained VAE model could disentangle generative factors of rsfMRI data and encode the learned representations as latent variables. Notably, the VAE representations generated in the latent space were robust to varying signal quality of rsfMRI.…”
Section: Introduction (mentioning)
confidence: 85%
“…The surface is thus essentially a graph with a fixed structure, and only the values associated with the voxels change over time. Although cortical surface data has previously been mapped to a sphere and then to a 2D image using polar coordinates [24], in this work we view the vertex locations as a graph to retain as much distance information as possible.…”
Section: Biological Data (mentioning)
confidence: 99%
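The sphere-to-image step mentioned in the statement above can be sketched as follows: vertex values already mapped onto a unit sphere are resampled into a regular 2D grid via polar image coordinates (radius taken from the polar angle, direction from the azimuth). The grid size, normalization, and nearest-vertex fill are assumptions for illustration, not necessarily the exact geometry used in [24].

```python
# Hypothetical sketch of a sphere-to-2D-image projection via polar
# coordinates; grid size and nearest-vertex resampling are assumptions.
import numpy as np

def sphere_to_polar_image(xyz, values, size=192):
    """xyz: (V, 3) unit-sphere vertex coordinates; values: (V,) signal."""
    theta = np.arccos(np.clip(xyz[:, 2], -1.0, 1.0))  # angle from north pole
    phi = np.arctan2(xyz[:, 1], xyz[:, 0])            # azimuth in [-pi, pi]
    # Polar image coordinates: radius proportional to the polar angle.
    r = theta / np.pi                                 # normalized to [0, 1]
    x = 0.5 + 0.5 * r * np.cos(phi)
    y = 0.5 + 0.5 * r * np.sin(phi)
    cols = np.minimum((x * (size - 1)).round().astype(int), size - 1)
    rows = np.minimum((y * (size - 1)).round().astype(int), size - 1)
    img = np.zeros((size, size), dtype=float)
    img[rows, cols] = values                          # nearest-vertex fill
    return img
```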
“…Classically, latent factor analysis for fMRI data is done with some form of matrix factorization, such as principal component analysis [39], ICA [2,4,31], or dictionary learning [28]. Recently, these matrix factorizations have been extended to tensor factorizations [30], restricted Boltzmann machines (RBMs) [19], and static autoencoders [13,24]. For neuronal populations, however, a recent approach finds latent factors using a recurrent autoencoder [33].…”
Section: Introduction (mentioning)
confidence: 99%
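For reference, a small sketch of the classical matrix-factorization baselines named in the statement above, using scikit-learn on a synthetic time-by-voxel matrix; the shapes and component counts are arbitrary placeholders, not settings from any cited work.

```python
# Classical latent factor analysis of an fMRI-like matrix via PCA and ICA.
# The data here is random noise standing in for a (timepoints, voxels) array.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
X = rng.standard_normal((1200, 500))    # stand-in for (timepoints, voxels)

# PCA: orthogonal components ordered by explained variance.
pca = PCA(n_components=20).fit(X)
timecourses = pca.transform(X)          # (1200, 20) temporal weights
spatial_maps = pca.components_          # (20, 500) spatial patterns

# ICA: statistically independent sources, as in classical rs-fMRI analysis.
ica = FastICA(n_components=20, random_state=0)
sources = ica.fit_transform(X)          # (1200, 20) independent timecourses
mixing = ica.mixing_                    # (500, 20) voxel loadings
```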