2023
DOI: 10.1088/1741-2552/acef94

Localized estimation of electromagnetic sources underlying event-related fields using recurrent neural networks

Jamie A O’Reilly,
Judy D Zhu,
Paul F Sowman

Abstract: Objective: To use a recurrent neural network (RNN) to reconstruct neural activity responsible for generating noninvasively measured electromagnetic signals.
Approach: Output weights of an RNN were fixed as the lead field matrix from volumetric source space computed using the boundary element method with co-registered structural magnetic resonance images and magnetoencephalography (MEG). Initially, the network was trained to minimize mean-squared-error loss between its outputs and MEG signals, causing a…
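
As a rough illustration of the setup described in the abstract, the sketch below (TensorFlow/Keras) fixes a non-trainable Dense output layer's weights to a lead field matrix, so that the recurrent layer's activations play the role of source currents projected to the sensors. This is a minimal sketch, not the authors' implementation: the dimensions, the random placeholder lead field, and the single stimulus-timing input channel are all assumptions.

```python
import numpy as np
import tensorflow as tf

n_times, n_sources, n_sensors = 300, 1024, 160  # assumed dimensions

# Placeholder lead field; in the paper this comes from BEM forward modelling
# on co-registered structural MRI. Shaped (n_sources, n_sensors) so it can
# serve directly as a Dense kernel.
lead_field = np.random.randn(n_sources, n_sensors).astype("float32")

inputs = tf.keras.Input(shape=(n_times, 1))   # stimulus-timing input (assumed)
sources = tf.keras.layers.SimpleRNN(n_sources, return_sequences=True)(inputs)
project = tf.keras.layers.Dense(n_sensors, use_bias=False, trainable=False)
outputs = project(sources)                    # predicted sensor-space signals
model = tf.keras.Model(inputs, outputs)
project.set_weights([lead_field])             # fix output weights = lead field

# Only the recurrent (source) parameters train; the lead field stays fixed
# while MSE loss pulls the outputs toward the measured MEG signals.
model.compile(optimizer="adam", loss="mse")
```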

Cited by 4 publications (8 citation statements)
References 42 publications (78 reference statements)

“…As with previous RNN models of event-related neural signals (O'Reilly, 2022b; O'Reilly, Angsuwatanakul, et al., 2022; O'Reilly, Zhu, et al., 2023), the architecture consisted of an input layer, four hidden layers, and an output layer. Time-domain signals were passed to the input layer, transformed sequentially through four simple recurrent layers, and then fed through a feed-forward output layer.…”
Section: RNN Architecture and Training Parameters
Citation type: mentioning (confidence: 99%)
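
For concreteness, a minimal Keras sketch of the architecture this statement describes follows: an input layer, four stacked simple recurrent layers, and a feed-forward output layer. The unit counts and input/output dimensions are assumptions, since the statement does not give them.

```python
import numpy as np
import tensorflow as tf

n_times, n_channels = 300, 160  # assumed sequence length and sensor count

# Input layer -> four simple recurrent (hidden) layers -> feed-forward output.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_times, 1)),                     # time-domain input
    tf.keras.layers.SimpleRNN(128, return_sequences=True),  # hidden layer 1
    tf.keras.layers.SimpleRNN(128, return_sequences=True),  # hidden layer 2
    tf.keras.layers.SimpleRNN(128, return_sequences=True),  # hidden layer 3
    tf.keras.layers.SimpleRNN(128, return_sequences=True),  # hidden layer 4
    tf.keras.layers.Dense(n_channels),                      # feed-forward output
])

# One dummy stimulus sequence in, one multichannel waveform out.
dummy = np.zeros((1, n_times, 1), dtype="float32")
print(model(dummy).shape)  # (1, 300, 160)
```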
“…The backpropagation through time algorithm (Werbos, 1990) with the adaptive moment estimation (Adam) optimizer (Kingma & Ba, 2015) and mean-squared-error loss were used to train the RNN, all using default hyperparameter settings in TensorFlow (Abadi et al., 2016). As described previously (O'Reilly, Zhu, et al., 2023; Srivastava et al., 2023), to derive source estimates, the RNN was pre-trained to fit labels without additional constraints (training step 1) before discarding the bias parameter and introducing L1-norm activity regularization to the fourth recurrent layer (training step 2). This L1-norm penalty was weighted by 10⁻⁴.…”
Section: RNN Architecture and Training Parameters
Citation type: mentioning (confidence: 99%)
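
The two-step procedure quoted above might be sketched as follows. Keras takes activity regularizers at construction time, so this sketch rebuilds the model for step 2 and copies the step-1 weights across; it also assumes the discarded bias is that of the fourth recurrent layer, which the statement leaves ambiguous. Layer sizes and data shapes are placeholders, not values from the paper.

```python
import tensorflow as tf

n_times, n_hidden, n_sensors = 300, 128, 160  # assumed dimensions
L1_WEIGHT = 1e-4  # penalty weight given in the statement above

def build_model(step):
    # In step 2 the fourth recurrent layer loses its bias and gains an
    # L1-norm activity regularizer; everything else stays the same.
    reg = tf.keras.regularizers.l1(L1_WEIGHT) if step == 2 else None
    model = tf.keras.Sequential(
        [tf.keras.Input(shape=(n_times, 1))]
        + [tf.keras.layers.SimpleRNN(n_hidden, return_sequences=True)
           for _ in range(3)]
        + [tf.keras.layers.SimpleRNN(n_hidden, return_sequences=True,
                                     use_bias=(step == 1),
                                     activity_regularizer=reg),
           tf.keras.layers.Dense(n_sensors)]
    )
    # Adam + MSE with default hyperparameters; Keras applies backpropagation
    # through time to the recurrent layers automatically.
    model.compile(optimizer="adam", loss="mse")
    return model

# Training step 1: unconstrained pre-training on (input, MEG) pairs.
model1 = build_model(step=1)
# model1.fit(x_train, y_train, ...)

# Training step 2: copy weights layer by layer, dropping the discarded bias.
model2 = build_model(step=2)
for l2, l1 in zip(model2.layers, model1.layers):
    w1 = l1.get_weights()
    # SimpleRNN stores [kernel, recurrent_kernel, bias]; bias is last, so
    # slicing to the target layer's weight count drops it where needed.
    l2.set_weights(w1[:len(l2.get_weights())])
# model2.fit(x_train, y_train, ...)
```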
“…RNNs for analysing ERP waveforms have recently been developed for modelling auditory evoked potentials from mice [19], [20] and human ERPs [21], [22], and have been combined with a convolutional neural network (CNN) to study visual ERPs [23]. RNNs can also be used for distributed source reconstruction from MEG [24], EEG [25], and simultaneously recorded MEG-EEG [26]. These previous studies demonstrate some of the ways that RNNs can be used for analysing event-related neural signals.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)