2020
DOI: 10.1101/2020.10.30.362558
Preprint

Generalized neural decoders for transfer learning across participants and recording modalities

Abstract: Advances in neural decoding have enabled brain-computer interfaces to perform increasingly complex and clinically relevant tasks. However, such decoders are often tailored to specific participants, days, and recording sites, limiting their practical long-term usage. Therefore, a fundamental challenge is to develop neural decoders that can robustly train on pooled, multi-participant data and generalize to new participants. We introduce a new decoder, HTNet, which uses a convolutional neural network with two innovations…
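HTNet's name refers to the Hilbert transform: the full paper (the abstract is truncated above) describes a network layer that uses it to compute spectral power at data-driven frequencies. The sketch below approximates that operation with plain NumPy/SciPy outside of any neural network; the function name `hilbert_power`, the 8–32 Hz band, and the filter order are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of Hilbert-transform spectral power, the core operation
# HTNet learns inside a CNN. Band limits and filter order are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def hilbert_power(x, fs, low_hz=8.0, high_hz=32.0, order=4):
    """Band-pass filter each channel, then take the analytic-signal
    envelope (via the Hilbert transform) as instantaneous spectral power.

    x : array of shape (n_channels, n_samples)
    fs : sampling rate in Hz
    """
    nyq = fs / 2.0
    b, a = butter(order, [low_hz / nyq, high_hz / nyq], btype="band")
    filtered = filtfilt(b, a, x, axis=-1)          # zero-phase band-pass
    envelope = np.abs(hilbert(filtered, axis=-1))  # analytic amplitude
    return envelope ** 2                           # instantaneous power

# Toy usage: 16 simulated ECoG channels, 2 s at 500 Hz.
rng = np.random.default_rng(0)
ecog = rng.standard_normal((16, 1000))
power = hilbert_power(ecog, fs=500.0)
print(power.shape)  # (16, 1000)
```

In HTNet itself the band-pass step is replaced by learned temporal convolutions, which is what lets the relevant frequencies be discovered from data rather than fixed in advance.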

Cited by 6 publications (14 citation statements) · References 91 publications

Citation statements:
“…Pose trajectories were obtained from concurrent video recordings using computer vision to automate the often-tedious annotation procedure that has previously precluded the creation of similar datasets 30,31. Along with these two core datastreams, we have added extensive metadata, including thousands of wrist movement initiation events previously used for neural decoding 32,33, 10 quantitative event-related features describing the type of movement performed and any relevant context 18, coarse labels describing the participant's behavioral state based on visual inspection of videos 34, and 14 different electrode-level features 18. This dataset, which we call AJILE12 (Annotated Joints in Long-term Electrocorticography for 12 human participants), builds on our previous AJILE dataset 35 and is depicted in Fig.…”
Section: Background and Summary (mentioning)
Confidence: 99%
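For readers who want to explore the AJILE12 data described in this citing paper: the dataset is distributed as NWB files, so a minimal pynwb sketch like the one below can enumerate a recording's contents. The file name is a hypothetical placeholder, and the exact locations of pose and event data within each file may differ.

```python
# Hypothetical sketch of opening one AJILE12 NWB recording with pynwb.
# The file name below is an illustrative placeholder, not a real path.
from pynwb import NWBHDF5IO

with NWBHDF5IO("sub-01_ses-3.nwb", "r") as io:
    nwbfile = io.read()
    # Raw ECoG traces are typically stored under acquisition; derived data
    # such as pose trajectories live in processing modules, and labeled
    # events (e.g., wrist movement onsets) in interval tables.
    print(list(nwbfile.acquisition))  # e.g., ElectricalSeries names
    print(list(nwbfile.processing))   # processing module names
    print(list(nwbfile.intervals))    # interval/event table names
```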
“…We have included 55 days of semi-continuous intracranial neural recordings along with thousands of verified wrist movement events, which both greatly exceed the size of typical ECoG datasets from controlled experiments 36 as well as other long-term naturalistic ECoG datasets 34,35,37,38. Such a wealth of data improves statistical power and enables large-scale exploration of more complex behaviors than previously possible, especially with modern machine learning techniques such as deep learning 32,39–42. In addition, AJILE12 contains comprehensive metadata, including coarse behavior labels, quantitative event features, and localized electrode positions in group-level coordinates that enable cross-participant comparisons of neural activity.…”
Section: Background and Summary (mentioning)
Confidence: 99%