2022
DOI: 10.1101/2022.06.10.495595
Preprint

Seeing the forest and the tree: Building representations of both individual and collective dynamics with transformers

Abstract: Complex time-varying systems are often studied by abstracting away from the dynamics of individual components to build a model of the population-level dynamics from the start. However, when building a population-level description, it can be easy to lose sight of each individual and how each contributes to the larger picture. In this paper, we present a novel transformer architecture for learning from time-varying data that builds descriptions of both the individual and the collective population dynamics.…
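The abstract's core idea (coupled representations for each individual unit and for the population as a whole) can be sketched as interleaved attention over the time axis and over the unit axis. The block below is a minimal illustration under assumed names, dimensions, and pooling choices; it is not the paper's actual architecture:

```python
# Hypothetical sketch: a two-stream transformer block that attends over time
# within each individual's series (the "tree") and across individuals at each
# time step (the "forest"). All names and shapes are illustrative assumptions.
import torch
import torch.nn as nn


class IndividualCollectiveBlock(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.temporal_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.spatial_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_individuals, n_timesteps, d_model)
        b, n, t, d = x.shape
        # Individual dynamics: each unit's time series attends over time.
        xt = x.reshape(b * n, t, d)
        xt = self.norm1(xt + self.temporal_attn(xt, xt, xt)[0])
        # Collective dynamics: at each time step, units attend to each other,
        # producing a population-aware representation.
        xs = xt.reshape(b, n, t, d).permute(0, 2, 1, 3).reshape(b * t, n, d)
        xs = self.norm2(xs + self.spatial_attn(xs, xs, xs)[0])
        return xs.reshape(b, t, n, d).permute(0, 2, 1, 3)


x = torch.randn(2, 10, 50, 64)  # 2 trials, 10 neurons, 50 time bins
print(IndividualCollectiveBlock()(x).shape)  # torch.Size([2, 10, 50, 64])
```

Stacking such blocks would let per-unit features and population context refine each other layer by layer, which is one plausible way to keep both levels of description in view.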

Cited by 1 publication (2 citation statements)
References 65 publications (102 reference statements)
“…Given little training data, special approaches help enable BCI decoder use in new, shifted contexts. For example, decoders can be designed to be robust to hypothesized variability in recorded populations by promoting invariant representations through model or objective design [32,34,35]. Alternatively, decoders can be adapted to a novel context with further data collection, which is reasonable especially if only unsupervised neural data are required.…”
Section: Modality (citation type: mentioning; confidence: 99%)
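One way to read "promoting invariant representations through model or objective design" is as an explicit penalty that pulls feature statistics from different recording sessions together. The sketch below is a generic illustration of that idea; the mean-matching penalty and all names are assumptions, not the cited papers' methods:

```python
# Hypothetical sketch of an invariance objective: alongside the decoding loss,
# penalize the distance between feature statistics of two sessions so the
# encoder learns session-invariant representations. Illustrative only.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(96, 64), nn.ReLU(), nn.Linear(64, 32))
decoder = nn.Linear(32, 2)  # e.g., 2D cursor velocity

def step(x_a, y_a, x_b, lam=0.1):
    """x_a, y_a: labeled source session; x_b: unlabeled shifted session."""
    z_a, z_b = encoder(x_a), encoder(x_b)
    decode_loss = nn.functional.mse_loss(decoder(z_a), y_a)
    # Invariance term: match first moments of the two feature distributions.
    invariance = (z_a.mean(0) - z_b.mean(0)).pow(2).sum()
    return decode_loss + lam * invariance

loss = step(torch.randn(128, 96), torch.randn(128, 2), torch.randn(128, 96))
loss.backward()
```

Note that the shifted session contributes only unlabeled activity here, which matches the quote's point that adaptation is most practical when only unsupervised neural data are required.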
“…Yet across contexts, the meaning of individual neurons may change, so operations to learn spatial representations may provide benefits. For example, Le and Shlizerman [38] and Liu et al. [35] add spatial attention to NDT's temporal attention. Yet separate space-time attention can impair performance [39] and requires padding in both space and time when training over heterogeneous data.…”
Section: Designing Transformers for Unsupervised Scaling on Neural Data (citation type: mentioning; confidence: 99%)
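The padding cost mentioned in this statement can be made concrete: factorized (separate space/time) attention operates on a dense (units × time) grid, so sessions with different neuron counts or lengths must be padded along both axes, whereas joint attention over a flattened token sequence needs only a single padding mask. The comparison below is illustrative; shapes and names are assumptions, not the cited models' code:

```python
# Hypothetical comparison of data layouts for heterogeneous sessions.
# Factorized space-time attention needs a dense (units x time) grid, padded in
# BOTH dimensions; joint attention over flat tokens needs one padding mask.
import torch

sessions = [(96, 40), (64, 55)]  # (n_units, n_timesteps) per session
d = 32

# Factorized layout: pad every session to the max in both space and time.
max_u = max(u for u, _ in sessions)  # 96
max_t = max(t for _, t in sessions)  # 55
grid = torch.zeros(len(sessions), max_u, max_t, d)  # zero padding on 2 axes
for i, (u, t) in enumerate(sessions):
    grid[i, :u, :t] = torch.randn(u, t, d)

# Joint layout: flatten each session to u*t tokens and mask once.
tokens = [torch.randn(u * t, d) for u, t in sessions]
flat = torch.nn.utils.rnn.pad_sequence(tokens, batch_first=True)
mask = torch.arange(flat.shape[1])[None, :] >= torch.tensor(
    [tok.shape[0] for tok in tokens])[:, None]  # True where padded
print(grid.shape, flat.shape, mask.shape)
# torch.Size([2, 96, 55, 32]) torch.Size([2, 3840, 32]) torch.Size([2, 3840])
```

The trade-off, as the quoted passage suggests, is that the factorized grid wastes computation on padded cells in two dimensions, while the joint layout pays instead with a much longer token sequence.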