2019
DOI: 10.1038/s41593-018-0310-2

Task representations in neural networks trained to perform many cognitive tasks

Cited by 416 publications (485 citation statements); references 38 publications.
“…Artificial neural network models (ANNs) have been shown to replicate several properties observed in the LPFC during working memory tasks (Compte et al, 2000;Wimmer et al, 2014;Murray et al, 2017;Yang et al, 2019). However, the observation of the existence of a stable subspace in the presence of code-morphing imposes new constraints on these models.…”
Section: Fig. 1 | Experimental Design and Code-morphing
confidence: 99%
“…Network models have been shown to replicate several properties of LPFC activity, including code stability (Compte et al, 2000;Wimmer et al, 2014;Murray et al, 2017;Yang et al, 2019). However, it is not known whether these models can exhibit code-morphing while retaining a stable subspace.…”
Section: Introduction
confidence: 99%
“…However, even if we happened upon a special, albeit common, scenario using these two tasks, it is remarkable to observe a situation in which the large attention-related change in behavioral performance can be accomplished without changing information coding or weights between areas. In contrast, theoretical models and machine learning techniques accomplish flexibility in computation almost solely by changing weights [44][45][46][47]. Our results constitute an existence proof: an example of a situation in which flexibility can be mediated by simple changes within sensory cortex.…”
mentioning
confidence: 99%
“…Although reinforcement learning has provided invaluable insight into the mechanisms the PFC may use, it remains unclear how the PFC is able to encode multiple schemas, building on each other, without interference, and persisting so they may be accessed again in the future. The majority of models capable of solving multi-strategy problems require specially curated training regimens, most often by interleaving examples of different problem types [10]. Models learn successfully due to the balanced presentation of examples in training; if the training regimen is altered (for example, problem types appear in sequence rather than interleaved, as often happens in the real world), the unbalanced models fail miserably [11].…”
Section: Introduction
confidence: 99%
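The interference failure described in the last excerpt (blocked vs. interleaved presentation of problem types) can be sketched with a toy model. The setup below is hypothetical and not from any of the cited papers: a single shared linear model trained by gradient descent on two conflicting regression "tasks", either one task after the other (blocked) or alternating every step (interleaved).

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy "tasks": the same inputs must map to different linear targets.
w_a, w_b = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def batch(task_w):
    """Sample a batch of inputs and the current task's targets."""
    x = rng.normal(size=(32, 2))
    return x, x @ task_w

def train(schedule, lr=0.1, steps=200):
    """Train one shared weight vector; `schedule` picks the task per step."""
    w = np.zeros(2)
    for t in range(steps):
        x, y = batch(schedule(t, steps))
        grad = x.T @ (x @ w - y) / len(x)  # mean-squared-error gradient
        w -= lr * grad
    return w

blocked = train(lambda t, n: w_a if t < n // 2 else w_b)      # all A, then all B
interleaved = train(lambda t, n: w_a if t % 2 == 0 else w_b)  # alternate A/B

def task_error(w, target):
    """Distance of the learned weights from a task's true weights."""
    return np.linalg.norm(w - target)

# Blocked training ends tuned to task B, having overwritten task A;
# interleaved training settles between the two targets, serving both.
```

In this sketch the blocked model finishes essentially at task B's solution (near-zero error on B, large error on A), while the interleaved model ends roughly midway between the two targets, with moderate error on both: a minimal picture of why sequential presentation produces the catastrophic interference the excerpt describes, whereas interleaving does not.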