2018
DOI: 10.1038/s41586-018-0102-6

Vector-based navigation using grid-like representations in artificial agents

Abstract: Deep neural networks have achieved impressive successes in fields ranging from object recognition to complex games such as Go. Navigation, however, remains a substantial challenge for artificial agents, with deep neural networks trained by reinforcement learning failing to rival the proficiency of mammalian spatial behaviour, which is underpinned by grid cells in the entorhinal cortex. Grid cells are thought to provide a multi-scale periodic representation that functions as a metric for coding space and is cr…
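The abstract's description of grid cells as a multi-scale periodic code can be made concrete with the standard idealisation of a single grid cell's rate map as a sum of three plane waves oriented 60° apart. The sketch below is illustrative only (not taken from the paper); the grid spacing, orientation and rectification are arbitrary assumptions.

```python
# Illustrative sketch (not from the paper): the standard idealisation of a grid
# cell's spatial rate map as a sum of three plane waves 60 degrees apart,
# producing the periodic hexagonal firing pattern the abstract refers to.
import numpy as np

scale, orientation = 0.3, 0.0                        # grid spacing (m) and rotation (rad); assumed values
xs, ys = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))

rate = np.zeros_like(xs)
for k in range(3):
    theta = orientation + k * np.pi / 3              # three wave axes at 60-degree offsets
    kx, ky = np.cos(theta), np.sin(theta)
    rate += np.cos(4 * np.pi / (np.sqrt(3) * scale) * (kx * xs + ky * ys))

rate = np.maximum(rate, 0) ** 2                      # simple rectification to isolate discrete firing fields
```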

Cited by 518 publications (601 citation statements)
References 43 publications
“…This form of eigendecomposition is similar to other dimensionality reduction techniques that have been used to generate grid cells from populations of idealised place cells (Dordek, Soudry, Meir, & Derdikman, 2016). Previously, low-dimensional encodings such as these have been shown to accelerate learning and facilitate vector-based navigation (Gustafson & Daw, 2011; Banino et al., 2018).…”
Section: Discussion (mentioning)
confidence: 96%
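As a concrete illustration of the eigendecomposition approach described in the excerpt above, the sketch below applies plain PCA to an idealised population of Gaussian place fields and projects the leading components back into space, in the spirit of Dordek et al. (2016). The arena size, number of place cells and field width are assumptions, and truly hexagonal (rather than merely periodic) components require a non-negativity constraint that this plain-PCA sketch omits.

```python
# Minimal sketch (not the cited authors' code): grid-like spatial maps from
# eigendecomposition (PCA) of an idealised place-cell population.
import numpy as np

rng = np.random.default_rng(0)
n_place, grid_size, sigma = 400, 40, 3.0          # place cells, arena bins, field width (assumed)

# Gaussian place fields centred at random arena locations.
xs, ys = np.meshgrid(np.arange(grid_size), np.arange(grid_size))
centres = rng.uniform(0, grid_size, size=(n_place, 2))
fields = np.exp(-((xs[None] - centres[:, 0, None, None]) ** 2 +
                  (ys[None] - centres[:, 1, None, None]) ** 2) / (2 * sigma ** 2))

# Treat each arena location as one sample of the population vector and run PCA.
X = fields.reshape(n_place, -1).T                 # (locations, place cells)
X -= X.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)

# Project the principal components back into space; the leading maps show
# periodic, grid-like structure.
component_maps = (X @ Vt.T).T.reshape(-1, grid_size, grid_size)
print(component_maps.shape)                       # (n_components, 40, 40)
```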
“…We employ the most basic and widely used training rule for neural networks: stochastic gradient descent with backpropagation (through time). While this algorithm is not biologically plausible in its simplest form, networks trained with it can still exhibit characteristics that resemble dynamics in the brain [38, 53, 4, 41, 5].…”
Section: Introduction (mentioning)
confidence: 99%
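For concreteness, here is a minimal sketch of the training setup the excerpt above describes: a small recurrent network unrolled over a trajectory and optimised by stochastic gradient descent, with backpropagation through time handled by the framework's autograd. The 2-D velocity-to-position (path-integration) task, layer sizes and learning rate are assumptions, not the cited papers' exact configurations.

```python
# Minimal sketch (assumed setup): an RNN trained by SGD with backpropagation
# through time to integrate 2-D velocity inputs into position.
import torch
import torch.nn as nn

torch.manual_seed(0)
seq_len, batch, hidden = 50, 32, 128

rnn = nn.RNN(input_size=2, hidden_size=hidden, batch_first=True)
readout = nn.Linear(hidden, 2)
opt = torch.optim.SGD(list(rnn.parameters()) + list(readout.parameters()), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(200):
    vel = torch.randn(batch, seq_len, 2) * 0.1        # random velocity trajectories
    pos = torch.cumsum(vel, dim=1)                    # ground-truth integrated position
    h, _ = rnn(vel)                                   # unroll over the sequence
    loss = loss_fn(readout(h), pos)                   # BPTT happens inside backward()
    opt.zero_grad()
    loss.backward()
    opt.step()
```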
“…If this could be achieved, then it would be intriguing to investigate what kinds of coding mechanisms naturally emerge when a recurrent network of sigma‐chi neurons is trained to perform path integration. Phenomena such as oscillatory rhythms that support dial coding might emerge spontaneously from the training of such a network, in a manner similar to the way that grid cells emerge spontaneously when a network of linear neurons is trained to perform path integration (Banino et al., 2018). The emergent firing properties and connectivity patterns in a trained network of recurrently connected sigma‐chi neurons might bear closer resemblance to real hippocampal and entorhinal networks than those that emerge from networks of linear neurons.…”
Section: Discussion (mentioning)
confidence: 99%
“…Reservoir computing models of path integration are recurrent neural networks trained from example data (via gradient descent methods) to convert time‐varying velocity signals into time‐varying position signals (Abbott, DePasquale, & Memmesheimer; Denève & Machens). It has recently been shown that neurons with periodic spatial tuning—similar to entorhinal grid cells—can emerge spontaneously in a nonspiking recurrent network trained to perform spatial path integration (Banino et al., 2018). Like attractor models, most reservoir computing models of path integration use recurrent networks composed of linear neurons, so the inputs and outputs to these networks encode velocity and position signals solely as vectors of neural firing rates (not spike train correlations).…”
Section: Discussion (mentioning)
confidence: 99%
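A minimal reservoir-style sketch of the path-integration setup described above: a fixed random recurrent network is driven by a velocity signal, and only a linear readout is fit to report the integrated position. The reservoir size, leak, spectral radius and the ridge-regression readout (used here for brevity in place of the gradient-based training the excerpt cites) are all assumptions.

```python
# Minimal echo-state-style sketch: fixed random reservoir driven by velocity,
# linear readout fit to report position (path integration).
import numpy as np

rng = np.random.default_rng(0)
n_res, T, dt = 500, 2000, 0.1                         # reservoir size, steps, time step (assumed)

W_in = rng.normal(scale=0.5, size=(n_res, 2))         # velocity input weights
W = rng.normal(scale=1.0 / np.sqrt(n_res), size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))       # scale spectral radius below 1

vel = rng.normal(scale=0.1, size=(T, 2))              # random 2-D velocity signal
pos = np.cumsum(vel * dt, axis=0)                     # target: integrated position

states = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = (1 - dt) * x + dt * np.tanh(W @ x + W_in @ vel[t])   # leaky reservoir update
    states[t] = x

# Fit the linear readout by ridge regression (L2-regularised least squares).
lam = 1e-3
W_out = np.linalg.solve(states.T @ states + lam * np.eye(n_res), states.T @ pos)
print(np.mean((states @ W_out - pos) ** 2))           # readout training error
```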