2019
DOI: 10.1038/s41593-019-0548-3

Context-dependent representations of objects and space in the primate hippocampus during virtual navigation

Cited by 88 publications (115 citation statements)
References 54 publications

“…We have also started using this method to differentiate gaze, head, and reach transformations in frontal cortex during coordinated eye‐head‐hand reaches (Arora et al., 2019; Nacher et al., 2019). There is no reason not to take this further afield, such as the analysis of activity in areas involved in spatial navigation and spatial memory, including the hippocampus and entorhinal cortex, against ego‐ and allocentric models during complex tasks such as natural viewing and free‐moving navigation (Gulli et al., 2020; Meister & Buffalo, 2018).…”
Section: Theoretical Implications: A New Conceptual Model for Gaze Co… (mentioning)
confidence: 99%

“…3) Place Cells, PC in Fig. 1. A hallmark of spatial navigation, these cells are commonly found in the hippocampal CA1 region (reviewed in Danjo, 2019); they fire when an animal is in a specific spatial location, and they represent not just space but also contextual information (Gulli et al., 2020). In our network, they are responsible for generating a signal when an object is observed with a specific head direction (HD); this output is used to control robot movement.…”
Section: Results (mentioning)
confidence: 99%
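The mechanism described in this excerpt, a unit that fires only when a particular object is observed from a particular head direction, with that output gating movement, can be illustrated with a minimal sketch. Everything below (class names, the Gaussian tuning, the threshold) is a hypothetical illustration of the idea, not the cited model's actual implementation.

```python
# Minimal sketch of a "place cell"-like unit tuned to an object/head-direction
# conjunction, whose output gates robot movement. All names, the Gaussian
# tuning, and the threshold are hypothetical illustrations.
import math

class ConjunctivePlaceUnit:
    def __init__(self, preferred_object, preferred_hd_deg, hd_sigma_deg=20.0):
        self.preferred_object = preferred_object  # object identity the unit is tuned to
        self.preferred_hd = preferred_hd_deg      # preferred head direction (degrees)
        self.hd_sigma = hd_sigma_deg              # Gaussian tuning width (degrees)

    def activation(self, observed_object, head_direction_deg):
        """Return firing in [0, 1]: nonzero only for the preferred object,
        falling off with angular distance from the preferred head direction."""
        if observed_object != self.preferred_object:
            return 0.0
        # Smallest signed angular difference, in degrees
        delta = (head_direction_deg - self.preferred_hd + 180.0) % 360.0 - 180.0
        return math.exp(-0.5 * (delta / self.hd_sigma) ** 2)

def motor_command(units, observed_object, head_direction_deg, threshold=0.5):
    """Crude readout: advance if any unit's activation crosses the threshold."""
    if any(u.activation(observed_object, head_direction_deg) > threshold for u in units):
        return "advance"
    return "search"

# Example: a unit tuned to a landmark seen while facing roughly north (0 deg)
units = [ConjunctivePlaceUnit("landmark_A", preferred_hd_deg=0.0)]
print(motor_command(units, "landmark_A", head_direction_deg=10.0))  # advance
print(motor_command(units, "landmark_B", head_direction_deg=10.0))  # search
```

The Gaussian falloff over head direction is just one common way to model directional tuning; any bump-shaped tuning curve would play the same role in this gating scheme.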
“…c) Complex scenes, such as those including multiple known objects and contextual cues, cannot be properly processed with the current implementation; they require a mechanism that binds together object and context information, which is known to operate in the hippocampus to represent a scene (see, e.g., Gulli et al., 2020); this aspect was outside the scope of this work.…”
Section: Discussion (mentioning)
confidence: 99%

“…An X maze is a double-ended version of the classic Y-choice maze (similar to the T-choice maze) from the rodent literature (Biggan et al., 1991; Botwinick et al., 1963; Ingles et al., 1993; Redish, 2016). It has been shown that spatial tasks in the X maze are associated with hippocampal activation (Gulli et al., 2020). To examine SWR distribution in this task, we divided the maze into zones (Figure 1B).…”
Section: System Setup (mentioning)
confidence: 99%
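For concreteness, the zone-based parcellation mentioned in this excerpt amounts to a lookup from position to zone, with sharp-wave ripple (SWR) events tallied per zone. The zone names, boundaries, and event format below are assumed placeholders, not the cited study's actual parcellation.

```python
# Hypothetical sketch: assign maze positions to named zones and tally
# SWR events per zone. Zone names, boundaries, and the event format are
# illustrative assumptions, not the cited study's definitions.
from collections import Counter

# Each zone maps to an axis-aligned bounding box: (x_min, x_max, y_min, y_max)
ZONES = {
    "arm_NE": (0.0, 1.0, 0.0, 1.0),
    "arm_NW": (-1.0, 0.0, 0.0, 1.0),
    "arm_SW": (-1.0, 0.0, -1.0, 0.0),
    "arm_SE": (0.0, 1.0, -1.0, 0.0),
}

def zone_of(x, y):
    """Return the zone containing (x, y), or None if outside all zones."""
    for name, (x0, x1, y0, y1) in ZONES.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None

def swr_counts_by_zone(swr_events):
    """swr_events: iterable of (x, y) positions at which an SWR was detected."""
    counts = Counter()
    for x, y in swr_events:
        z = zone_of(x, y)
        if z is not None:
            counts[z] += 1
    return counts

print(swr_counts_by_zone([(0.5, 0.5), (0.2, 0.9), (-0.3, -0.7)]))
# Counter({'arm_NE': 2, 'arm_SW': 1})
```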
“…The structure of the associative learning task has been described elsewhere (Gulli et al., 2020). Briefly, the NHP needed to associate the context on the maze walls with a hierarchy of visible target colours and choose the target associated with the highest reward. For our analysis, a trial was "correct" if the target associated with the higher reward was chosen.…”
Section: Associative Learning Task (mentioning)
confidence: 99%
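The correctness rule quoted above reduces to a small lookup: each context defines a reward ordering over the visible target colours, and a trial counts as correct when the chosen colour tops that ordering. The context names, colours, and reward magnitudes in this sketch are hypothetical placeholders.

```python
# Hypothetical sketch of the trial-correctness rule described above:
# each context defines a reward hierarchy over visible target colours,
# and a trial is "correct" if the chosen colour carries the highest reward.
# Context names, colours, and reward magnitudes are illustrative only.
REWARD_HIERARCHY = {
    "wood_context":  {"purple": 3, "cyan": 1},
    "steel_context": {"purple": 1, "cyan": 3},
}

def is_correct_trial(context, visible_targets, chosen_target):
    """A trial is correct if the chosen colour has the highest reward
    among the colours actually visible on this trial."""
    rewards = REWARD_HIERARCHY[context]
    best = max(visible_targets, key=lambda colour: rewards[colour])
    return chosen_target == best

print(is_correct_trial("wood_context", ["purple", "cyan"], "purple"))   # True
print(is_correct_trial("steel_context", ["purple", "cyan"], "purple"))  # False
```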