2019
DOI: 10.1038/s41598-019-42735-4
Facilitation of allocentric coding by virtue of object-semantics

Abstract: In the field of spatial coding it is well established that we mentally represent objects for action not only relative to ourselves, egocentrically, but also relative to other objects (landmarks), allocentrically. Several factors facilitate allocentric coding, for example, when objects are task-relevant or constitute stable and reliable spatial configurations. What is unknown, however, is how object-semantics facilitate the formation of these spatial configurations and thus allocentric coding. Here we demonstra…

Cited by 10 publications (9 citation statements)
References 42 publications
“…These results support previous research on allocentric coding using static scenes that showed that humans encode objects for action to some extent relative to contextual cues in the environment. The allocentric weights we found here are quite comparable to previous virtual reality studies varying between approximately 0.10 and 0.50 (Karimpur et al. 2019; Klinghammer et al. 2016). More importantly, these numbers were found despite the change in response mode.…”
Section: Discussion (supporting)
confidence: 90%
See 1 more Smart Citation
“…These results support previous research on allocentric coding using static scenes that showed that humans encode objects for action to some extent relative to contextual cues in the environment. The allocentric weights we found here are quite comparable to previous virtual reality studies varying between approximately 0.10 and 0.50 (Karimpur et al 2019 ; Klinghammer et al 2016 ). More importantly, these numbers were found despite the change in response mode.…”
Section: Discussionsupporting
confidence: 90%
“…Reaching end points systematically deviated in the direction of object shift, suggesting the use of allocentric information. Allocentric coding was found to be facilitated when the objects were task relevant (Fiehler et al. 2014; Klinghammer et al. 2015), coherently shifted (Klinghammer et al. 2017) and semantically similar (Karimpur et al. 2019). While these studies advanced our understanding of potential factors facilitating allocentric coding, little is known about the generalizability of these results.…”
Section: Introduction (mentioning)
confidence: 99%
“…Many previous studies have sought to measure shape similarity for both familiar and unfamiliar objects [38,40,44–53]. Despite this, the representation of shape in the human visual system remains elusive, and the basis for shape similarity judgments remains unclear.…”
Section: Discussion (mentioning)
confidence: 99%
“…In only two studies, these findings were extended into visual space by means of virtual reality (Karimpur, Morgenstern, & Fiehler, 2019; Klinghammer, Schütz, Blohm, & Fiehler, 2016). The results imply that allocentric coding is comparable in visual and pictorial space.…”
Section: Introduction (mentioning)
confidence: 89%