2019
DOI: 10.1038/s41598-019-48379-8

No single, stable 3D representation can explain pointing biases in a spatial updating task

Abstract: People are able to keep track of objects as they navigate through space, even when objects are out of sight. This requires some kind of representation of the scene and of the observer’s location, but the form this might take is debated. We tested the accuracy and reliability of observers’ estimates of the visual direction of previously-viewed targets. Participants viewed four objects from one location, with binocular vision and small head movements; then, without any further sight of the targets, they walked to …

Cited by 8 publications (11 citation statements)
References 59 publications
“…A mix of these two, e.g., a world-based, laser-supplemented task as a calibration method and a reproduction with an unclear reference system can create large systematic errors. A poor separation of different solution strategies for a pointing task might explain the difficulty of 3D-representation of the performance of participants in other studies as well [30].…”
Section: Problems Of An Earlier Approach (mentioning)
confidence: 99%
“…While there is evidence that this updating is achieved in the absence of visual feedback [20], at least to some extent [21], there are no detailed proposals for transferring firing rates wholesale to an area with a different coordinate frame as illustrated in figure 1b and as described for the case of head rotation by Byrne et al [19]. One possible simplification is to update only a few objects (discussed by [20,22–24]) rather than carrying out a wholesale transformation of all the firing rates that describe the visual scene in retinotopic coordinates.…”
Section: 3D Coordinate Transformations In a Moving Observer (mentioning)
confidence: 99%
“…It is not sufficient simply to have a separate mapping for each retinotopic location and each preferred disparity of V1 neurons: there also needs to be a different mapping for each direction of movement the observer could make. While there is evidence that this updating is achieved in the absence of visual feedback [20], at least to some extent [21], there are no detailed proposals for transferring firing rates wholesale to an area with a different coordinate frame as illustrated in figure 1b and as described for the case of head rotation by Byrne et al [19].…”
Section: Three-dimensional Coordinate Transformations In a Moving Observer (mentioning)
confidence: 99%
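
The simplification described in these quotations (updating the remembered coordinates of just a few objects as the observer moves, rather than remapping the entire retinotopic representation) can be sketched in a few lines. What follows is a minimal illustration, not the cited authors' model: the coordinate convention, the function names update_objects and visual_directions, and the example numbers are assumptions made here. The sketch applies a rigid transform, the observer's translation plus a head rotation about the vertical axis, to a handful of stored 3D object positions and reads off their new visual directions, leaving everything else in the scene untouched.

import numpy as np

def update_objects(objects_xyz, translation, yaw_deg):
    """Re-express a few remembered object positions in the observer's new
    head-centred frame after a translation and a yaw (head) rotation.

    Illustrative convention: x right, y up, z straight ahead.
    objects_xyz : (N, 3) positions in the old head-centred frame
    translation : (3,) observer displacement, old-frame coordinates
    yaw_deg     : head rotation about the vertical (y) axis, in degrees
    """
    theta = np.radians(yaw_deg)
    # Rotation that maps old-frame coordinates into the new head frame.
    rot = np.array([[np.cos(theta), 0.0, -np.sin(theta)],
                    [0.0,           1.0,  0.0          ],
                    [np.sin(theta), 0.0,  np.cos(theta)]])
    # Subtract the observer's displacement, then rotate into the new frame.
    return (objects_xyz - translation) @ rot.T

def visual_directions(objects_xyz):
    """Azimuth (degrees) of each object relative to straight ahead (+z)."""
    return np.degrees(np.arctan2(objects_xyz[:, 0], objects_xyz[:, 2]))

# Hypothetical example: four remembered targets; the observer walks 2 m
# forward and 1 m to the right, then turns the head by 30 degrees.
targets = np.array([[-1.0, 0.0, 4.0],
                    [ 0.5, 0.2, 3.0],
                    [ 2.0, 0.0, 5.0],
                    [-2.5, 0.1, 6.0]])
updated = update_objects(targets, translation=np.array([1.0, 0.0, 2.0]), yaw_deg=-30.0)
print(visual_directions(updated))

Only the handful of tracked targets is transformed here, which is the essence of the "update only a few objects" simplification the quoted passages describe.
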
“…Yet, there is an ongoing need to design visualizations that carry rich insight without exceeding the cognitive and ergonomic capabilities of the human user. There are diverging findings around the human abilities to process 2D versus 3D visualizations (Amini et al 2014; Cockburn 2004; Cockburn and McKenzie 2002; Oulasvirta, Estlander, and Nurminen 2009; Seipel 2013; Vuong, Fitzgibbon, and Glennerster 2019). Nevertheless, augmented, and virtual reality technologies need to rely on 2D and 3D spaces to communicate an environment and conditions of a scenario.…”
Section: Introduction (mentioning)
confidence: 99%