2019
DOI: 10.1016/j.visres.2019.03.009
Perceived speed of motion in depth modulates misjudgements of approaching trajectories consistently with a slow prior

Cited by 5 publications (9 citation statements); references 36 publications.
“…This study aimed to shed light on the extent to which visually evoked self-motion influences perceived lateral object speed in a naturalistic setting. This is particularly relevant as the visual system has been shown to use velocity information to extrapolate object trajectories to compensate for noisy online information and neural delays (Aguado & López-Moliner, 2019 ; Aguilar-Lleyda et al, 2018 ; Jörges & López-Moliner, 2019 ; López-Moliner et al, 2010 ). The aim of this project is thus to verify the impact of visually simulated observer motion on accuracy and precision for object speed judgments during lateral translation, which will further our understanding of flow parsing and help us understand the conditions under which flow parsing is incomplete.…”
Section: Introduction (mentioning)
confidence: 99%
“…This pattern is not consistent with participants only misjudging the direction in depth (Aguado & López-Moliner, 2019; Harris & Dean, 2003; Lages, 2006; Rokers et al, 2018; Welchman et al, 2004), that is, underestimating it, because the target position would then have been perceived as more advanced than it actually was, leading to earlier responses. However, not only is speed in depth underestimated with respect to lateral movement (Brenner, Van Den Berg, & Van Damme, 1997; Brooks & Stone, 2006; Rushton & Duke, 2009; Welchman, Lam, & Bulthoff, 2008), but its discrimination thresholds are also known to be higher relative to fronto-parallel speed (Aguado & López-Moliner, 2019), which makes it more difficult to discriminate between the different speeds for larger angles of approach. This explanation would be consistent with the fact that position variability becomes increasingly similar across speeds when MID is more present in the delayed effect condition.…”
Section: Discussion (mentioning)
confidence: 90%
“…Moreover, speed discrimination thresholds are usually higher for MID than for lateral motion (Aguado & López-Moliner, 2019; Rushton & Duke, 2009). It has also been shown that differences in perceived speed depend on which part of the retina is stimulated (Brooks & Mather, 2000; Murdison, Leclercq, Lefèvre, & Blohm, 2019), and there are well-known biases in perceived spatial trajectories (Aguado & López-Moliner, 2019; Harris & Dean, 2003; Lages, 2006; Murdison et al, 2019; Rokers, Fulvio, Pillow, & Cooper, 2018; Welchman, Tuck, & Harris, 2004) and in motion extent in depth (Lages, 2006). The variability in the perception of MID, including speed and direction, makes it worth studying the performance of response timing when dealing with objects moving in depth.…”
Section: Introduction (mentioning)
confidence: 99%
See 1 more Smart Citation
“…In principle, there are many different ways these binocular and monocular signals could be combined with each other (and with other non-visual sources of self-motion information; see Ernst & Banks, 2002; Fetsch et al, 2010; Landy et al, 1995; Perrone, 2018; van den Berg & Brenner, 1994). For example, there have been several recent attempts to model how various binocular, or monocular and binocular, motion-in-depth signals are integrated using Bayesian or maximum likelihood estimation frameworks (Allen et al, 2015; Aguado & López-Moliner, 2019; Welchman, Lam, & Bulthoff, 2008; Thompson, Rokers, & Rosenberg, 2019). While these particular studies were focussed on object-motion perception, their findings, that motion-in-depth cues combine according to cue reliability under some conditions, may generalize to self-motion perception as well.…”
Section: Introduction (mentioning)
confidence: 99%