2016
DOI: 10.1007/s00422-016-0706-6

Comparison of congruence judgment and auditory localization tasks for assessing the spatial limits of visual capture

Abstract: Vision typically has better spatial accuracy and precision than audition, and as a result often captures auditory spatial perception when visual and auditory cues are presented together. One determinant of visual capture is the amount of spatial disparity between auditory and visual cues: when disparity is small visual capture is likely to occur, and when disparity is large visual capture is unlikely. Previous experiments have used two methods to probe how visual capture varies with spatial disparity. First, c…

Cited by 10 publications (15 citation statements); references 39 publications.

“…On average, responses in the pre-disparity block showed a small but significant leftward bias of −1.6° across all experimental sessions (which is negative for the rightward arrows and positive for the leftward arrows plotted in Fig 4), as confirmed with a one-sample t-test (t(23) = −2.78, p = 0.01). A small leftward bias has been previously observed in our lab [27] and in other labs [29]. As a result, the distribution of pre-disparity bias is segregated by the direction of the session in Fig 4, which subsequently makes ΔEncoded uneven across leftward and rightward sessions.…”
Section: Results (supporting)
confidence: 82%
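The quoted analysis is a standard one-sample t-test of the per-session biases against zero. A minimal sketch of that statistic, implemented from its definition; the bias values below are invented for illustration, and only the degrees of freedom (23, i.e. 24 sessions) follow the quoted t(23):

```python
import math

def one_sample_t(samples, popmean=0.0):
    """One-sample t statistic: t = (mean - popmean) / (s / sqrt(n))."""
    n = len(samples)
    mean = sum(samples) / n
    # Sample variance with Bessel's correction (n - 1 denominator).
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    t = (mean - popmean) / math.sqrt(var / n)
    return t, n - 1  # statistic and degrees of freedom

# Hypothetical per-session pre-disparity biases in degrees (24 sessions,
# as in the quoted study); these values are fabricated for illustration.
biases = [-1.6 + 0.5 * math.sin(i) for i in range(24)]
t_stat, dof = one_sample_t(biases)
```

With 24 sessions the test has 23 degrees of freedom, matching the quoted t(23); a negative t statistic corresponds to the reported leftward (negative) bias.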
“…As shown in Fig 2, localization responses are subject to systematic inaccuracies (perfect performance would be a flat line with zero intercept). Specifically, localization responses are subject both to uniform errors across space (referred to as bias, μA) and to a tendency to overestimate target eccentricity (referred to as positive spatial gain, SG) [9, 15, 26, 27]. As a consequence, the disparity between the auditory and visual locations encoded by the subject’s sensory systems can differ from the physical disparity, as shown in Eq 1.…”
Section: Results (mentioning)
confidence: 99%
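Eq 1 itself is not reproduced in this excerpt. A minimal sketch of how a gain-and-bias model of auditory encoding makes the encoded disparity diverge from the physical one; the function names and the gain/bias values are illustrative assumptions, not the paper's fitted estimates:

```python
def encoded_auditory_location(theta_a, spatial_gain=1.2, bias=-1.6):
    """Model an auditory percept with multiplicative spatial gain (SG) and
    additive bias (muA). Parameter values here are invented for illustration."""
    return spatial_gain * theta_a + bias

def encoded_disparity(theta_v, theta_a, **kwargs):
    """Disparity between the visual location and the *encoded* auditory
    location; differs from the physical disparity theta_v - theta_a
    whenever the gain is not 1 or the bias is nonzero."""
    return theta_v - encoded_auditory_location(theta_a, **kwargs)

physical = 10.0 - 5.0                    # physical disparity: 5.0 deg
encoded = encoded_disparity(10.0, 5.0)   # 10 - (1.2 * 5 - 1.6) = 5.6 deg
```

With a spatial gain above 1 and a leftward bias, a 5° physical disparity is encoded as 5.6°, which is the kind of divergence the quoted passage attributes to Eq 1.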
“…14). Thus, our study indicates that priors about the location of events and objects in the world depend on the sense through which we perceive them, which is consistent with previous reports of discrepancies between visual and auditory spatial priors (Odegaard, Wozny, & Shams, 2015; Bosen et al., 2016). The modality-specificity of spatial priors is consistent with the variability of the common-cause prior revealed by our study; both results suggest that priors are situation-dependent.…”
Section: Tactile Biases (supporting)
confidence: 92%
“…Thus, these two measures, localization shifts and alignment judgments, provided related but distinct information. Consistently, in previous studies in which both responses were collected and modeled separately, parameter estimates diverged with the measure that was fitted (Bosen et al., 2016; Acerbi, Dokka, Angelaki, & Ma, 2018), even though qualitatively the two measures agree widely (Wallace et al., 2004; Hairston et al., 2003; Rohe & Noppeney, 2015b). In the best-fitting model, spatial-alignment responses were based on comparisons of the optimal location estimates of the two sensory signals (a decision rule not included in previous models).…”
Section: Spatial-alignment Responses (supporting)
confidence: 67%