2022
DOI: 10.1038/s41598-022-19041-7

Repeated exposure to either consistently spatiotemporally congruent or consistently incongruent audiovisual stimuli modulates the audiovisual common-cause prior

Abstract: To estimate an environmental property such as object location from multiple sensory signals, the brain must infer their causal relationship. Only information originating from the same source should be integrated. This inference relies on the characteristics of the measurements, the information the sensory modalities provide on a given trial, as well as on a cross-modal common-cause prior: accumulated knowledge about the probability that cross-modal measurements originate from the same source. We examined the p…

Cited by 13 publications (19 citation statements)
References 68 publications (105 reference statements)
“…The model assumes that observers establish two intermediate estimates of the to-be-judged feature, one based on optimal cue integration of visual and haptic sensory signals, and one based on their favourite modality, the modality they would choose if visual and haptic signals were from different sources. Analogous to previous implementations of Bayesian multisensory causal inference [5,19,20,31], these two intermediate estimates are averaged, weighted by the posterior probability of a common cause. Thus, if the inferred probability that the signals share a common cause is 1, the observer fully bases their perceptual decisions on the integrated estimate and the variance across visual–haptic trials is identical to that predicted by optimal cue integration (the denominator of the integration index).…”
Section: Discussionmentioning
confidence: 99%
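The weighted-average rule described in this excerpt can be written down compactly. The following Python sketch is illustrative only: the function name, the zero-mean Gaussian feature prior, and the default parameter values are assumptions rather than details taken from the cited papers. It combines a fused (common-cause) estimate and a favourite-modality (separate-causes) estimate, weighted by the posterior probability of a common cause.

```python
import numpy as np

def model_averaging_estimate(m_v, m_h, sigma_v, sigma_h,
                             p_common=0.5, mu_p=0.0, sigma_p=10.0,
                             favourite="visual"):
    """Single-trial estimate: average of the fused (common-cause) estimate
    and the favourite-modality (separate-causes) estimate, weighted by the
    posterior probability of a common cause. Hypothetical sketch."""
    var_v, var_h, var_p = sigma_v**2, sigma_h**2, sigma_p**2

    # Optimal cue integration: precision-weighted average of both
    # measurements and the (assumed Gaussian) prior mean.
    s_fused = ((m_v / var_v + m_h / var_h + mu_p / var_p)
               / (1 / var_v + 1 / var_h + 1 / var_p))

    # Separate-causes estimate: rely on the favourite modality alone,
    # combined with the feature prior.
    if favourite == "visual":
        s_seg = (m_v / var_v + mu_p / var_p) / (1 / var_v + 1 / var_p)
    else:
        s_seg = (m_h / var_h + mu_p / var_p) / (1 / var_h + 1 / var_p)

    # Marginal likelihoods of the measurement pair under one vs. two causes.
    d1 = var_v * var_h + var_v * var_p + var_h * var_p
    like_c1 = np.exp(-0.5 * ((m_v - m_h)**2 * var_p
                             + (m_v - mu_p)**2 * var_h
                             + (m_h - mu_p)**2 * var_v) / d1) \
        / (2 * np.pi * np.sqrt(d1))
    like_c2 = np.exp(-0.5 * ((m_v - mu_p)**2 / (var_v + var_p)
                             + (m_h - mu_p)**2 / (var_h + var_p))) \
        / (2 * np.pi * np.sqrt((var_v + var_p) * (var_h + var_p)))

    # Posterior probability of a common cause; the common-cause prior
    # p_common enters here.
    post_c1 = (like_c1 * p_common
               / (like_c1 * p_common + like_c2 * (1 - p_common)))

    # Model averaging: if post_c1 == 1 the response is based entirely on
    # the integrated estimate, as noted in the quoted passage.
    return post_c1 * s_fused + (1 - post_c1) * s_seg, post_c1
```

For example, `model_averaging_estimate(1.0, 3.0, 1.0, 2.0, p_common=0.7)` returns an estimate lying between the fused value and the visual measurement, with the blend determined by how discrepant the two measurements are and by the common-cause prior.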
“…We further assumed that the sensory signals, also called measurements, $m_{v,i}$ and $m_{h,i}$, were corrupted by Gaussian-distributed noise with variances $\sigma_v^2$ and $\sigma_h^2$. We additionally allowed the sensory signals to be biased [19,20,31], as modality-specific biases in the sensory signals are a root cause of reduced cross-modal integration effects [21].…”
Section: Methodsmentioning
confidence: 99%
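As a companion to the noise assumptions in this excerpt, here is a minimal simulation sketch (hypothetical function and parameter names, not the authors' code) that draws biased, Gaussian-corrupted visual and haptic measurements.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def simulate_measurements(s_v, s_h, sigma_v, sigma_h,
                          bias_v=0.0, bias_h=0.0, n_trials=1000):
    """Per-trial measurements: true feature value plus a modality-specific
    bias, corrupted by zero-mean Gaussian noise with standard deviations
    sigma_v and sigma_h (i.e. variances sigma_v**2 and sigma_h**2)."""
    m_v = s_v + bias_v + rng.normal(0.0, sigma_v, size=n_trials)
    m_h = s_h + bias_h + rng.normal(0.0, sigma_h, size=n_trials)
    return m_v, m_h
```

A non-zero `bias_v` or `bias_h` shifts the two measurement distributions apart even when `s_v == s_h`, which is one way modality-specific biases can reduce apparent cross-modal integration.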
“…Furthermore, the differences between the results of our two experiments suggest that the coincidence of the timing of the acquisition of each piece of information is crucial for optimal magnitude integration. Interestingly, spatiotemporal discrepancies in the acquisition of information have been shown to affect optimal integration, even in the multisensory integration of single physical quantities such as spatial location and orientation (35–37). For example, Plaisier et al (37) reported that, in the visual–haptic integration of surface orientation, optimal integration breaks down when there is a discrepancy between the visual and haptic exploration modes (i.e., the instantaneous perception of the surface orientation by touching/seeing two spots vs. sequential perception by tracing the surface).…”
Section: Discussionmentioning
confidence: 99%
“…(b) Spatiotemporal modulation of MSI by experience and learning during the task. Left, spatial information of each modality before and after experience or training [158–161]. Right, the proportion of simultaneity reports before and after the training at different stimulus-onset asynchronies (SOAs) [42,162,163].…”
Section: (A) Development/ageingmentioning
confidence: 99%