2010
DOI: 10.1371/journal.pcbi.1000871
Probability Matching as a Computational Strategy Used in Perception

Abstract: The question of which strategy is employed in human decision making has been studied extensively in the context of cognitive tasks; however, this question has not been investigated systematically in the context of perceptual tasks. The goal of this study was to gain insight into the decision-making strategy used by human observers in a low-level perceptual task. Data from more than 100 individuals who participated in an auditory-visual spatial localization task was evaluated to examine which of three plausible…
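The abstract contrasts candidate decision strategies, one of which is probability matching. A minimal sketch of how probability matching differs from the reward-maximizing strategy; the function names, posterior values, and trial count are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def maximizing(posterior):
    """Reward-maximizing strategy: always choose the most probable option."""
    return int(np.argmax(posterior))

def probability_matching(posterior):
    """Probability matching: choose each option with probability equal to its
    posterior probability, so choice frequencies mirror the posterior."""
    return int(rng.choice(len(posterior), p=posterior))

posterior = np.array([0.7, 0.3])
matched = [probability_matching(posterior) for _ in range(10_000)]
frac_option_1 = sum(matched) / len(matched)  # close to 0.3 under matching
```

A maximizer picks option 0 on every trial, whereas a probability matcher picks option 1 on roughly 30% of trials, which is the behavioral signature the study looks for.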

Cited by 212 publications (349 citation statements)
References 37 publications
“…Assuming the causal inference model of multisensory processing (Kording et al., 2007; Shams et al., 2005; Wozny et al., 2010), the largest cross-modal facilitation should be obtained when the image and the sound are highly congruent (e.g., sound and image of a barking dog), as this would result in perception of a common cause for the two and integration of the two signals, which in turn leads to the updating of the unisensory posterior by the bisensory posterior. If the image and sound are highly incongruent (e.g., sound of a dog bark paired with the image of a flower), then the brain would likely infer independent causes.…”
Section: Discussion (mentioning; confidence: 99%)
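The excerpt above hinges on the common-cause posterior: congruent cues raise it, incongruent cues lower it. A sketch of that posterior in the Gaussian form used by the causal-inference model (Kording et al., 2007); the noise and prior parameters here are illustrative defaults, not fitted values from any of the cited studies:

```python
import numpy as np

def posterior_common_cause(x_v, x_a, sigma_v=2.0, sigma_a=8.0,
                           sigma_p=15.0, p_common=0.5):
    """Posterior probability that visual (x_v) and auditory (x_a) measurements
    arose from one common source, given Gaussian sensory noise (sigma_v,
    sigma_a) and a zero-centered Gaussian spatial prior (sigma_p)."""
    var_v, var_a, var_p = sigma_v**2, sigma_a**2, sigma_p**2
    # Likelihood of the pair of measurements under a single common cause
    var_c = var_v * var_a + var_v * var_p + var_a * var_p
    like_c1 = np.exp(-0.5 * ((x_v - x_a)**2 * var_p
                             + x_v**2 * var_a
                             + x_a**2 * var_v) / var_c) / (2 * np.pi * np.sqrt(var_c))
    # Likelihood under two independent causes
    like_c2 = np.exp(-0.5 * (x_v**2 / (var_v + var_p)
                             + x_a**2 / (var_a + var_p))) \
        / (2 * np.pi * np.sqrt((var_v + var_p) * (var_a + var_p)))
    return p_common * like_c1 / (p_common * like_c1 + (1 - p_common) * like_c2)

p_congruent = posterior_common_cause(0.0, 0.0)     # cues agree
p_incongruent = posterior_common_cause(0.0, 30.0)  # cues far apart
```

Congruent measurements yield a high common-cause posterior (and hence integration), while widely discrepant measurements push the posterior toward independent causes, matching the barking-dog versus dog-bark-plus-flower contrast in the excerpt.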
“…These models have been applied to visuo-tactile integration (Ernst and Banks, 2002; Ernst and Bülthoff, 2004) to explain object perception during haptic manipulation, and to audiovisual integration to explain phenomena such as the ventriloquism effect or sound-induced visual illusions (Alais and Burr, 2004; Magnotti et al., 2013; Wozny et al., 2010; Wozny and Shams, 2011). The integration of visuo-vestibular stimuli for translational and rotational self-motion perception has also been studied by several investigators (Fetsch et al., 2012, 2013; Prsa et al., 2012, 2015).…”
Section: Models of Multisensory Integration and BSC (mentioning; confidence: 99%)
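The cue-integration models cited above (e.g., Ernst and Banks, 2002) combine estimates weighted by reliability, i.e., inverse variance. A minimal sketch of that maximum-likelihood combination rule; the variable names and example values are illustrative:

```python
def fuse(x_v, sigma_v, x_a, sigma_a):
    """Reliability-weighted (maximum-likelihood) cue combination: each cue is
    weighted by its relative reliability (inverse variance), and the fused
    estimate has lower variance than either single cue alone."""
    w_v = sigma_a**2 / (sigma_v**2 + sigma_a**2)  # weight on the visual cue
    fused = w_v * x_v + (1.0 - w_v) * x_a
    fused_var = (sigma_v**2 * sigma_a**2) / (sigma_v**2 + sigma_a**2)
    return fused, fused_var

# A reliable visual cue (sigma 1) dominates a noisy auditory cue (sigma 3):
estimate, variance = fuse(0.0, 1.0, 10.0, 3.0)
```

The reduction in fused variance relative to the best single cue is the standard behavioral test for this kind of integration in the studies listed.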
“…Beyond altering the coarsening dynamics [27,28], the social nonlinearity can qualitatively change the equilibrium magnetization of the system. First, in contrast with the mean-field model, the boundary between the +1- or −1-dominated regions of the parameter space depends on the concentration of unbiased individuals.…”
(mentioning; confidence: 99%)