Oxford Handbooks Online 2014
DOI: 10.1093/oxfordhb/9780199686858.013.007

Bayesian Models of Perceptual Organization

Abstract: One of the central ideas in the study of perception is that the proximal stimulus (the pattern of energy that impinges on sensory receptors, such as the visual image) is not sufficient to specify the actual state of the world outside (the distal stimulus). That is, while the image of your grandmother on your retina might look like your grandmother, it also looks like an infinity of other arrangements of matter, each having a different combination of three-dimensional structures, surface properties, colour proper…

Cited by 19 publications (19 citation statements)
References 65 publications
“…The close connection between Occam's razor and Bayes’ rule can be appreciated most directly simply by observing that the hypothesis with the highest posterior is, ipso facto, also the hypothesis with the minimum DL in Shannon's sense. In a Bayesian framework, the posterior belief in hypothesis H after considering data D, notated p(H|D), is proportional to the product of its prior p(H) and its likelihood p(D|H): p(H|D) ∝ p(H) p(D|H) (see Refs for tutorial introductions). The hypothesis that maximizes this quantity, sometimes called the maximum a posteriori or MAP, is the hypothesis that is the most probable, in that it maximizes the trade-off between prior plausibility and fit to the observed data.…”
Section: Mathematical Definitions of Simplicity and Complexity
mentioning, confidence: 99%
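To make the posterior-versus-description-length point concrete, here is a minimal Python sketch (not taken from the cited paper; the three-hypothesis space, priors, and likelihoods are made-up numbers) showing that the hypothesis maximizing p(H) p(D|H) is also the one minimizing the Shannon description length -log2 p(H) p(D|H):

```python
import numpy as np

# Hypothetical three-hypothesis space; the priors p(H) and likelihoods p(D|H)
# below are invented purely for illustration.
priors = np.array([0.6, 0.3, 0.1])        # p(H)
likelihoods = np.array([0.2, 0.5, 0.9])   # p(D | H) for some observed data D

unnormalized = priors * likelihoods        # proportional to p(H | D)
posterior = unnormalized / unnormalized.sum()

# Shannon description length, in bits, associated with each hypothesis.
description_length = -np.log2(unnormalized)

map_hypothesis = int(np.argmax(posterior))
min_dl_hypothesis = int(np.argmin(description_length))

print(posterior)
print(map_hypothesis, min_dl_hypothesis)   # identical: highest posterior = minimum DL
```

Normalizing by the sum over hypotheses does not change which hypothesis wins, which is why the MAP can be read off the unnormalized product directly.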
“…The goal of these models, broadly speaking, is to quantify the degree of belief that ought to be assigned to each potential interpretation of image data. In these models each possible interpretation c_j ∈ C = {c_1, …, c_J} of an image D is associated with a posterior probability p(C|D), which according to Bayes’ rule is proportional to the product of a prior probability p(C) and a likelihood p(D|C) (for introductions see Feldman, 2014; Kersten et al, 2004; Mamassian & Landy, 2002). Similarly, we propose a framework in which perceptual grouping can be viewed as a rational Bayesian procedure by regarding it as a kind of mixture estimation (see also Feldman et al, 2014; Froyen et al, under review).…”
Section: The Computational Framework
mentioning, confidence: 99%
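As a concrete, entirely hypothetical illustration of this kind of computation, the sketch below assigns posterior probabilities to two candidate groupings of some made-up 1-D dot positions, using a fixed-width Gaussian likelihood per group and a simplicity-favouring prior; none of these modelling choices come from the cited work.

```python
import numpy as np
from scipy.stats import norm

# Made-up 1-D "dot" positions standing in for the image data D.
dots = np.array([-1.1, -0.9, -1.0, 1.9, 2.0, 2.1])

SIGMA = 0.5  # assumed, fixed group width

def lik_one_group(x):
    # p(D | c_1): all dots generated by a single Gaussian group.
    return norm.pdf(x, loc=x.mean(), scale=SIGMA).prod()

def lik_two_groups(x, split=3):
    # p(D | c_2): dots split into two Gaussian groups at a fixed index.
    left, right = x[:split], x[split:]
    return (norm.pdf(left, loc=left.mean(), scale=SIGMA).prod() *
            norm.pdf(right, loc=right.mean(), scale=SIGMA).prod())

likelihoods = np.array([lik_one_group(dots), lik_two_groups(dots)])
priors = np.array([0.7, 0.3])   # made-up prior favouring the simpler grouping

posterior = priors * likelihoods
posterior /= posterior.sum()    # p(c_j | D) via Bayes' rule

print(dict(one_group=posterior[0], two_groups=posterior[1]))
```

With these numbers the two-group interpretation dominates despite its lower prior, because its likelihood is vastly higher, which is the prior-versus-fit trade-off the citation statement describes.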
“…Such views are often developed in a Bayesian framework, but we needn't assume that perceptual systems are optimal. For an introduction to Bayesian modeling of perception, see Feldman (forthcoming). The Bayesian tide in cognitive science is not without its critics.…”
mentioning, confidence: 99%