2002
DOI: 10.1109/tpami.2002.1046170
Approximate Bayes factors for image segmentation: the Pseudolikelihood Information Criterion (PLIC)

Abstract: We propose a method for choosing the number of colors or true gray levels in an image; this allows fully automatic segmentation of images. Our underlying probability model is a hidden Markov random field. Each number of colors considered is viewed as corresponding to a statistical model for the image, and the resulting models are compared via approximate Bayes factors. The Bayes factors are approximated using BIC (Bayesian Information Criterion), where the required maximized likelihood is approximated…
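The selection principle in the abstract — treat each candidate number of colors as a statistical model and compare models by an approximate Bayes factor via BIC — can be sketched in a few lines. This is not the paper's PLIC (which approximates the hidden-Markov-random-field likelihood by a pseudolikelihood); it is the simpler independent-Gaussian-mixture baseline that the paper improves on. The function name and data are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def select_num_colors(pixels, max_k=6, seed=0):
    """Illustrative sketch: pick the number of mixture components
    (candidate 'colors') by minimizing BIC over independent Gaussian
    mixtures fit to the pixel intensities. Minimizing BIC approximates
    maximizing the posterior model probability (Bayes factor comparison)."""
    x = np.asarray(pixels, dtype=float).reshape(-1, 1)
    best_k, best_bic = None, np.inf
    for k in range(1, max_k + 1):
        gm = GaussianMixture(n_components=k, random_state=seed).fit(x)
        bic = gm.bic(x)  # -2 * max log-likelihood + (#params) * log(n)
        if bic < best_bic:
            best_k, best_bic = k, bic
    return best_k

# Synthetic gray-level data: two well-separated intensity modes,
# so BIC should prefer k = 2.
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(50, 5, 500), rng.normal(200, 5, 500)])
print(select_num_colors(pixels))
```

The independent-mixture BIC ignores spatial dependence between neighboring pixels, which is precisely the deficiency the hidden-Markov-random-field model and pseudolikelihood approximation in the paper address.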

Cited by 50 publications (34 citation statements)
References 30 publications
“…The terms of involving the priors and the posteriors (and, therefore, and ) are (29) We show now the derivation involving the posteriors. The terms of (29) involving only are (30) which, ignoring terms independent of , reads (31) where (32) The mixture appears in the last term of (31) for all pixels that are neighbors of a pixel . To make the M-step tractable, we bound these terms using Jensen's inequality (33) Using (31) and (33) and noting that , we finally get (ignoring again terms independent of ) (34) where the distribution is (35) An identical derivation holds for the priors producing a term (36) In total, the terms of [actually a lower bound of it since we employed (33)] involving the prior and the posterior are…”
Section: M-step (mentioning)
confidence: 99%
“…Additionally, the true value of the image label is not known. In the following experiments, we use the same images as in [21], [30], and [31], where was estimated using (approximations of) the Bayesian information criterion (BIC). Fig.…”
Section: Graylevel Images (mentioning)
confidence: 99%
“…Hence, we consider an unsupervised image segmentation based on the BIC. The BIC is a commonly used criterion, and many works have applied it to model selection in the context of image segmentation [3,7,8]. The BIC for an independent Gaussian mixture model was found to be less efficient in the context of MRF-based segmentation, and the Pseudolikelihood Information Criterion (PLIC), a mean-field-based form of the BIC, was proposed in Ref.…”
Section: Introduction (mentioning)
confidence: 99%
“…The BIC for an independent Gaussian mixture model was found to be less efficient in the context of MRF-based segmentation, and the Pseudolikelihood Information Criterion (PLIC), which is a mean-field-based form of the BIC, was proposed in Ref. [7]. Later a more accurate mean-field-based form of the BIC was derived by using the Gibbs-Bogoliubov-Feynman (GBF) inequality in Ref.…”
Section: Introduction (mentioning)
confidence: 99%
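The excerpts above turn on the pseudolikelihood idea behind PLIC: the intractable Markov-random-field likelihood is replaced by the product, over pixels, of each label's conditional probability given its neighbors. A minimal sketch for a Potts model on a 4-neighbor grid follows; the function name and the two test labelings are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def potts_pseudo_loglik(labels, beta, K):
    """Log pseudolikelihood of a Potts labeling on a 4-neighbor grid:
    sum_i log p(z_i | z_neighbors), where
    p(z_i = k | neighbors) ∝ exp(beta * #{neighbors of i with label k})."""
    z = np.asarray(labels)
    H, W = z.shape
    total = 0.0
    for i in range(H):
        for j in range(W):
            # Count how many 4-neighbors carry each of the K labels.
            counts = np.zeros(K)
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W:
                    counts[z[ni, nj]] += 1
            logits = beta * counts
            # Log of the normalized conditional probability of the
            # observed label at pixel (i, j).
            total += logits[z[i, j]] - np.log(np.exp(logits).sum())
    return total

# A smooth labeling should score higher than a striped one, since the
# Potts model with beta > 0 rewards agreement with neighbors.
smooth = np.zeros((8, 8), dtype=int)
striped = np.arange(64).reshape(8, 8) % 2
print(potts_pseudo_loglik(smooth, beta=1.0, K=2) >
      potts_pseudo_loglik(striped, beta=1.0, K=2))
```

Because the pseudolikelihood factorizes over pixels, plugging its maximized value into the BIC penalty term gives a tractable criterion — the mean-field and GBF variants discussed in the excerpts refine this same approximation.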
“…Model-based clustering (McLachlan 1982; McLachlan and Basford 1988; Banfield and Raftery 1993; Fraley and Raftery 2002; McLachlan and Peel 2000) is one of the more recent developments, and has shown very good performance in a number of fields (Mukherjee, Feigelson, Babu, Murtagh, Fraley, and Raftery 1998; Dasgupta and Raftery 1998; Yeung, Fraley, Murua, Raftery, and Ruzzo 2001; Wang and Raftery 2002), including image analysis (McLachlan, Ng, Galloway, and Wang 1996; Campbell, Fraley, Murtagh, and Raftery 1997; Campbell, Fraley, Stanford, Murtagh, and Raftery 1999; Stanford and Raftery 2002; Wehrens, Simonetti, and Buydens 2002). As implemented in these applications, and in available software (McLachlan, Peel, Basford, and Adams 1999), model-based clustering consists of fitting a mixture of multivariate normal distributions to a data set by maximum likelihood using the EM algorithm, possibly with geometric constraints on the covariance matrices, and an additional component to allow for outliers or noise.…”
Section: Introduction (mentioning)
confidence: 99%