1989
DOI: 10.1016/0031-3203(89)90011-3
An iterative Gibbsian technique for reconstruction of m-ary images

Cited by 121 publications (73 citation statements). References 10 publications.
“…Starting from a positive solution, the algorithm iterates until convergence, although the rate of convergence is slow. Considering the unsatisfactory results of maximum likelihood estimation, [3,13,21,23,25] show that the noise in the reconstructions arises while climbing the likelihood hill towards the maximum, a result which agrees with the experiments presented in [4,5]. An extension of their work was the iterative algorithm suggested by [2], which includes a penalty function controlling the smoothness of the reconstruction.…”

Section: Estimation Algorithm (supporting)
confidence: 62%
“…Previously, various authors [3][4][5] had adopted maximum likelihood approaches with only moderate success. The prior model and the data likelihood are combined to give the posterior distribution on which estimation is based.…”

Section: Introduction (mentioning)
confidence: 99%
“…The most widely used general method is the so-called "Expectation-Maximization" (EM) method [20]; however, its implementation in the hidden Markov fields context is difficult [6,20,24] and some alternative methods have been proposed [3,9,15,17,24,34,35]. In particular, one may consider Stochastic Gradient (SG [35]), whose aim is to approach the maximum of the likelihood p(y) in a stochastic manner and so remedy the difficulties encountered by EM.…”

Section: Parameter Estimation (mentioning)
confidence: 99%
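The intractability that motivates SG and the other alternatives can be made concrete: the exact E-step expectation ranges over every joint labelling of the hidden field, so an m-ary image with n pixels has m^n configurations. A minimal brute-force sketch of this blow-up for a Potts-style prior (the function names and the "beta per agreeing 4-neighbour pair" energy convention are illustrative assumptions, not taken from the cited papers):

```python
import numpy as np
from itertools import product

def num_label_configs(m, n_pixels):
    """Size of the configuration space the exact E-step must sum over."""
    return m ** n_pixels

def exact_partition(m, shape, beta):
    """Brute-force partition function of an m-ary Potts prior on a tiny
    grid (energy: beta per agreeing 4-neighbour pair). Feasible only
    because the grid is tiny -- the loop visits m**(h*w) configurations."""
    h, w = shape
    z = 0.0
    for cfg in product(range(m), repeat=h * w):
        x = np.array(cfg).reshape(h, w)
        # count agreeing vertical and horizontal neighbour pairs
        e = beta * (np.sum(x[:-1, :] == x[1:, :]) +
                    np.sum(x[:, :-1] == x[:, 1:]))
        z += np.exp(e)
    return z
```

Already for a 16x16 binary grid the sum would have 2**256 terms, which is why the works cited above turn to stochastic or approximate schemes instead of the exact expectation.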
“…A major difficulty in applying the EM algorithm to MRFs is the calculation of the conditional expectation, which is generally intractable because it requires summing over all possible configurations. Therefore, approximation techniques such as the mean field approximation (Celeux et al., 2003; Tonazzini et al., 2006; Zhang, 1992) and the pseudo-likelihood method (Chalmond, 1989; Zhang et al., 1994) are used. The EM algorithm has been extended for parameter estimation on a quadtree (Bouman and Shapiro, 1994; Laferté et al., 2000).…”

Section: Related Work (mentioning)
confidence: 99%
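The pseudo-likelihood device cited above (Chalmond, 1989; Zhang et al., 1994) sidesteps the intractable partition function by replacing the joint density with a product of per-pixel conditionals, each of which normalises over only the m local labels. A minimal sketch for a binary 4-neighbour field; the function name, the {0,1} label convention, and the interaction form are illustrative assumptions rather than any cited paper's exact model:

```python
import numpy as np

def neg_pseudo_log_likelihood(x, beta):
    """Negative pseudo-log-likelihood of a binary {0,1} image x under a
    4-neighbour Ising/Potts-style prior with interaction parameter beta.
    Each pixel contributes log p(x_s | x_N(s)), whose normaliser sums over
    just the two candidate labels -- the global partition function of the
    joint distribution never has to be computed."""
    h, w = x.shape
    nll = 0.0
    for i in range(h):
        for j in range(w):
            nbrs = []
            if i > 0:
                nbrs.append(x[i - 1, j])
            if i < h - 1:
                nbrs.append(x[i + 1, j])
            if j > 0:
                nbrs.append(x[i, j - 1])
            if j < w - 1:
                nbrs.append(x[i, j + 1])
            nbrs = np.array(nbrs)
            # local energy of each candidate label: beta per agreeing neighbour
            e = np.array([beta * np.count_nonzero(nbrs == k) for k in (0, 1)])
            # log normaliser of the per-pixel conditional (2 terms only)
            log_z = np.log(np.sum(np.exp(e)))
            nll -= e[x[i, j]] - log_z
    return nll
```

Minimising this over beta yields the maximum pseudo-likelihood estimate; for beta > 0 a smooth image scores better than a checkerboard, which is exactly the smoothing behaviour the prior is meant to encode.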