2017 IEEE International Conference on Image Processing (ICIP), 2017
DOI: 10.1109/icip.2017.8296949

Performance comparison of Bayesian iterative algorithms for three classes of sparsity enforcing priors with application in computed tomography

Abstract: The piecewise constant or homogeneous image reconstruction in the context of X-ray Computed Tomography is considered within a Bayesian approach. More precisely, the sparse transformation of such images is modelled with heavy-tailed distributions expressed as Normal variance mixture marginals. The derived iterative algorithms (via Joint Maximum A Posteriori) ha…
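
The abstract refers to iterative algorithms derived via Joint Maximum A Posteriori (JMAP) estimation in a hierarchical model where the sparse transform coefficients follow Normal variance mixtures. As a rough, hedged illustration only (not the authors' algorithm), the sketch below alternates two JMAP updates on a toy linear inverse problem with a Student-t-type prior obtained by placing an Inverse-Gamma mixing distribution on the per-coefficient variances; the forward matrix H, noise level sigma, and hyper-parameters (a, b) are made-up illustrative values.

```python
# Minimal JMAP sketch for a Normal variance mixture (Student-t type) prior.
# Illustrative only: H, sigma, and the hyper-parameters (a, b) are assumptions,
# not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

n, m = 64, 48                       # signal size, number of measurements
x_true = np.zeros(n)
x_true[rng.choice(n, 6, replace=False)] = rng.normal(0, 3, 6)  # sparse signal

H = rng.normal(size=(m, n)) / np.sqrt(m)    # toy forward operator
sigma = 0.05                                # noise standard deviation
y = H @ x_true + sigma * rng.normal(size=m)

a, b = 2.0, 1e-3                    # Inverse-Gamma hyper-parameters on the variances v_i
v = np.ones(n)                      # initial per-coefficient variances
x = np.zeros(n)

for it in range(50):
    # x-step: maximize the joint posterior in x with v fixed
    # (Gaussian likelihood plus zero-mean Gaussian prior with variances v)
    A = H.T @ H / sigma**2 + np.diag(1.0 / v)
    x = np.linalg.solve(A, H.T @ y / sigma**2)
    # v-step: maximize the joint posterior in each v_i with x fixed;
    # with an Inverse-Gamma(a, b) prior this has the closed form below
    v = (b + 0.5 * x**2) / (a + 1.5)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

The alternating structure (a linear solve for the image, a closed-form update for the latent variances) is the generic pattern of JMAP estimation under Normal variance mixture priors; the specific priors compared in the paper differ in the mixing distribution used.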

Cited by 4 publications (3 citation statements); references 15 publications.

Citation statements

“…In this context, as future work, we are investigating the causes of the relatively weak influence of the hyper-parameters and the theoretical foundation of the corresponding robust interval, extending the discussion to the same approach using sparsity-enforcing priors expressed as normal variance mixtures, but for other mixing distributions (Gamma, generalized inverse Gaussian) [66].…”
Section: Discussion
confidence: 99%
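
The statement above mentions Normal variance mixtures with other mixing distributions (Gamma, generalized inverse Gaussian). As a quick, hedged illustration of why such mixtures act as sparsity-enforcing priors, the snippet below draws per-sample variances from either mixing law and shows that the resulting marginals are markedly heavier-tailed than a plain Gaussian; the parameter values are arbitrary examples, not those of the cited works.

```python
# Sketch: Normal variance mixtures with different mixing distributions.
# Parameter values are arbitrary, chosen only to illustrate heavy tails.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200_000

# Gamma mixing on the variance
v_gamma = rng.gamma(shape=0.5, scale=2.0, size=n)
x_gamma = np.sqrt(v_gamma) * rng.standard_normal(n)

# Generalized inverse Gaussian mixing on the variance
v_gig = stats.geninvgauss.rvs(p=-0.5, b=1.0, size=n, random_state=rng)
x_gig = np.sqrt(v_gig) * rng.standard_normal(n)

x_gauss = rng.standard_normal(n)    # reference: fixed unit variance

for name, x in [("Gaussian", x_gauss), ("Gamma mixture", x_gamma),
                ("GIG mixture", x_gig)]:
    # excess kurtosis is ~0 for a Gaussian and >0 for heavy-tailed marginals
    print(f"{name:15s} excess kurtosis = {stats.kurtosis(x):.2f}")
```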
“…For the sake of keeping conjugacy between the prior and posterior distributions of u, a Gamma distribution G(u | α_k, β_k) with shape and rate parameters (α_k, β_k) is chosen for p_k(u) = p(u | z = k) = G(u | α_k, β_k). Integrating u out of (13), the resulting marginal p(x | z, Θ, K) is a heavy-tailed distribution known as the Student-t distribution [38].…”
Section: Model
confidence: 99%
“…For the sake of keeping conjugacy between the prior and posterior distributions of u, a Gamma distribution G(u | α_k, β_k) with shape and rate parameters (α_k, β_k) is chosen for p_k(u). Integrating u out of (7), the resulting marginal p(x | z, Θ, K) is a heavy-tailed distribution known as the Student-t distribution [5].…”
Section: Latent Variable Model (A. GMM)
confidence: 99%
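
Both of the preceding statements describe the same construction: a Gamma mixing distribution on the latent scale u of a conditional Gaussian yields a Student-t marginal. A small numerical check of that identity for the scalar case is sketched below (the hyper-parameter values are arbitrary): if x | u ~ N(0, 1/u) and u ~ Gamma(α, β) with shape α and rate β, then x follows a Student-t distribution with 2α degrees of freedom and scale sqrt(β/α).

```python
# Numerical check (scalar case, arbitrary hyper-parameters):
# x | u ~ N(0, 1/u), u ~ Gamma(alpha, rate=beta)
#   =>  x ~ Student-t(df=2*alpha, scale=sqrt(beta/alpha))
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
alpha, beta = 1.5, 2.0              # arbitrary shape and rate parameters
n = 100_000

u = rng.gamma(shape=alpha, scale=1.0 / beta, size=n)   # Gamma(alpha, rate=beta)
x = rng.standard_normal(n) / np.sqrt(u)                # conditional Gaussian N(0, 1/u)

df, scale = 2.0 * alpha, np.sqrt(beta / alpha)
# Kolmogorov-Smirnov test against the predicted Student-t marginal;
# a large p-value means the samples are consistent with it.
stat, pvalue = stats.kstest(x, "t", args=(df, 0.0, scale))
print(f"KS statistic = {stat:.4f}, p-value = {pvalue:.3f}")
```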