2019
DOI: 10.1088/1361-6420/ab15a3

Expectation propagation for Poisson data

Abstract: The Poisson distribution arises naturally when dealing with data involving counts, and it has found many applications in inverse problems and imaging. In this work, we develop an approximate Bayesian inference technique based on expectation propagation for approximating the posterior distribution formed from the Poisson likelihood function and a Laplace-type prior distribution, e.g., the anisotropic total variation prior. The approach iteratively yields a Gaussian approximation, and at each iteration, it updat…
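The core operation the abstract alludes to is EP's moment-matching projection. The sketch below is an illustration only, not the paper's algorithm: it treats a single scalar site with a log-link Poisson likelihood and computes the tilted moments by brute-force grid quadrature; the function name, grid settings, and example values are all invented for this example.

```python
import numpy as np

def ep_project_poisson(y, m_cav, v_cav, half_width=8.0, n_grid=2001):
    """Moment-matching ("projection") step used in EP: approximate the
    tilted density  Poisson(y | rate=exp(x)) * N(x | m_cav, v_cav)
    by the Gaussian with the same mean and variance, computed here by
    a brute-force grid quadrature (adequate for a scalar illustration)."""
    s = np.sqrt(v_cav)
    xs = np.linspace(m_cav - half_width * s, m_cav + half_width * s, n_grid)
    # Log of the tilted density up to an additive constant
    # (the log y! term of the Poisson pmf cancels after normalization).
    log_tilt = y * xs - np.exp(xs) - 0.5 * (xs - m_cav) ** 2 / v_cav
    w = np.exp(log_tilt - log_tilt.max())
    w /= w.sum()                       # discrete normalization on the grid
    mean = np.sum(w * xs)
    var = np.sum(w * (xs - mean) ** 2)
    return mean, var

# Example: observe count y=5 under rate exp(x), vague cavity N(0, 4).
m_new, v_new = ep_project_poisson(y=5, m_cav=0.0, v_cav=4.0)
```

The count pulls the Gaussian toward log 5 ≈ 1.6 and sharply shrinks its variance; in the full method this projection would be applied factor by factor, with the cavity formed by removing one site at a time from the current Gaussian approximation.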

Cited by 17 publications (17 citation statements)
References 46 publications
“…Section 4 provides background information on conditional variational autoencoder, and describes our proposed framework, including the network architecture, and the training and inference phases. In Section 5, we showcase our framework on an established medical imaging modality, positron emission tomography (PET) [45], for which uncertainty quantification has long been desired yet still very challenging to achieve [54,17,57], and confirm that the generated samples are indeed of high quality in terms of both point estimation and uncertainty quantification, when compared with several state-of-the-art benchmarks. Finally, in Section 6, we conclude the paper with additional discussions.…”
Section: Introduction
confidence: 76%
“…Section 4 provides background information on the conditional variational autoencoder and describes our proposed framework, including the network architecture and the training and inference phases. In Section 5, we showcase our framework on an established medical imaging modality, positron emission tomography (PET) [28], for which uncertainty quantification has long been desired yet is still very challenging to achieve [29][30][31], and confirm that the generated samples are indeed of high quality in terms of both point estimation and uncertainty quantification when compared with several state-of-the-art benchmarks. Finally, in Section 6, we conclude the paper with additional discussions.…”
Section: Introduction
confidence: 78%
“…More recently, learning-based approaches have been proposed. While these techniques have been successful, they still lack the capability to provide uncertainty estimates; see the work of [29][30][31][37] for several recent studies on UQ in PET reconstruction, although none of them is based on deep learning.…”
Section: Numerical Experiments and Discussion
confidence: 99%
“…Since the seminal work of Vardi, Shepp, and Kaufman (1985), Vardi and Lee (1993), Fredholm equations have also been widely used in positron emission tomography. In this and similar contexts, f corresponds to an image which needs to be inferred from noisy measurements (Snyder, Schulz, and O'Sullivan 1992;Aster, Borchers, and Thurber 2018;Clason, Kaltenbacher, and Resmerita 2019;Zhang, Arridge, and Jin 2019).…”
Section: Introduction
confidence: 99%
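The Vardi, Shepp, and Kaufman iteration mentioned in the statement above (usually called ML-EM, or Richardson-Lucy in deconvolution) admits a compact sketch. The function name, toy kernel, and sizes below are illustrative choices, not taken from the cited works.

```python
import numpy as np

def mlem(A, y, n_iter=200, eps=1e-12):
    """ML-EM / Richardson-Lucy iteration for the model y ~ Poisson(A f):
        f <- f / (A^T 1) * A^T ( y / (A f) )
    The multiplicative update preserves nonnegativity of f."""
    A = np.asarray(A, dtype=float)
    y = np.asarray(y, dtype=float)
    n = A.shape[1]
    f = np.full(n, y.sum() / n)       # flat, flux-matched starting image
    sens = A.sum(axis=0)              # "sensitivity" image A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ f, eps)   # data / current prediction
        f = f / np.maximum(sens, eps) * (A.T @ ratio)
    return f

# Toy check: a 2x2 mixing kernel and noiseless counts recover the source.
A = np.array([[0.8, 0.2], [0.2, 0.8]])
f_true = np.array([3.0, 7.0])
f_hat = mlem(A, A @ f_true, n_iter=500)
```

With exact (noiseless) data and an invertible kernel the iteration converges to the true source; with real Poisson counts it is typically stopped early or regularized, which is where priors such as the Laplace-type prior of this paper enter.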