2023
DOI: 10.1109/tci.2023.3248949

Conditional Injective Flows for Bayesian Imaging

Abstract: Deep learning is the current de facto state of the art in tomographic imaging. A common approach is to feed the result of a simple inversion, for example the backprojection, to a convolutional neural network (CNN) which then computes the reconstruction. Despite strong results on "in-distribution" test data similar to the training data, backprojection from sparse-view data delocalizes singularities, so these approaches require a large receptive field to perform well. As a consequence, they overfit to certain gl…
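As a rough sketch of the "simple inversion + CNN" pipeline the abstract describes, the snippet below feeds a sparse-view filtered backprojection into a small convolutional post-processing network. The toy phantom, the 30-view geometry, the residual CNN, and the use of scikit-image's radon/iradon are illustrative assumptions, not the paper's actual setup.

```python
# Illustrative only: a coarse sparse-view inversion (FBP) refined by a small CNN.
import numpy as np
import torch
import torch.nn as nn
from skimage.transform import radon, iradon


class PostProcessingCNN(nn.Module):
    """Small residual CNN that refines a backprojected image."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Learn a correction on top of the coarse inversion.
        return x + self.net(x)


# Toy phantom and a sparse-view (30 angles) sinogram.
phantom = np.zeros((64, 64), dtype=np.float32)
phantom[20:44, 20:44] = 1.0
angles = np.linspace(0.0, 180.0, 30, endpoint=False)
sinogram = radon(phantom, theta=angles)

# Simple inversion (filtered backprojection), then the CNN reconstruction.
fbp = iradon(sinogram, theta=angles, filter_name="ramp", output_size=64)
x = torch.from_numpy(fbp.astype(np.float32))[None, None]   # (1, 1, 64, 64)
reconstruction = PostProcessingCNN()(x)
print(reconstruction.shape)
```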

Cited by 4 publications (5 citation statements)
References 70 publications

“…We assess the performance of the proposed methods for MAP estimation and posterior modeling on synthetic and experimental data. We train the model on two synthetic large-scale datasets: 1) MNIST [64] with 60000 training samples at resolution N = 32, and 2) a more challenging dataset we generated comprising 60000 training samples with resolution N = 64 of overlapping ellipses used in [14]. Figure 5 shows example test contrasts, their projections on the learned manifold, and the samples generated by the injective network, verifying the ability of the model to produce outputs of good quality.…”
Section: Computational Experiments
mentioning confidence: 77%
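As a side note on the ellipses data mentioned in this excerpt, a minimal sketch of generating overlapping-ellipse phantoms at resolution N = 64 might look like the following; the number of ellipses per image, the value ranges, and the use of skimage.draw.ellipse are assumptions for illustration and need not match the dataset used in [14].

```python
# Hypothetical generator for overlapping-ellipse phantoms (not the dataset of [14]).
import numpy as np
from skimage.draw import ellipse


def random_ellipse_phantom(n=64, max_ellipses=4, rng=None):
    """Sum a few randomly placed, rotated ellipses into an n x n contrast image."""
    rng = rng or np.random.default_rng()
    img = np.zeros((n, n), dtype=np.float32)
    for _ in range(rng.integers(1, max_ellipses + 1)):
        r, c = rng.uniform(0.2 * n, 0.8 * n, size=2)            # center
        r_rad, c_rad = rng.uniform(0.05 * n, 0.25 * n, size=2)   # radii
        rr, cc = ellipse(r, c, r_rad, c_rad, shape=(n, n),
                         rotation=rng.uniform(0, np.pi))
        img[rr, cc] += rng.uniform(0.2, 1.0)                     # overlaps add up
    return img


# Example: a small batch (the excerpt describes 60000 training samples).
rng = np.random.default_rng(0)
batch = np.stack([random_ellipse_phantom(64, rng=rng) for _ in range(8)])
print(batch.shape)  # (8, 64, 64)
```
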
“…Most deep learning models employed for inverse scattering adopt a supervised learning approach, which trains a deep neural network to regress the permittivity pattern. Some studies [12]–[14] have utilized scattered fields as the input of the neural network. Despite the satisfactory reconstructions [14], these methods are sensitive to changes in the experimental configuration, such as frequency, the number of transmitters and receivers or other real-world factors.…”
mentioning confidence: 99%
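To make the supervised strategy described in this excerpt concrete, below is a minimal hypothetical sketch in which a network regresses a permittivity map directly from flattened scattered-field measurements. The measurement layout (16 transmitters × 16 receivers, real and imaginary parts), the 64 × 64 grid, and the linear-lift-plus-CNN design are assumptions for illustration, not the architecture of [12]–[14].

```python
# Hypothetical supervised regressor: scattered fields -> permittivity map.
import torch
import torch.nn as nn


class ScatteringRegressor(nn.Module):
    def __init__(self, n_meas: int = 2 * 16 * 16, n_grid: int = 64):
        super().__init__()
        self.n_grid = n_grid
        # Lift the flattened (real/imag) scattered-field data onto the image grid,
        # then refine with a small CNN.
        self.lift = nn.Linear(n_meas, n_grid * n_grid)
        self.refine = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, fields: torch.Tensor) -> torch.Tensor:
        x = self.lift(fields).view(-1, 1, self.n_grid, self.n_grid)
        return self.refine(x)  # predicted permittivity contrast


# One training step on synthetic placeholders for fields and permittivity maps.
model = ScatteringRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
fields = torch.randn(4, 2 * 16 * 16)   # batch of measurements (Re/Im stacked)
target = torch.rand(4, 1, 64, 64)      # ground-truth permittivity maps
opt.zero_grad()
loss = nn.functional.mse_loss(model(fields), target)
loss.backward()
opt.step()
print(float(loss))
```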