2020
DOI: 10.48550/arxiv.2009.00574
Preprint

Inverse problems on low-dimensional manifolds

Abstract: We consider abstract inverse problems between infinite-dimensional Banach spaces. These inverse problems are typically nonlinear and ill-posed, making the inversion with limited and noisy measurements a delicate process. In this work, we assume that the unknown belongs to a finite-dimensional manifold: this assumption arises in many real-world scenarios where natural objects have a low intrinsic dimension and belong to a certain submanifold of a much larger ambient space. We prove uniqueness and Hölder and Lipschitz…
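To make the setting of the abstract concrete, a schematic formulation is sketched below; the notation is an illustrative assumption, not quoted from the paper: the forward map acts between Banach spaces, the unknown is constrained to a finite-dimensional manifold, and the stability estimates are of Hölder or Lipschitz type.

```latex
% Illustrative formulation (assumed notation, not taken from the paper):
% F : X -> Y  nonlinear forward map between Banach spaces,
% M            a finite-dimensional submanifold of X containing the unknown.
\[
  \|u_1 - u_2\|_X \;\le\; C \,\|F(u_1) - F(u_2)\|_Y^{\alpha},
  \qquad u_1, u_2 \in \mathcal{M},
\]
% with \alpha = 1 giving Lipschitz stability and \alpha \in (0,1) H\"older stability.
```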

Cited by 2 publications (12 citation statements)
References 39 publications
“…More precisely, we show that an injective CGNN yields a Lipschitz stability result for the inverse problem (13); in other words, the inverse map is Lipschitz continuous, and noise in the data is not amplified in the reconstruction. For simplicity, we consider the 1D case with stride 1/2 and non-expansive convolutional layers, but the result can be extended to the 2D case and arbitrary stride, as done in Appendix A.4 for Theorem 1.…”
Section: Stability of Inverse Problems with Generative Models
Citation type: mentioning
Confidence: 99%
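As an illustration of what such a statement typically looks like (the notation below is a schematic assumption, not quoted from the citing paper): writing G for the injective CGNN generator and F for the forward operator of inverse problem (13), Lipschitz stability of the inverse map can be expressed as follows.

```latex
% Schematic Lipschitz stability estimate (illustrative notation: G is the
% injective CGNN generator, F the forward operator of problem (13),
% z_1, z_2 latent codes; the constant L and the norms are placeholders).
\[
  \|z_1 - z_2\| \;\le\; L \,\|F(G(z_1)) - F(G(z_2))\|
  \qquad \text{for all latent codes } z_1, z_2,
\]
% so a data perturbation of size \varepsilon moves the reconstruction
% by at most L\varepsilon: noise is not amplified.
```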
“…The training is done with the Adam optimizer, using a learning rate of 0.005 and the loss function commonly used for VAEs, given by the weighted sum of two terms: the Binary Cross Entropy (BCE) between the original and the generated images, and the Kullback-Leibler Divergence (KLD) between the standard Gaussian distribution and the one generated by the encoder in the latent space.…”
Section: Training
Citation type: mentioning
Confidence: 99%
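A minimal sketch of that objective, assuming a hypothetical PyTorch VAE whose forward pass returns the reconstruction together with the encoder's mean and log-variance; the model interface and the weight `beta` are illustrative assumptions, while the BCE + KLD structure, the Adam optimizer, and the 0.005 learning rate come from the excerpt above.

```python
import torch
import torch.nn.functional as F

def vae_loss(recon, target, mu, logvar, beta=1.0):
    """Weighted sum of BCE and KLD, as described in the excerpt.

    `beta` (the weighting factor) and the argument layout are assumptions.
    """
    # Binary Cross Entropy between the generated and the original images.
    bce = F.binary_cross_entropy(recon, target, reduction="sum")
    # KL divergence between N(mu, exp(logvar)) and the standard Gaussian N(0, I).
    kld = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
    return bce + beta * kld

# Optimizer as described in the excerpt (the `model` object is hypothetical):
# optimizer = torch.optim.Adam(model.parameters(), lr=0.005)
```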