2023
DOI: 10.1088/1361-6420/ad149e
Stochastic linear regularization methods: random discrepancy principle and applications

Ye Zhang,
Chuchu Chen

Abstract: The a posteriori stopping rule plays a significant role in the design of efficient stochastic algorithms for various tasks in computational mathematics, such as inverse problems, optimization, and machine learning. Through the lens of classical regularization theory, this paper describes a novel analysis of Morozov’s discrepancy principle for the stochastic generalized Landweber iteration and its continuous analog of generalized stochastic asymptotical regularization. Unlike existing results relating to conver…
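To make the abstract's setting concrete, here is a minimal sketch of the a posteriori stopping idea it analyzes: a (deterministic) Landweber iteration stopped by Morozov's discrepancy principle, i.e. iterate until the residual falls below τ·δ, where δ is the noise level. This is not the paper's stochastic method; all function and variable names below are illustrative assumptions.

```python
import numpy as np

def landweber_discrepancy(A, y, delta, tau=1.1, max_iter=10000):
    """Classical Landweber iteration x_{k+1} = x_k + w * A^T (y - A x_k),
    stopped a posteriori by the discrepancy principle ||A x_k - y|| <= tau * delta."""
    omega = 1.0 / np.linalg.norm(A, 2) ** 2  # step size below 2/||A||^2 ensures convergence
    x = np.zeros(A.shape[1])
    for k in range(max_iter):
        residual = y - A @ x
        if np.linalg.norm(residual) <= tau * delta:  # Morozov's discrepancy principle
            break
        x = x + omega * (A.T @ residual)
    return x, k

# Toy problem with noisy data of known noise level delta
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = rng.standard_normal(20)
noise = rng.standard_normal(50)
delta = 0.1
y = A @ x_true + delta * noise / np.linalg.norm(noise)  # ||y - A x_true|| = delta

x_rec, stop_k = landweber_discrepancy(A, y, delta)
```

The point of the a posteriori rule is that the stopping index `stop_k` is determined by the data and the noise level δ at run time, rather than fixed in advance; the paper studies the random analog of this rule for stochastic iterations.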

Cited by 2 publications (1 citation statement)
References 27 publications (54 reference statements)
“…It is well known that when employing a nonparametric Bayesian approach with standard Gaussian priors, the posterior-based reconstruction corresponds to a Tikhonov regularizer with a reproducing kernel Hilbert space (RKHS) norm penalty [22, section 7.1]. Recently, by combining conventional regularization and statistical formulation, some stochastic regularization methods with novel features have been discussed in [45,46]. Therefore, although the posterior x, obtained by setting prior variance based on the knowledge of the noise level, does not belong to the conventional Bayesian paradigm, it can serve as a good estimation of x † as interpreted above.…”
Section: Introduction
confidence: 99%