2016
DOI: 10.1515/jiip-2016-0014

Sequential subspace optimization for nonlinear inverse problems

Abstract: In this work we discuss a method to adapt sequential subspace optimization (SESOP), which has so far been developed for linear inverse problems in Hilbert and Banach spaces, to the case of nonlinear inverse problems. We start by reviewing the well-known technique for Hilbert spaces. In the next step, we introduce a method using multiple search directions that are especially designed to fit the nonlinearity of the forward operator. To this end, we iteratively project the initial value onto stripes whose shape is d…
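The abstract is truncated exactly where it defines the stripes. As a hedged sketch of the kind of set meant here (notation assumed to follow the SESOP literature, not quoted from the article), a stripe is the region between two parallel hyperplanes:

% A stripe in a Hilbert space X, determined by a search direction u, an offset
% alpha, and a width xi; in the nonlinear setting the width accounts for the
% nonlinearity of the forward operator. (Illustrative notation.)
H(u, \alpha, \xi) := \{\, x \in X : |\langle u, x \rangle - \alpha| \le \xi \,\}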

Cited by 27 publications (31 citation statements)
References 24 publications
“…A substantially different classical category of methods for nonlinear inverse problems is that of gradient-type methods, in particular the Landweber method, which can be applied to linear (see Landweber 1951) and nonlinear inverse problems (see Hanke et al 1995). Furthermore, (direct) Tikhonov regularization methods (see Tikhonov and Glasko 1965), multilevel methods (see Kaltenbacher et al 2008, Chapter 5), and sequential subspace optimization methods (see Wald and Schuster 2017) have been developed for nonlinear inverse problems. In particular, we want to mention level set methods (see, for example, the survey by Burger and Osher 2005), since these methods are often used for problems such as the nonlinear inverse gravimetric problem, where the unknown is a domain.…”
Section: Comparison to Other Methods
confidence: 99%
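The Landweber iteration mentioned in this statement is simple enough to sketch. Below is a minimal Python illustration of the nonlinear variant, $x_{k+1} = x_k - \omega F'(x_k)^*(F(x_k) - y)$; the toy operator, step size, and data are illustrative assumptions, not taken from the cited works.

import numpy as np

def landweber(F, dF_adjoint, y, x0, omega, n_iter=500):
    """Nonlinear Landweber: x_{k+1} = x_k - omega * F'(x_k)^* (F(x_k) - y)."""
    x = x0.copy()
    for _ in range(n_iter):
        x = x - omega * dF_adjoint(x, F(x) - y)
    return x

# Toy example (assumed): F(x) = x**3 componentwise, so F'(x)^* r = 3*x**2 * r
# (the Jacobian is diagonal and hence self-adjoint).
F = lambda x: x**3
dF_adjoint = lambda x, r: 3 * x**2 * r
y = np.array([8.0, 27.0])                      # exact data for x = (2, 3)
x_rec = landweber(F, dF_adjoint, y, np.array([1.0, 1.0]), omega=0.001)
print(x_rec)                                   # approaches (2, 3)

For ill-posed problems the iteration is stopped early (e.g. by the discrepancy principle), which is what gives it its regularizing effect.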
“…As mentioned earlier, 3MG is an instance of a subspace optimization algorithm [4], [5] which combines the memory gradient subspace, reminiscent of the conjugate gradient approach [11], with a low-complexity stepsize rule based on the Majorization-Minimization (MM) principle. At each iteration $k \in \mathbb{N}$, the current solution $x_k$ is moved along a subspace, so generating…”
Section: B. Majorization-Minimization Memory Gradient Algorithm (3MG)
confidence: 99%
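To make the memory gradient subspace concrete, here is a minimal Python sketch of one such step for a quadratic objective $f(x) = \tfrac12 x^\top A x - b^\top x$, where the subspace is spanned by the negative gradient and the momentum term, and the coefficients have a closed form (for a quadratic, the MM stepsize with the exact Hessian as majorant is the exact subspace minimizer). The setup and names are assumptions for illustration, not the implementation from the cited work.

import numpy as np

def memory_gradient_step(A, b, x, x_prev):
    """One subspace step for f(x) = 0.5*x^T A x - b^T x: minimize f(x + D u)
    over u, where D spans the gradient and the momentum (memory) direction."""
    g = A @ x - b                          # gradient of f at x
    D = np.column_stack([-g, x - x_prev])  # N x 2 subspace basis
    H = D.T @ A @ D                        # reduced 2 x 2 Hessian
    u = np.linalg.lstsq(H, -D.T @ g, rcond=None)[0]
    return x + D @ u

# Illustrative usage on a small symmetric positive definite problem
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M.T @ M + np.eye(5)
b = rng.standard_normal(5)
x_prev, x = np.zeros(5), rng.standard_normal(5)
for _ in range(20):
    x, x_prev = memory_gradient_step(A, b, x, x_prev), x
print(np.linalg.norm(A @ x - b))           # residual decays toward 0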
“…In the context of the resolution of unconstrained differentiable problems (i.e. Problem P with $C = \mathbb{R}^N$), subspace acceleration [4]-[9] is a well-known strategy to speed up iterative descent methods. A famous subspace minimization approach consists of updating, at each iteration, the current vector in a low-dimensional affine subspace of $\mathbb{R}^N$, spanned by the gradient direction and a few additional vectors such as the difference between two past iterates (also called the momentum term, used for instance in the classical NLCG solver [10], [11]) and/or the difference between past gradients (see, e.g., limited-memory quasi-Newton schemes such as L-BFGS [12]).…”
Section: Introduction
confidence: 99%
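Written out, the affine search space described in this statement takes a form like the following (a sketch in generic notation; the particular directions and the subspace dimension vary between the cited schemes):

% One iteration of a subspace-accelerated descent method: the update is sought in a
% low-dimensional affine space spanned by the current gradient, a momentum term,
% and a difference of past gradients. (Illustrative notation.)
x_{k+1} = x_k + D_k u_k,
\qquad
D_k = \bigl[\, -\nabla f(x_k) \;\big|\; x_k - x_{k-1} \;\big|\; \nabla f(x_k) - \nabla f(x_{k-1}) \,\bigr],
\qquad u_k \in \mathbb{R}^3 .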
“…where the second term converges to 0 for $l \to \infty$, since the sequence $\{x_{n_k}\}_{k \in \mathbb{N}}$ is weakly convergent due to Proposition 4.3. The absolute value of the first term is estimated using arguments similar to those in the Hilbert space setting ([20], Theorem 4.4, and [5], Theorem 2.3). In particular, we make use of the recursion for the iterates $x_{n_j}$, $j = l+1, \dots, k$, and obtain…”
Section: Convergence and Regularization
confidence: 99%
“…and [16] and for nonlinear operators in Hilbert spaces [20]. These algorithms are based on the observation that the Bregman projection of $x \in X$ onto the intersection of two halfspaces can be uniquely determined by at most two projections onto (intersections of) the bounding hyperplanes if $x$ is already contained in one of the halfspaces.…”
Section: A Numerical Example
confidence: 99%
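The observation quoted above is easy to see in the Hilbert space case, where the Bregman projection reduces to the metric projection. The following Python sketch projects $x$ onto the intersection of two halfspaces $H_i = \{z : \langle u_i, z\rangle \le \alpha_i\}$ under the stated assumption that $x$ already lies in the first one; the function and variable names are illustrative, not the authors' implementation.

import numpy as np

def project_two_halfspaces(x, u1, a1, u2, a2):
    """Metric projection of x onto {z: <u1,z> <= a1} ∩ {z: <u2,z> <= a2},
    assuming <u1, x> <= a1 already holds (Hilbert space sketch)."""
    if u2 @ x <= a2:
        return x                                   # x is already feasible
    p = x - (u2 @ x - a2) / (u2 @ u2) * u2         # project onto <u2,z> = a2
    if u1 @ p <= a1:
        return p                                   # one hyperplane projection suffices
    # Otherwise project onto the intersection of both bounding hyperplanes:
    # z = x - s*u1 - t*u2 with <u1,z> = a1 and <u2,z> = a2.
    G = np.array([[u1 @ u1, u1 @ u2],
                  [u2 @ u1, u2 @ u2]])             # Gram matrix of the normals
    s, t = np.linalg.solve(G, np.array([u1 @ x - a1, u2 @ x - a2]))
    return x - s * u1 - t * u2

# Illustrative usage: x satisfies the first constraint but violates the second
x = np.array([2.0, 2.0])
print(project_two_halfspaces(x, np.array([1.0, 0.0]), 3.0,
                             np.array([0.0, 1.0]), 1.0))   # -> [2. 1.]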