2019
DOI: 10.1007/978-3-030-04161-8_9

Optimization Methods for Inverse Problems

Abstract: Optimization plays an important role in solving many inverse problems. Indeed, the task of inversion often either involves or is fully cast as a solution of an optimization problem. In this light, the mere non-linear, non-convex, and large-scale nature of many of these inversions gives rise to some very challenging optimization problems. The inverse problem community has long been developing various techniques for solving such optimization tasks. However, other, seemingly disjoint communities, such as that of …

Cited by 25 publications (21 citation statements) · References 93 publications
“…An interesting alternative approach can be the application of stochastic optimization techniques motivated by machine learning, such as the stochastic gradient descent (SGD) algorithm or one of its descendants with reduced variance (Shalev-Shwartz and Zhang 2013; Ye et al 2019). These iterative algorithms have been developed to learn from large data sets.…”
Section: Discussion on Alternative Solution Approaches
Mentioning confidence: 99%
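As a concrete illustration of the variance-reduced stochastic methods mentioned in the statement above, the following sketch contrasts plain SGD with an SVRG-style update on a least-squares misfit. The problem, data, step sizes, and iteration counts are illustrative assumptions, not taken from any of the cited papers.

```python
# A minimal sketch (assumed problem and hyperparameters, not from the cited
# papers): plain SGD versus an SVRG-style variance-reduced update on the
# least-squares misfit 1/(2n) * ||A x - b||^2.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.5 * rng.standard_normal(n)
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]        # exact minimizer, for reference

def grad_i(x, i):
    """Gradient of the i-th sample's squared-residual term."""
    return A[i] * (A[i] @ x - b[i])

def full_grad(x):
    return A.T @ (A @ x - b) / n

def sgd(x0, step=5e-3, epochs=30):
    x = x0.copy()
    for _ in range(epochs):
        for i in rng.permutation(n):
            x -= step * grad_i(x, i)               # noisy step: stalls at a noise floor
    return x

def svrg(x0, step=1e-2, epochs=30):
    x = x0.copy()
    for _ in range(epochs):
        x_snap, g_snap = x.copy(), full_grad(x)    # full-gradient snapshot once per epoch
        for i in rng.permutation(n):
            # Variance-reduced step: stochastic gradient corrected by the snapshot.
            x -= step * (grad_i(x, i) - grad_i(x_snap, i) + g_snap)
    return x

x0 = np.zeros(d)
print("SGD  distance to minimizer:", np.linalg.norm(sgd(x0) - x_ls))
print("SVRG distance to minimizer:", np.linalg.norm(svrg(x0) - x_ls))
```

The snapshot correction removes the variance floor that forces plain SGD to either shrink its step size or stop short of the minimizer.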
“…Many of the above papers highlight the relation of inverse optimization to (or its potential applications in) other fields of computer science, including parameter identification and machine learning. The recent survey (Ye et al 2019) highlights the similarities between the challenges faced by the optimization and the machine learning communities in solving inverse problems, and investigates the possibilities of cross-fertilization. From among optimization approaches, this survey stresses the common application of iterative, gradient-based methods for solving non-linear problems.…”
Section: Inverse Optimization
Mentioning confidence: 99%
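The gradient-based iterative approach stressed in the statement above can be sketched on a small assumed example: fitting a nonlinear exponential forward model to noisy data by gradient descent on the least-squares misfit. The model f(t; a, k) = a·exp(−k·t), the data, and the step size are hypothetical choices made for illustration.

```python
# Hedged sketch of a gradient-based nonlinear inversion: fit an assumed
# exponential forward model f(t; a, k) = a * exp(-k t) to noisy observations
# by gradient descent on the least-squares data misfit. Model, data, step
# size, and iteration count are illustrative, not from the surveyed papers.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 50)
a_true, k_true = 2.0, 0.7
y = a_true * np.exp(-k_true * t) + 0.02 * rng.standard_normal(t.size)

def misfit_grad(a, k):
    """Analytic gradient of 0.5 * sum((f(t; a, k) - y)^2) w.r.t. (a, k)."""
    pred = a * np.exp(-k * t)
    r = pred - y
    da = np.sum(r * np.exp(-k * t))                # d misfit / d a
    dk = np.sum(r * (-a * t * np.exp(-k * t)))     # d misfit / d k
    return np.array([da, dk])

theta = np.array([1.0, 0.1])                       # initial guess for (a, k)
step = 5e-3
for _ in range(5000):
    theta -= step * misfit_grad(*theta)

print("recovered (a, k):", theta)                  # should approach (2.0, 0.7)
```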
“…To eliminate the shortcomings inherent in the known methods, an approach to solving the problem of building an indicator was devised, based on representing the problem as a constrained optimization problem [14]. Paper [15] discusses the solution of inverse problems represented as a nonlinear programming problem using the variable substitution method and the Lagrange multiplier method. Classical methods for solving the constrained optimization problem are laborious: in the method of Lagrange multipliers [16], additional parameters must be determined, which increases the dimensionality of the problem; in the penalty method, the optimization must be repeated with a sequentially changing penalty parameter; and in the simplex method, repeated transitions from one basic solution of the constraint system of the linear programming problem to another are required.…”
Section: Fig. 2 Classification of Problems on Building an Indicator
Mentioning confidence: 99%
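As a minimal sketch of the penalty approach contrasted with Lagrange multipliers in the statement above: the constraint is folded into the objective with a weight that grows across several unconstrained solves, so no extra multiplier variables are introduced, but multiple optimizations are needed. The objective, constraint, and penalty schedule below are assumptions, not taken from papers [14]–[16].

```python
# Illustrative sketch of the quadratic-penalty approach: the constraint is
# folded into the objective with a weight mu that is increased over several
# unconstrained solves. Objective, constraint, and schedule are assumptions.
import numpy as np
from scipy.optimize import minimize

def f(x):                      # objective: distance to the point (2, 1)
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def g(x):                      # equality constraint: x0 + x1 = 1
    return x[0] + x[1] - 1.0

x = np.zeros(2)
for mu in [1.0, 10.0, 100.0, 1000.0]:
    # One unconstrained solve per penalty weight, warm-started from the last.
    res = minimize(lambda z: f(z) + 0.5 * mu * g(z) ** 2, x)
    x = res.x

print("penalty-method solution:", x)   # approaches the constrained optimum (1, 0)
```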
“…Unlike GD, Newton-type methods prescribe small updates to sensitive functions and large updates to insensitive ones. The resulting update steps typically progress towards optima much faster than first-order methods (Ye et al, 2019). However, the difficulty of approximating the Hessian for batched data sets (Schraudolph et al, 2007) and the high cost of evaluating the Hessian directly have largely prevented second-order methods from being utilized in machine learning research (Goodfellow et al, 2016).…”
Section: Iterative Optimization
Mentioning confidence: 99%
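The scaling behaviour described above can be seen on an assumed ill-conditioned quadratic: gradient descent must pick one step size for all directions (limited by the largest curvature), while a Newton step rescales each direction by its own curvature and solves the quadratic exactly in one step. The toy Hessian and step counts are illustrative assumptions.

```python
# Assumed toy quadratic illustrating the scaling argument above.
import numpy as np

H = np.diag([100.0, 1.0])               # one sensitive and one insensitive direction
x_star = np.array([1.0, 1.0])           # minimizer
grad = lambda x: H @ (x - x_star)

x_gd = np.zeros(2)
for _ in range(100):
    x_gd -= (1.0 / 100.0) * grad(x_gd)  # GD step bounded by the stiff direction

x_nt = np.zeros(2)
x_nt -= np.linalg.solve(H, grad(x_nt))  # Newton: exact in one step for a quadratic

print("GD after 100 steps:  ", x_gd)    # flat direction still far from 1.0
print("Newton after 1 step: ", x_nt)    # exactly at the minimizer
```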
“…Here we describe a new method for training deep learning models that is vastly more efficient than state-of-the-art training techniques for a broad class of inverse problems that involve physical processes. The improvement is based on the fact that nearly all modern deep learning methods only use first-order information (Goodfellow et al, 2016) while optimization algorithms in computational science make use of higher-order information to accelerate convergence (Ye et al, 2019). Using second-order optimizers such as Newton's method to train deep learning models typically imposes severe limitations, either significantly restricting the size of the training set or requiring the evaluation of second derivatives, which is computationally demanding (Goodfellow et al, 2016).…”
Section: Introduction
Mentioning confidence: 99%
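A common matrix-free way around the Hessian cost mentioned above is to use Hessian-vector products inside a conjugate-gradient inner solve (a Newton-CG step), so the Hessian is never formed or stored. The sketch below approximates Hessian-vector products by finite differences of gradients on an assumed toy quadratic; it illustrates the general technique only, not the method proposed in the cited work.

```python
# Matrix-free sketch: finite-difference Hessian-vector products drive a
# conjugate-gradient inner solve (one Newton-CG step). The toy quadratic
# objective is an assumption made for illustration.
import numpy as np

D = np.diag([100.0, 10.0, 1.0])          # assumed curvature of the toy objective

def loss_grad(x):
    """Gradient of the toy objective 0.5 * x^T D x."""
    return D @ x

def hess_vec(x, v, eps=1e-6):
    """Finite-difference Hessian-vector product: (g(x + eps*v) - g(x)) / eps."""
    return (loss_grad(x + eps * v) - loss_grad(x)) / eps

def newton_cg_step(x, cg_iters=20, tol=1e-10):
    """Approximately solve H p = -g with conjugate gradients, then take the step."""
    g = loss_grad(x)
    p, r = np.zeros_like(x), -g.copy()   # residual of H p = -g, starting from p = 0
    d = r.copy()
    for _ in range(cg_iters):
        Hd = hess_vec(x, d)
        alpha = (r @ r) / (d @ Hd)
        p = p + alpha * d
        r_new = r - alpha * Hd
        if np.linalg.norm(r_new) < tol:
            break
        d = r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new
    return x + p

x = np.array([3.0, -2.0, 5.0])
print("after one Newton-CG step:", newton_cg_step(x))   # close to the optimum at 0
```

Because only gradient evaluations are needed, the same pattern extends to batched training losses where storing or inverting the full Hessian would be impractical.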