2015
DOI: 10.1137/130936725

Composing Scalable Nonlinear Algebraic Solvers

Abstract: Most efficient linear solvers use composable algorithmic components, with the most common model being the combination of a Krylov accelerator and one or more preconditioners. A similar set of concepts may be used for nonlinear algebraic systems, where nonlinear composition of different nonlinear solvers may significantly improve the time to solution. We describe the basic concepts of nonlinear composition and preconditioning and present a number of solvers applicable to nonlinear partial differential …

Cited by 130 publications (120 citation statements)
References 65 publications
“…In analogy with the linear case where $\mathcal{P}(\mathbf{g};\mathbf{x}) = \mathbf{P}\,\mathbf{g}(\mathbf{x})$, we interpret $\mathcal{P}(\mathbf{g};\mathbf{x})$ as the preconditioned gradient direction. By applying an optimization method with iteration $\mathbf{x}_{k+1} = \mathcal{M}(\mathbf{g};\mathbf{x}_k)$ to $\mathcal{P}(\mathbf{g};\mathbf{x}) = \mathbf{x} - \mathcal{Q}(\mathbf{g};\mathbf{x}) = \mathbf{0}$ instead of to $\mathbf{g}(\mathbf{x}) = \mathbf{0}$, we obtain the nonlinearly left-preconditioned optimization update $\mathbf{x}_{k+1} = \mathcal{M}(\mathcal{P}(\mathbf{g};\cdot\,);\mathbf{x}_k)$. This means, in practice, that all occurrences of $\mathbf{g}(\mathbf{x})$ in $\mathcal{M}$ are replaced by $\mathcal{P}(\mathbf{g};\mathbf{x})$ in the LP approach, as in the case of nonlinear left-preconditioning for nonlinear equation systems. An important difference in the optimization context, however, is that we continue using the original $f(\mathbf{x})$ and $\mathbf{g}(\mathbf{x})$ in determining the line search step $\alpha$ for methods like CG or L-BFGS, so the gradients $\mathbf{g}(\mathbf{x})$ used in the line search are not replaced by $\mathcal{P}(\mathbf{g};\mathbf{x})$.…”
Section: Nonlinear Preconditioning Strategies
confidence: 99%
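The excerpt above describes replacing every occurrence of g(x) inside the method M with the preconditioned direction P(g; x), while the line search for α still uses the original f(x) and g(x). A minimal sketch of that pattern, assuming a steepest-descent M, a brute-force line search over candidate step lengths, and a hypothetical `precond(g, x)` interface; the Jacobi scaling and toy quadratic are illustrative assumptions, not the paper's code:

```python
import numpy as np

def left_preconditioned_descent(f, g, precond, x0, iters=50,
                                alphas=np.linspace(1e-3, 1.0, 40)):
    """Descent where the step direction is the nonlinearly preconditioned
    gradient P(g; x), but the line search on alpha evaluates the ORIGINAL
    objective f, as the excerpt prescribes. `precond(g, x)` is a
    hypothetical interface returning a direction."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = precond(g, x)                       # preconditioned direction P(g; x)
        # line search uses the original f, not the preconditioned residual
        alpha = min(alphas, key=lambda a: f(x - a * d))
        x = x - alpha * d
    return x

# Toy ill-conditioned quadratic f(x) = 1/2 x^T A x (an assumption for
# illustration); the "preconditioner" is a Jacobi scaling of the gradient.
A = np.diag([100.0, 1.0])
f = lambda x: 0.5 * x @ A @ x
g = lambda x: A @ x
precond = lambda g, x: g(x) / np.diag(A)        # P(g; x): diagonally scaled gradient

x = left_preconditioned_descent(f, g, precond, x0=[1.0, 1.0])
```

With the exact diagonal scaling, the preconditioned direction points straight at the minimizer and a unit step lands on it, which is the behavior nonlinear preconditioning is meant to approximate on harder problems.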
“…The method based on the inverse Jacobian, Equation (22), is usually called the "bad" Broyden method because in its original applications it did not perform as well as the "good" Broyden update, Equation (21) [66, 68–72]. It is more convenient to have an update to the inverse Jacobian than the Jacobian itself since (cf.…”
Section: Methods
confidence: 99%
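The distinction quoted above is between updating the Jacobian ("good" Broyden) and updating its inverse directly ("bad" Broyden). A hedged sketch of both rank-one updates, maintaining the inverse approximation H throughout, as the excerpt notes is the convenient form; the toy system and starting point are illustrative assumptions:

```python
import numpy as np

def broyden_solve(g, x0, bad=True, iters=30):
    """Broyden's method maintaining an approximate INVERSE Jacobian H.
    bad=True applies the 'bad' Broyden rank-one update of H directly;
    bad=False applies the 'good' update via its Sherman-Morrison form."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(len(x))                 # initial inverse-Jacobian guess
    gx = g(x)
    for _ in range(iters):
        if np.linalg.norm(gx) < 1e-12:
            break
        dx = -H @ gx                   # quasi-Newton step
        x_new = x + dx
        g_new = g(x_new)
        dg = g_new - gx
        if bad:
            # "bad" Broyden: least-change update of the inverse itself
            H += np.outer(dx - H @ dg, dg) / (dg @ dg)
        else:
            # "good" Broyden: inverse of the Jacobian update via Sherman-Morrison
            H += np.outer(dx - H @ dg, dx @ H) / (dx @ (H @ dg))
        x, gx = x_new, g_new
    return x

# Toy nonlinear system (an assumption for illustration) with root (1, 2):
# g(x) = [x0^2 + x1 - 3, x0 + x1^2 - 5], started near the root.
g = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
root = broyden_solve(g, [0.9, 2.1], bad=True)
```

The "bad" update costs one fewer matrix-vector product per step; the excerpt's point is that, despite the name, having H rather than J available makes each step a plain matrix-vector product instead of a linear solve.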
“…For problems with high nonlinearities, preconditioning techniques on the nonlinear level, such as the additive Schwarz preconditioned inexact Newton (ASPIN) methods [9,25,39], the nonlinear restricted Schwarz preconditioners [10,16], the nonlinear dual-domain decomposition methods [48], the nonlinear balancing domain decomposition by constraints methods [30], the nonlinear elimination methods [26,27,29], and the composite nonlinear algebraic methods [6], have received increasing attention in recent years. In particular, some efforts have been made in applying the ASPIN method to solve the two-phase flow problems [51,53].…”
Section: B595
confidence: 99%
“…Moreover, the subspace problem (2.6), which is a reduced-space nonlinear system, can be solved using any classical nonlinear iterative method, such as the inexact Newton method with backtracking and its many variations [7,31,46,47], and other composite solvers [6]. A key element of the NE step is the choice of S b and S g to build the subspaces V g and V b .…”
Section: Subspace Correction Phase
confidence: 99%
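The excerpt notes that the reduced-space system of the subspace correction phase can be solved by any classical nonlinear iteration, such as inexact Newton with backtracking. A hypothetical sketch under that reading, restricting the residual to an index set and applying Newton with a finite-difference Jacobian and a simple backtracking line search; the helper name, index-set interface, and toy system are all assumptions, not the paper's formulation:

```python
import numpy as np

def newton_backtracking_subspace(g, x, idx, iters=20, tol=1e-10):
    """Solve the reduced-space system obtained by restricting g to the
    components in `idx`, holding the remaining components of x fixed,
    via Newton with finite-difference Jacobian and backtracking."""
    x = np.asarray(x, dtype=float).copy()
    for _ in range(iters):
        r = g(x)[idx]
        if np.linalg.norm(r) < tol:
            break
        # finite-difference Jacobian of the restricted residual
        J = np.zeros((len(idx), len(idx)))
        h = 1e-7
        for j, col in enumerate(idx):
            xp = x.copy()
            xp[col] += h
            J[:, j] = (g(xp)[idx] - r) / h
        d = np.linalg.solve(J, -r)
        # backtracking: halve the step until the restricted residual decreases
        alpha = 1.0
        while alpha > 1e-8:
            xt = x.copy()
            xt[idx] += alpha * d
            if np.linalg.norm(g(xt)[idx]) < np.linalg.norm(r):
                break
            alpha *= 0.5
        x[idx] += alpha * d
    return x

# Toy system (illustrative): correct only the "bad" component x0 in the
# subspace {0}; the solution has x0 = ln 2 with x1 left untouched.
g = lambda x: np.array([np.exp(x[0]) - 2.0, x[1] - 1.0])
x = newton_backtracking_subspace(g, [0.0, 1.0], idx=[0])
```

The choice of which components enter `idx` corresponds to the selection of S_b and S_g mentioned in the excerpt, which is the key design decision of the nonlinear elimination step.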