2014
DOI: 10.1109/tnnls.2013.2277601
A Constrained Backpropagation Approach for the Adaptive Solution of Partial Differential Equations

Abstract: This paper presents a constrained backpropagation (CPROP) methodology for solving nonlinear elliptic and parabolic partial differential equations (PDEs) adaptively, subject to changes in the PDE parameters or external forcing. Unlike existing methods based on penalty functions or Lagrange multipliers, CPROP solves the constrained optimization problem associated with training a neural network to approximate the PDE solution by means of direct elimination. As a result, CPROP reduces the dimensionality of the opt…
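The direct-elimination idea from the abstract can be illustrated on a toy problem. The sketch below (not the paper's full adaptive CPROP algorithm) solves u'(x) = -u(x), u(0) = 1 on [0, 1] with a single-hidden-layer tanh network whose hidden weights are frozen at random values, so the output weights enter linearly: the initial condition becomes a linear equality constraint that is eliminated exactly, and the remaining weights are fit by unconstrained least squares on the ODE residual. All names and parameter choices here are illustrative assumptions.

```python
# Direct elimination on a toy ODE: u'(x) = -u(x), u(0) = 1, exact solution e^{-x}.
# The network is u(x) = sum_j v_j tanh(w_j x + b_j) with w, b frozen, so both the
# residual and the initial-condition constraint are linear in v.
import numpy as np

rng = np.random.default_rng(0)
H = 20                                  # hidden units (illustrative choice)
w = rng.uniform(-2.0, 2.0, H)           # frozen hidden weights
b = rng.uniform(0.5, 1.5, H)            # frozen biases, kept away from zero

x = np.linspace(0.0, 1.0, 50)           # collocation points
phi = np.tanh(np.outer(x, w) + b)       # phi[i, j] = tanh(w_j x_i + b_j)
dphi = w * (1.0 - phi**2)               # analytic d(phi)/dx

# Residual of u' + u = 0 is linear in the output weights v: R = A @ v.
A = dphi + phi

# Constraint u(0) = 1 is linear: c @ v = 1, with c_j = tanh(b_j).
c = np.tanh(b)

# Direct elimination: solve the constraint for v_0 and substitute,
# v_0 = (1 - c[1:] @ v[1:]) / c[0], leaving an unconstrained least-squares
# problem in the remaining output weights.
B = A[:, 1:] - np.outer(A[:, 0], c[1:] / c[0])
d = A[:, 0] / c[0]
z, *_ = np.linalg.lstsq(B, -d, rcond=None)
v = np.concatenate([[(1.0 - c[1:] @ z) / c[0]], z])

u = phi @ v                             # network solution at collocation points
print("u(0) =", float(c @ v))           # constraint holds by construction
print("max error vs exp(-x):", float(np.max(np.abs(u - np.exp(-x)))))
```

Because the constraint is eliminated rather than penalized, u(0) = 1 holds to machine precision regardless of how well the residual is minimized; a penalty method would trade constraint satisfaction against residual accuracy.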

Cited by 52 publications (45 citation statements). References 19 publications.
“…In the context of PDEs, single-hidden-layer ANNs have traditionally been used to solve PDEs, since one hidden layer with sufficiently many neurons is sufficient for approximating any function, and all gradients that are needed can be computed in analytical closed form [21,22,26]. More recently there is a limited but emerging literature on the use of deep ANNs to solve PDEs [31,30,7,4,29,8,33]. In general, ANNs have the benefit that they are smooth, analytical functions which can be evaluated at any point inside, or even outside, the domain without reconstruction.…”
Section: Introduction
confidence: 99%
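The quoted point that all needed gradients of a single-hidden-layer network are available in analytical closed form can be checked directly. The sketch below (names and values are illustrative) evaluates N(x) = Σ_j v_j tanh(w_j x + b_j) together with its closed-form first and second derivatives, and compares N' against a central finite difference:

```python
# Closed-form spatial derivatives of a single-hidden-layer tanh network,
# as needed to assemble a PDE residual.
import numpy as np

rng = np.random.default_rng(1)
H = 5
w, b, v = rng.standard_normal(H), rng.standard_normal(H), rng.standard_normal(H)

def N(x):
    return float(v @ np.tanh(w * x + b))

def dN(x):                      # N'(x), using d/dz tanh(z) = 1 - tanh(z)^2
    t = np.tanh(w * x + b)
    return float(v @ (w * (1.0 - t**2)))

def d2N(x):                     # N''(x) = sum_j v_j w_j^2 * (-2 t_j (1 - t_j^2))
    t = np.tanh(w * x + b)
    return float(v @ (w**2 * (-2.0 * t * (1.0 - t**2))))

x0, h = 0.3, 1e-6
fd = (N(x0 + h) - N(x0 - h)) / (2 * h)   # central finite difference
print(abs(dN(x0) - fd))                  # should agree to high precision
```

Having exact derivatives avoids both finite-difference truncation error and any mesh: the residual can be evaluated at arbitrary points, which is what the quoted statement is highlighting.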
“…In the first phase, a multi-layered perceptron (MLP) network based on a gradient descent learning algorithm is used by the neuro-fuzzy model to adapt the parameters of the fuzzy model [16]. Learning from data and approximate reasoning are simplified by this architecture, as well as knowledge acquisition.…”
Section: A Hybrid Neural Fuzzy Inference System
confidence: 99%
“…In the learning phase, the neuro-fuzzy model in the HyFIS uses a multilayered perceptron (MLP) network based on a gradient descent learning algorithm for adapting the parameters of the fuzzy model [13]. The architecture simplifies learning from data and approximate reasoning, as well as knowledge acquisition.…”
Section: Hybrid Neural Fuzzy Inference System (HyFIS)
confidence: 99%
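Both quoted passages describe the HyFIS learning phase as gradient-descent adaptation of model parameters through an MLP. A minimal sketch of that update rule is shown below: plain batch gradient descent on squared error for a one-hidden-layer tanh network, with the backward pass written out by hand. The target function and all hyperparameters are illustrative; the fuzzy-model specifics of HyFIS are beyond this snippet.

```python
# Batch gradient descent on a one-hidden-layer tanh MLP, with manual backprop.
import numpy as np

rng = np.random.default_rng(2)
X = np.linspace(-1.0, 1.0, 20)
Y = np.sin(np.pi * X)                  # toy target to adapt toward

H = 8
W1, b1 = rng.standard_normal(H), rng.standard_normal(H)
W2 = rng.standard_normal(H) * 0.1
lr = 0.05

def forward(x):
    h = np.tanh(np.outer(x, W1) + b1)  # hidden activations, shape (n, H)
    return h, h @ W2                   # network output, shape (n,)

loss0 = float(np.mean((forward(X)[1] - Y) ** 2))
for _ in range(500):
    h, out = forward(X)
    err = out - Y                      # dL/d(out), up to a constant factor
    gW2 = h.T @ err / len(X)           # output-layer gradient
    gh = np.outer(err, W2) * (1 - h**2)  # backprop through tanh
    gW1 = X @ gh / len(X)              # hidden-weight gradient
    gb1 = gh.mean(axis=0)              # hidden-bias gradient
    W2 -= lr * gW2; W1 -= lr * gW1; b1 -= lr * gb1
loss1 = float(np.mean((forward(X)[1] - Y) ** 2))
print(loss0, "->", loss1)              # squared error should decrease
```

In HyFIS the quantities being adapted are fuzzy membership and rule parameters rather than generic MLP weights, but the update mechanics (forward pass, error backpropagation, gradient step) are the same as sketched here.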