2022
DOI: 10.1016/j.jcp.2022.111080

Adaptive deep density approximation for Fokker-Planck equations

Cited by 21 publications (10 citation statements)
References 38 publications
“…Looking more closely, the large errors are concentrated around the location of singularity of u, i.e., the curve of active constraints {x : u(x(µ)) = µ}, except for the case µ = 20 where the inequality constraint is nonactive, keeping the smoothness of the optimal control function. Note that adaptive sampling strategies [51,50,13] may be used to improve the accuracy in the singularity region, which will be left for future study.…”
Section: Test
confidence: 99%
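The adaptive sampling strategy this citation alludes to can be sketched in a few lines: draw candidate points, score them with an error indicator, and keep the highest-scoring ones so that new training points concentrate near the singularity. The indicator below and all numerical values are illustrative assumptions, not the schemes of [51,50,13].

```python
import numpy as np

# Hypothetical error indicator: large near a "singularity" at x = 0.5,
# standing in for the PDE residual of a trained surrogate model.
def error_indicator(x):
    return 1.0 / (np.abs(x - 0.5) + 1e-2)

rng = np.random.default_rng(0)

def adaptive_sample(n_new, n_candidates=10_000):
    """Draw candidates uniformly on [0, 1]; keep the n_new points with the
    largest error indicator, so samples cluster where the error is large."""
    cand = rng.uniform(0.0, 1.0, n_candidates)
    idx = np.argsort(error_indicator(cand))[-n_new:]
    return cand[idx]

pts = adaptive_sample(100)
print(pts.min(), pts.max())  # the selected points cluster around x = 0.5
```

In practice the indicator would come from the residual of the trained network rather than a closed-form expression, but the select-by-indicator step is the same.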
“…KRnet provides a more expressive density model than real NVP for the same model size. More details about KRnet can be found in [23,24].…”
Section: KRnet Map
confidence: 99%
“…A certain structure needs to be introduced to determine a map. Some typical structures include the Knothe-Rosenblatt (K-R) rearrangement [19], neural ODE [20] and the composition of many simple maps used in flow-based deep generative models such as NICE [21], real NVP [22], KRnet [23,24,25], to name a few.…”
Section: Introduction
confidence: 99%
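The "composition of many simple maps" mentioned above can be illustrated with a minimal affine-coupling flow in the spirit of NICE/real NVP: each layer leaves half the coordinates unchanged and uses them to transform the other half, so the whole composition is invertible by construction. The fixed random linear conditioners below are a stand-in for the trained neural networks used in the actual models.

```python
import numpy as np

rng = np.random.default_rng(1)

class AffineCoupling:
    """One affine coupling layer: half the coordinates pass through
    unchanged and parameterize an affine transform of the other half."""

    def __init__(self, dim, flip):
        self.flip = flip
        half = dim // 2
        # Stand-in for the small neural nets of real NVP / KRnet:
        # fixed random linear maps producing log-scale s and shift t.
        self.Ws = rng.normal(scale=0.3, size=(half, half))
        self.Wt = rng.normal(scale=0.3, size=(half, half))

    def forward(self, x):
        x1, x2 = np.split(x, 2, axis=-1)
        if self.flip:                       # alternate which half is updated
            x1, x2 = x2, x1
        s, t = np.tanh(x1 @ self.Ws), x1 @ self.Wt
        y1, y2 = x1, x2 * np.exp(s) + t
        if self.flip:
            y1, y2 = y2, y1
        return np.concatenate([y1, y2], axis=-1)

    def inverse(self, y):
        y1, y2 = np.split(y, 2, axis=-1)
        if self.flip:
            y1, y2 = y2, y1
        s, t = np.tanh(y1 @ self.Ws), y1 @ self.Wt
        x1, x2 = y1, (y2 - t) * np.exp(-s)  # affine transform is invertible
        if self.flip:
            x1, x2 = x2, x1
        return np.concatenate([x1, x2], axis=-1)

# Compose several layers, alternating which half is transformed.
flow = [AffineCoupling(4, flip=(i % 2 == 1)) for i in range(4)]

x = rng.normal(size=(8, 4))
y = x
for layer in flow:
    y = layer.forward(y)
z = y
for layer in reversed(flow):
    z = layer.inverse(z)

print(np.max(np.abs(z - x)))  # round-trip error, numerically ~0
```

Because each layer's Jacobian is triangular, the log-determinant needed for density evaluation is just the sum of the log-scales, which is what makes these maps attractive for density approximation.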
“…While ELM emerged nearly two decades ago, the investigation of this technique for the numerical solution of differential equations has appeared only quite recently, alongside the proliferation of deep neural network (DNN) based PDE solvers in the past few years (see e.g. [50,47,16,61,22,8,30,57,55,11,38,53,34,56], among many others). In [62,52,36,37] the ELM technique has been used for solving linear ordinary or partial differential equations (ODEs/PDEs) with single hidden-layer feedforward neural networks, in which certain polynomials (e.g.…”
Section: Introduction
confidence: 99%
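The ELM idea described in this citation — a single hidden layer with fixed random weights, so that only the linear output coefficients are trained — can be sketched on a toy model problem. The ODE u'(x) = u(x), u(0) = 1 below is an illustrative assumption, not an example from the cited works; the collocation conditions reduce to a linear least-squares problem for the output coefficients.

```python
import numpy as np

rng = np.random.default_rng(2)

# Single hidden layer with FIXED random weights (the ELM ansatz):
# u(x) = sum_j c_j * tanh(w_j * x + b_j); only c is solved for.
M = 50                          # hidden neurons
w = rng.uniform(-5, 5, M)       # fixed random hidden weights
b = rng.uniform(-5, 5, M)       # fixed random hidden biases

def phi(x):
    """Hidden-layer features tanh(w x + b), shape (len(x), M)."""
    return np.tanh(np.outer(x, w) + b)

def dphi(x):
    """Derivative of the features w.r.t. x."""
    return (1.0 - np.tanh(np.outer(x, w) + b) ** 2) * w

x = np.linspace(0.0, 1.0, 100)  # collocation points

# Toy model problem (an assumption for illustration):
#   u'(x) = u(x) on [0, 1], u(0) = 1, exact solution exp(x).
# Residual rows enforce u'(x_i) - u(x_i) = 0; the last row enforces u(0) = 1.
A = np.vstack([dphi(x) - phi(x), phi(np.array([0.0]))])
rhs = np.concatenate([np.zeros(len(x)), [1.0]])
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)

u = phi(x) @ c
err = np.max(np.abs(u - np.exp(x)))
print(f"max error vs exp(x): {err:.2e}")
```

Because the hidden weights never change, there is no gradient-based training loop at all; the entire "training" is one `lstsq` call, which is what makes ELM-type solvers fast for linear ODEs/PDEs.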