2017 American Control Conference (ACC)
DOI: 10.23919/acc.2017.7963427
Deep reinforcement learning for partial differential equation control

Cited by 43 publications (35 citation statements) | References 16 publications
“…These issues can be partially addressed by building a hybrid architecture of machine learning and PDE, in which machine learning helps predict or resolve such unknowns. For example, PDE control can be formulated as a reinforcement learning problem ( Farahmand et al., 2017 ). The most extreme condition is that the form of coefficient/equation is unknown, but it can still be learned by machine learning purely from harnessing the experimental and simulation data.…”
Section: Knowledge and Its Representations
confidence: 99%
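The statement above notes that PDE control can be formulated as a reinforcement learning problem. A minimal sketch of that formulation, assuming a toy 1-D heat equation with a controllable boundary (the grid size, diffusivity, and reward are illustrative choices, not the setup from Farahmand et al., 2017):

```python
import numpy as np

class HeatControlEnv:
    """Toy RL environment: steer a 1-D heat equation toward a target profile.

    A hedged sketch of the "PDE control as RL" formulation: the state is
    the discretized temperature field, the action is a boundary value,
    and the reward penalizes distance to the target profile.
    """

    def __init__(self, n=32, alpha=0.1, dt=0.01, target=None):
        self.n, self.alpha, self.dt = n, alpha, dt
        self.target = np.zeros(n) if target is None else target
        self.reset()

    def reset(self):
        # Random initial temperature field serves as the initial RL state.
        self.u = np.random.rand(self.n)
        return self.u.copy()

    def step(self, action):
        # Explicit finite-difference update of u_t = alpha * u_xx
        # (periodic stencil for brevity).
        lap = np.roll(self.u, 1) - 2.0 * self.u + np.roll(self.u, -1)
        self.u = self.u + self.dt * self.alpha * lap
        self.u[0] = action  # actuate the left boundary
        reward = -float(np.linalg.norm(self.u - self.target))
        return self.u.copy(), reward
```

An RL agent would then interact with `reset()`/`step()` exactly as with any finite-dimensional environment; the PDE is hidden inside the transition dynamics.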
“…Recently, several new stochastic approximation methods for certain classes of high-dimensional nonlinear PDEs have been proposed and studied in the scientific literature. In particular, we refer, e.g., to [11,12,26,29,30,53] for BSDE-based approximation methods for PDEs in which nested conditional expectations are discretized through suitable regression methods, we refer, e.g., to [10,39,41,42] for branching diffusion approximation methods for PDEs, we refer, e.g., to [1][2][3][6][7][8]13,14,16,17,21,24,25,31,[34][35][36]40,43,48,50,52,[54][55][56][57][58]60,62,63] for deep learning based approximation methods for PDEs, and we refer to [4,5,20,28,46,47] for numerical simulations, approximation results, and extensions of the in…”
Section: Introduction
confidence: 99%
“…For MLP approximation methods it has been recently shown in [4,45,46] that such algorithms do indeed overcome the curse of dimensionality for certain classes of gradient-independent PDEs. Numerical simulations for deep learning based approximation methods for nonlinear PDEs in high dimensions are very encouraging (see, e.g., the above named references [1][2][3][6][7][8]13,14,16,17,21,24,25,31,[34][35][36]40,43,48,50,52,[54][55][56][57][58]60,62,63]) but so far there is only partial error analysis available for such algorithms (which, in turn, is strongly based on the above-mentioned error analysis for the MLP approximation method; cf. [44] and, e.g., [9,23,32,33,36,49,51,61,62]).…”
Section: Introduction
confidence: 99%
“…The majority of recent methods for control of spatio-temporal systems typically reduce PDEs into a finite set of Ordinary Differential Equations (ODEs) through Reduced Order Models (ROMs), and apply standard finite-dimensional optimization methods which result in algorithms specific to the ROM used. Within this paradigm, deep learning methods have successfully been applied on policy networks in the finite dimensional setting for controlling Navier-Stokes systems [3,4,5,6], for soft robotic systems [7,8], as well as for many other systems [9]. These methods are often specific to a discretization scheme and represent a discretize-then-optimize approach.…”
Section: Introduction
confidence: 99%
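The discretize-then-optimize pattern described in the last statement can be sketched as follows: spatially discretize a 1-D diffusion PDE into the ODE system du/dt = A u + control, then apply a finite-dimensional feedback law. The proportional gain `K`, grid size, and periodic stencil are illustrative assumptions, not values from any cited work:

```python
import numpy as np

def build_ode_system(n=16, alpha=0.1, dx=1.0):
    """Finite-difference ROM: return A so du/dt = A @ u approximates u_t = alpha * u_xx."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = -2.0
        A[i, (i - 1) % n] = 1.0  # periodic boundary for brevity
        A[i, (i + 1) % n] = 1.0
    return (alpha / dx**2) * A

def controlled_step(u, A, target, K=0.5, dt=0.01):
    """One explicit Euler step of the ODE system plus proportional feedback."""
    control = -K * (u - target)  # standard finite-dimensional feedback law
    return u + dt * (A @ u + control)

# Drive a random initial field toward the zero profile.
A = build_ode_system()
u = np.random.rand(16)
target = np.zeros(16)
for _ in range(1000):
    u = controlled_step(u, A, target)
```

The key point the statement makes is visible here: once the PDE is reduced to `A`, everything downstream is ordinary finite-dimensional control, and the resulting algorithm is tied to the chosen discretization.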