2022
DOI: 10.1098/rspa.2022.0297
Data-driven control of spatiotemporal chaos with reduced-order neural ODE-based models and reinforcement learning

Abstract: Deep reinforcement learning (RL) is a data-driven method capable of discovering complex control strategies for high-dimensional systems, making it promising for flow control applications. In particular, the present work is motivated by the goal of reducing energy dissipation in turbulent flows, and the example considered is the spatiotemporally chaotic dynamics of the Kuramoto–Sivashinsky equation (KSE). A major challenge associated with RL is that substantial training data must be generated by repeatedly inte…
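The KSE mentioned in the abstract can be integrated with a simple pseudospectral scheme. The sketch below assumes the standard form u_t = −u·u_x − u_xx − u_xxxx with periodic boundaries; the domain size, resolution, and time step are illustrative choices, not the paper's actual settings, and the integrator (exact linear propagator plus explicit Euler on the dealiased nonlinear term) is a minimal stand-in for the production solvers used in such studies:

```python
import numpy as np

def kse_step(u, dt, k, lin_fac, mask):
    """One integrating-factor Euler step of u_t = -u*u_x - u_xx - u_xxxx."""
    v = np.fft.fft(u)
    # -u*u_x = -(1/2)(u^2)_x in Fourier space, with 2/3-rule dealiasing
    nonlin = -0.5j * k * np.fft.fft(u * u) * mask
    # linear terms (+k^2 - k^4 in Fourier space) handled exactly via lin_fac
    v = lin_fac * (v + dt * nonlin)
    return np.real(np.fft.ifft(v))

def simulate_kse(L=22.0, N=64, dt=0.01, steps=5000):
    """Integrate the KSE on a periodic domain of length L to t = steps*dt."""
    x = L * np.arange(N) / N
    u = 0.1 * np.cos(2 * np.pi * x / L) * (1 + np.sin(2 * np.pi * x / L))
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # wavenumbers (rad/length)
    lin_fac = np.exp(dt * (k**2 - k**4))         # exact linear propagator
    mask = np.abs(k) < (2.0 / 3.0) * np.max(np.abs(k))
    for _ in range(steps):
        u = kse_step(u, dt, k, lin_fac, mask)
    return x, u
```

With L = 22 the KSE is in its well-known spatiotemporally chaotic regime, which is what makes it a popular testbed for control.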

Cited by 14 publications (7 citation statements)
References 49 publications
“…Another notable example is Ref. [48], where a Couette flow is controlled by means of two streamwise parallel slots. Note that in the latter case the training of the DRL agent is performed in a reduced-order model of the problem and then applied to the actual case.…”
Section: Related Work
confidence: 99%
“…Another notable example is Ref. [42], where a Couette flow is controlled by means of two streamwise parallel slots. Note that in the latter case the training of the DRL agent is performed in a reduced-order model of the problem and then applied to the actual case.…”
Section: Reinforcement Learning in Fluid Mechanics
confidence: 99%
“…To circumvent this bottleneck, we employ a method denoted "Data-driven Manifold Dynamics for RL" [42], or "DManD-RL" for short, with some modification. This framework consists of two main learning objectives, which can be broken down into five steps, illustrated in Fig.…”
Section: DManD Modeling Framework
confidence: 99%
“…In order to overcome the high computational cost of RL training in this environment, in this work we replace the high-resolution simulation with an accurate low-dimensional surrogate model, aiming to dramatically reduce the time required to train the control policy. We showed in [42] that this data-driven model-based RL approach, which we refer to as "Data-Driven Manifold Dynamics" RL (DManD-RL), works well for controlling spatiotemporally chaotic dynamics in the Kuramoto–Sivashinsky equation. For further discussion on the various types of model-based RL, we refer the reader to Zeng et al. [42].…”
Section: Introduction
confidence: 99%
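The workflow quoted above — collect data from an expensive simulation, fit a cheap reduced-order surrogate of its dynamics, then train the controller entirely inside the surrogate — can be caricatured in a few lines. Everything below is a hypothetical toy (a 2D linear "full" environment, a least-squares surrogate, and random search over linear feedback gains), not the paper's neural-ODE manifold model or its actual RL algorithm; it only illustrates the structure of surrogate-based policy training:

```python
import numpy as np

rng = np.random.default_rng(0)

def full_env_step(x, a):
    """Stand-in for the expensive full simulation (a toy 2D linear system)."""
    A = np.array([[0.99, 0.05], [-0.05, 0.99]])
    return A @ x + np.array([0.0, 0.1]) * a

# 1) Collect trajectories from the "full" environment with random actions.
X, Y = [], []
x = rng.standard_normal(2)
for _ in range(500):
    a = rng.uniform(-1.0, 1.0)
    y = full_env_step(x, a)
    X.append(np.append(x, a))
    Y.append(y)
    x = y

# 2) Fit a cheap surrogate of the dynamics: y ~ W.T applied to [x; a].
W, *_ = np.linalg.lstsq(np.array(X), np.array(Y), rcond=None)
surrogate = lambda x, a: np.append(x, a) @ W

# 3) Train the policy entirely inside the surrogate (random search here,
#    standing in for a proper RL algorithm).
def rollout_cost(K, steps=100):
    x = np.array([1.0, 0.0])
    cost = 0.0
    for _ in range(steps):
        a = float(np.clip(-K @ x, -1.0, 1.0))
        x = surrogate(x, a)
        cost += x @ x
    return cost

best_K = np.zeros(2)
best_c = rollout_cost(best_K)
for _ in range(200):
    K = rng.uniform(-2.0, 2.0, size=2)
    c = rollout_cost(K)
    if c < best_c:
        best_K, best_c = K, c
```

The payoff mirrors the quoted motivation: once the surrogate is fit, each policy-evaluation rollout costs a few matrix-vector products instead of a full simulation.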