2022
DOI: 10.1016/j.compfluid.2021.105239

A hybrid partitioned deep learning methodology for moving interface and fluid–structure interaction

Cited by 25 publications (8 citation statements)
References 38 publications
“…Typically, the high-dimensional fluid and structure dynamics are reduced to low-dimensional latent spaces using techniques like POD or convolutional auto-encoders, and the latent fluid and structure dynamics are learned by separate DNNs [23]. Physical interface constraints, such as moving interfaces (solid-to-fluid coupling) and fluid forces (fluid-to-solid coupling), are often used to couple the fluid and structure DNNs; these constraints can be represented using methods like level-set functions [23][24][25], immersed boundary method (IBM) masks [26], or direct forcing terms [27]. By doing so, both the structural responses and the fluid dynamics can be learned in a consistent manner.…”
Section: Introduction
confidence: 99%
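The partitioned coupling described in this excerpt can be sketched in code: two separate networks advance the fluid and structure latent states, exchanging a fluid force (fluid-to-solid) and an interface displacement (solid-to-fluid) at every step. This is a minimal sketch, not the paper's implementation; the class name LatentPropagator, the latent dimensions, and the toy data are assumptions.

```python
# Minimal sketch (assumed names/sizes) of partitioned latent-space coupling:
# separate fluid and structure propagators exchange interface quantities each step.
import torch
import torch.nn as nn

LATENT_F, LATENT_S, FORCE_DIM, DISP_DIM = 16, 4, 2, 2

class LatentPropagator(nn.Module):
    """One-step predictor for a latent state, conditioned on coupling data."""
    def __init__(self, latent_dim, coupling_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + coupling_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, latent_dim),
        )
    def forward(self, z, coupling):
        return z + self.net(torch.cat([z, coupling], dim=-1))  # residual update

fluid_net  = LatentPropagator(LATENT_F, DISP_DIM)   # solid-to-fluid: interface motion in
solid_net  = LatentPropagator(LATENT_S, FORCE_DIM)  # fluid-to-solid: fluid force in
force_head = nn.Linear(LATENT_F, FORCE_DIM)         # fluid force decoded from fluid latent
disp_head  = nn.Linear(LATENT_S, DISP_DIM)          # interface displacement from solid latent

z_f = torch.zeros(1, LATENT_F)  # latent fluid state (e.g., from POD or a conv encoder)
z_s = torch.zeros(1, LATENT_S)  # latent structure state

for step in range(5):
    force = force_head(z_f)        # fluid-to-solid coupling variable
    disp  = disp_head(z_s)         # solid-to-fluid coupling (moving interface)
    z_f   = fluid_net(z_f, disp)   # advance fluid latent given interface motion
    z_s   = solid_net(z_s, force)  # advance structure latent given fluid load
```

In an actual model, the interface constraint itself (level-set field, IBM mask, or forcing term) would be part of the fluid network's input rather than a bare displacement vector as assumed here.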
“…The construction of heavily over-parametrized functions by deep neural networks relies on the foundations of the Kolmogorov-Arnold representation theorem [35] and the universal approximation of functions via neural networks [36,37]. For the data-driven modeling of nonlinear PDEs, deep neural network architectures such as the convolutional recurrent autoencoder (CRAN) can be efficient and useful for constructing low-dimensional learning models [21,27,38]. CRAN is a fully data-driven approach in which both the low-dimensional representation of the state and its time evolution are learned using deep learning algorithms.…”
Section: Introduction
confidence: 99%
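As a rough illustration of the autoencoder half of a CRAN-type model, the sketch below compresses a flow snapshot on an assumed 64x64 grid to a low-dimensional latent vector and reconstructs it. The ConvAutoencoder name, channel counts, and layer sizes are illustrative assumptions, not the architecture of the cited works.

```python
# Minimal sketch (assumed grid size and channels) of a convolutional autoencoder
# that compresses a flow snapshot to a low-dimensional latent vector.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self, channels=1, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),         # 32 -> 16
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),         # 16 -> 8
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(16, channels, 4, stride=2, padding=1),       # 32 -> 64
        )
    def forward(self, x):
        z = self.encoder(x)          # low-dimensional latent representation
        return self.decoder(z), z

model = ConvAutoencoder()
snapshot = torch.randn(8, 1, 64, 64)            # batch of flow-field snapshots (toy data)
recon, latent = model(snapshot)
loss = nn.functional.mse_loss(recon, snapshot)  # reconstruction objective
```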
“…CRAN is a fully data-driven approach in which both the low-dimensional representation of the state and its time evolution are learned using deep learning algorithms. Convolutional recurrent autoencoders have been shown to perform well for unsteady flow and fluid–structure phenomena [22,27,38]. On the other hand, the ability of the current CRAN architecture to learn PDEs with a dominant hyperbolic character relies on learning a low-dimensional manifold with a convolutional autoencoder and evolving these low-dimensional latent representations in time via an RNN-LSTM, which can make it difficult to generalize across the various physical phenomena characterized by hyperbolic PDEs.…”
Section: Introduction
confidence: 99%
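The recurrent half of such a model, where the encoded latent vectors are evolved in time by an RNN-LSTM, can be sketched as below. The LatentLSTM name, latent dimension, and sequence length are assumptions chosen only to show the one-step propagation.

```python
# Minimal sketch (assumed dimensions) of latent time evolution with an LSTM,
# i.e. the recurrent propagator in a CRAN-type surrogate model.
import torch
import torch.nn as nn

latent_dim, hidden_dim, seq_len, batch = 32, 64, 20, 8

class LatentLSTM(nn.Module):
    """Predict the next latent state from a history of latent states."""
    def __init__(self, latent_dim, hidden_dim):
        super().__init__()
        self.lstm = nn.LSTM(latent_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, latent_dim)
    def forward(self, z_seq):
        out, _ = self.lstm(z_seq)        # out: (batch, seq_len, hidden_dim)
        return self.head(out[:, -1])     # next latent state: (batch, latent_dim)

propagator = LatentLSTM(latent_dim, hidden_dim)
z_history = torch.randn(batch, seq_len, latent_dim)  # latents from the conv encoder (toy data)
z_next = propagator(z_history)                       # one-step latent forecast
# Rolling this forecast forward and decoding each predicted latent with the
# convolutional decoder yields a fully data-driven surrogate for the unsteady flow.
```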
“…The methodology alleviates the need to know the underlying differential equation and the computational burden of numerical methods, since neural networks with implicit bias can be efficient for inference. The CRAN methodology has been successfully applied to fluid–structure interaction [13] and underwater radiated noise [27].…”
Section: Introduction
confidence: 99%