2021
DOI: 10.1007/978-3-030-86365-4_13
Multi-resolution Graph Neural Networks for PDE Approximation

Abstract: Deep Learning algorithms have recently received growing interest for learning, from examples of existing solutions, accurate approximations of the solutions of complex physical problems, in particular by relying on Graph Neural Networks applied to a mesh of the domain at hand. On the other hand, state-of-the-art deep approaches in image processing use different resolutions to better handle the different scales of images, thanks to pooling and up-scaling operations. But no such operators can be easily defi…

Cited by 4 publications (6 citation statements) | References 10 publications
“…In the field of numerical simulations, several coarsening algorithms have been developed to guarantee the stability and accuracy of the simulations. Liu et al. (2021) and Belbute-Peres et al. (2020) used coarsening techniques from numerical simulations and then interpolated the node attributes onto the coarsened set of nodes. We found that our cell-grid coarsening and learnt MP from the high- to the low-resolution graph (see section 3.2) perform significantly better than such a coarsening-interpolation approach.…”
Section: Coarsening Comparison
Confidence: 99%
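The coarsening-interpolation baseline described in this statement can be illustrated with a minimal numpy sketch. This is an assumption-laden illustration, not the authors' implementation: the `cell_size` parameter and the mean-based attribute interpolation are hypothetical simplifications of cell-grid coarsening.

```python
import numpy as np

def cell_coarsen(pos, attr, cell_size):
    """Sketch of cell-grid coarsening: nodes falling into the same
    grid cell are merged into one coarse node whose position and
    attributes are the cell averages (a simple stand-in for the
    interpolation step; `cell_size` sets the coarse resolution)."""
    cells = np.floor(pos / cell_size).astype(int)   # grid-cell index per node
    key_to_idx = {}
    inverse = np.empty(len(pos), dtype=int)         # fine node -> coarse node
    for i, c in enumerate(map(tuple, cells)):
        inverse[i] = key_to_idx.setdefault(c, len(key_to_idx))
    n_coarse = len(key_to_idx)
    coarse_pos = np.zeros((n_coarse, pos.shape[1]))
    coarse_attr = np.zeros((n_coarse, attr.shape[1]))
    np.add.at(coarse_pos, inverse, pos)             # scatter-add, then average
    np.add.at(coarse_attr, inverse, attr)
    counts = np.bincount(inverse, minlength=n_coarse).astype(float)
    return coarse_pos / counts[:, None], coarse_attr / counts[:, None], inverse
```

The returned `inverse` array doubles as the pooling assignment a learned message-passing layer would use instead of plain averaging, which is the distinction the quoted comparison draws.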
“…As a comparison to Pfaff et al. (2021), a GNN with 16 sequential MP layers (GN-Blocks) results in an MAE of 5.852 × 10⁻² on our NSMidRe dataset; whereas MultiScaleGNN with the same number and type of MP layers, but distributed among 3 scales, results in a lower MAE of 3.081 × 10⁻². A comparison of our coarsening/pooling algorithm to Liu et al. (2021) is included in Appendix C. Table 1: MAE ×10⁻² on the advection testing datasets for MultiScaleGNN models with L = 1, 2, 3, 4…”
Confidence: 99%
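The idea of distributing the same MP-layer budget across several scales, as in the comparison above, can be sketched as a simple V-cycle over precomputed graphs. This is a minimal numpy illustration under assumed inputs: dense adjacency matrices per scale (`adjs`), hypothetical pooling assignment arrays (`pools`), and a trivial averaging `mp_layer` standing in for a learned GN-Block.

```python
import numpy as np

def mp_layer(x, adj):
    """One toy message-passing step on a dense adjacency matrix:
    each node mixes its state with the mean of its neighbours."""
    deg = np.clip(adj.sum(1, keepdims=True), 1.0, None)
    return 0.5 * x + 0.5 * (adj @ x) / deg

def multiscale_process(x, adjs, pools, layers_per_scale=2):
    """V-cycle sketch: run MP at each scale, mean-pool to the next
    coarser graph, process the coarsest scale, then unpool back up
    with additive skip connections. `pools[s]` maps each node at
    scale s to its coarse node at scale s+1 (assumed precomputed)."""
    states = [x]
    for s, pool in enumerate(pools):                       # downward pass
        for _ in range(layers_per_scale):
            states[s] = mp_layer(states[s], adjs[s])
        n_coarse = pool.max() + 1
        coarse = np.zeros((n_coarse, x.shape[1]))
        np.add.at(coarse, pool, states[s])                 # mean pooling
        counts = np.bincount(pool, minlength=n_coarse).astype(float)
        states.append(coarse / counts[:, None])
    for _ in range(layers_per_scale):                      # coarsest scale
        states[-1] = mp_layer(states[-1], adjs[len(pools)])
    for s in range(len(pools) - 1, -1, -1):                # upward pass
        states[s] = states[s] + states[s + 1][pools[s]]    # unpool + skip
        for _ in range(layers_per_scale):
            states[s] = mp_layer(states[s], adjs[s])
    return states[0]
```

With three scales and `layers_per_scale=2`, this spends the same number of MP steps as a single-scale stack while letting information travel across the mesh through the coarse graphs, which is the intuition behind the MAE gap reported in the quote.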
“…To the best of our knowledge, Alet et al. (2019) were the first to explore the use of GNNs to infer continuum physics by solving the Poisson PDE, and subsequently Pfaff et al. (2021) proposed a mesh-based GNN to simulate a wide range of continuum dynamics. Multi-resolution graph models were later introduced by Li et al. (2020); Lino et al. (2021); Liu et al. (2021) and Chen et al. (2021).…”
Section: Related Work
Confidence: 99%
“…MultiScaleGNNg is a modified version of MultiScaleGNN that follows the pooling and unpooling used by Liu et al. (2021). For this model, the low-resolution sets of nodes were generated using Guillard's coarsening algorithm (Guillard, 1993), as in REMuS-GNN.…”
Section: Models Details
Confidence: 99%
“…In this work, we use a Cartesian grid to discretize each subdomain. However, GNNs [48,49,50,51,52], as well as FEM, use other discretizations such as triangular or polygonal meshes. In the future, these ideas can be used to handle unstructured meshes in subdomains. Additionally, other ML methods improve the learning of ML models by compressing PDE solutions onto lower-dimensional manifolds.…”
Section: Significant Contributions Of This Work
Confidence: 99%