2022
DOI: 10.48550/arxiv.2207.01045
Preprint
FE^ANN – An efficient data-driven multiscale approach based on physics-constrained neural networks and automated data mining

Abstract: Herein, we present a new data-driven multiscale framework called FE^ANN, which is based on two main keystones: the usage of physics-constrained artificial neural networks (ANNs) as macroscopic surrogate models and an autonomous data mining process. Our approach allows the efficient simulation of materials with complex underlying microstructures which reveal an overall anisotropic and nonlinear behavior on the macroscale. Thereby, we restrict ourselves to finite strain hyperelasticity problems for now. By using…

Cited by 3 publications (16 citation statements); references 65 publications (116 reference statements).
“…To remedy this weakness, a fairly new trend in NN-based constitutive modeling, and in scientific machine learning in general [41], is to include essential underlying physics in a strong form, e.g., by using adapted network architectures, or in a weak form, e.g., by modifying the loss term for the training [34]. These types of approaches, coined as physics-informed [22], mechanics-informed [4], physics-augmented [25], physics-constrained [19], or thermodynamics-based [37], enable an improvement of the extrapolation capability and the usage of sparse training data [22,27], which is particularly important when constitutive models are to be fitted to experimental data.…”
Section: Introduction
confidence: 99%
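The strong-form route mentioned above — building physics into the network architecture itself — can be illustrated with an input-convex network, a common choice for guaranteeing convexity of a learned potential. This is a minimal sketch in JAX, not the cited papers' implementation; the layer sizes and parameter layout are illustrative assumptions.

```python
# Sketch (hypothetical): a tiny input-convex neural network (ICNN).
# Convexity in the input x is enforced architecturally: the weights
# acting on the hidden "z-path" are made non-negative via softplus,
# and the activation (softplus) is convex and non-decreasing.
import jax
import jax.numpy as jnp


def icnn(params, x):
    (W0, b0), rest = params[0], params[1:]
    z = jax.nn.softplus(W0 @ x + b0)          # first layer: convex in x
    for Wz, Wx, b in rest:
        # softplus(Wz) >= 0 keeps the composition convex;
        # Wx @ x is an affine "passthrough" of the raw input
        z = jax.nn.softplus(jax.nn.softplus(Wz) @ z + Wx @ x + b)
    return z[0]                                # scalar output


# Illustrative parameters: one hidden layer of width 8 on a 3-d input
key = jax.random.PRNGKey(0)
k0, k1, k2 = jax.random.split(key, 3)
params = [
    (jax.random.normal(k0, (8, 3)), jnp.zeros(8)),
    (jax.random.normal(k1, (1, 8)), jax.random.normal(k2, (1, 3)), jnp.zeros(1)),
]
```

Because convexity is built into the weights and activations, it holds for any parameter values — the network cannot leave the admissible model class during training, which is the point of the strong-form approach.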
“…However, similar to the approaches [28,31,43] applied to anisotropic problems, the elastic potential is needed directly for training within [29,46]. In the meantime, NNs using invariants as inputs and the hyperelastic potential as output, thus also being a priori thermodynamically consistent, have become a fairly established approach [12,19,24,25,30,32,33,48]. Thereby, a more sophisticated training is applied, which allows the direct calibration of the network by tuples of stress and strain, i.e., the derivative of the energy with respect to the deformation is used in the loss term.…”
Section: Introduction
confidence: 99%
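The training scheme described above — an invariant-based network outputting a hyperelastic potential, calibrated on stress-strain tuples by differentiating the energy with respect to the deformation — can be sketched with automatic differentiation. This is a hedged illustration in JAX, assuming a first Piola-Kirchhoff stress measure and a toy two-invariant input; all names and sizes are assumptions, not the cited works' code.

```python
# Sketch (hypothetical): NN potential W(I1, J) with stress P = dW/dF,
# trained on (F, P) tuples so the energy itself is never needed as data.
import jax
import jax.numpy as jnp


def invariants(F):
    # isotropic invariants of the right Cauchy-Green tensor C = F^T F
    C = F.T @ F
    return jnp.array([jnp.trace(C), jnp.linalg.det(F)])


def potential(params, F):
    # small MLP mapping the invariants to a scalar strain-energy value
    x = invariants(F)
    for W, b in params[:-1]:
        x = jax.nn.softplus(W @ x + b)
    W, b = params[-1]
    return (W @ x + b)[0]


# first Piola-Kirchhoff stress as the gradient of the energy w.r.t. F
pk1 = jax.grad(potential, argnums=1)


def loss(params, F_batch, P_batch):
    # calibrate directly on stress-strain tuples: the derivative of
    # the energy enters the loss, not the (unobservable) energy
    P_pred = jax.vmap(lambda F: pk1(params, F))(F_batch)
    return jnp.mean((P_pred - P_batch) ** 2)


# Illustrative parameters: 2 invariants -> 8 hidden units -> 1 energy
key = jax.random.PRNGKey(0)
k0, k1 = jax.random.split(key)
params = [
    (0.1 * jax.random.normal(k0, (8, 2)), jnp.zeros(8)),
    (0.1 * jax.random.normal(k1, (1, 8)), jnp.zeros(1)),
]
```

Since the potential is a scalar function of invariants, thermodynamic consistency (stress derived from an energy) and isotropy hold a priori; the "more sophisticated training" the text refers to is exactly this differentiation inside the loss.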
“…This motivates the development of computationally feasible approaches in this realm. A recently emerging third alternative in literature to this end is employing data-driven surrogate models devising machine learning, see, e.g., [12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33] and the references therein. Deploying the training costs offline materializing simulation or experimental data, these models surpass conventional rule-based approaches by drastically reducing the computational cost required during the prediction phase [34,35,36].…”
Section: Introduction
confidence: 99%