2021
DOI: 10.1126/sciadv.abf3658

One for all: Universal material model based on minimal state-space neural networks

Abstract: Computational models describing the mechanical behavior of materials are indispensable when optimizing the stiffness and strength of structures. The use of state-of-the-art models is often limited in engineering practice due to their mathematical complexity, with each material class requiring its own distinct formulation. Here, we develop a recurrent neural network framework for material modeling by introducing “Minimal State Cells.” The framework is successfully applied to datasets representing four distinct …
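The abstract gives the core idea only at a high level. As a rough illustration, a "minimal state" recurrent cell can be pictured as an RNN whose recurrent state is deliberately kept low-dimensional (loosely analogous to internal state variables in plasticity) while the feed-forward layers that interpret and update it remain wide. The PyTorch sketch below is a guess at that shape; the class name, layer sizes, and update rule are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class MinimalStateCell(nn.Module):
    """Sketch of a "minimal state" RNN cell: the recurrent state is kept
    small (here 3 variables), while the feed-forward networks that
    interpret and update it can be arbitrarily wide. All sizes and the
    update rule are illustrative assumptions, not the published model."""

    def __init__(self, input_dim=1, state_dim=3, hidden_dim=64):
        super().__init__()
        # Interprets (current input, previous state) into hidden features.
        self.encode = nn.Sequential(
            nn.Linear(input_dim + state_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, hidden_dim), nn.Tanh(),
        )
        # Separate heads for the state update and the stress output.
        self.state_head = nn.Linear(hidden_dim, state_dim)
        self.stress_head = nn.Linear(hidden_dim, input_dim)

    def forward(self, d_strain, state):
        features = self.encode(torch.cat([d_strain, state], dim=-1))
        new_state = state + self.state_head(features)  # incremental update
        stress = self.stress_head(features)
        return stress, new_state

# Roll the cell over a loading path (batch of strain-increment sequences).
cell = MinimalStateCell()
path = torch.randn(8, 100, 1) * 1e-3      # (batch, time, strain increment)
state = torch.zeros(8, 3)                  # small, low-dimensional state
stresses = []
for t in range(path.shape[1]):
    stress, state = cell(path[:, t], state)
    stresses.append(stress)
stresses = torch.stack(stresses, dim=1)    # (batch, time, 1)
```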

Cited by 43 publications (22 citation statements)
References 15 publications
Citation types: 0 supporting, 14 mentioning, 0 contrasting

“…This suggests that there is no recognizable specific element of the trained RNN models (e.g., the central internal layers) that may be considered as the heart of the model that encodes the general notion of plasticity (which should feature the same parameters for a large variety of materials). While Bonatti and Mohr [27] demonstrated the interpretability of the model's state space, it appears to be difficult to separate the model architecture into general plasticity and material‐specific parts. We observed during transfer learning that fine‐tuning the first internal layers (which provide a first interpretation of the state variables) is more effective than fine‐tuning the last internal layers.…”
Section: Results (mentioning)
confidence: 99%
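The fine-tuning observation in the statement above translates directly into selective parameter freezing. A minimal sketch, assuming a stack of internal layers applied to the state variables; the three-layer stack and all sizes are hypothetical, not the cited model's actual layout.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the internal layers of a trained RNN material
# model: a stack of "interpretation" layers applied to the state variables.
internal = nn.Sequential(
    nn.Linear(3, 64), nn.Tanh(),    # first internal layer (interprets state)
    nn.Linear(64, 64), nn.Tanh(),   # central internal layer
    nn.Linear(64, 1),               # last internal layer (output side)
)

# Transfer learning as in the quoted observation: freeze everything,
# then fine-tune only the FIRST internal layer rather than the last.
for p in internal.parameters():
    p.requires_grad = False
for p in internal[0].parameters():
    p.requires_grad = True

optimizer = torch.optim.Adam(
    (p for p in internal.parameters() if p.requires_grad), lr=1e-4
)
```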
“…New types of RNNs are also designed for mechanics in applications for which traditional architectures are not suited. Bonatti and Mohr [27] developed minimal state cells (MSCs), which decouple the size of the memory state from the total parameter count of the RNN cell. To increase the robustness of the neural network response and to ensure self-consistency with respect to the input discretization, Bonatti and Mohr [28] proposed linearized minimal state cells (LMSCs).…”
Section: Introduction (mentioning)
confidence: 99%
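Self-consistency with respect to the input discretization means that refining the time discretization of a loading path (e.g., splitting every increment in two) should leave the predicted response unchanged. Below is a minimal sketch of such a check, using a generic toy cell that, unlike an LMSC, will typically fail it; the halve-the-increments test and all names and sizes are invented for illustration, not the LMSC construction itself.

```python
import torch
import torch.nn as nn

class ToyCell(nn.Module):
    """Generic (non-linearized) RNN cell used only to demonstrate the
    self-consistency test; it will generally FAIL the test, which is the
    shortcoming that linearized minimal state cells address."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 4))

    def forward(self, d_strain, state):
        out = self.net(torch.cat([d_strain, state], dim=-1))
        return out[:, :1], state + out[:, 1:]   # stress, updated state

def final_stress(cell, path):
    """Integrate a strain-increment path and return the final stress."""
    state = torch.zeros(path.shape[0], 3)
    stress = None
    for t in range(path.shape[1]):
        stress, state = cell(path[:, t], state)
    return stress

# Self-consistency check: splitting every strain increment into two
# half-increments should leave the predicted stress (nearly) unchanged.
cell = ToyCell()
path = torch.randn(4, 50, 1) * 1e-3
refined = (path / 2).repeat_interleave(2, dim=1)
print(torch.allclose(final_stress(cell, path),
                     final_stress(cell, refined), atol=1e-4))
```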
“…As one of the key subsectors of the manufacturing industry [67], the metal forming industry increasingly generates big data, which demands effective processing and characterisation to mine more insightful information. Thus, big data (BD) technology has been extensively studied and implemented in almost all stages of the metal forming process, ranging from customer services to supply chains [31], and from material preparation [68,69] to failure prediction and quality analysis [70,71].…”
Section: Big Data (mentioning)
confidence: 99%
“…In general, the state-of-the-art approaches either surrogate or completely bypass material models (Flaschel et al., 2021). Surrogating material models involves learning a mapping between strains and stresses using techniques ranging from piece-wise interpolation (Crespo et al., 2017; Sussman and Bathe, 2009) to Gaussian process regression (Rocha et al., 2021; Fuhg et al., 2022) and artificial neural networks (Ghaboussi et al., 1991; Fernández et al., 2021; Klein et al., 2022; Vlassis and Sun, 2021; Kumar et al., 2020; Bastek et al., 2022; Zheng et al., 2021; Mozaffar et al., 2019; Bonatti and Mohr, 2021; Vlassis et al., 2020; Kumar and Kochmann, 2021); the latter are particularly attractive because of their ability to efficiently and accurately learn from large and high-dimensional data.…”
Section: Introduction (mentioning)
confidence: 99%
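As a concrete instance of surrogating a material model, the sketch below fits a Gaussian process regressor to synthetic uniaxial strain-stress pairs with scikit-learn; the data-generating law, noise level, and kernel settings are all invented for illustration and stand in for real experimental data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic "measurements": a made-up nonlinear elastic law plus noise.
rng = np.random.default_rng(0)
strain = rng.uniform(0.0, 0.05, size=(40, 1))
stress = 200e3 * strain - 1.5e6 * strain**2 + rng.normal(0, 50, (40, 1))

# GP surrogate mapping strain -> stress, replacing a closed-form model.
# WhiteKernel absorbs the noise (variance 50**2 = 2500 assumed above).
gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=0.01) + WhiteKernel(noise_level=2500),
    normalize_y=True,
)
gp.fit(strain, stress.ravel())

strain_query = np.linspace(0.0, 0.05, 5).reshape(-1, 1)
mean, std = gp.predict(strain_query, return_std=True)  # prediction + uncertainty
print(np.c_[strain_query.ravel(), mean, std])
```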