2023
DOI: 10.1016/j.actbio.2023.01.055
Automated model discovery for human brain using Constitutive Artificial Neural Networks

Cited by 36 publications (54 citation statements: 4 supporting, 50 mentioning, 0 contrasting)
References 58 publications
“…When using biaxial extension experiments, the first and second invariants are no longer identical and the model consistently favors the second invariant over the first. Taken together, in agreement with previous observations [43, 56], we find that the second invariant is better suited to capture the isotropic response of biological tissues [34] and describes the experimental data more accurately than the first.…”
Section: Discussion (supporting, confidence: 92%)
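
To make the invariant argument concrete, here is a minimal numpy sketch (not taken from the cited papers; names are illustrative). It shows that I1 and I2 coincide under simple shear but separate under equibiaxial extension, which is what lets biaxial data discriminate between the two invariants.

```python
# I1 = tr(C), I2 = 0.5 * (tr(C)^2 - tr(C @ C)), with C = F^T F.
import numpy as np

def invariants(F):
    """Return the first and second invariants of C = F^T F."""
    C = F.T @ F
    I1 = np.trace(C)
    I2 = 0.5 * (np.trace(C) ** 2 - np.trace(C @ C))
    return I1, I2

# Simple shear with shear gamma: the two invariants coincide.
gamma = 0.3
F_shear = np.array([[1.0, gamma, 0.0],
                    [0.0, 1.0,   0.0],
                    [0.0, 0.0,   1.0]])
print(invariants(F_shear))    # (3.09, 3.09): identical

# Equibiaxial extension, incompressible: the invariants separate.
lam = 1.1
F_biax = np.diag([lam, lam, 1.0 / lam**2])
print(invariants(F_biax))     # approx. (3.103, 3.117): no longer identical
```
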
“…By constraining the majority of weights to zero and training only a select subset of weights [43], we can utilize our neural network to identify the parameters of popular classical constitutive models. As a matter of fact, our neural network in Figure 1 is a generalization of previous invariant-based neural networks for isotropic materials [43] and for transversely isotropic materials [44] and naturally captures all their features as special cases. As such, we can reduce it to represent popular isotropic models including the neo Hooke [68], Blatz Ko [8], Mooney Rivlin [49, 58], or Demiray [12] models, as well as transversely isotropic models including the Lanir [39], Weiss [71], Groves [25], or Holzapfel [31] models.…”
Section: Discussion (mentioning, confidence: 99%)
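
The reduction-to-classical-models idea can be illustrated with a toy term library (a sketch under stated assumptions, not the authors' implementation): zeroing all but a chosen subset of weights recovers familiar free energies.

```python
# Hypothetical four-term ansatz psi(I1, I2); zeroing weights selects a model.
import numpy as np

def psi(I1, I2, w):
    return (w[0] * (I1 - 3.0)                       # neo Hooke term
            + w[1] * (I2 - 3.0)                     # second Mooney Rivlin term
            + w[2] * (np.exp(I1 - 3.0) - 1.0)       # Demiray-type exponential
            + w[3] * (np.exp(I2 - 3.0) - 1.0))

w_neo_hooke     = np.array([0.5, 0.0, 0.0, 0.0])   # only w1 trained
w_mooney_rivlin = np.array([0.3, 0.2, 0.0, 0.0])   # only w1, w2 trained
print(psi(3.2, 3.1, w_neo_hooke))      # reduces to 0.5 * (I1 - 3)
print(psi(3.2, 3.1, w_mooney_rivlin))  # 0.3 * (I1 - 3) + 0.2 * (I2 - 3)
```
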
“…Constitutive neural networks. Motivated by these kinematic and constitutive considerations, we reverse-engineer our own constitutive neural network that satisfies the conditions of thermodynamic consistency, material objectivity, material symmetry, incompressibility, constitutive restrictions, and polyconvexity by design [28,39]. Yet, instead of building these constraints into the loss function, we hardwire them directly into our network input, output, architecture, and activation functions [40,41] to satisfy the fundamental laws of physics.…”
Section: Methods (mentioning, confidence: 99%)
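
A hedged sketch of what hardwiring constraints into the architecture, rather than into the loss, can look like (hypothetical code, not taken from the cited papers): physics enters through the choice of inputs, activations, and weight parameterization.

```python
# Toy constitutive network: invariant inputs give objectivity and material
# symmetry, the shift by 3 normalizes the reference state to zero energy,
# and squaring the parameters keeps every term's weight nonnegative.
import numpy as np

def network_psi(I1, I2, theta):
    x1, x2 = I1 - 3.0, I2 - 3.0        # inputs vanish in the reference state
    w = np.asarray(theta) ** 2          # nonnegative weights by construction
    return (w[0] * x1 + w[1] * (np.exp(x1) - 1.0)
            + w[2] * x2 + w[3] * (np.exp(x2) - 1.0))

print(network_psi(3.1, 3.05, theta=[0.7, 0.1, 0.5, 0.0]))
```
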
“…Yet, instead of building these constraints into the loss function, we hardwire them directly into our network input, output, architecture, and activation functions [40, 41] to satisfy the fundamental laws of physics. Special members of this family represent well-known constitutive models, including the neo Hooke [31], Blatz Ko [29], Mooney Rivlin [32, 33], and Demiray [30] models, for which the network weights gain a clear physical interpretation [39, 42]. Specifically, our constitutive neural network learns a free energy function that is parameterized in terms of the first and second invariants.…”
Section: Methods (mentioning, confidence: 99%)
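
On the physical interpretation of the weights, a brief sketch assuming the standard incompressible isotropic stress relation (not code from the cited papers): for psi = w1 (I1 - 3), the weight w1 acts as half the shear modulus.

```python
# sigma = 2 (dpsi/dI1) B - 2 (dpsi/dI2) inv(B) - p I, with B = F F^T.
import numpy as np

def cauchy_stress(F, dpsi_dI1, dpsi_dI2, p):
    B = F @ F.T
    return (2.0 * dpsi_dI1 * B
            - 2.0 * dpsi_dI2 * np.linalg.inv(B)
            - p * np.eye(3))

# Simple shear in the neo Hooke limit (dpsi/dI2 = 0): shear stress = 2 w1 gamma.
gamma, w1 = 0.2, 0.5
F = np.eye(3); F[0, 1] = gamma
sigma = cauchy_stress(F, dpsi_dI1=w1, dpsi_dI2=0.0, p=2.0 * w1)
print(sigma[0, 1], 2.0 * w1 * gamma)   # both 0.2, i.e. mu = 2 * w1
```
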
“…Such models are invaluable in fields where highly flexible yet physically sound models are required, such as the simulation of microstructured materials (Gärtner et al, 2021; Kumar and Kochmann, 2022; Kalina et al, 2023). Furthermore, including mechanical conditions improves model generalization (Klein et al, 2022b), allowing for model calibration with the sparse data usually available from real-world experiments (Linka et al, 2023). For the construction of polyconvex potentials, several approaches exist (Chen and Guilleminot, 2022; Klein et al, 2022a; Tac et al, 2022), of which the most noteworthy are based on input-convex neural networks (ICNNs).…”
Section: Introduction (mentioning, confidence: 99%)
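
As background on the ICNN construction mentioned above, a minimal single-hidden-layer sketch (a simplified variant; the class and field names are hypothetical): convexity in the input follows from nonnegative pass-through weights and a convex, nondecreasing activation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(x):
    return np.logaddexp(0.0, x)                    # convex and nondecreasing

class ICNN:
    """Single-hidden-layer input-convex network: f(x) is convex in x."""
    def __init__(self, dim_in, dim_hidden):
        self.W0 = rng.normal(size=(dim_hidden, dim_in))   # unconstrained
        self.Wz = rng.uniform(size=(1, dim_hidden))       # kept nonnegative
        self.Wx = rng.normal(size=(1, dim_in))            # affine skip path

    def __call__(self, x):
        z = softplus(self.W0 @ x)                  # convex in x
        return (self.Wz @ z + self.Wx @ x).item()  # nonneg. sum stays convex

icnn = ICNN(dim_in=2, dim_hidden=8)
print(icnn(np.array([3.1, 3.05])))                 # e.g. evaluated at (I1, I2)
```
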