2023
DOI: 10.1101/2023.01.14.524079
Preprint

Principal-stretch-based constitutive neural networks autonomously discover a subclass of Ogden models for human brain tissue

Abstract: The soft tissue of the brain deforms in response to external stimuli, which can lead to traumatic brain injury. Constitutive models relate the stress in the brain to its deformation, and accurate constitutive modeling is critical in finite element simulations to estimate injury risk. Traditionally, researchers first choose a constitutive model and then fit the model parameters using tension, compression, or shear experiments. In contrast, constitutive artificial neural networks enable automated model discovery …
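As a rough sketch of what a discovered model of this type looks like in code, the snippet below evaluates a one-term Ogden strain energy and the corresponding uniaxial Cauchy stress for an incompressible solid; the parameter values mu and alpha are illustrative placeholders, not values reported by the preprint.

```python
import numpy as np

def ogden_energy(lam1, lam2, lam3, mu=1.0, alpha=-20.0):
    """One-term Ogden strain energy W = (mu/alpha) * (lam1^a + lam2^a + lam3^a - 3)."""
    return mu / alpha * (lam1**alpha + lam2**alpha + lam3**alpha - 3.0)

def uniaxial_cauchy_stress(lam, mu=1.0, alpha=-20.0):
    """Cauchy stress sigma = mu * (lam^a - lam^(-a/2)) for incompressible uniaxial loading."""
    return mu * (lam**alpha - lam**(-alpha / 2.0))

lam = np.linspace(0.9, 1.1, 5)   # moderate stretches, typical for soft-tissue testing
print(uniaxial_cauchy_stress(lam))
```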


Cited by 7 publications (12 citation statements)
References 30 publications (61 reference statements)
“…From Figures 2, 3, and 4, we conclude that the network trains well for all three data sets, and successfully discovers models to explain the training data. However, the discovered models are not sparse; they contain a large number of terms and a large set of non-zero parameters [63]. For example, in Figure 3, top row, we observe that, although we only use biaxial extension data for training, the network activates terms and parameters that are not associated with any of the five biaxial extension tests: Although training with biaxial extension in the fn-plane does not provide any information about the shear behavior in the fs- and sn-planes, the weights related to the eighth invariants I_8fs and I_8sn are non-zero and contribute to the shear response.…”
Section: Discussion (mentioning)
confidence: 99%
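To make the quoted point concrete, here is a small sketch assuming the standard coupling invariants I_8fs = f0 · C · s0 and I_8sn = s0 · C · n0 with orthonormal material directions: under biaxial extension in the fn-plane the right Cauchy-Green tensor stays diagonal in the (f, s, n) basis, so both invariants remain at their reference value of zero and the test carries no fs- or sn-shear information. The directions and stretch values below are illustrative, not the citing paper's data.

```python
import numpy as np

f0, s0, n0 = np.eye(3)                 # orthonormal fiber, sheet, and normal directions
lam_f, lam_n = 1.1, 1.2                # hypothetical biaxial stretches in the fn-plane
lam_s = 1.0 / (lam_f * lam_n)          # incompressibility fixes the third stretch
F = np.diag([lam_f, lam_s, lam_n])     # deformation gradient for biaxial fn-extension
C = F.T @ F                            # right Cauchy-Green tensor

I8_fs = f0 @ C @ s0                    # coupling invariant between f and s
I8_sn = s0 @ C @ n0                    # coupling invariant between s and n
print(I8_fs, I8_sn)                    # both 0.0: no shear information from this test
```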
“…L_p-regularization adds the weighted L_p-norm of the parameter vector to the loss function and induces sparsity for p-values equal to or smaller than one [28]. Here we induce sparsity using L_1-regularization or lasso [67] by adding the weighted sum of the network weights to the loss function of our constitutive neural network [63]. This additional term allows us to fine-tune the number of non-zero parameters of our model; yet, at the expense of a reduced goodness-of-fit and at the cost of an additional hyperparameter, the penalty parameter α [48].…”
Section: Motivation (mentioning)
confidence: 99%
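The quoted passage describes the standard lasso construction: the data-misfit term plus the penalty parameter times the L1 norm of the weights. Below is a minimal sketch of such a loss with a toy two-term stress model and a hypothetical penalty parameter alpha; it is not the citing paper's training code.

```python
import numpy as np

def model_stress(w, lam):
    """Toy two-term stress model: sigma = w0*(lam - 1) + w1*(lam**2 - 1)."""
    return w[0] * (lam - 1.0) + w[1] * (lam**2 - 1.0)

def l1_regularized_loss(w, lam, stress_data, alpha=0.01):
    """Mean squared stress error plus alpha times the L1 norm of the weights."""
    residual = model_stress(w, lam) - stress_data
    return np.mean(residual**2) + alpha * np.sum(np.abs(w))

# Larger alpha drives more weights to exactly zero (a sparser model),
# at the cost of a poorer fit to the training data.
w = np.array([0.3, 0.1])
lam = np.linspace(1.0, 1.1, 11)
print(l1_regularized_loss(w, lam, stress_data=0.35 * (lam - 1.0)))
```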
“…Naturally, activating all sixteen nodes is the best strategy to fine-tune the fit to the data and achieve the highest level of accuracy. At the same time, the resulting sixteen-term model is inherently complex and difficult to interpret [52]. Nonetheless, if we are only interested in finding the best-fit model and parameters for a finite element analysis, this is probably just fine.…”
Section: Discussion (mentioning)
confidence: 99%
“…Yet, instead of building these constraints into the loss function, we hardwire them directly into our network input, output, architecture, and activation functions [40, 41] to satisfy the fundamental laws of physics. Special members of this family represent well-known constitutive models, including the neo-Hooke [31], Blatz-Ko [29], Mooney-Rivlin [32, 33], and Demiray [30] models, for which the network weights gain a clear physical interpretation [39, 42]. Specifically, our constitutive neural network learns a free energy function that is parameterized in terms of the first and second invariants.…”
Section: Methods (mentioning)
confidence: 99%
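As an illustration of the architecture the quote describes, the sketch below spells out one plausible invariant-based free energy built from powers of (I1 - 3) and (I2 - 3) passed through identity and exponential activations. The eight-term layout, weight names, and values are assumptions for illustration, not the authors' code.

```python
import numpy as np

def free_energy(I1, I2, w_in, w_out):
    """Eight-term free energy: first and second powers of (I1 - 3) and (I2 - 3),
    each passed through an identity and an exponential activation."""
    features = [I1 - 3.0, (I1 - 3.0)**2, I2 - 3.0, (I2 - 3.0)**2]
    psi = 0.0
    for k, x in enumerate(features):
        psi += w_out[2 * k] * x                                  # identity activation
        psi += w_out[2 * k + 1] * (np.exp(w_in[k] * x) - 1.0)    # exponential activation
    return psi

# Hypothetical weights: only the linear (I1 - 3) term is active, a neo-Hookean response;
# adding the linear (I2 - 3) term gives a Mooney-Rivlin-type model, and the exponential
# (I1 - 3) term alone gives a Demiray-type model.
w_in = np.ones(4)
w_out = np.array([0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
print(free_energy(I1=3.2, I2=3.1, w_in=w_in, w_out=w_out))
```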