2023
DOI: 10.1021/acs.jpcb.3c05928

Coarse-Graining with Equivariant Neural Networks: A Path Toward Accurate and Data-Efficient Models

Timothy D. Loose, Patrick G. Sahrmann, Thomas S. Qu, et al.

Cited by 5 publications (3 citation statements)
References 54 publications

Citation statements:

“…Remarkably, it is precisely the same technique that provides one possible solution: equivariant neural network potentials, as has recently been demonstrated. It is not surprising in retrospect that these tools are suitable for this task. The goal of coarse-graining is to find the potential of mean force (PMF) for a reduced set of degrees of freedom.…”
Section: Recursive Coarse-graining
Confidence: 89%
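
The “potential of mean force” named in this statement is the standard coarse-graining target, so it may help to state the textbook definition (a standard result, not text from the citing paper). For an atomistic potential U(r) and a mapping operator M that projects atomistic coordinates r onto coarse-grained coordinates R, the PMF W(R) is defined, up to an additive constant, by

    e^{-\beta W(\mathbf{R})} \propto \int \mathrm{d}\mathbf{r}\, \delta\big(M(\mathbf{r}) - \mathbf{R}\big)\, e^{-\beta U(\mathbf{r})}, \qquad \beta = 1/k_B T

i.e., W is the free-energy surface over the retained degrees of freedom, and the neural network potential is trained to approximate W or its gradients (the mean forces).
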
“…Currently, coarse-graining with an NNP is applied to relatively small-scale classical systems, such as simplifying water to a single site or for proteins. However, they could potentially be iteratively applied to increasingly larger scales, perhaps one day encompassing complexes of multiple proteins, cell membranes, etc.…”
Section: Recursive Coarse-graining
Confidence: 99%
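
To make “coarse-graining with an NNP” concrete, the sketch below shows force matching in PyTorch. It is an illustration under stated assumptions, not the architecture from the cited paper: the toy energy depends only on pairwise bead distances, so it is invariant under rotations and translations, and the forces obtained by differentiation are therefore rotation-equivariant (the simplest instance of the equivariance these statements invoke). All names here (PairwisePotential, cg_positions, cg_forces) are hypothetical.

    import torch
    import torch.nn as nn

    class PairwisePotential(nn.Module):
        """Toy CG energy: a learned pair potential summed over all bead pairs.

        The energy depends only on interbead distances, so it is invariant
        under rotations/translations; the force field obtained by
        differentiation is therefore rotation-equivariant.
        """

        def __init__(self, hidden: int = 64):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(1, hidden), nn.Tanh(),
                nn.Linear(hidden, hidden), nn.Tanh(),
                nn.Linear(hidden, 1),
            )

        def forward(self, pos: torch.Tensor) -> torch.Tensor:
            # pos: (N, 3) coarse-grained bead positions -> scalar energy
            i, j = torch.triu_indices(pos.shape[0], pos.shape[0], offset=1)
            r = (pos[i] - pos[j]).norm(dim=-1, keepdim=True)  # (n_pairs, 1)
            return self.mlp(r).sum()

    model = PairwisePotential()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    def force_matching_step(cg_positions: torch.Tensor,
                            cg_forces: torch.Tensor) -> float:
        """One step of force matching: fit -dE/dR to the mapped atomistic forces."""
        pos = cg_positions.clone().requires_grad_(True)
        energy = model(pos)
        # Predicted forces are the negative gradient of the learned energy.
        pred = -torch.autograd.grad(energy, pos, create_graph=True)[0]
        loss = ((pred - cg_forces) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    # Stand-in data; real training uses forces mapped from atomistic trajectories.
    print(force_matching_step(torch.randn(10, 3), torch.randn(10, 3)))

A production model would replace the pair-distance energy with a many-body equivariant architecture and train on forces mapped from atomistic trajectories, but the invariant-energy/equivariant-force structure is the same.
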
“…These approaches look promising but, to the best of our knowledge, have not yet been tested extensively. Also, these approaches have general caveats, such as the demand for extensive training data, as described by Loose et al. 61 …”
Section: Discussion
Confidence: 99%