2021
DOI: 10.1038/s41524-021-00545-1

Compositionally restricted attention-based network for materials property predictions

Abstract: In this paper, we demonstrate an application of the Transformer self-attention mechanism in the context of materials science. Our network, the Compositionally Restricted Attention-Based network (CrabNet), explores the area of structure-agnostic materials property predictions when only a chemical formula is provided. Our results show that CrabNet's performance matches or exceeds current best-practice methods on nearly all of 28 total benchmark datasets. We also demonstrate how CrabNet's architecture lends itself towards model inter…
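As a rough illustration of the mechanism the abstract describes, the PyTorch sketch below attends over "element tokens" built from a chemical formula: element identities are embedded, fractional amounts are encoded, and a Transformer encoder mixes the tokens before a scalar property is predicted. This is not the paper's code; the class name, layer sizes, fraction encoding, and mean pooling are illustrative assumptions rather than CrabNet's actual architecture.

import torch
import torch.nn as nn

class CompositionAttentionSketch(nn.Module):
    """Minimal self-attention over the elements of a formula (illustrative only)."""

    def __init__(self, n_elements: int = 103, d_model: int = 128, n_heads: int = 4):
        super().__init__()
        self.element_embed = nn.Embedding(n_elements + 1, d_model)  # one token per element
        self.fraction_proj = nn.Linear(1, d_model)                  # encode fractional amount
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)                           # scalar property output

    def forward(self, atomic_numbers: torch.Tensor, fractions: torch.Tensor) -> torch.Tensor:
        # atomic_numbers: (batch, n_tokens) int; fractions: (batch, n_tokens) float
        tokens = self.element_embed(atomic_numbers) + self.fraction_proj(fractions.unsqueeze(-1))
        attended = self.encoder(tokens)                  # self-attention across element tokens
        return self.head(attended.mean(dim=1)).squeeze(-1)  # mean-pool over elements

# Example: Al2O3 as two element tokens (Al, O) with atomic fractions (0.4, 0.6)
model = CompositionAttentionSketch()
z = torch.tensor([[13, 8]])
x = torch.tensor([[0.4, 0.6]])
print(model(z, x).shape)  # torch.Size([1])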

Cited by 121 publications (121 citation statements)
References 64 publications

“…CrabNet [40]: composition-based property regression (predict performance for proxy scores); ElMD [54]: composition-based distance metric (supply distance matrix to DensMAP); DensMAP [59]: density-aware dimensionality reduction (obtain densities for density proxy); HDBSCAN* [57]: density…”
Section: Methods (mentioning)
confidence: 99%
“…diamond vs. carbon). We use CrabNet [40] as the regression model for bulk modulus which depends only on composition to generate machine learning features; however, a different composition-based model mentioned in Section 1 could have been used instead.…”
Section: Data and Validation (mentioning)
confidence: 99%
“…The highest bulk modulus is chosen when considering identical formulae. We use CrabNet [44] as the regression model for bulk modulus which depends only on composition to generate machine learning features; however, one of the other models mentioned in Section 1 could have been used instead.…”
Section: Data and Validation (mentioning)
confidence: 99%
“…We use CrabNet [44] as the regression model for bulk modulus which depends only on composition to generate machine learning features; however, one of the other models mentioned in Section 1 could have been used instead.…”
Section: Data and Validation (mentioning)
confidence: 99%
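The quoted passages above describe a common usage pattern: a composition-only regressor (CrabNet in those works) predicts bulk modulus from formula strings, and the predictions are then used as machine-learning features, keeping the highest value when formulae repeat. The sketch below shows that step in generic form; `CompositionRegressor` is a hypothetical stand-in interface, not the crabnet package's actual API, and the column names are assumptions.

from typing import Protocol, Sequence
import pandas as pd

class CompositionRegressor(Protocol):
    def predict(self, formulas: Sequence[str]) -> Sequence[float]: ...

def add_bulk_modulus_feature(df: pd.DataFrame, model: CompositionRegressor) -> pd.DataFrame:
    """Append predicted bulk modulus as a feature column, keyed by formula."""
    out = df.copy()
    out["bulk_modulus_pred"] = model.predict(out["formula"].tolist())
    # When identical formulae occur, keep the row with the highest predicted value,
    # mirroring the "highest bulk modulus is chosen" rule in the quoted passage.
    out = out.sort_values("bulk_modulus_pred", ascending=False).drop_duplicates("formula")
    return out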
“…A suite of regression models are available for use as the backbone for a mate-rials discovery project. A non-exhaustive list ordered from oldest to newest by journal publication year includes GBM-Locfit [36], CGCNN [37], MEGNet [38], wren [39], GATGNN [40], iCGCNN [27], Automatminer [41], Roost [42], DimeNet++ [43], Compositionally-Restricted Attention-Based Network (CrabNet) [44], and MODNet [45], each with varying advantages and disadvantages.…”
Section: Introduction (mentioning)
confidence: 99%