As materials data sets grow in size and scope, the role of data mining and statistical learning methods to analyze these materials data sets and build predictive models is becoming more important. This manuscript introduces matminer, an open-source, Python-based software platform to facilitate data-driven methods of analyzing and predicting materials properties. Matminer provides modules for retrieving large data sets from external databases such as the Materials Project, Citrination, Materials Data Facility, and Materials Platform for Data Science. It also provides implementations for an extensive library of feature extraction routines developed by the materials community, with 44 featurization classes that can generate thousands of individual descriptors and combine them into mathematical functions. Finally, matminer provides a visualization module for producing interactive, shareable plots. These functions are designed in a way that integrates closely with machine learning and data analysis packages already developed and in use by the Python data science community. We explain the structure and logic of matminer, provide a description of its various modules, and showcase several examples of how matminer can be used to collect data, reproduce data mining studies reported in the literature, and test new methodologies.
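To give a flavor of what composition-based featurization means in practice, here is a toy, standard-library-only sketch in the spirit of matminer's element-property featurizers. The element table, descriptor names, and `featurize` function below are invented for illustration; matminer's own featurizer classes draw on far richer element data and presets.

```python
# Toy illustration of composition featurization (hypothetical mini-version;
# matminer ships 44 featurizer classes with much richer element data).
ELEMENT_DATA = {  # element -> (atomic mass, Pauling electronegativity)
    "Na": (22.99, 0.93),
    "Cl": (35.45, 3.16),
    "Si": (28.09, 1.90),
    "O":  (16.00, 3.44),
}

def featurize(composition):
    """Map {element: amount} to simple statistical descriptors."""
    total = sum(composition.values())
    fracs = {el: amt / total for el, amt in composition.items()}
    mean_mass = sum(f * ELEMENT_DATA[el][0] for el, f in fracs.items())
    mean_en = sum(f * ELEMENT_DATA[el][1] for el, f in fracs.items())
    en_range = (max(ELEMENT_DATA[el][1] for el in fracs)
                - min(ELEMENT_DATA[el][1] for el in fracs))
    return {"mean_mass": mean_mass,
            "mean_electronegativity": mean_en,
            "electronegativity_range": en_range}

print(featurize({"Na": 1, "Cl": 1}))
```

Real featurizers operate on pandas DataFrames of compositions or crystal structures and emit hundreds of such statistics per row; this sketch only shows the input-to-descriptor mapping at the core of the idea.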
We present a benchmark test suite and an automated machine learning procedure for evaluating supervised machine learning (ML) models for predicting properties of inorganic bulk materials. The test suite, Matbench, is a set of 13 ML tasks that range in size from 312 to 132k samples and contain data from 10 density functional theory-derived and experimental sources. Tasks include predicting optical, thermal, electronic, thermodynamic, tensile, and elastic properties given a material’s composition and/or crystal structure. The reference algorithm, Automatminer, is a highly-extensible, fully automated ML pipeline for predicting materials properties from materials primitives (such as composition and crystal structure) without user intervention or hyperparameter tuning. We test Automatminer on the Matbench test suite and compare its predictive power with state-of-the-art crystal graph neural networks and a traditional descriptor-based Random Forest model. We find Automatminer achieves the best performance on 8 of 13 tasks in the benchmark. We also show our test suite is capable of exposing predictive advantages of each algorithm—namely, that crystal graph methods appear to outperform traditional machine learning methods given ~10^4 or greater data points. We encourage evaluating materials ML algorithms on the Matbench benchmark and comparing them against the latest version of Automatminer.
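The evaluation workflow described here, fit each model on training folds and score held-out folds with an error metric such as mean absolute error (MAE), can be sketched in plain Python. The fold scheme, toy data, and mean-predicting baseline below are invented for illustration; Matbench itself prescribes fixed cross-validation folds over real materials data.

```python
# Hypothetical sketch of a benchmark-style evaluation loop: each model is
# scored by mean absolute error (MAE) under k-fold cross-validation.
def kfold_mae(model_fit_predict, X, y, k=3):
    """Average absolute error over k interleaved train/test splits."""
    n = len(X)
    errs = []
    for fold in range(k):
        test_idx = set(range(fold, n, k))  # every k-th sample held out
        train = [(X[i], y[i]) for i in range(n) if i not in test_idx]
        test = [(X[i], y[i]) for i in test_idx]
        predict = model_fit_predict(train)
        errs.extend(abs(predict(x) - t) for x, t in test)
    return sum(errs) / len(errs)

def mean_baseline(train):
    """'Dummy' model: always predicts the training-set mean."""
    mu = sum(t for _, t in train) / len(train)
    return lambda x: mu

X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, 6.2, 8.0, 9.8, 12.1]  # roughly y = 2x (made-up data)
print("baseline MAE:", round(kfold_mae(mean_baseline, X, y), 3))
```

A real benchmark run would substitute `mean_baseline` with Automatminer, a crystal graph network, or a descriptor-based Random Forest, keeping the folds identical so scores are comparable.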
Machine learning has emerged as a novel tool for the efficient prediction of material properties, and claims have been made that machine-learned models for the formation energy of compounds can approach the accuracy of density functional theory (DFT). By testing seven machine learning models for formation energy on stability predictions using the Materials Project database of DFT calculations for 85,014 unique chemical compositions, we show that while formation energies can indeed be predicted well, all compositional models perform poorly on predicting the stability of compounds, making them considerably less useful than DFT for the discovery and design of new solids. The models tested include five recently published compositional models, a baseline model using stoichiometry alone, and a structural model. Most critically, in sparse chemical spaces where few stoichiometries have stable compounds, only the structural model is capable of efficiently detecting which materials are stable. The nonincremental improvement of structural models compared with compositional models is noteworthy and encourages the use of structural models for materials discovery, with the constraint that for any new composition, the ground-state structure is not known a priori. This work demonstrates that accurate predictions of formation energy do not imply accurate predictions of stability, emphasizing the importance of assessing model performance on stability predictions, for which we provide a set of publicly available tests.
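The central point, that small formation-energy errors can flip stability classifications, can be made concrete with a toy binary convex-hull construction. The compositions and energies below are invented, and real stability analysis uses multi-component hulls, but the mechanism is the same: stability is decided by an energy difference relative to competing phases, not by the formation energy alone.

```python
# Toy binary A(1-x)B(x) hull: a compound is "stable" if its point
# (x, formation energy per atom) lies on the lower convex hull.
def lower_hull(points):
    """Lower convex hull via Andrew's monotone chain."""
    pts = sorted(points)
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop hull[-1] if it lies on or above the segment hull[-2] -> p
            if (x2 - x1) * (p[1] - y1) - (p[0] - x1) * (y2 - y1) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def is_stable(point, points):
    return point in lower_hull(points)

dft = [(0.0, 0.0), (0.25, -0.40), (0.5, -0.50), (0.75, -0.20), (1.0, 0.0)]
# DFT says the x = 0.75 compound is UNSTABLE (0.05 eV/atom above the hull):
print(is_stable((0.75, -0.20), dft))

# An ML model with a modest ~0.08 eV/atom error predicts -0.28 instead:
ml = [(0.0, 0.0), (0.25, -0.40), (0.5, -0.50), (0.75, -0.28), (1.0, 0.0)]
print(is_stable((0.75, -0.28), ml))
```

Here a prediction error well within typical reported formation-energy MAEs moves the compound below the hull, turning an unstable phase into a false stability prediction, which is exactly why good formation-energy metrics do not guarantee good stability metrics.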
Half-Heusler materials are strong candidates for thermoelectric applications due to their high weighted mobilities and power factors, which are known to be correlated with valley degeneracy in the electronic band structure. However, there are over 50 known semiconducting half-Heusler phases, and it is not clear how the chemical composition affects the electronic structure. While all the n-type electronic structures have their conduction band minimum at either the Γ- or X-point, there is more diversity in the p-type electronic structures, and the valence band maximum can be at either the Γ-, L-, or W-point. Here, we use high-throughput computation and machine learning to compare the valence bands of known half-Heusler compounds and discover new chemical guidelines for promoting the highly degenerate W-point to the valence band maximum. We do this by constructing an “orbital phase diagram” to cluster the variety of electronic structures expressed by these phases into groups, based on the atomic orbitals that contribute most to their valence bands. Then, with the aid of machine learning, we develop new chemical rules that predict the location of the valence band maximum in each of the phases. These rules can be used to engineer band structures with band convergence and high valley degeneracy.
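One minimal way to read the “orbital phase diagram” clustering step is: assign each phase to a group according to whichever atomic orbital contributes most to its valence band edge. The sketch below illustrates that grouping logic only; the orbital weights are invented placeholder numbers, not computed values, and the paper's actual construction is considerably more detailed.

```python
# Hypothetical sketch of clustering phases by dominant valence-band orbital.
# Orbital weights below are made up for illustration.
from collections import defaultdict

vbm_orbital_weights = {
    "CompoundA": {"X-d": 0.55, "Z-p": 0.30, "Y-d": 0.15},
    "CompoundB": {"X-d": 0.50, "Z-p": 0.35, "Y-d": 0.15},
    "CompoundC": {"Z-p": 0.45, "X-d": 0.35, "Y-d": 0.20},
}

clusters = defaultdict(list)
for compound, weights in vbm_orbital_weights.items():
    dominant = max(weights, key=weights.get)  # largest orbital contribution
    clusters[dominant].append(compound)

print(dict(clusters))
```

Grouping by dominant orbital character, rather than by raw band energies, is what lets chemical rules (which elements supply which orbitals) map onto the location of the valence band maximum.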
Predictions of high thermoelectric performance in RECuZnP2 were verified by elastic, electrical, and thermal measurements. Low thermal conductivities result from strong anharmonicity, with electron transport limited by polar optical phonons.
There currently exist no quantitative methods to determine the appropriate conditions for solid-state synthesis. This not only hinders the experimental realization of novel materials but also complicates the interpretation and understanding of solid-state reaction mechanisms. Here, we demonstrate a machine-learning approach that predicts synthesis conditions using large solid-state synthesis data sets text-mined from scientific journal articles. Using feature importance ranking analysis, we discovered that optimal heating temperatures have strong correlations with the stability of precursor materials quantified using melting points and formation energies (ΔG_f, ΔH_f). In contrast, features derived from the thermodynamics of synthesis-related reactions did not directly correlate to the chosen heating temperatures. This correlation between optimal solid-state heating temperature and precursor stability extends Tamman’s rule from intermetallics to oxide systems, suggesting the importance of reaction kinetics in determining synthesis conditions. Heating times are shown to be strongly correlated with the chosen experimental procedures and instrument setups, which may be indicative of human bias in the data set. Using these predictive features, we constructed machine-learning models with good performance and general applicability to predict the conditions required to synthesize diverse chemical systems.
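The feature-importance idea can be illustrated with a simple stand-in: rank candidate features by the strength of their Pearson correlation with the heating temperature. The data and feature names below are invented, and the study itself uses text-mined synthesis records with model-based importance ranking rather than raw correlations, but the ranking logic is analogous.

```python
# Minimal sketch: rank features by |Pearson correlation| with heating T.
# All values below are made up for illustration.
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

heating_T = [900, 1100, 1000, 1300, 1200]  # degrees C (made up)
features = {
    "precursor_melting_point": [800, 1000, 900, 1250, 1150],
    "reaction_dG":             [-1.2, -0.4, -0.9, -0.6, -1.1],
}
ranked = sorted(features, key=lambda f: -abs(pearson(features[f], heating_T)))
print(ranked)
```

In this toy data the precursor melting point tracks the heating temperature far more closely than the reaction thermodynamics, mirroring the qualitative finding of the abstract: precursor stability, not reaction energetics, best explains the chosen temperatures.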