2022
DOI: 10.1021/acs.jcim.2c00872
Exploring Structure-Sensitive Relations for Small Species Adsorption Using Machine Learning

Abstract: Accurate prediction of adsorption energies on heterogeneous catalyst surfaces is crucial to predicting reactivity and screening materials. Adsorption linear scaling relations have been developed extensively but often lack accuracy and apply to one adsorbate and a single binding site type at a time. These facts undermine their ability to predict structure sensitivity and optimal catalyst structure. Using machine learning on nearly 300 density functional theory calculations, we demonstrate that generalized coord…
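As a rough illustration of the approach sketched in the abstract, the snippet below fits a standard regressor to coordination-based descriptors and scores it by cross-validation. The feature set, synthetic data, and model choice are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch (not the paper's actual pipeline): fit a regressor on
# coordination-based descriptors to predict DFT adsorption energies.
# Feature names and the synthetic data below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300                                    # roughly the size of the DFT set mentioned in the abstract
X = np.column_stack([
    rng.uniform(2.0, 9.0, n),              # generalized coordination number of the site
    rng.integers(0, 5, n),                 # integer label for the adsorbate (C, O, N, OH, ...), assumed encoding
    rng.uniform(-3.0, -1.0, n),            # an extra surface descriptor, e.g. d-band center (assumed)
])
y = -1.5 + 0.25 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.05, n)  # synthetic "DFT" energies (eV)

model = GradientBoostingRegressor(random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
print(f"CV MAE: {-scores.mean():.3f} eV")
```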

Cited by 17 publications (16 citation statements)
References 52 publications
“…[2-4] At the heart of computational catalyst design lies the construction of a suitable reaction network and the determination of reaction and activation energies for each step for a specific set of, for instance, metals. Even though the exact nature of the catalyst termination is mostly unknown, reaction energies can, to a first approximation, be linked to binding energies of atoms or molecules on specific, low-index catalytic surfaces via scaling relationships [5-8] or machine learning-based methods, [9-14] whereas the activation energy for each reaction step is obtained from BEP relationships. [6, 15-24] In a subsequent step, interfacing the energetic data for the reaction network with mean-field microkinetic modeling [25] or kinetic Monte Carlo simulations allows the construction of activity and selectivity maps based on specific descriptors, which, in turn, enables screening of a large library of binding energies to identify the most promising catalysts for a given reaction.…”
mentioning
confidence: 99%
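To make the scaling-relationship and BEP ideas in the statement above concrete, here is a minimal sketch that fits a linear scaling relation between two adsorption energies and a BEP line relating activation energy to reaction energy. All numerical values are synthetic and purely illustrative, not taken from the cited works.

```python
# Minimal sketch: a linear scaling relation between two adsorption energies and
# a BEP relation between activation and reaction energies. Synthetic numbers only.
import numpy as np

# Hypothetical descriptor/target pairs, e.g. E_ads(OH) vs E_ads(O) on several metals (eV)
E_O  = np.array([-5.2, -4.6, -4.1, -3.7, -3.1])
E_OH = np.array([-3.3, -2.9, -2.6, -2.3, -1.9])
slope, intercept = np.polyfit(E_O, E_OH, 1)
print(f"scaling relation: E_OH ~ {slope:.2f} * E_O + {intercept:.2f}")

# BEP relationship: activation energy as a linear function of reaction energy
dE = np.array([-0.8, -0.3, 0.1, 0.5, 0.9])       # reaction energies (eV), assumed
Ea = np.array([ 0.4,  0.7, 0.9, 1.2, 1.5])       # activation energies (eV), assumed
alpha, beta = np.polyfit(dE, Ea, 1)
print(f"BEP: Ea ~ {alpha:.2f} * dE + {beta:.2f}")
```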
“…Our previously developed machine learning (ML) model [47] and GCN scaling relations were employed, with the more accurate of the two methods selected for estimating the adsorption energies of each species (Note S2†). Nearly all errors were within ±0.1 eV (Fig.
Section: Results
mentioning
confidence: 99%
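The statement above describes choosing, per species, whichever of the two estimators (the ML model or a GCN scaling relation) is more accurate. A minimal sketch of that selection logic, with made-up validation numbers and species labels, could look like this:

```python
# Minimal sketch of the per-species selection idea described above: keep whichever
# estimate (ML model or GCN scaling relation) is closer to a DFT reference on a
# validation set. All values below are hypothetical.
dft_ref  = {"C": -6.71, "O": -4.32, "OH": -2.95}   # validation DFT energies (eV), assumed
ml_pred  = {"C": -6.65, "O": -4.10, "OH": -2.90}   # hypothetical ML predictions
gcn_pred = {"C": -6.50, "O": -4.28, "OH": -2.70}   # hypothetical GCN scaling-relation predictions

chosen = {}
for species, ref in dft_ref.items():
    err_ml, err_gcn = abs(ml_pred[species] - ref), abs(gcn_pred[species] - ref)
    chosen[species] = ("ML", ml_pred[species]) if err_ml <= err_gcn else ("GCN", gcn_pred[species])

for species, (method, value) in chosen.items():
    print(f"{species}: use {method} estimate, {value:.2f} eV")
```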
“…The extended surfaces and corresponding GCNs for developing GCN scaling relations were adopted from our previous work [47]. Pt nanoparticles were constructed using the Atomic Simulation Environment (ASE) [34]. Each Pt atom in the outermost layer constituted a site, and its associated GCN was calculated.…”
Section: Methods
mentioning
confidence: 99%
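A minimal sketch of the nanoparticle/GCN step described above: build a small Pt cluster with ASE and compute the generalized coordination number GCN(i) = Σ_j CN(j) / CN_max over nearest neighbours j, with CN_max = 12 for an FCC metal. The cluster size, shape, and neighbour cutoff are illustrative assumptions, not necessarily the settings used in the cited work.

```python
# Minimal sketch: Pt nanoparticle with ASE and generalized coordination numbers.
# Cluster shape/size and the neighbour cutoff are assumed for illustration.
import numpy as np
from ase.cluster import Octahedron
from ase.neighborlist import neighbor_list

particle = Octahedron("Pt", length=5, cutoff=2)    # a small cuboctahedral Pt cluster (assumed geometry)
cutoff = 3.0                                       # Angstrom, ~ first-neighbour shell for Pt (assumed)

i_idx, j_idx = neighbor_list("ij", particle, cutoff)
cn = np.bincount(i_idx, minlength=len(particle))   # coordination number of every atom

gcn = np.zeros(len(particle))
for i, j in zip(i_idx, j_idx):                     # GCN(i) = sum of neighbours' CN, normalised by CN_max
    gcn[i] += cn[j]
gcn /= 12.0                                        # CN_max for an FCC metal

surface = cn < 12                                  # outermost-layer atoms are under-coordinated
for idx in np.flatnonzero(surface)[:5]:
    print(f"atom {idx}: CN = {cn[idx]}, GCN = {gcn[idx]:.2f}")
```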
“…Based on the trained SVR model, several methods such as SHAP, permutation, and the Pearson correlation coefficient have been applied to screen the impact of each input feature on the model output. The SHAP technique is a good feature attribution algorithm with the ability to screen the impact of each data point on the model output. This method assigns equal weights to feature coalitions of all sizes.…”
Section: Methods
mentioning
confidence: 99%
“…The SHAP technique is a good feature attribution algorithm with the ability to screen the impact of each data point on the model output [39]. This method assigns equal weights to feature coalitions of all sizes [35]. The permutation method is another way to perform feature importance analysis.…”
mentioning
confidence: 99%
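The two statements above describe attributing an SVR model's output to its input features with SHAP, permutation importance, and Pearson correlations. A self-contained sketch of that workflow, with synthetic data and placeholder feature names, could look like this:

```python
# Minimal sketch of the feature-attribution workflow mentioned above, using
# scikit-learn's SVR with SHAP values, permutation importance, and Pearson
# correlations. Data are synthetic; feature meanings are placeholders.
import numpy as np
import shap                                        # assumes the shap package is installed
from sklearn.svm import SVR
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                      # four hypothetical input features
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=200)

model = SVR(kernel="rbf").fit(X, y)

# SHAP: per-sample attribution of the prediction to each feature
explainer = shap.KernelExplainer(model.predict, shap.sample(X, 50))
shap_values = explainer.shap_values(X[:20])
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))

# Permutation importance: drop in score when one feature is shuffled
perm = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("permutation importance:", perm.importances_mean)

# Pearson correlation between each feature and the target
print("Pearson r:", [float(np.corrcoef(X[:, k], y)[0, 1]) for k in range(X.shape[1])])
```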