2019
DOI: 10.26434/chemrxiv.8427776
Preprint
Predicting Retrosynthetic Reaction using Self-Corrected Transformer Neural Networks

Abstract: Synthesis planning is the process of recursively decomposing target molecules into available precursors. Computer-aided retrosynthesis can potentially assist chemists in designing synthetic routes, but at present it is cumbersome and provides results of unsatisfactory quality. In this study, we develop a template-free self-corrected retrosynthesis predictor (SCROP) that performs retrosynthesis prediction using the Transformer neural network architecture. In the method, the retrosynthesis planni…
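The abstract casts single-step retrosynthesis as sequence-to-sequence translation from a product SMILES to precursor SMILES using a Transformer. Below is a minimal sketch of that framing in PyTorch; the class name, hyperparameters, and shapes are illustrative assumptions, not the SCROP implementation.

```python
# Minimal sketch: single-step retrosynthesis as SMILES-to-SMILES translation
# with a Transformer, as described in the abstract. All hyperparameters and
# names here are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class RetroTransformer(nn.Module):
    def __init__(self, vocab_size, d_model=256, nhead=8, num_layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True,
        )
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, product_tokens, reactant_tokens):
        # product_tokens: (batch, src_len) token ids of the product SMILES
        # reactant_tokens: (batch, tgt_len) token ids of the precursor SMILES
        src = self.embed(product_tokens)
        tgt = self.embed(reactant_tokens)
        # Causal mask so the decoder cannot attend to future reactant tokens.
        tgt_mask = nn.Transformer.generate_square_subsequent_mask(
            reactant_tokens.size(1))
        hidden = self.transformer(src, tgt, tgt_mask=tgt_mask)
        return self.out(hidden)  # (batch, tgt_len, vocab_size) logits

model = RetroTransformer(vocab_size=100)
logits = model(torch.randint(0, 100, (2, 20)), torch.randint(0, 100, (2, 30)))
print(logits.shape)  # torch.Size([2, 30, 100])
```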

Cited by 5 publications (5 citation statements)
References 38 publications
“…Schwaller et al. [22] recently proposed to ignore reactant and reagent roles for the reaction prediction task. In contrast to previous works [32,33,35,36], the single-step retrosynthetic model presented here predicts reactants and reagents. In an effort to simplify the prediction task, the most common precursors with a length of more than 50 tokens were replaced by molecule tokens.…”
Section: Molecule Representation (contrasting)
confidence: 58%
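The excerpt above describes collapsing long, frequent precursors into single "molecule tokens" during SMILES tokenization. A minimal sketch of such preprocessing follows; the regex is the widely used SMILES tokenization pattern of Schwaller et al., while the placeholder vocabulary and threshold handling are hypothetical assumptions, not taken from the cited work.

```python
# Sketch of SMILES tokenization with long, frequent precursors collapsed into
# single placeholder "molecule tokens". The 50-token threshold mirrors the
# excerpt; MOL_TOKENS is a hypothetical example vocabulary.
import re

SMILES_REGEX = re.compile(
    r"(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\.|=|#|-|\+|\\|/|:|~|@|\?|>|\*|\$|%[0-9]{2}|[0-9])"
)

# Hypothetical mapping: common long precursors -> dedicated molecule tokens.
MOL_TOKENS = {
    "CC(C)(C)OC(=O)OC(=O)OC(C)(C)C": "<Boc2O>",  # di-tert-butyl dicarbonate
}

def tokenize(smiles: str, max_len: int = 50) -> list[str]:
    """Tokenize a SMILES string; collapse known precursors that are too long."""
    tokens = SMILES_REGEX.findall(smiles)
    if len(tokens) > max_len and smiles in MOL_TOKENS:
        return [MOL_TOKENS[smiles]]
    return tokens

print(tokenize("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin -> 21 character-level tokens
```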
“…Duan et al. [37] increased the batch size and the training time for their Transformer model and were able to achieve a top-1 accuracy of 54.1% on the 50k USPTO data set [44]. Later on, the same architecture was reported to have a top-1 accuracy of 43.8% [36], in line with the three previous Transformer-based approaches [32,33,35] but significantly lower than the accuracy previously reported by Duan et al. [37]. Interestingly, the Transformer model was also trained on a proprietary data set [36], including only reactions with two reactants whose Tanimoto similarity distribution peaks at 0.75, characteristic of an excessive degree of similarity (roughly 2 times higher than the USPTO).…”
Section: Introduction (supporting)
confidence: 55%
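The excerpt characterizes data set redundancy by the Tanimoto similarity between the two reactants of each reaction. A minimal sketch of how such a similarity can be computed with RDKit Morgan fingerprints is shown below; the fingerprint settings (radius 2, 2048 bits) are common defaults assumed here, not parameters taken from the cited work.

```python
# Sketch: pairwise Tanimoto similarity of a reaction's two reactants, as used
# in the excerpt to characterize data set similarity. Fingerprint settings are
# common defaults and an assumption, not taken from the cited work.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def reactant_pair_similarity(smiles_a: str, smiles_b: str) -> float:
    """Tanimoto similarity between two reactants' Morgan fingerprints."""
    fps = []
    for smi in (smiles_a, smiles_b):
        mol = Chem.MolFromSmiles(smi)
        if mol is None:
            raise ValueError(f"Could not parse SMILES: {smi}")
        fps.append(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048))
    return DataStructs.TanimotoSimilarity(fps[0], fps[1])

# Example: two structurally related reactants score fairly high.
print(reactant_pair_similarity("c1ccccc1CBr", "c1ccccc1CCl"))
```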