2021
DOI: 10.1016/j.cej.2021.129845
RetroPrime: A Diverse, plausible and Transformer-based method for Single-Step retrosynthesis predictions

Cited by 70 publications (118 citation statements)
References 32 publications
“…by Lin, Neuralsym (a), 2020 [6,33,73]: 47.8
Dai et al., Graph Logic Network (a), 2019 [73]: 39.3
Liu et al., rep. by Lin, LSTM-based, 2020 [16,33]: 46.9
Genheden et al., AiZynthfinder, ANN + MCTS (a), 2020 [14,49]: 43–72
Transformer-based:
Zheng et al., SCROP, 2020 [34]: 41.5
Wang et al., RetroPrime, 2021 [48]: 44.1
Tetko et al., Augmented Transformer, 2020 [35]: 46.2
Lin et al., AutoSynRoute, Transformer + MCTS, 2020 [33]: 54.1
RetroTRAE: 58.3
RetroTRAE (with SM and DM): 61.6
The results are based on either the filtered MIT-full [46,47] or the MIT fully atom-mapped [15] reaction datasets. (a) Reaction templates were used.…”
Section: Results
Citation type: mentioning
confidence: 99%
“…Performance differences among the SMILES-based Transformer models are attributed to improvements in data augmentation (with non-canonical SMILES) [35,48], tokenization scheme (character- or atom-level) [31,33], and postprocessing (rectifying invalid SMILES) [32,34]. The better prediction accuracy of our model appears to be due to a reaction representation that goes beyond standard SMILES.…”
Section: Results
Citation type: mentioning
confidence: 99%
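These three levers are easy to make concrete. The sketch below, in Python with RDKit, is a minimal illustration under the assumption that RDKit is available; the tokenizer regex is the widely used atom-level pattern from the Molecular Transformer literature, and the helper names are ours, not taken from any cited model. It shows non-canonical SMILES enumeration for augmentation, atom-level tokenization, and a validity filter as a simple postprocessing step.

```python
import re
from rdkit import Chem

# Atom-level SMILES tokenizer: the common regex from the Molecular
# Transformer literature. Multi-character tokens such as Cl, Br and
# bracket atoms like [nH] stay intact.
SMILES_TOKEN_RE = re.compile(
    r"(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\."
    r"|=|#|-|\+|\\|\/|:|~|@|\?|>|\*|\$|%[0-9]{2}|[0-9])"
)

def tokenize(smiles: str) -> list[str]:
    """Split a SMILES string into atom-level tokens."""
    return SMILES_TOKEN_RE.findall(smiles)

def augment(smiles: str, n: int = 5) -> list[str]:
    """Data augmentation: emit up to n non-canonical SMILES for the
    same molecule by randomizing the atom traversal order."""
    mol = Chem.MolFromSmiles(smiles)
    return sorted({Chem.MolToSmiles(mol, doRandom=True) for _ in range(n)})

def keep_valid(predictions: list[str]) -> list[str]:
    """Postprocessing: drop predicted SMILES that RDKit cannot parse."""
    return [s for s in predictions if Chem.MolFromSmiles(s) is not None]

if __name__ == "__main__":
    aspirin = "CC(=O)Oc1ccccc1C(=O)O"
    print(tokenize(aspirin))            # atom-level tokens
    print(augment(aspirin))             # several equivalent SMILES strings
    print(keep_valid(["CCO", "C1CC"]))  # unclosed ring "C1CC" is dropped
```

A character-level tokenizer would split "Cl" into "C" and "l", producing chemically meaningless tokens; the atom-level scheme avoids this, which is one of the improvements the excerpt refers to.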
“…In this work, we strictly categorize generative models that require additional help from RDKit [1] for molecule editing into this method group. Most existing works [34,26,27,33] approach the task with a two-stage procedure. Despite their architectural differences (GNN vs. Transformer), they follow the same idea: first convert the product into synthons by breaking the reactive bond via RDKit, then complete the synthons into reactants by leaving-group selection [27], graph generation [26], or SMILES generation [34,33]. In contrast, MEGAN [20] reframes the generative procedure as a sequence of graph edits that are completed by RDKit.…”
Section: Retrosynthesis Prediction
Citation type: mentioning
confidence: 99%
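The first stage of that two-stage procedure can be sketched with RDKit alone. In the minimal example below (our own illustration, not code from the cited works; the reactive bond is supplied by hand, whereas the models above predict it), Chem.FragmentOnBonds breaks one bond of the product, and dummy atoms mark the disconnection sites of the resulting synthons. Completing those synthons into reactants is the part the generative model supplies.

```python
from rdkit import Chem

def product_to_synthons(product_smiles: str, atom_pair: tuple[int, int]) -> list[str]:
    """Stage 1 of two-stage retrosynthesis: break one reactive bond of the
    product and return the synthon SMILES. Dummy atoms ('*') mark the
    disconnection sites. The bond is given by hand here; in the cited
    models a network predicts which bond is reactive."""
    mol = Chem.MolFromSmiles(product_smiles)
    bond = mol.GetBondBetweenAtoms(*atom_pair)
    fragmented = Chem.FragmentOnBonds(mol, [bond.GetIdx()], addDummies=True)
    # GetMolFrags with asMols=True yields each disconnected piece as a Mol.
    return [Chem.MolToSmiles(frag) for frag in Chem.GetMolFrags(fragmented, asMols=True)]

if __name__ == "__main__":
    # Acetanilide, CC(=O)Nc1ccccc1: atoms 1 (carbonyl C) and 3 (N)
    # flank the amide C-N bond, a textbook retrosynthetic disconnection.
    print(product_to_synthons("CC(=O)Nc1ccccc1", (1, 3)))
    # -> two synthons such as ['CC(=O)[3*]', '[1*]Nc1ccccc1']
    #    (dummy-atom isotope labels can vary with RDKit version)
```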