2020
DOI: 10.48550/arxiv.2002.07717
Preprint
Reinforcement Learning for Molecular Design Guided by Quantum Mechanics

Abstract: Automating molecular design using deep reinforcement learning (RL) holds the promise of accelerating the discovery of new chemical compounds. A limitation of existing approaches is that they work with molecular graphs and thus ignore the location of atoms in space, which restricts them to 1) generating single organic molecules and 2) heuristic reward functions. To address this, we present a novel RL formulation for molecular design in Cartesian coordinates, thereby extending the class of molecules that can be …

Cited by 8 publications (9 citation statements)
References 19 publications
“…The result showed that this model can generate molecules with high predicted binding affinity, and the generated molecules had similar interaction modes and predicted binding affinity as compared to known inhibitors. 97 Simm et al proposed a novel RL formulation by using quantum mechanics to guide molecular design, 61 where the reward function is based on the electronic energy and is approximated by the semiempirical Parametrized Method 6 98 in SPARROW. 99,100 Considering that the properties of molecules are invariant under translation and rotation, the internal coordinates of atoms with respect to existing atoms, like the distance, angle, and dihedral angle, are learned by the agent first, and then, these internal coordinates are mapped to Cartesian coordinates.…”
Section: Design
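The internal-to-Cartesian mapping described in the quote (placing each new atom by a distance, bond angle, and dihedral relative to already placed atoms) follows the standard Z-matrix/NeRF-style construction. The function below is an illustrative sketch of that construction, not the paper's implementation:

```python
import numpy as np

def place_atom(a, b, c, distance, angle, dihedral):
    """Place a new atom D given three reference atoms a, b, c and
    internal coordinates: |c-D| = distance, angle(b, c, D) = angle,
    dihedral(a, b, c, D) = dihedral (angles in radians).
    Standard NeRF-style conversion; a hypothetical sketch."""
    bc = c - b
    bc /= np.linalg.norm(bc)
    ab = b - a
    n = np.cross(ab, bc)            # normal to the a-b-c plane
    n /= np.linalg.norm(n)
    m = np.cross(n, bc)             # completes a right-handed frame
    # displacement of D expressed in the local frame (bc, m, n)
    d = distance * np.array([
        -np.cos(angle),
        np.sin(angle) * np.cos(dihedral),
        np.sin(angle) * np.sin(dihedral),
    ])
    return c + d[0] * bc + d[1] * m + d[2] * n

# Example: a right-angled, cis placement 1.5 units from atom c
a = np.array([0.0, 1.0, 0.0])
b = np.array([0.0, 0.0, 0.0])
c = np.array([1.0, 0.0, 0.0])
new_atom = place_atom(a, b, c, 1.5, np.pi / 2, 0.0)
```

Because the policy outputs these translation- and rotation-invariant internal coordinates, the resulting Cartesian geometry inherits the invariances the quote mentions.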
“…In the molecular generation process, the agent tries to take atoms from a given bag and place them on a 3D canvas. 61 This sequential generation of atoms in Cartesian coordinates to obtain molecules expands the class of molecules that can be generated and allows the generation of systems consisting of multiple molecules. Currently, this model is limited to designing molecules with known molecular formulas, and further exploration is needed to increase its scalability.…”
Section: Design
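The bag-and-canvas formulation in the quote can be sketched as a toy episodic environment: the state is the set of atoms placed so far plus a multiset (bag) of remaining elements, and each action picks an element from the bag and a 3D position. Class and method names, and the placeholder reward, are illustrative assumptions; the paper's actual reward is based on quantum-chemical (PM6) energy:

```python
import numpy as np

class MolCanvasEnv:
    """Toy sketch of the bag-and-canvas molecular-design MDP.
    Hypothetical names; not the paper's implementation."""

    def __init__(self, bag):
        self.bag = list(bag)          # e.g. ["O", "H", "H"] for water
        self.atoms = []               # atoms placed so far: (element, xyz)

    def step(self, element, position):
        assert element in self.bag, "element not available in the bag"
        self.bag.remove(element)
        self.atoms.append((element, np.asarray(position, dtype=float)))
        done = not self.bag           # episode ends when the bag is empty
        reward = 0.0                  # placeholder; the paper uses PM6 energy
        return self.atoms, reward, done

# Example episode: place the three atoms of a water-like bag
env = MolCanvasEnv(["O", "H", "H"])
_, _, done = env.step("O", [0.0, 0.0, 0.0])
_, _, done = env.step("H", [0.96, 0.0, 0.0])
_, _, done = env.step("H", [-0.24, 0.93, 0.0])
# done is True once every atom from the bag has been placed
```

Because actions place raw atoms rather than edit a molecular graph, nothing constrains the episode to a single connected molecule, which is why the quote notes that multi-molecule systems become reachable.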
“…All of the generative models discussed above generate molecules in the form of 2D graphs, or SMILES strings. Models to generate molecules directly in the form of 3D coordinates have also recently gained attention [105,106,107]. Such generated 3D coordinates can be directly used for further simulation using quantum mechanics or by using docking methods.…”
Section: Inverse Molecular Design
“…Reinforcement learning (RL) frames optimization problems as Markov decision processes for which an agent learns an optimal policy [55]. It has recently been applied to various optimization problems in structured input spaces [30], notably in chemical design [64,65,16,43,45,51]. While RL is undoubtedly effective at optimization, it is generally extremely sample inefficient, and consequently its biggest successes are in virtual environments where function evaluations are inexpensive [30].…”
Section: Related Work
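The MDP framing in the quote — an agent learning a policy by gradient ascent on expected reward — can be illustrated with a minimal REINFORCE loop on a two-armed bandit. This is a generic textbook sketch, unrelated to any of the cited chemical-design methods:

```python
import math
import random

# Two-armed bandit: arm 0 pays 0.0, arm 1 pays 1.0. The agent learns
# a softmax policy over the two arms via the REINFORCE update
# theta += lr * reward * grad log pi(a | theta).
random.seed(0)
theta = [0.0, 0.0]                  # policy logits, one per arm
rewards = [0.0, 1.0]
lr = 0.1
for _ in range(2000):
    z = [math.exp(t) for t in theta]
    s = sum(z)
    probs = [x / s for x in z]
    a = 0 if random.random() < probs[0] else 1
    r = rewards[a]
    # grad log pi(a) for a softmax policy is onehot(a) - probs
    for i in range(2):
        theta[i] += lr * r * ((1.0 if i == a else 0.0) - probs[i])
```

The sample inefficiency the quote mentions is visible even here: the agent must pull arms thousands of times to sharpen its policy, which is why expensive reward functions (e.g. quantum-chemical calculations) make RL for molecular design costly.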