2020
DOI: 10.1088/2632-2153/abae75

Improving the generative performance of chemical autoencoders through transfer learning

Abstract: Generative models are a sub-class of machine learning models that are capable of generating new samples with a target set of properties. In chemical and materials applications, these new samples might be drug targets, novel semiconductors, or catalysts constrained to exhibit an application-specific set of properties. Given their potential to yield high-value targets from otherwise intractable design spaces, generative models are currently under intense study with respect to how predictions can be improved thro…

Cited by 8 publications (11 citation statements)
References 33 publications
“…In previous work, we have demonstrated that the use of chemical autoencoders for targeted structure searching is effective for properties within the GDB19 dataset [39], namely internal energy, zero-point vibrational energy, and HOMO-LUMO gap [40]. To examine the generality of this approach, we investigated three models trained to individually predict VIP, EA, and DM, and sampled 100,000 structures in property ranges poorly represented in the training data. The targeted ranges for VIP, EA, and DM were 10.0 to 11.0 eV, -2.0 to -1.0 eV, and 0.0 to 1.0 Debye, respectively (Fig.…”
Section: Results (mentioning)
confidence: 99%
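The targeted sampling described in the statement above, drawing latent vectors and screening the decoded structures against under-represented property windows, can be outlined roughly as follows. This is an illustrative Python sketch, not the cited authors' code: `decoder`, `property_model`, and the latent dimension are hypothetical stand-ins for a trained chemical autoencoder and a property surrogate.

```python
# Hypothetical sketch: sample a trained chemical autoencoder's latent space and
# keep decoded structures whose predicted property falls in a targeted,
# poorly represented range (VIP 10-11 eV, EA -2 to -1 eV, DM 0-1 Debye).
# `decoder` and `property_model` are assumed, already-trained callables.
import numpy as np

TARGET_RANGES = {          # property -> (low, high), as quoted in the study
    "VIP": (10.0, 11.0),   # eV
    "EA":  (-2.0, -1.0),   # eV
    "DM":  (0.0, 1.0),     # Debye
}

def targeted_sample(decoder, property_model, prop,
                    n_samples=100_000, latent_dim=64, seed=0):
    """Draw latent vectors, decode them, and keep hits inside the target range."""
    rng = np.random.default_rng(seed)
    low, high = TARGET_RANGES[prop]
    hits = []
    for z in rng.normal(size=(n_samples, latent_dim)):
        structure = decoder(z)             # latent vector -> candidate structure
        value = property_model(structure)  # surrogate prediction of the property
        if low <= value <= high:
            hits.append((structure, value))
    return hits
```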
“…It has been shown capable of learning underlying relationships from a diverse set of molecular data by letting multiple data tasks and domains interact adaptively while generating the joint embeddings. Joint training is also a type of joint embedding that has been successfully applied to improve deep learning‐based molecule generation and enable transfer learning [25–29]. Joint training incorporates a property prediction task into a variational autoencoder (VAE) [30] and has been shown to organize points in the VAE latent space, making the latent space amenable to inverse molecular design and optimization [25,29].…”
Section: Introduction (mentioning)
confidence: 99%
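The joint training idea described above, attaching a property-prediction head to a VAE so that one loss combines reconstruction, KL, and property terms, can be sketched as below. This is a minimal PyTorch illustration under assumed architecture choices (layer widths, loss weights); it is not the implementation from references [25–30].

```python
# Minimal sketch (not the cited implementation): a VAE whose latent code also
# feeds a property-prediction head, so the joint loss organizes the latent
# space by property. Widths and weighting terms are illustrative assumptions.
import torch
import torch.nn as nn

class JointVAE(nn.Module):
    def __init__(self, in_dim, latent_dim=32, hidden=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, in_dim))
        self.prop_head = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, 1))  # scalar property

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), self.prop_head(z), mu, logvar

def joint_loss(x_hat, x, y_hat, y, mu, logvar, beta=1.0, gamma=1.0):
    recon = nn.functional.mse_loss(x_hat, x)                        # reconstruction
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # KL divergence
    prop = nn.functional.mse_loss(y_hat.squeeze(-1), y)             # property task
    return recon + beta * kld + gamma * prop
```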
“…jointly trained a VAE using drug-likeness and synthetic accessibility, then performed Bayesian optimization in the resulting latent space to identify novel drug‐like molecules [25]. In addition to latent space organization, joint training provides a platform for knowledge transfer between abundant and scarce data tasks [27,29]. When more than one property is used to develop the model, this constitutes a multitask transfer learning approach.…”
Section: Introduction (mentioning)
confidence: 99%
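The workflow quoted above, joint training followed by Bayesian optimization in the latent space, can be outlined as follows. The sketch assumes a trained `decoder` and a hypothetical scoring function `score_molecule` (e.g. a drug-likeness and synthetic-accessibility composite) and uses scikit-optimize's `gp_minimize` as a stand-in optimizer; it is not the procedure of reference [25].

```python
# Hedged sketch: Bayesian optimization over a trained VAE's latent space,
# maximizing a drug-likeness-style score of the decoded molecule.
# `decoder` and `score_molecule` are assumed, pre-existing callables.
from skopt import gp_minimize
from skopt.space import Real

LATENT_DIM = 32  # illustrative latent dimensionality

def make_objective(decoder, score_molecule):
    def objective(z):
        molecule = decoder(z)             # latent point -> candidate molecule
        return -score_molecule(molecule)  # negate: gp_minimize minimizes
    return objective

def optimize_latent(decoder, score_molecule, n_calls=100, bound=3.0, seed=0):
    space = [Real(-bound, bound) for _ in range(LATENT_DIM)]  # box around the prior
    result = gp_minimize(make_objective(decoder, score_molecule),
                         space, n_calls=n_calls, random_state=seed)
    return decoder(result.x), -result.fun  # best decoded molecule and its score
```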