2022
DOI: 10.48550/arxiv.2201.11872
Preprint
Local Latent Space Bayesian Optimization over Structured Inputs

Cited by 5 publications (4 citation statements)
References 0 publications
“…Figure S1 shows that optimization in the latent space improves continuously, but after decoding to an actual sequence and evaluating g(u) there is a plateau. This may be unique to UniRep or may be task-specific, because other recent work has successfully used latent spaces for BayesOpt 68,69 and there is work on improving extrapolation from latent spaces. 70 Nevertheless, avoiding latent space negates this potential agreement problem between u and x.…”
Section: B Bayesian Optimization
confidence: 99%
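The agreement problem described in the statement above can be illustrated with a toy sketch (hypothetical code, not from the cited work or the paper): a smooth latent objective keeps improving under optimization, while the decoded discrete sequence, and hence g(decode(u)), plateaus because many nearby latent points decode to the same sequence. `decode`, `g`, and `latent_objective` here are illustrative stand-ins, not real models.

```python
# Toy illustration of the latent-space "agreement problem": the continuous
# latent objective improves at every step, but the decoded discrete sequence
# stops changing, so the true objective g(decode(u)) plateaus.

def decode(u):
    # Toy decoder: snap each latent coordinate to the nearest integer "token".
    return tuple(int(round(x)) for x in u)

def g(x):
    # Toy black-box objective on decoded (discrete) sequences.
    return -sum((t - 3) ** 2 for t in x)

def latent_objective(u):
    # Smooth surrogate objective on the continuous latent point.
    return -sum((x - 3.0) ** 2 for x in u)

# Simple deterministic hill climb in latent space toward the latent optimum.
u = [0.0, 0.0]
latent_scores, decoded_scores = [], []
for step in range(40):
    u = [x + 0.1 * (3.0 - x) for x in u]   # move toward the optimum at 3.0
    latent_scores.append(latent_objective(u))
    decoded_scores.append(g(decode(u)))

# The latent score improves strictly at every step ...
assert all(b > a for a, b in zip(latent_scores, latent_scores[1:]))
# ... but the decoded score plateaus once decode(u) stops changing.
assert decoded_scores[-1] == decoded_scores[-10]
```

The plateau appears as soon as rounding maps every subsequent latent iterate to the same sequence, which is exactly the mismatch between progress measured in u and progress measured on x that the quoted passage attributes to decoding.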
“…The shortcomings of objectives like logP and QED appear to be well-known (Nigam et al., 2019; Tripp et al., 2021; Fu et al., 2020; Maus et al., 2022), but a superior alternative has not yet been accepted by the research community. For example, at the time of writing the only molecule generation benchmark in TorchDrug is maximization of QED and logP of ZINC-like molecules.…”
Section: Appendices
confidence: 99%
“…Numerous deep learning-based methods have been developed for de novo drug design, including approaches based on reinforcement learning (Olivecrona et al., 2017; Zhou et al., 2019; You et al., 2018; Jin et al., 2020; Yang et al., 2021; Horwood & Noutahi, 2020; Gottipati et al., 2020; Neil et al., 2018) and variational autoencoders (Gómez-Bombarelli et al., 2018; Maus et al., 2022; Jin et al., 2018; Bradshaw et al., 2020). These approaches use several different ways to encode molecules into something that the model can learn, such as fingerprint-, string- and graph-based encodings.…”
Section: Introduction
confidence: 99%