2017
DOI: 10.48550/arxiv.1703.01925
Preprint
Grammar Variational Autoencoder

Cited by 51 publications (92 citation statements)
References: 0 publications
“…We cannot provide a similar discussion for the GG-GAN comparison, as the paper does not release the values for the individual metrics. [table comparing GG-GAN and 3G-GAN (ours) on the individual metrics omitted]…”
Section: Methods
confidence: 99%
“…By definition, auto-regressive models are not permutation invariant. Some use a canonical sequential representation such as the SMILES representation (for instance: [20,17,5,7,2]). Others directly generate graph objects such as nodes or edges [25,18,9,21,10,8,6].…”
Section: Related Work
confidence: 99%
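The split above between sequence-based and graph-based generators rests on having a canonical sequential representation of a molecule. A minimal sketch of that idea, assuming RDKit is installed; the acetic-acid variants and variable names are illustrative choices, not taken from the cited papers:

```python
# Small illustration of the "canonical sequential representation" point:
# RDKit maps different atom orderings of the same molecule to a single
# canonical SMILES string, which sequence models then treat as the input.
from rdkit import Chem

variants = ["OC(=O)C", "CC(O)=O", "C(C)(=O)O"]          # all acetic acid
canonical = {Chem.MolToSmiles(Chem.MolFromSmiles(s)) for s in variants}
print(canonical)                                         # {'CC(=O)O'}
```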
“…Jin et al. [18] used a model called Junction Tree VAE (JT-VAE), which first generates a junction tree that serves as a scaffold for the molecule and then specifies the subgraphs as nodes of the junction tree. Their method of coarse-to-fine generation of molecules significantly outperformed previous SMILES-based approaches such as GrammarVAE [16], Character VAE [22], or the graph-generating GraphVAE in the validity of generated molecules. Another model, DeLinker, a gated graph neural network (GGNN) that links two molecular fragments together to form more complex compounds [23], also incorporates 3D structural information to constrain the linking process.…”
Section: Small Molecule Design
confidence: 99%
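To make the "junction tree as scaffold" idea above concrete, here is a hedged sketch of the coarse clustering step only, not the JT-VAE model itself. The function name and the aspirin example are my own illustrative choices, and the real JT-VAE decomposition additionally merges rings that share more than two atoms:

```python
# Hedged sketch of the coarse clustering behind JT-VAE's junction tree:
# rings and non-ring bonds become the "nodes" of the scaffold tree that
# is decoded before atom-level detail is filled in.
from rdkit import Chem


def tree_clusters(smiles):
    """Return the ring clusters and two-atom bond clusters of a molecule."""
    mol = Chem.MolFromSmiles(smiles)
    ring_clusters = [set(ring) for ring in mol.GetRingInfo().AtomRings()]
    bond_clusters = [
        {bond.GetBeginAtomIdx(), bond.GetEndAtomIdx()}
        for bond in mol.GetBonds()
        if not bond.IsInRing()
    ]
    return ring_clusters + bond_clusters


# Aspirin: one benzene-ring cluster plus the acyclic bonds around it.
print(tree_clusters("CC(=O)Oc1ccccc1C(=O)O"))
```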
“…One possible approach for correction is sequence-to-sequence learning with attention, where the input is a wrong sequence and the target is the corrected one [15]. Grammar VAE (GVAE) [16] was also used to generate SMILES that follow syntactic constraints given by a context-free grammar. Finally, syntax-directed VAE (SDVAE) [17] can make use of an attribute grammar to enforce syntactic and semantic constraints on generated SMILES.…”
Section: Small Molecule Design
confidence: 99%
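The grammar-constrained generation mentioned above works by letting the decoder choose only production rules whose left-hand side matches the non-terminal currently awaiting expansion. A minimal sketch of that masking loop, assuming NLTK is available; the toy three-rule grammar and the random rule choice stand in for the paper's full SMILES grammar and trained decoder:

```python
# Minimal sketch of grammar-masked decoding in the spirit of Grammar VAE:
# at each step only productions whose left-hand side matches the
# non-terminal on top of the stack may be chosen.
import random

import nltk

GRAMMAR = nltk.CFG.fromstring("""
    S -> CHAIN
    CHAIN -> ATOM | ATOM CHAIN
    ATOM -> 'C' | 'N' | 'O'
""")
PRODUCTIONS = list(GRAMMAR.productions())


def masked_decode(max_steps=30):
    """Expand non-terminals left to right, masking out invalid rules."""
    stack = [GRAMMAR.start()]
    tokens = []
    for _ in range(max_steps):
        if not stack:
            break
        symbol = stack.pop()
        if isinstance(symbol, str):   # terminal: emit it directly
            tokens.append(symbol)
            continue
        # The mask: only rules that expand the current non-terminal.
        valid = [p for p in PRODUCTIONS if p.lhs() == symbol]
        # A trained decoder would score every rule; here we pick at random.
        rule = random.choice(valid)
        # Push the right-hand side in reverse so the leftmost symbol
        # is expanded first.
        stack.extend(reversed(rule.rhs()))
    return "".join(tokens)  # may be truncated if max_steps is reached


print(masked_decode())   # e.g. 'CONC'
```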
“…Modern deep neural network (DNN) models have been used in various molecular applications, such as high-throughput screening for drug discovery [1][2][3][4], de novo molecular design [5][6][7][8][9][10][11][12] and planning chemical reactions [13][14][15]. DNNs show comparable or sometimes better performance than traditional approaches grounded in quantum chemical theories in predicting some molecular properties [16][17][18][19][20], provided that a vast amount of high-quality data is available.…”
Section: Introduction
confidence: 99%