2020
DOI: 10.1109/tnnls.2019.2919948

Dual Adversarial Autoencoders for Clustering

Cited by 40 publications (22 citation statements)
References 22 publications
“…The other is to evaluate the generated data through the classification scores of specific functions, such as the inception score [35]. Methods of this type [36]-[39] evaluate on the basis of a specific pretrained model, without considering the effect of real data and without assessing the authenticity of the generated images.…”
Section: B. Evaluation Methods and Indicator, 1) Evaluation Methods
Citation type: mentioning; confidence: 99%
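The inception score mentioned in this excerpt has a simple closed form: the exponentiated expected KL divergence between each generated image's class distribution under a pretrained classifier and the marginal class distribution. The sketch below is illustrative only; the `probs` array and the choice of classifier are assumptions, not details taken from the cited works.

```python
# Hedged sketch of the inception score; assumes `probs` is an (N, K) array of
# softmax outputs from a pretrained classifier (e.g., Inception-v3) on N generated images.
import numpy as np

def inception_score(probs, eps=1e-12):
    """IS = exp( E_x[ KL( p(y|x) || p(y) ) ] )."""
    p_y = probs.mean(axis=0, keepdims=True)                  # marginal label distribution p(y)
    kl = probs * (np.log(probs + eps) - np.log(p_y + eps))   # per-image KL terms
    return float(np.exp(kl.sum(axis=1).mean()))

# Toy check: confident, evenly spread predictions over 10 classes score close to 10.
probs = np.eye(10)[np.random.randint(0, 10, size=1000)]
print(inception_score(probs))
```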
“…Autoencoder (AE)-based methods employ encoders to project high-dimensional input molecules into low-dimensional representations and decoders to reconstruct the original inputs from these low-dimensional features. Enhanced by additional latent variables and discriminator neural networks [44], VAE-based [20], [27] and AAE-based [30], [31] approaches were proposed. Comprehensive work has been conducted in which various AE models are compared and assessed [28].…”
Section: A. Generative Model of Molecules
Citation type: mentioning; confidence: 99%
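As a concrete illustration of the encoder/decoder scheme the excerpt describes, here is a minimal autoencoder sketch. It assumes a PyTorch setup; the layer sizes, `input_dim`, and `latent_dim` are placeholders rather than values from the cited works, and the adversarial or variational regularizers used by the AAE/VAE variants are omitted.

```python
# Minimal autoencoder sketch (assumed PyTorch setup; all dimensions are placeholders).
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=10):
        super().__init__()
        # Encoder: project the high-dimensional input to a low-dimensional code.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        # Decoder: reconstruct the original input from the low-dimensional code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = Autoencoder()
x = torch.randn(32, 784)                    # dummy batch
x_hat, z = model(x)
loss = nn.functional.mse_loss(x_hat, x)     # reconstruction objective
```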
“…The first term on the right-hand side of Eq. (1) measures how accurate the generator is; the second term is the divergence loss (KL divergence being a popular choice), which measures how closely the encoded latent variables match a unit Gaussian, and λ is a scaling parameter [8]. In GANs, this divergence term is replaced by an adversarial network that tries to discriminate between original and generated samples [13], [26], [27], [28].…”
Section: A. Background
Citation type: mentioning; confidence: 99%
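The two-term objective described in this excerpt (reconstruction accuracy plus a λ-scaled divergence pulling the encoded latents toward a unit Gaussian) can be written as a short loss function. The sketch below assumes a PyTorch setup with a Gaussian encoder that outputs `mu` and `logvar`; the variable names and the MSE reconstruction term are assumptions, not taken from the cited paper.

```python
# Hedged sketch of the VAE-style objective from the excerpt (assumed PyTorch setup).
import torch
import torch.nn.functional as F

def vae_loss(x, x_hat, mu, logvar, lam=1.0):
    # Term 1: how accurately the generator/decoder reconstructs x.
    recon = F.mse_loss(x_hat, x, reduction="sum")
    # Term 2: KL( N(mu, sigma^2) || N(0, I) ), closed form for a diagonal Gaussian,
    # measuring how closely the encoded latents match a unit Gaussian.
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
    # lam plays the role of the scaling parameter λ in Eq. (1).
    return recon + lam * kl
```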