2020 10th Annual Computing and Communication Workshop and Conference (CCWC)
DOI: 10.1109/ccwc47524.2020.9031239
An Evolutionary Approach to Variational Autoencoders

Cited by 9 publications (15 citation statements)
References 11 publications
“…However, over the past years little research has been undertaken to find the optimal depth and width in an unsupervised manner. To the best of our knowledge, no prior research has been carried out to address the automatic learning of AutoEncoders' depth in an unsupervised fashion except the evolutionary and genetic algorithms presented in [26,27]. The encoder and decoder layers of a stacked AutoEncoder can be fully connected or convolutional layers.…”
Section: Related Work (mentioning)
confidence: 99%
“…Recently, the problem of learning the width and the depth of deep AEs has been addressed. As such, the topology of the AE is learned while training the model through dynamic expansion and/or pruning of neurons and/or layers [18,[26][27][28][29][30][31]. These approaches are referred to as dynamic or evolving AE architectures in the literature.…”
Section: Related Work (mentioning)
confidence: 99%
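The snippets above describe evolutionary and genetic search over an autoencoder's depth and width, with the topology changed by expanding or pruning layers and neurons. A minimal sketch of such a loop follows; the genome encoding, mutation operators, and the stand-in fitness function are illustrative assumptions, not the cited authors' implementations.

```python
# Minimal sketch of an evolutionary search over autoencoder topologies
# (depth and layer widths). The genome, mutation operators, and the
# placeholder fitness function are assumptions for illustration only.
import random

random.seed(0)

MAX_DEPTH = 5
WIDTHS = [16, 32, 64, 128, 256]


def random_genome():
    """A genome is the list of encoder layer widths; the decoder mirrors it."""
    depth = random.randint(1, MAX_DEPTH)
    return [random.choice(WIDTHS) for _ in range(depth)]


def mutate(genome):
    """Grow, shrink, or resize a layer (dynamic expansion / pruning)."""
    g = list(genome)
    op = random.choice(["add", "remove", "resize"])
    if op == "add" and len(g) < MAX_DEPTH:
        g.insert(random.randrange(len(g) + 1), random.choice(WIDTHS))
    elif op == "remove" and len(g) > 1:
        g.pop(random.randrange(len(g)))
    else:
        g[random.randrange(len(g))] = random.choice(WIDTHS)
    return g


def fitness(genome):
    """Placeholder: a real run would train the (V)AE defined by `genome`
    and return e.g. negative validation reconstruction loss. Here we only
    reward narrow bottlenecks and penalise size so the example runs
    instantly and stays self-contained."""
    bottleneck_penalty = genome[-1] / max(WIDTHS)
    size_penalty = sum(genome) / (MAX_DEPTH * max(WIDTHS))
    return -(bottleneck_penalty + 0.5 * size_penalty)


def evolve(pop_size=12, generations=20, elite=2):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        children = [mutate(random.choice(parents))
                    for _ in range(pop_size - elite)]
        population = population[:elite] + children  # elitism: retain the best
    return max(population, key=fitness)


print("best encoder widths:", evolve())
```

The elitism step corresponds to the "retain the best group" strategy mentioned in the selection-strategy table quoted further below.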
“…[table entry: this work appears among references [107]–[197] compared in the survey's results table]…”
Section: B. Comparison on CIFAR-10 and CIFAR-100 (unclassified)
“…Such computational resources are not available to everyone who is interested in NAS. [The snippet then reproduces a table of parent-selection strategies and the works using them:]
Retain the best group: [26], [36], [38], [42], [49], [55], [73], [77], [84], [97], [111], [114], [132], [156], [157]
Discard the worst or the oldest: [56], [76], [86], [130], [131], [155], [184]
Roulette: [29], [30], [44], [60], [62], [107], [113], [116], [133]
Tournament selection: [9], [19], [20], [31], [59], [63], [80], [93], [99], [108], [117], [120], [125], [127], [128], [184]
Others: [57], [105]
Almost all o...…”
Section: Shorten the Evaluation Time (mentioning)
confidence: 99%
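The strategies enumerated in that table (elitism, removal of the worst or oldest, roulette, tournament) are standard evolutionary-computation selection operators. A hedged sketch of two of them follows, using made-up fitness scores rather than anything tied to the cited implementations.

```python
# Sketch of tournament and roulette-wheel selection, two of the
# parent-selection strategies listed above. Population and scores are
# invented for illustration.
import random

random.seed(1)


def tournament_select(population, fitness, k=3):
    """Draw k candidates uniformly at random and keep the fittest."""
    contenders = random.sample(population, k)
    return max(contenders, key=fitness)


def roulette_select(population, fitness):
    """Sample one candidate with probability proportional to its fitness."""
    weights = [fitness(ind) for ind in population]
    return random.choices(population, weights=weights, k=1)[0]


# Toy population: each "architecture" is just a name with a fake score.
scores = {"arch_a": 0.71, "arch_b": 0.84, "arch_c": 0.62, "arch_d": 0.90}
population = list(scores)

parents = [tournament_select(population, scores.get, k=2) for _ in range(4)]
print("tournament parents:", parents)
print("roulette parent:   ", roulette_select(population, scores.get))
```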
“…In today's literature, this problem is thus generally circumvented by basing the reward on a second model that judges the generated architecture through a supervised proxy objective, i.e. scoring the generated images according to a classifier (Gong et al., 2019), or by refraining from reinforcement learning and adopting techniques from evolutionary computing (Hajewski and Oliveira, 2020).…”
Section: Short-term Prospects (mentioning)
confidence: 99%
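The classifier-based proxy objective mentioned in that snippet amounts to scoring generated samples with a pretrained classifier and using the aggregate confidence as the reward. A minimal sketch under stated assumptions: the logits below are random placeholders standing in for a real classifier's outputs on generated images.

```python
# Hedged sketch of a classifier-based proxy reward: generated images are
# scored with a (here: simulated) pretrained classifier, and the mean
# top-class confidence over the batch is used as the reward signal.
import numpy as np

rng = np.random.default_rng(0)


def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)


def proxy_reward(class_logits):
    """Mean top-class confidence over a batch of generated samples;
    higher means the classifier finds the samples more recognisable."""
    probs = softmax(class_logits)
    return float(probs.max(axis=1).mean())


# Placeholder logits for 64 generated images over 10 classes; in practice
# these come from running the classifier on the generator's outputs.
fake_logits = rng.normal(size=(64, 10))
print("proxy reward:", round(proxy_reward(fake_logits), 4))
```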