2022
DOI: 10.48550/arxiv.2209.12590
Preprint
Learning to Drop Out: An Adversarial Approach to Training Sequence VAEs

Cited by 2 publications (2 citation statements)
References 0 publications
“…27) can be in an adversarial form. Miladinović et al [32] demonstrated the potential of learning a dropout model via a GAN-like formulation. We modify the objective in an analogous way by creating a max-min game between the Selector and the predictor:…”
Section: Objective Functions (mentioning)
Confidence: 99%
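The statement above describes a max-min game between a "Selector", which learns what to drop, and a predictor. A minimal sketch of such adversarial dropout dynamics, assuming a toy linear regression with a soft (expected-mask) relaxation of dropout; the Selector/predictor names follow the quote, while the model, data, and update rules here are illustrative assumptions, not the cited papers' actual method:

```python
import numpy as np

# Illustrative max-min game (not the cited papers' method): a Selector learns
# per-feature drop probabilities p to MAXIMIZE a linear predictor's squared
# loss, while the predictor updates its weights w to MINIMIZE it. Sampled
# dropout is replaced by a soft relaxation: inputs scaled by (1 - p).

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + 0.1 * rng.normal(size=n)

w = np.zeros(d)        # predictor parameters (minimizer)
p = np.full(d, 0.1)    # Selector's drop probabilities (maximizer)

lr_w, lr_p = 0.05, 0.05
for step in range(500):
    Xm = X * (1.0 - p)                # expected masked input
    r = Xm @ w - y                    # residuals
    grad_w = 2.0 / n * Xm.T @ r       # predictor takes a descent step
    w -= lr_w * grad_w
    # d(loss)/dp_j = -(2/n) * w_j * sum_i x_ij * r_i; Selector ascends
    grad_p = -2.0 / n * w * (X * r[:, None]).sum(axis=0)
    p = np.clip(p + lr_p * grad_p, 0.0, 0.95)

loss = float(np.mean((X * (1.0 - p) @ w - y) ** 2))
```

The alternating descent/ascent updates are the essence of the GAN-like formulation the quote refers to: the Selector concentrates drop probability on the features the predictor relies on most, forcing the predictor to spread its reliance.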
“…Often these single-hop questions are combined to form a multi-hop question that requires complex reasoning to solve it (Pan et al, 2021). Controllable text generation has been studied in the past for text generation (Hu et al, 2017; Miladinović et al, 2022; Carlsson et al, 2022), Wikipedia texts (Liu et al, 2018; Prabhumoye et al, 2018) and data-to-text generation (Puduppully and Lapata, 2021; Su et al, 2021). Controlled text generation is particularly useful for ensuring that the information is correct or the numbers are encapsulated properly (Gong et al, 2020).…”
Section: Related Work (mentioning)
Confidence: 99%