2022
DOI: 10.48550/arxiv.2205.05943
Preprint

Exploiting Inductive Bias in Transformers for Unsupervised Disentanglement of Syntax and Semantics with VAEs

Abstract: We propose a generative model for text generation that exhibits disentangled latent representations of syntax and semantics. Unlike previous work, this model needs neither syntactic information, such as constituency parses, nor semantic information, such as paraphrase pairs. Our model relies solely on the inductive bias found in attention-based architectures such as Transformers. In the attention of Transformers, keys handle information selection while values specify what information is conveyed. Our model, …
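
The abstract's central observation, that keys select which information is attended to while values carry the content that gets conveyed, refers to standard scaled dot-product attention (Vaswani et al., 2017). Below is a minimal NumPy sketch of that mechanism for illustration only; it is not the paper's model (whose name is truncated above), and the toy dimensions and random seed are arbitrary assumptions.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Illustrative scaled dot-product attention.

    Queries and keys determine *where* to attend (information
    selection); values determine *what* content is passed along
    once a position is selected.
    """
    d_k = q.shape[-1]
    # Keys interact with queries to produce selection scores.
    scores = q @ k.T / np.sqrt(d_k)                     # (n_q, n_k)
    # Softmax over keys turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Values supply the conveyed information, mixed by the weights.
    return weights @ v                                  # (n_q, d_v)

# Toy example (hypothetical shapes): 2 queries over 3 key/value pairs.
rng = np.random.default_rng(0)
q = rng.normal(size=(2, 4))
k = rng.normal(size=(3, 4))
v = rng.normal(size=(3, 8))
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # (2, 8)
```

The separation visible here, with weights computed from keys alone and outputs mixed from values alone, is the inductive bias the abstract says the model exploits for disentanglement.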

Cited by 0 publications | References 31 publications (42 reference statements)