Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.367
Autoencoding Pixies: Amortised Variational Inference with Graph Convolutions for Functional Distributional Semantics

Abstract: Functional Distributional Semantics provides a linguistically interpretable framework for distributional semantics, by representing the meaning of a word as a function (a binary classifier), instead of a vector. However, the large number of latent variables means that inference is computationally expensive, and training a model is therefore slow to converge. In this paper, I introduce the Pixie Autoencoder, which augments the generative model of Functional Distributional Semantics with a graph-convolutional neural network…
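The abstract contrasts two ideas: representing a word's meaning as a binary classifier over latent entity representations ("pixies"), and using a graph-convolutional network to amortise inference over those latent variables. The sketch below illustrates both ideas in Python with PyTorch; it is not the paper's actual architecture, and the dimensions, layer sizes, and toy dependency graph are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

PIXIE_DIM = 40   # assumed latent ("pixie") dimensionality
HIDDEN = 64      # assumed hidden layer size


class SemanticFunction(nn.Module):
    """The meaning of one word: a binary classifier giving P(word is true of pixie x)."""

    def __init__(self, pixie_dim=PIXIE_DIM, hidden=HIDDEN):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pixie_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, pixie):
        return torch.sigmoid(self.net(pixie)).squeeze(-1)


class GraphConvLayer(nn.Module):
    """One graph-convolution step over a dependency graph: each node's new state
    mixes its own features with the sum of its neighbours', so an amortised
    encoder can condition each pixie on the words it is linked to."""

    def __init__(self, dim=PIXIE_DIM):
        super().__init__()
        self.self_lin = nn.Linear(dim, dim)
        self.neigh_lin = nn.Linear(dim, dim)

    def forward(self, node_feats, adjacency):
        # adjacency: (n, n) 0/1 matrix of dependency links (symmetric here)
        neigh = adjacency @ node_feats  # sum of neighbour features per node
        return torch.relu(self.self_lin(node_feats) + self.neigh_lin(neigh))


if __name__ == "__main__":
    # Toy dependency graph for "dog chases cat": the verb is linked to both arguments.
    feats = torch.randn(3, PIXIE_DIM)
    adj = torch.tensor([[0., 1., 0.],
                        [1., 0., 1.],
                        [0., 1., 0.]])
    encoded = GraphConvLayer()(feats, adj)   # graph-convolutional encoding
    print(SemanticFunction()(encoded))       # one truth probability per node
```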

Cited by 6 publications (6 citation statements); References 59 publications
“…My own recent work in this direction has been to develop the Pixie Autoencoder (Emerson, 2020a), and I look forward to seeing alternative approaches from other authors, as the field of distributional semantics continues to grow. I hope that this survey paper will help other researchers to develop the field in a way that keeps long-term goals in mind.…”
Section: Discussion (mentioning)
Confidence: 99%
“…In my own work, I have learnt classifiers (Emerson and Copestake, 2016, 2017a,b), but with a computationally expensive model that is difficult to train. The computational challenge is partially resolved in my most recent work (Emerson, 2020a), but there is still work to be done in scaling up the model to make full use of the corpus data. The best way to design such a model, so that it can both make full use of the data and can be trained efficiently, is an open question.…”
Section: Concepts and Referents (mentioning)
Confidence: 99%
“…Alongside count-based models, a variety of neural ones have been proposed to encode syntactic structure, focusing on different depths of the graph (Levy and Goldberg, 2014; Komninos and Manandhar, 2016; Marcheggiani and Titov, 2017; Vashishth et al., 2019; Emerson, 2020). Of particular note here, Levy and Goldberg (2014) and Komninos and Manandhar (2016) each proposed models (DEP and EXT, respectively) which learn from local dependency relations, by extending the Skip-Gram with Negative Sampling (SGNS) architecture from word2vec (Mikolov et al., 2013).…”
Section: Introduction (mentioning)
Confidence: 99%
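The DEP model mentioned in this statement replaces word2vec's linear-window contexts with dependency-based contexts (e.g. a word paired with the relation linking it to its head or dependent). Below is a minimal sketch of skip-gram with negative sampling over such contexts; the vocabulary, context inventory, dimensionality, and the single training step are illustrative assumptions, not code from any of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 50  # assumed embedding dimensionality

# Illustrative vocabularies: words, and dependency-based contexts of the form
# "lemma/relation" (with an inverse relation for the head side of an arc).
words = {"dog": 0, "chases": 1, "cat": 2}
contexts = {"dog/nsubj": 0, "cat/dobj": 1, "chases/nsubj-1": 2, "chases/dobj-1": 3}

W = rng.normal(scale=0.1, size=(len(words), DIM))      # word vectors
C = rng.normal(scale=0.1, size=(len(contexts), DIM))   # context vectors


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def sgns_update(w_idx, c_idx, neg_idx, lr=0.05):
    """One SGNS step: pull the word vector towards its observed dependency
    context, push it away from a few randomly sampled negative contexts."""
    w = W[w_idx].copy()
    # Positive (observed) context
    g = 1.0 - sigmoid(w @ C[c_idx])
    W[w_idx] += lr * g * C[c_idx]
    C[c_idx] += lr * g * w
    # Negative (sampled) contexts
    for n in neg_idx:
        g = -sigmoid(w @ C[n])
        W[w_idx] += lr * g * C[n]
        C[n] += lr * g * w


# "chases" observed with the dependency context "dog/nsubj", two random negatives.
sgns_update(words["chases"], contexts["dog/nsubj"],
            neg_idx=rng.integers(0, len(contexts), size=2))
```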