Proceedings of the 24th Conference on Computational Natural Language Learning 2020
DOI: 10.18653/v1/2020.conll-1.24

Representation Learning for Type-Driven Composition

Abstract: This paper is about learning word representations using grammatical type information. We use the syntactic types of Combinatory Categorial Grammar to develop multilinear representations, i.e. maps with n arguments, for words with different functional types. The multilinear maps of words compose with each other to form sentence representations. We extend the skipgram algorithm from vectors to multilinear maps to learn these representations and instantiate it on unary and binary maps for transitive verbs. These …
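A minimal sketch of the kind of composition the abstract describes, assuming the common tensor-based setup of compositional distributional semantics (this is an illustration, not the paper's implementation): nouns are vectors, a transitive verb is a binary multilinear map, i.e. an order-3 tensor, and a subject-verb-object sentence vector is obtained by contracting the verb tensor with both noun vectors.

```python
import numpy as np

# Hedged sketch, not the paper's code: toy dimension and random
# "embeddings" stand in for learned representations.
rng = np.random.default_rng(0)
d = 4  # toy embedding dimension (illustrative)

# Nouns (atomic type N) are vectors in R^d.
subject = rng.standard_normal(d)  # e.g. "dogs"
obj = rng.standard_normal(d)      # e.g. "cats"

# A transitive verb takes two vector arguments (object, subject) and
# returns a sentence vector: a binary multilinear map, concretely an
# order-3 tensor V in R^{d x d x d}.
verb = rng.standard_normal((d, d, d))  # e.g. "chase"

# Sentence meaning via tensor contraction:
# sentence[k] = sum_{i,j} verb[k, i, j] * subject[i] * obj[j]
sentence = np.einsum('kij,i,j->k', verb, subject, obj)
assert sentence.shape == (d,)
```

The contraction is linear in each argument separately, which is exactly the multilinearity the abstract refers to; unary maps (e.g. for adjectives) would be the d × d matrix special case.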


Cited by 13 publications (36 citation statements)
References 26 publications (40 reference statements)
“…To the best of our knowledge, the score achieved by SBERT is, by far, the highest correlation value reported on this dataset. Our results coincide exactly with those reported in [36] for this system and this dataset. It should be noted, however, that SBERT is a supervised approach as it relies on more than 1 million annotated sentence pairs.…”
Section: Discussion (supporting, confidence: 93%)
“…In English, the best configuration is also achieved with a right-to-left strategy, 53, just considering the contextualized head but using lexico-syntactic units instead of lemmas. To the best of our knowledge, this value is very close to the highest score, 54, obtained by a compositional system on the English dataset [36], and outperforms other compositional methods whose values for the English dataset are also shown in Table 1 (last rows in left side), namely [37,38], and the neural network method reported in [39].…”
Section: Results (supporting, confidence: 72%)
“…In the present paper we analyse the Gaussianity of the matrices in [20] using the permutation invariant matrix model of [8]. We again find strong evidence for approximate Gaussianity.…”
Section: Introduction (mentioning, confidence: 62%)
“…The construction of matrices in [1] was based on first constructing vectors for nouns and noun phrases, followed by using a linear regression method to construct the matrices as in earlier literature [13]. In the recent paper [20] the machine learning algorithm of [7] was extended to construct matrices for verbs.…”
Section: Introduction (mentioning, confidence: 99%)
“…To interpret text fragments taking into account their grammatical features while staying in the vector framework, the dimension of the representation quickly scales up with the complexity of the syntactic type, which has been a limiting feature in distributional semantics implementations [32]. This motivates a representation of words as quantum states, counting on the potential of quantum computers to outperform the limitations of classical computation both in terms of memory use [33] and in processing efficiency [34].…”
Section: A Quantum States As Inputs Of a Quantum Circuit (mentioning, confidence: 99%)
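The last citation statement notes that representation size "quickly scales up with the complexity of the syntactic type". A back-of-the-envelope illustration of that blow-up, under the standard tensor-based scheme where a word of a type with n arguments is an order-(n+1) tensor (the dimension d = 300 below is an assumed, typical value, not taken from the paper):

```python
# Hedged illustration: parameter counts per word by syntactic type,
# assuming one tensor index per argument plus one for the output.
d = 300  # assumed embedding dimension

noun = d             # vector, atomic type N
adjective = d ** 2   # matrix, unary map (type N/N)
trans_verb = d ** 3  # order-3 tensor, binary map (transitive verb)
ditrans_verb = d ** 4  # order-4 tensor (ditransitive verb)

print(noun, adjective, trans_verb, ditrans_verb)
```

At d = 300 a single transitive verb already requires 27 million parameters, which is the memory limitation the quoted passage points to.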