Proceedings of the 5th Workshop on Representation Learning for NLP 2020
DOI: 10.18653/v1/2020.repl4nlp-1.23

Supertagging with CCG primitives

Abstract: In CCG and other highly lexicalized grammars, supertagging a sentence's words with their lexical categories is a critical step for efficient parsing. Because of the high degree of lexicalization in these grammars, the lexical categories can be very complex. Existing approaches to supervised CCG supertagging treat the categories as atomic units, even when the categories are not simple; when they encounter words with categories unseen during training, their guesses are accordingly unsophisticated. In this paper, …
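To make the abstract's notion of category primitives concrete, here is a minimal sketch (not the paper's implementation) of how a complex CCG lexical category such as (S\NP)/NP, the category of a transitive verb, decomposes into atomic categories and slash operators rather than remaining one opaque label. The tokenizer and its output format are illustrative assumptions.

```python
# Illustrative sketch: break a CCG category string into its primitives
# (atomic categories, slashes, parentheses) instead of treating the
# whole string as one atomic tag. Not the paper's actual implementation.

def tokenize_category(cat: str) -> list[str]:
    """Split a CCG category string into primitive tokens,
    e.g. "(S\\NP)/NP" -> ['(', 'S', '\\', 'NP', ')', '/', 'NP']."""
    tokens, i = [], 0
    while i < len(cat):
        ch = cat[i]
        if ch in "()/\\":
            tokens.append(ch)  # a slash operator or bracket
            i += 1
        else:
            j = i
            while j < len(cat) and cat[j] not in "()/\\":
                j += 1
            tokens.append(cat[i:j])  # an atomic category such as S, NP, PP
            i = j
    return tokens

print(tokenize_category("(S\\NP)/NP"))
# ['(', 'S', '\\', 'NP', ')', '/', 'NP']
```

A supertagger that predicts these primitives one at a time can, in principle, emit a category it never saw during training, which is exactly the long-tail case the abstract describes.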

Cited by 12 publications (23 citation statements)
References 28 publications (15 reference statements)
“…Two different methods of sequential decoding have been explored by Kogkalidis et al. (2019) (hereafter 'K+19') and Bhargava and Penn (2020) ('BP20'). K+19 used a sequence-to-sequence model, with a single target sequence consisting of all serialized supertags for a sentence (Figure 3c).…”
Section: Constructivity in Supertagging
Citation type: mentioning
Confidence: 99%
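The serialization this statement attributes to K+19 can be illustrated with a small sketch: each word's supertag is spelled out as primitives, and the whole sentence's supertags are concatenated into one target sequence for a sequence-to-sequence decoder. The <sep> boundary token and the exact token inventory below are assumptions for illustration, not K+19's actual vocabulary.

```python
# Illustrative sketch of sequential decoding with one target sequence
# per sentence: per-word primitive sequences are flattened into a single
# stream, with a hypothetical <sep> marker at each word boundary.

SEP = "<sep>"  # hypothetical word-boundary token, assumed for this sketch

def serialize_supertags(supertags: list[list[str]]) -> list[str]:
    """Flatten per-word primitive sequences into one target sequence."""
    target = []
    for tag_tokens in supertags:
        target.extend(tag_tokens)  # the primitives of one word's supertag
        target.append(SEP)         # mark the end of that word's tag
    return target

# "Cats sleep": NP for "cats", S\NP for "sleep"
print(serialize_supertags([["NP"], ["S", "\\", "NP"]]))
# ['NP', '<sep>', 'S', '\\', 'NP', '<sep>']
```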
“…In this paper, we confront the long-tail problem head-on by proposing a constructive framework in which supertags are built from scratch rather than predicted as opaque labels (Kogkalidis et al., 2019). In contrast to prior constructive supertaggers (Kogkalidis et al., 2019; Bhargava and Penn, 2020), our model builds upon the observation that supertags are themselves tree-structured, and hence can be generated top-down. Our experiments on the English CCGbank and its rebanked version show that constructing supertags as trees improves our ability to predict rare and even unseen tags, without sacrificing performance on the more common ones.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
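The tree-structured view of supertags described above can be sketched as follows: slashes are internal nodes with a result child and an argument child, atoms are leaves, so a category can be built root-first (top-down). The CatNode class is a hypothetical illustration, not the cited model's actual data structure.

```python
# Illustrative sketch: a CCG category as a binary tree. Internal nodes
# carry a slash; leaves carry atomic categories. Building the root first
# and then its children mirrors top-down generation.

from dataclasses import dataclass
from typing import Optional

@dataclass
class CatNode:
    label: str                            # '/', '\\', or an atom like 'S', 'NP'
    result: Optional["CatNode"] = None    # left child (the function's result)
    argument: Optional["CatNode"] = None  # right child (the function's argument)

    def __str__(self) -> str:
        if self.label in ("/", "\\"):
            return f"({self.result}{self.label}{self.argument})"
        return self.label

# (S\NP)/NP built root-first: predict '/', then its children, and so on.
transitive_verb = CatNode("/",
                          CatNode("\\", CatNode("S"), CatNode("NP")),
                          CatNode("NP"))
print(transitive_verb)  # ((S\NP)/NP)
```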
“…A key aspect of our parser is that it makes use of a structured decomposition of lexical categories in categorial grammars. In this sense, our work follows up on the intuition of recent "constructive" supertaggers, which have been explored for a type-logical grammar (Kogkalidis et al., 2019) and for CCG (Bhargava and Penn, 2020; Prange et al., 2021). Such supertaggers construct categories out of the atomic categories of the grammar; this challenges the classical approach to supertagging, where lexical categories are treated as opaque, rendering the task of supertagging equivalent to large-tagset POS tagging.…”
Section: Related Work
Citation type: mentioning
Confidence: 99%
“…Such supertaggers construct categories out of the atomic categories of the grammar; this challenges the classical approach to supertagging, where lexical categories are treated as opaque, rendering the task of supertagging equivalent to large-tagset POS tagging. With this view, it becomes possible for novel categories to be produced; furthermore, the supertaggers are better able to incorporate prediction history and thereby produce grammatical outputs (Bhargava and Penn, 2020). Recently, Kogkalidis et al. (2020) proposed a system for parsing a "type-logical" grammar that is essentially a modal, non-directional extension of LCG.…”
Section: Related Work
Citation type: mentioning
Confidence: 99%
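One way to see why constructing categories from primitives helps "produce grammatical outputs" is that decoding can be restricted to token sequences that actually parse as a category. Below is an illustrative well-formedness check over a simplified category grammar (complex categories fully parenthesized); it is a sketch under those assumptions, not any cited system's decoder.

```python
# Illustrative sketch: check that a primitive sequence parses as a CCG
# category under the simplified grammar
#   CAT -> ATOM | '(' CAT SLASH CAT ')'     with SLASH in {'/', '\\'}.
# A constrained decoder could use such a check to rule out ill-formed output.

def is_wellformed(tokens: list[str]) -> bool:
    """Return True iff tokens form exactly one category under the grammar."""
    pos = 0

    def parse_cat() -> bool:
        nonlocal pos
        if pos >= len(tokens):
            return False
        if tokens[pos] == "(":
            pos += 1
            if not parse_cat():                                   # result
                return False
            if pos >= len(tokens) or tokens[pos] not in ("/", "\\"):
                return False
            pos += 1                                              # slash
            if not parse_cat():                                   # argument
                return False
            if pos >= len(tokens) or tokens[pos] != ")":
                return False
            pos += 1
            return True
        if tokens[pos] in ("/", "\\", ")"):
            return False
        pos += 1  # an atomic category
        return True

    return parse_cat() and pos == len(tokens)

print(is_wellformed(["(", "S", "\\", "NP", ")"]))  # True
print(is_wellformed(["(", "S", "\\", ")"]))        # False
```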
“…Combinatory Categorial Grammar (CCG) (Steedman, 2000) is a mildly context-sensitive grammar formalism. Several neural CCG parsing methods have been proposed so far (Lewis and Steedman, 2014; Xu et al., 2015; Vaswani et al., 2016; Xu, 2016; Yoshikawa et al., 2017; Steedman, 2019, 2020; Bhargava and Penn, 2020; Tian et al., 2020; Prange et al., 2021; Liu et al., 2021). Currently, neural span-based models (Cross and Huang, 2016; Stern et al., 2017; Gaddy et al., 2018; Kitaev and Klein, 2018) have been successful in the field of constituency parsing.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%