Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022
DOI: 10.18653/v1/2022.naacl-main.189
A Structured Span Selector

Abstract: Many natural language processing tasks, e.g., coreference resolution and semantic role labeling, require selecting text spans and making decisions about them. A typical approach to such tasks is to score all possible spans and greedily select spans for task-specific downstream processing. This approach, however, does not incorporate any inductive bias about what sort of spans ought to be selected, e.g., that selected spans tend to be syntactic constituents. In this paper, we propose a novel grammar-based struc…
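The baseline the abstract contrasts against — score every candidate span, then greedily keep the highest-scoring non-overlapping ones — can be sketched as follows. This is a minimal illustration, not the paper's method: `enumerate_spans`, `greedy_select`, and the random stand-in scorer are all hypothetical names for what would, in practice, be a learned span-scoring model.

```python
import random

def enumerate_spans(n, max_len=5):
    """All (start, end) spans over a length-n sentence, end exclusive, length <= max_len."""
    return [(i, j) for i in range(n) for j in range(i + 1, min(i + max_len, n) + 1)]

def greedy_select(spans, scores, k):
    """Greedily keep up to k mutually non-overlapping spans, in descending score order."""
    selected = []
    for span, _ in sorted(zip(spans, scores), key=lambda p: -p[1]):
        if len(selected) == k:
            break
        # keep the span only if it does not overlap any already-selected span
        if all(span[1] <= s or span[0] >= e for (s, e) in selected):
            selected.append(span)
    return selected

random.seed(0)
spans = enumerate_spans(8)                     # all candidate spans of an 8-token sentence
scores = [random.random() for _ in spans]      # stand-in for learned span scores
picked = greedy_select(spans, scores, k=3)
```

Nothing in this greedy loop prefers syntactic constituents over arbitrary token windows, which is exactly the missing inductive bias the paper's grammar-based selector is designed to supply.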

Cited by 8 publications (14 citation statements)
References: 62 publications
“…As shown in Table 4, Bohnet and others [26] and Liu and others [25] reported average F1 scores of 83.3 and 82.3 for CR, respectively. However, to achieve such a high performance, they used mT5 XXL (LM with a parameter size of 13 billion) and T0 3B (LM with a parameter size of 3 billion).…”
Section: Results
confidence: 94%
“…On ACE 05, ATG provide a competitive performance, securing the second-highest scores. The reported top-performing model, ASP (Liu et al 2022), operates under a relaxed, undirected relation evaluation, thereby limiting a fair comparison of results (Taillé et al 2021). On the CoNLL 2004 dataset, ATG exhibits its superiority by outperforming the second-best result by 2.2 in terms of REL+.…”
Section: Results
confidence: 99%
“…Recent advancements in generative Information Extraction (IE) emphasize the use of language models (LMs) to produce entities and relations, either as text or as a sequence of actions (Paolini et al 2021;Lu et al 2022;Nayak and Ng 2020;Liu et al 2022;Fei et al 2022;Wan et al 2023). Typically, these models employ pretrained encoder-decoder architectures, such as T5 (Raffel et al 2019) or BART (Lewis et al 2020), to encode an input text and subsequently decode it into a structured output.…”
Section: Generative IE
confidence: 99%