Proceedings of the Fourth Workshop on Structured Prediction for NLP 2020
DOI: 10.18653/v1/2020.spnlp-1.9
Reading the Manual: Event Extraction as Definition Comprehension

Abstract: We ask whether text understanding has progressed to where we may extract event information through incremental refinement of bleached statements derived from annotation manuals. Such a capability would allow for the trivial construction and extension of an extraction framework by intended end-users through declarations such as, Some person was born in some location at some time. We introduce an example of a model that employs such statements, with experiments illustrating we can extract events under closed ont…

Cited by 40 publications (20 citation statements)
References 23 publications
“…Chen et al. [45] use bleached statements to give models access to information included in annotation manuals. Du et al. [7] apply a machine reading comprehension method to event extraction and enhance the data by constructing multiple questions for each argument.…”
Section: Comparisons
confidence: 99%
“…t ← TRUTHPREDICTOR(y)
13: P ← P ∪ (@truth, t)
14: return P
15: end function

Single subsection. We follow the paradigm of Chen et al. (2020), where we iteratively modify the text of the subsection by inserting argument values, and predict values for uninstantiated arguments. Throughout the following, we refer to Algorithm 1 and to its notation.…”
Section: Algorithm 1 Argument Instantiation for a Single Subsection
confidence: 99%
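The iterative paradigm quoted above — repeatedly inserting a predicted argument value into a bleached statement, then predicting the next uninstantiated argument — can be sketched as follows. This is a minimal illustration, not the authors' model: `instantiate_arguments`, the `[slot]` template syntax, and the toy table-lookup predictor are all hypothetical stand-ins.

```python
def instantiate_arguments(statement, arguments, predict_value):
    """Iteratively fill each uninstantiated argument slot.

    statement: template with slots like "[person] was born in [location]".
    arguments: ordered list of slot names to instantiate.
    predict_value: callable (statement_text, slot) -> predicted value.
    """
    values = {}
    for slot in arguments:
        # Predict a value for the next slot given the partially
        # instantiated statement, then insert it into the text so
        # later predictions condition on earlier ones.
        value = predict_value(statement, slot)
        values[slot] = value
        statement = statement.replace(f"[{slot}]", value)
    return statement, values

# Toy predictor: look values up in a fixed table (a real system would
# run a reading-comprehension model here).
table = {"person": "Ada Lovelace", "location": "London"}
text, vals = instantiate_arguments(
    "[person] was born in [location].",
    ["person", "location"],
    lambda s, slot: table[slot],
)
```

The key design point the quote describes is the conditioning loop: each insertion refines the statement, so subsequent argument predictions see all previously instantiated values.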
“…We concatenate the text of the case c with the modified text of the subsection r, and embed it using BERT (line 5), yielding a sequence of contextual subword embeddings y = {y_i ∈ R^768 | i = 1…n}. Keeping with the notation of Chen et al. (2020), assume that the embedded case is represented by the sequence of vectors t_1, …, t_m and the embedded subsection by s_1, …, s_n. For a given argument a, compute its attentive representations s̃_1, …, s̃_m and its augmented feature vectors x_1, …, x_m.…”
Section: Algorithm 1 Argument Instantiation for a Single Subsection
confidence: 99%
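The shapes in the quoted passage can be made concrete with a short numpy sketch: dot-product attention from each case vector t_i over the subsection vectors s_1, …, s_n yields an attentive representation s̃_i, and concatenating t_i with s̃_i gives the augmented feature vector x_i. This is an assumption-laden illustration of the general technique (the exact attention scoring function in the cited work may differ), and random vectors stand in for BERT's 768-dimensional embeddings.

```python
import numpy as np

def attentive_representations(case_emb, subsection_emb):
    """Sketch of attentive representations and augmented features.

    case_emb: (m, d) array of case token embeddings t_1..t_m.
    subsection_emb: (n, d) array of subsection embeddings s_1..s_n.
    Returns s_tilde of shape (m, d) and augmented features x of shape (m, 2d).
    """
    scores = case_emb @ subsection_emb.T           # (m, n) dot-product scores
    scores -= scores.max(axis=1, keepdims=True)    # stabilize the softmax
    alpha = np.exp(scores)
    alpha /= alpha.sum(axis=1, keepdims=True)      # row-wise softmax weights
    s_tilde = alpha @ subsection_emb               # (m, d) attentive representations
    x = np.concatenate([case_emb, s_tilde], axis=1)  # (m, 2d) augmented features
    return s_tilde, x

rng = np.random.default_rng(0)
t = rng.normal(size=(4, 768))   # stand-in for BERT case embeddings
s = rng.normal(size=(6, 768))   # stand-in for BERT subsection embeddings
s_tilde, x = attentive_representations(t, s)
```

With d = 768 this reproduces the dimensions implied by the quote: one attentive vector per case token, and augmented features twice the embedding width.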
“…The ability to answer "IDK" allows one to address more realistic situations in reading comprehension, both as an end task and as an intermediary step for other NLP applications, such as QA-based event extraction (Chen et al., 2020; Lyu et al., 2021) or QA-based summarization evaluation (Deutsch et al., 2021).…”
Section: Introduction
confidence: 99%