Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2018
DOI: 10.18653/v1/n18-1080
Simultaneously Self-Attending to All Mentions for Full-Abstract Biological Relation Extraction

Abstract: Most work in relation extraction forms a prediction by looking at a short span of text within a single sentence containing a single entity pair mention. This approach often does not consider interactions across mentions, requires redundant computation for each mention pair, and ignores relationships expressed across sentence boundaries. These problems are exacerbated by the document-(rather than sentence-) level annotation common in biological text. In response, we propose a model which simultaneously predicts…
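The model described in the abstract, and referred to as "Bran" in the citation statements below, scores every head/tail mention pair in a document for each relation type in a single pass using a biaffine self-attention scorer. The sketch below is a minimal, hypothetical illustration of that all-pairs biaffine scoring with log-sum-exp pooling over mention pairs; the class name, tensor shapes, projections, and initialization are assumptions made for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class BiaffinePairScorer(nn.Module):
    """Illustrative sketch: score every (head, tail) token pair for each
    relation type with a biaffine product, then pool the scores over the
    two entities' mention pairs. Not the authors' implementation."""

    def __init__(self, hidden_dim: int, num_relations: int):
        super().__init__()
        # Separate head/tail projections of the contextual encoder output.
        self.head_proj = nn.Linear(hidden_dim, hidden_dim)
        self.tail_proj = nn.Linear(hidden_dim, hidden_dim)
        # One bilinear matrix per relation type (small random init).
        self.bilinear = nn.Parameter(
            0.02 * torch.randn(num_relations, hidden_dim, hidden_dim))

    def forward(self, token_states, head_mask, tail_mask):
        # token_states: (seq_len, hidden_dim) contextual token representations
        # head_mask, tail_mask: (seq_len,) bool masks marking the mention
        #                       tokens of the head and tail entity
        heads = torch.relu(self.head_proj(token_states))  # (seq_len, d)
        tails = torch.relu(self.tail_proj(token_states))  # (seq_len, d)
        # All-pairs biaffine scores: (num_relations, seq_len, seq_len).
        scores = torch.einsum("id,rde,je->rij", heads, self.bilinear, tails)
        # Keep only positions where a head mention token meets a tail
        # mention token, then pool over all such pairs with log-sum-exp.
        pair_mask = head_mask[:, None] & tail_mask[None, :]
        masked = scores.masked_fill(~pair_mask, float("-inf"))
        return torch.logsumexp(masked.reshape(masked.size(0), -1), dim=-1)
```

In a full model, token_states would come from a self-attention encoder run once over the entire abstract, so all entity pairs share the same encoding and the per-pair redundancy noted in the abstract is avoided; the pooled (num_relations,) score vector would then feed a standard classification loss.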

Cited by 265 publications (268 citation statements). References 41 publications.
“…GRU+Attn stacks a self-attention layer on top of GRU (Cho et al., 2014) and embedding layers; Bran (Verga et al., 2018) adopts a biaffine self-attention model to simultaneously extract the relations of all mention pairs. Both methods use only textual knowledge.…”
Section: Main Results on BioCreative VI CPR
confidence: 99%
“…Other more recent approaches include two-level reinforcement learning models (Takanobu et al., 2019), two layers of attention-based capsule network models, and self-attention with transformers (Verga et al., 2018). In particular, (Miwa and Sasaki, 2014; Katiyar and Cardie, 2016; Zhang et al., 2017; Zheng et al., 2017; Verga et al., 2018; Takanobu et al., 2019) also seek to jointly learn entities and the relations among them. A large fraction of past work has focused on relations within a single sentence.…”
Section: Previous Work
confidence: 99%
“…For the GAD and EU-ADR data sets, we use the train and test splits provided by Lee et al. (2019). For CDR, since the state-of-the-art results (Verga et al., 2018) are reported at the abstract level, we re-run their proposed algorithm on our transformed sentence-level CDR data set, reporting results for a single model without additional data (Verga et al. (2018) also report results when adding weakly labeled data). Let us first focus on the two unsupervised baselines.…”
Section: Data Sets and Setup
confidence: 99%