Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence 2021
DOI: 10.24963/ijcai.2021/551
Document-level Relation Extraction as Semantic Segmentation

Abstract: Document-level relation extraction aims to extract relations among multiple entity pairs from a document. Previously proposed graph-based or transformer-based models utilize the entities independently, regardless of global information among relational triples. This paper approaches the problem by predicting an entity-level relation matrix to capture local and global information, parallel to the semantic segmentation task in computer vision. Herein, we propose a Document U-shaped Network for document-level rela…
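The core idea above — treating all entity pairs jointly as a 2D "image" — can be illustrated with a minimal numpy sketch. This is a hypothetical simplification: the pairwise feature here is a plain element-wise product, whereas the paper derives richer pair features before feeding them to the U-shaped module.

```python
import numpy as np

# Hypothetical setup: each entity is a d-dimensional embedding.
rng = np.random.default_rng(0)
num_entities, dim = 4, 8
entity_emb = rng.normal(size=(num_entities, dim))

# Entity-level relation matrix: cell (i, j) holds a feature vector for the
# pair (entity_i, entity_j). Broadcasting builds all N x N pairs at once,
# yielding an N x N "image" with dim channels that a segmentation-style
# network (e.g., a U-shaped module) can then process.
relation_matrix = entity_emb[:, None, :] * entity_emb[None, :, :]
print(relation_matrix.shape)  # (4, 4, 8)
```

Viewing the matrix as an image is what lets convolutional encoders aggregate local (neighboring pairs) and global (whole-matrix) information simultaneously.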

Cited by 89 publications (65 citation statements); references 5 publications.
“…CorefBERT: [12] proposed a pre-trained model for word embedding. DocuNet-BERT: [31] proposed a U-shaped segmentation module to capture global information among relational triples. GAIN-GloVe/GAIN-BERT: [13] proposed GAIN, which builds a mention graph and an entity graph to predict target relations, using GloVe or BERT for word embeddings and a GCN to represent the graphs.…”
Section: Baseline Models
confidence: 99%
“…Zhou et al. (2021) argue that Transformer attentions can extract useful contextual features across sentences for DocRE, and they adopt an adaptive threshold for each entity pair. Zhang et al. (2021) model DocRE as a semantic segmentation task and predict an entity-level relation matrix to capture local and global information.…”
Section: Related Work
confidence: 99%
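The adaptive-threshold idea mentioned above can be sketched concisely. This is an assumption-laden illustration of the mechanism (a learnable threshold class per entity pair, as in ATLOP), not the authors' implementation: the function name and the placement of the threshold class at index 0 are hypothetical.

```python
import numpy as np

def predict_relations(logits, th_index=0):
    """For one entity pair, return the relation indices whose logit
    exceeds the logit of the dedicated threshold (TH) class.

    logits: 1-D array of per-class scores; th_index marks the TH class.
    """
    threshold = logits[th_index]
    return [i for i, score in enumerate(logits)
            if i != th_index and score > threshold]

# Pair-specific scores: the TH logit (0.5) acts as this pair's own cutoff,
# so only classes scoring above 0.5 are predicted.
logits = np.array([0.5, 1.2, 0.1, 0.9])
print(predict_relations(logits))  # [1, 3]
```

Because the TH logit is produced per entity pair, the effective decision boundary adapts to each pair instead of relying on one global threshold.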
“…For DocRED, we consider additional competing methods: Two Phase, which first predicts whether the entity pair has a relation and then predicts the relation type; LSR (Nan et al., 2020), which constructs the graph by inducing a latent document-level graph; Reconstructor (Xu et al., 2021b), which encourages the model to reconstruct a reasoning path during training; DRN (Xu et al., 2021a), which considers different reasoning skills explicitly and uses graph representation and context representation to model the reasoning skills; ATLOP (Zhou et al., 2021), which aggregates contextual information by the Transformer attentions and adopts an adaptive threshold for different entity pairs; and DocuNet (Zhang et al., 2021), which models DocRE as a semantic segmentation task.…”
Section: Setup
confidence: 99%
“…To capture long-term and multi-level cascading dependencies, deep learning based techniques (e.g., RNNs [10,21,47,66] and CNNs [55,75]) have been incorporated into sequential modeling. DNNs have strong representation capability and a natural strength for capturing comprehensive relations [76] over different entities (e.g., items, users, interactions). Recently, works have explored advanced techniques, e.g., memory networks [53], attention mechanisms [56,79], and graph neural networks [9,26,31,36,81], for sequential recommendation [6,23,29,54,61,67,72].…”
Section: Related Work 2.1 Sequential Recommendation
confidence: 99%