Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021)
DOI: 10.18653/v1/2021.acl-long.361
Element Intervention for Open Relation Extraction

Abstract: Open relation extraction aims to cluster relation instances referring to the same underlying relation, which is a critical step for general relation extraction. Current OpenRE models are commonly trained on datasets generated from distant supervision, which often results in instability and makes the models collapse easily. In this paper, we revisit the procedure of OpenRE from a causal view. By formulating OpenRE using a structural causal model, we identify that the above-mentioned problems stem from the s…

Cited by 8 publications (9 citation statements)
References: 30 publications
“…One drawback of matching the predicate as the relation is that the predicate extracted by OpenIE is commonly a span from the current sentence, which may lead models to take a shortcut by matching directly through word co-occurrence. To eliminate this shortcut, we follow several recent works (Agarwal et al., 2021; Liu et al., 2021) and generate paraphrase texts to match the predicate. Specifically, for extracted triplets, we first wrap them with the special markers "[H], [R], [T]", corresponding to subject, predicate, and object.…”
Section: Triplet-paraphrase Construction (mentioning)
confidence: 99%
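The marker-wrapping step quoted above can be sketched as follows. The function name and the exact spacing around the markers are assumptions, since the excerpt does not show the implementation:

```python
def wrap_triplet(subject: str, predicate: str, obj: str) -> str:
    """Wrap an OpenIE triplet with the special markers [H], [R], [T]
    (head/subject, relation/predicate, tail/object)."""
    return f"[H] {subject} [R] {predicate} [T] {obj}"

# Example: a triplet extracted from "Marie Curie was born in Warsaw."
print(wrap_triplet("Marie Curie", "was born in", "Warsaw"))
# → [H] Marie Curie [R] was born in [T] Warsaw
```

The wrapped string is then fed to a paraphrase generator, so that matching happens against reworded text rather than the original sentence span.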
“…Following previous work (Simon et al., 2019; Hu et al., 2020; Liu et al., 2021), we adopt B³ (Bagga and Baldwin, 1998), V-measure (Rosenberg and Hirschberg, 2007), and the Adjusted Rand Index (ARI) (Hubert and Arabie, 1985) to evaluate the different methods. Since each of the three metrics measures clustering performance from a different angle, we take the average of B³ F1, V-measure F1, and ARI for a comprehensive evaluation.…”
Section: Evaluation Metrics (mentioning)
confidence: 99%
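Of the three metrics averaged above, B³ is the one least commonly found in standard metric libraries, so a minimal pure-Python sketch of B³ F1 may be useful. This is an illustrative implementation of the metric's definition, not the cited papers' exact code:

```python
def b_cubed_f1(gold, pred):
    """B³ F1 for clustering evaluation (Bagga and Baldwin, 1998).
    gold, pred: lists assigning each instance a cluster label.
    For each instance, precision/recall compare the overlap of its
    predicted cluster and its gold cluster; scores are averaged."""
    n = len(gold)
    precision = recall = 0.0
    for i in range(n):
        pred_cluster = {j for j in range(n) if pred[j] == pred[i]}
        gold_cluster = {j for j in range(n) if gold[j] == gold[i]}
        overlap = len(pred_cluster & gold_cluster)
        precision += overlap / len(pred_cluster)
        recall += overlap / len(gold_cluster)
    p, r = precision / n, recall / n
    return 2 * p * r / (p + r)

# A perfect clustering (labels permuted) scores 1.0:
print(b_cubed_f1([0, 0, 1, 1], [7, 7, 3, 3]))  # → 1.0
```

V-measure and ARI are available off the shelf (e.g. `sklearn.metrics.v_measure_score` and `adjusted_rand_score`), so the averaged score can be assembled from those plus a B³ implementation like the one above.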
“…For a fair comparison, all models are trained and evaluated on 10 relation types, the same as in (Simon et al., 2019; Hu et al., 2020; Liu et al., 2021). We implement our model in PyTorch (Paszke et al., 2017) with the transformers package (Wolf et al., 2020).…”
Section: Implementation Details (mentioning)
confidence: 99%
“…In other words, the Relation Encoder aims to abstract the notion of a relation instance, providing embeddings that are easier to compare. In recent papers [21, 71, 29, 33, 65, 73, 60], relation embeddings are computed using pretrained language models such as BERT [9, 59] or RoBERTa [31]. BERT (and RoBERTa) takes tokenized text as input and computes an embedding for each input token, representative of the token itself and its context of use.…”
Section: Relation Encoder (mentioning)
confidence: 99%
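Given per-token embeddings from such an encoder, one common way to form a fixed-size relation embedding is to read out the contextual vectors at the two entity positions. The readout below (concatenating head and tail vectors) is an assumption for illustration, not necessarily the exact choice of the cited papers:

```python
def relation_embedding(token_embeddings, head_idx, tail_idx):
    """Form a relation embedding by concatenating the contextual
    embeddings at the head- and tail-entity token positions.
    token_embeddings: list of per-token vectors (lists of floats),
    e.g. the output of BERT/RoBERTa for one sentence."""
    return token_embeddings[head_idx] + token_embeddings[tail_idx]

# Toy 2-dimensional "contextual" embeddings for a 4-token sentence:
toks = [[0.1, 0.2], [0.9, 0.1], [0.0, 0.3], [0.4, 0.5]]
print(relation_embedding(toks, head_idx=1, tail_idx=3))
# → [0.9, 0.1, 0.4, 0.5]
```

Because each token vector already encodes its context of use, two sentences expressing the same relation between different entity pairs tend to yield nearby relation embeddings, which is what makes the subsequent clustering step feasible.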