2022
DOI: 10.1609/aaai.v36i10.21379

OneRel: Joint Entity and Relation Extraction with One Module in One Step

Abstract: Joint entity and relation extraction is an essential task in natural language processing and knowledge graph construction. Existing approaches usually decompose the joint extraction task into several basic modules or processing steps to make it easier to conduct. However, such a paradigm ignores the fact that the three elements of a triple are interdependent and indivisible. Therefore, previous joint methods suffer from the problems of cascading errors and redundant information. To address these issues, in this …

Cited by 78 publications (53 citation statements) | References 20 publications
“…For comparison, we employed twelve strong models as baselines, including the SOTA models PRGC (Zheng et al, 2021a), PFN (Yan et al, 2021), TDEER, GRTE (Ren et al, 2021) and OneRel (Shang et al, 2022). We take the experimental results directly from the original papers of these baselines.…”
Section: Results
Mentioning confidence: 99%
“…We evaluate the proposed method on two widely used benchmark datasets, NYT (Riedel et al, 2010) and WebNLG (Gardent et al, 2017). Results (NYT P / R / F1 | WebNLG P / R / F1):

(Zeng et al, 2018): 61.0 / 56.6 / 58.7 | 37.7 / 36.4 / 37.1
GraphRel (Fu et al, 2019): 63.9 / 60.0 / 61.9 | 44.7 / 41.1 / 42.9
OrderCopyRE (Zeng et al, 2019): 77.9 / 67.2 / 72.1 | 63.3 / 59.9 / 61.6
CasRel-BERT (Wei et al, 2020): 89.7 / 89.5 / 89.6 | 93.4 / 90.1 / 91.8
TPlinker-BERT: 91.3 / 92.5 / 91.9 | 91.7 / 92.0 / 91.9
PRGC-BERT (Zheng et al, 2021a): 93.3 / 91.9 / 92.6 | 94.0 / 92.1 / 93.0
R-BPtrNet-BERT: 93.0 / 92.1 / 92.5 | 93.8 / 92.4 / 93.1
GRTE-BERT (Ren et al, 2021): 92.9 / 93.1 / 93.0 | 93.7 / 94.2 / 93.9
EmRel (Xu et al, 2022): 91.7 / 92.5 / 92.1 | 92.7 / 93.0 / 92.9
OneRel-BERT (Shang et al, 2022): 92 …

NYT is produced by distant supervision from New York Times articles and has 24 predefined relations. WebNLG was first created for a natural language generation task and was adapted to relational triple extraction by Zeng et al (2018); it has 171 predefined relations.…”
Section: Datasets and Evaluation
Mentioning confidence: 99%
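The precision / recall / F1 figures quoted above are standard triple-level metrics: a predicted (subject, relation, object) triple counts as correct only when all three elements exactly match a gold triple, and scores are micro-averaged over the corpus. A minimal sketch of that computation follows; the function name and data layout are illustrative assumptions, not the evaluation code of any cited paper.

```python
# Minimal sketch of triple-level micro precision/recall/F1, as commonly
# reported for NYT/WebNLG-style relational triple extraction.
# Names and data layout are illustrative assumptions.

def micro_prf(pred_triples, gold_triples):
    """Each argument is a list of sets of (subject, relation, object) tuples,
    one set per sentence; a prediction counts only on an exact match."""
    tp = sum(len(p & g) for p, g in zip(pred_triples, gold_triples))
    n_pred = sum(len(p) for p in pred_triples)
    n_gold = sum(len(g) for g in gold_triples)
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_gold if n_gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy usage: two predictions, two gold triples, one exact match.
pred = [{("Obama", "born_in", "Honolulu"), ("Obama", "born_in", "Hawaii")}]
gold = [{("Obama", "born_in", "Honolulu"), ("Obama", "president_of", "USA")}]
print(micro_prf(pred, gold))  # (0.5, 0.5, 0.5)
```

Note that papers in this line of work also distinguish a partial-match setting (only the head words of entities must match) from exact match; the sketch above implements the exact-match variant.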