Findings of the Association for Computational Linguistics: ACL 2022
DOI: 10.18653/v1/2022.findings-acl.129

Learning Reasoning Patterns for Relational Triple Extraction with Mutual Generation of Text and Graph

Abstract: Relational triple extraction is a critical task for constructing knowledge graphs. Existing methods focus on learning text patterns from explicit relational mentions; however, they usually ignore relational reasoning patterns and thus fail to extract implicitly expressed triples. Fortunately, the graph structure of a sentence's relational triples can help find multi-hop reasoning paths. Moreover, the type inference logic through the paths can be captured with the sentence's supplementary rel…
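To make the abstract's claim concrete, here is a minimal sketch (not the paper's model) of the core idea: a sentence's relational triples form a directed graph whose multi-hop paths can suggest implicit triples. The entities, relations, and the single composition rule are illustrative assumptions.

```python
# Build a directed graph from (head, relation, tail) triples and enumerate
# two-hop reasoning paths that may license an implicit triple.
from collections import defaultdict

triples = [
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
]

graph = defaultdict(list)  # head -> [(relation, tail), ...]
for head, rel, tail in triples:
    graph[head].append((rel, tail))

# A two-hop path (e1 -r1-> e2 -r2-> e3) may imply a new triple.
# The composition rule below is hand-written for illustration only.
for e1 in list(graph):
    for r1, e2 in graph[e1]:
        for r2, e3 in graph.get(e2, []):
            print(f"path: {e1} -{r1}-> {e2} -{r2}-> {e3}")
            if (r1, r2) == ("capital_of", "located_in"):
                print(f"implied triple: ({e1}, located_in, {e3})")
```

Running this prints the path Paris -capital_of-> France -located_in-> Europe and the implied triple (Paris, located_in, Europe), the kind of implicitly expressed triple that text-pattern-only extractors miss.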

Cited by 5 publications (2 citation statements)
References 18 publications
“…Self-supervised learning (SSL) is a method for building models where the output labels are already contained in the input data, eliminating the need for additional labeled data (Liu et al., 2021; Hu et al., 2021a,b; Liu et al., 2022c,d). SSL has been widely used in NLP domains such as sentence generation (West et al., 2019; Yan et al., 2021), document processing (You et al., 2021; Ginzburg et al., 2021), natural language inference (Li et al., 2023), and text reasoning (Klein and Nabi, 2020; Fu et al., 2020; Chen et al., 2022). BERT (Devlin et al., 2019) is one of the most prominent SSL methods, exploiting self-supervision from corpora through next sentence prediction and masked language modeling tasks.…”
Section: Self-Supervised Learning in NLP (mentioning)
confidence: 99%
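As a concrete illustration of the masked language modeling objective mentioned in the statement above, here is a minimal sketch using the Hugging Face transformers library; the model name and example sentence are illustrative assumptions, not taken from the cited works.

```python
# Minimal masked language modeling demo (BERT-style self-supervision).
# The label -- the masked token -- comes from the input text itself,
# so no external annotation is needed, which is the defining property of SSL.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Ask the pretrained model to recover the masked token.
for pred in fill_mask("Relational triple [MASK] is a critical task."):
    print(f"{pred['token_str']!r}: score={pred['score']:.3f}")
```

During pretraining, the model is trained to recover such masked tokens across a large corpus, which is exactly how the supervision signal is derived from the raw text.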
“…Previous efforts mainly focus on the following three aspects. The first designs novel modules or model architectures; for example, Chen, Zhang, and Huang (2022) propose a framework of mutual text and graph generation to learn relational reasoning patterns for relational triple extraction. The second utilizes various external datasets or resources to enrich the input (Cabot and Navigli, 2021; Zhang et al., 2022a; Paolini et al., 2021; Lu et al., 2022).…”
Section: Introduction (mentioning)
confidence: 99%