Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing 2021
DOI: 10.18653/v1/2021.emnlp-main.36

Graph Based Network with Contextualized Representations of Turns in Dialogue

Abstract: Dialogue-based relation extraction (RE) aims to extract relation(s) between two arguments that appear in a dialogue. Because dialogues feature frequent personal pronouns and low information density, and because most relational facts in dialogues are not supported by any single sentence, dialogue-based RE requires a comprehensive understanding of the dialogue. In this paper, we propose the TUrn COntext awaRE Graph Convolutional Network (TUCORE-GCN) modeled by paying attention…
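The abstract names a graph convolutional network built over contextualized turn representations. As a rough illustration only (this is not the authors' TUCORE-GCN implementation; the graph construction, dimensions, and class names below are assumptions), here is a minimal PyTorch sketch of one graph-convolution step that lets each turn aggregate context from neighboring turns:

```python
import torch
import torch.nn as nn

class TurnGraphConvLayer(nn.Module):
    """Illustrative GCN layer: each turn node averages its neighbors' features."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.linear = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # node_feats: (num_turns, hidden_dim) contextualized turn representations
        # adj: (num_turns, num_turns) adjacency linking related turns (assumed)
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)  # row degrees
        agg = (adj @ node_feats) / deg                      # mean over neighbors
        return torch.relu(self.linear(agg))

# Usage: 4 turns, hidden size 8, self-loops plus edges between adjacent turns.
turns = torch.randn(4, 8)
adj = torch.eye(4)
for i in range(3):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
out = TurnGraphConvLayer(8)(turns, adj)  # (4, 8) context-enriched turn vectors
```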

Cited by 21 publications (12 citation statements). References 29 publications.
“…Next, we see a five point absolute improvement in F1 from the baseline model when using RoBERTa. The trend from BERT to RoBERTa is similar to results found by Lee and Choi (2021), where changing from a BERT base model to RoBERTa Large (not shown here) improved their model performance significantly. Additionally, we see a 3 point improvement from R to D-REX when using RoBERTa (compared to 0.7 for BERT), which we believe is due to the better performing ranking model, which allows for D-REX to rely more on the input explanations.…”
Section: Relation Extraction (RE) Evaluation (supporting)
confidence: 86%
“…There are no restrictions on R; it can be any algorithm which ranks relations (e.g., deep neural network, rule-based, etc.), such as (Yu et al., 2020; Lee and Choi, 2021). However, if R needs to be trained, it must be done prior to D-REX training; D-REX will not make any updates to R.…”
Section: Models (mentioning)
confidence: 99%
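The statement above treats R as a pluggable, frozen ranking component: trained (if at all) before D-REX, and then queried read-only. A minimal sketch of that contract, with all names hypothetical (this is not D-REX's actual API):

```python
from typing import List, Tuple

class RelationRanker:
    """Stand-in for R: scores candidate relations for an argument pair."""
    def rank(self, dialogue: str, head: str, tail: str,
             relations: List[str]) -> List[Tuple[str, float]]:
        # Trivial rule-based scoring for illustration: prefer relations whose
        # name literally appears in the dialogue. Any ranking algorithm
        # (neural, rule-based, ...) could sit behind this same method.
        scored = [(r, float(r.lower() in dialogue.lower())) for r in relations]
        return sorted(scored, key=lambda pair: pair[1], reverse=True)

def downstream_inference(ranker: RelationRanker, dialogue: str,
                         head: str, tail: str, relations: List[str]):
    # R is used read-only here: no gradients, no parameter updates,
    # mirroring the constraint that R is fixed before D-REX training.
    return ranker.rank(dialogue, head, tail, relations)

print(downstream_inference(RelationRanker(),
                           "Speaker 1: my sister Anna called.",
                           "Speaker 1", "Anna",
                           ["sister", "boss", "friend"]))
```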