Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021
DOI: 10.18653/v1/2021.acl-long.478

Learn to Resolve Conversational Dependency: A Consistency Training Framework for Conversational Question Answering

Abstract: One of the main challenges in conversational question answering (CQA) is to resolve conversational dependencies, such as anaphora and ellipsis. However, existing approaches do not explicitly train QA models on how to resolve these dependencies, and thus the models are limited in their understanding of human dialogues. In this paper, we propose a novel framework, EXCORD (Explicit guidance on how to resolve Conversational Dependency), to enhance the abilities of QA models in comprehending conversational context. EXCORD fi…
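The abstract is truncated before it states the training objective, but consistency training frameworks of this kind typically regularize a model to produce similar answer distributions for the original conversational question and its self-contained rewrite. The sketch below is a generic, hypothetical illustration of such a consistency term; the function names and the symmetric-KL choice are assumptions, not the paper's exact loss:

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions over the same support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def consistency_loss(p_original, p_rewritten):
    """Symmetric KL between the model's answer distributions for the
    original conversational question and its self-contained rewrite.
    Lower values mean the model treats the two forms consistently.
    (Illustrative only; not EXCORD's published objective.)"""
    return 0.5 * (kl_divergence(p_original, p_rewritten)
                  + kl_divergence(p_rewritten, p_original))

# Identical distributions incur zero consistency loss.
p = [0.7, 0.2, 0.1]
print(consistency_loss(p, p))  # 0.0
```

In practice such a term would be added to the standard QA loss, pushing the model to answer the conversational form as if the dependency had already been resolved.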

Cited by 23 publications (29 citation statements)
References 27 publications (31 reference statements)
“…They focus on improving the contextualized encoding (BERT vs RoBERTa), multi-task learning of discourse or token importance, stacking networks to capture cross-turn relationships, and approaches to make the models more robust using adversarial training and data augmentation. Recent work by Kim et al (2021) integrated generative conversational query rewriting using T5 into the QA process and showed that it outperforms more complex models that attempt to model both simultaneously. The models largely target factoid QA and are mostly extractive, possibly with minor adaptations for yes/no questions or multiple choice.…”
Section: Response Generation for Conversational QA
confidence: 99%
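The rewrite-then-answer pipeline described in this citation statement feeds the full conversation into a seq2seq rewriter as one flattened sequence. A minimal sketch of that input construction, assuming a hypothetical separator and task prefix (the exact serialization used by Kim et al (2021) may differ):

```python
def build_rewrite_input(history, question, sep=" ||| "):
    """Flatten a conversation into a single sequence for a
    seq2seq rewriter (e.g., T5). The "rewrite:" prefix and the
    separator token are illustrative choices, not the exact
    format used in the cited work."""
    turns = sep.join(history)
    return f"rewrite: {turns}{sep}{question}"

history = ["Who wrote Hamlet?", "William Shakespeare."]
print(build_rewrite_input(history, "When did he die?"))
# rewrite: Who wrote Hamlet? ||| William Shakespeare. ||| When did he die?
```

The rewriter's output (a self-contained question such as "When did William Shakespeare die?") is then passed to a standard single-turn QA model.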
“…For tackling the task, most works focused on developing model structures (Zhu et al, 2018; Qu et al, 2019; Zhao et al, 2021) or training strategies to encode the conversation history effectively. After the advent of a dataset containing standalone questions generated by human annotators (Elgohary et al, 2019), CQR approaches have been studied as a promising method for CQA (Kim et al, 2021).…”
Section: Related Work
confidence: 99%
“…A line of research in conversational question generation (CQG) aims to generate human-like follow-up questions upon conversational history (Pan et al, 2019; Qi et al, 2020; Gu et al, 2021). Another line of research has greatly improved answer accuracy (Qu et al, 2019b; Kim et al, 2021; Zhao et al, 2021). In other words, they are limited in assuming that all other ingredients (i.e., held-out conversations by humans and their gold answers) are provided.…”
Section: CannotAnswer A2
confidence: 99%
“…Elgohary et al (2019) rewrite conversational questions of QuAC into self-contained questions that can be understood without the conversation. Following Kim et al (2021), we consider the resulting dataset, CANARD, as an additional dataset for training CQA models. Note that CANARD train and QuAC seen share the same passages.…”
Section: Baselines for Synthetic CQA Generation
confidence: 99%