Proceedings of the 28th ACM International Conference on Information and Knowledge Management 2019
DOI: 10.1145/3357384.3357905

Attentive History Selection for Conversational Question Answering

Abstract: Conversational question answering (ConvQA) is a simplified but concrete setting of conversational search [24]. One of its major challenges is to leverage the conversation history to understand and answer the current question. In this work, we propose a novel solution for ConvQA that involves three aspects. First, we propose a positional history answer embedding method to encode conversation history with position information using BERT [6] in a natural way. BERT is a powerful technique for text representation. …
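As a concrete picture of the positional history answer embedding the abstract describes, here is a minimal PyTorch sketch that adds one extra embedding table on top of BERT's input embeddings. The class name, the `history_ids` input, and the turn-distance encoding are illustrative assumptions, not the authors' exact implementation:

```python
import torch
import torch.nn as nn

class PositionalHistoryAnswerEmbedding(nn.Module):
    """Adds a learned vector to each BERT input token embedding that
    records whether the token occurred in a history answer and, if so,
    how many turns ago. Index 0 = not in any history answer; index k =
    in the answer from k turns back (hypothetical encoding)."""

    def __init__(self, hidden_size: int = 768, max_history_turns: int = 11):
        super().__init__()
        self.hae = nn.Embedding(max_history_turns + 1, hidden_size)

    def forward(self, token_embeddings: torch.Tensor,
                history_ids: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, hidden) — BERT's summed
        # word + position + segment embeddings.
        # history_ids:      (batch, seq_len) turn-distance indices.
        return token_embeddings + self.hae(history_ids)
```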

Cited by 75 publications (115 citation statements). References 29 publications.

Citation statements (ordered by relevance):
“…Given the importance of dealing with answer history, other researchers have proposed to represent answer history using embeddings from pre-trained models (Qu et al., 2019). The system then also includes a history attention mechanism to help select items from the history of the dialogue.…”
Section: Technologies (mentioning)
Confidence: 99%
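The history attention mechanism mentioned in this statement can be pictured as a learned soft selection over per-turn representations. The module below is a minimal sketch under that assumption; the scoring layer and tensor shapes are hypothetical, not the exact formulation of Qu et al. (2019):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HistoryAttention(nn.Module):
    """Scores each history turn's representation with a learned linear
    layer, normalises the scores with softmax, and returns the weighted
    sum, so more relevant turns contribute more to the final vector."""

    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.score = nn.Linear(hidden_size, 1)

    def forward(self, turn_reprs: torch.Tensor) -> torch.Tensor:
        # turn_reprs: (batch, num_turns, hidden)
        weights = F.softmax(self.score(turn_reprs).squeeze(-1), dim=-1)
        # weights: (batch, num_turns); weighted sum over the turn axis.
        return torch.einsum('bt,bth->bh', weights, turn_reprs)
```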
“…The MTL methods can be classified into static or dynamic methods based on their weighting strategy. In a static method, each task's weight is set manually before training and then kept fixed throughout the training of the network (Qu et al., 2019b; Yeh and Chen, 2019). In contrast, dynamic methods initialise each task's weight at the beginning of training and update the weights automatically during the training process (Belharbi et al., 2016; Chen et al., 2018; …).…”
Section: Related Work (mentioning)
Confidence: 99%
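The static/dynamic distinction drawn in this statement can be made concrete with a short sketch: a static loss with hand-picked, fixed weights versus a dynamic loss whose per-task weights are learned during training. The dynamic variant below follows the style of uncertainty weighting (Kendall et al., 2018) as one stand-in for the dynamic methods cited:

```python
import torch
import torch.nn as nn

def static_mtl_loss(task_losses, weights):
    # Static strategy: weights are chosen by hand before training
    # and never change afterwards.
    return sum(w * l for w, l in zip(weights, task_losses))

class DynamicMTLLoss(nn.Module):
    """Dynamic strategy: one learnable parameter per task reweights the
    losses and is updated by backprop along with the model."""

    def __init__(self, num_tasks: int):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        losses = torch.stack(list(task_losses))
        # exp(-log_var) downweights noisy tasks; + log_var regularises.
        return (torch.exp(-self.log_vars) * losses + self.log_vars).sum()
```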
“…In particular, QuAC introduces a main task, namely Answer Span prediction, which consists of answering a question by extracting text spans from a given passage, as well as some auxiliary tasks, namely Yes/No prediction, Follow-up prediction and Unanswerable prediction. Recently, Multi-Task Learning (MTL), a way to learn multiple different but related tasks simultaneously, has emerged as a popular solution for tackling all these tasks in a uniform model (Qu et al., 2019b). MTL can also be used to leverage the auxiliary tasks to improve the performance of a system on the main task.…”
Section: Introduction (mentioning)
Confidence: 99%
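To illustrate how a uniform MTL model can cover QuAC's main and auxiliary tasks, here is a minimal sketch of four prediction heads sharing one encoder. The head names, label spaces, and sizes are assumptions for illustration, not the cited systems' exact architecture:

```python
import torch
import torch.nn as nn

class ConvQAMultiTaskHeads(nn.Module):
    """Four prediction heads over a shared encoder: answer-span
    start/end logits per token, plus Yes/No, Follow-up, and
    Unanswerable classifiers over a pooled sequence representation."""

    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.span = nn.Linear(hidden_size, 2)        # start / end logits
        self.yes_no = nn.Linear(hidden_size, 3)      # yes / no / neither
        self.follow_up = nn.Linear(hidden_size, 3)   # yes / no / neither
        self.unanswerable = nn.Linear(hidden_size, 2)

    def forward(self, token_states: torch.Tensor, pooled: torch.Tensor):
        # token_states: (batch, seq_len, hidden); pooled: (batch, hidden)
        start_logits, end_logits = self.span(token_states).split(1, dim=-1)
        return (start_logits.squeeze(-1), end_logits.squeeze(-1),
                self.yes_no(pooled), self.follow_up(pooled),
                self.unanswerable(pooled))
```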