Findings of the Association for Computational Linguistics: EMNLP 2020
DOI: 10.18653/v1/2020.findings-emnlp.354

Making Information Seeking Easier: An Improved Pipeline for Conversational Search

Abstract: This paper presents a highly effective pipeline for passage retrieval in a conversational search setting. The pipeline comprises two components: Conversational Term Selection (CTS) and Multi-View Reranking (MVR). CTS is responsible for performing the first stage of passage retrieval. Given an input question, it uses a BERT-based classifier (trained with weak supervision) to de-contextualize the input by selecting relevant terms from the dialog history. Using the question and the selected terms, it issues a …
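The CTS step in the abstract is concrete enough to sketch. The snippet below shows the general shape of the idea under stated assumptions: a BERT token classifier scores terms from the dialog history, and terms judged relevant are appended to the raw question to form the first-stage retrieval query. The checkpoint name, the binary label scheme, and the 0.5 threshold are placeholders for illustration, not the paper's actual configuration or released code.

# Minimal sketch of a CTS-style term selector. Assumptions for illustration:
# the checkpoint name (the paper fine-tunes its own weakly supervised BERT
# classifier), the relevant/not-relevant label scheme, and the 0.5 threshold.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL_NAME = "bert-base-uncased"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(MODEL_NAME, num_labels=2)

def expand_query(history: list[str], question: str, threshold: float = 0.5) -> str:
    """Append history terms the classifier deems relevant to the current question."""
    # Encode the dialog history (segment 0) and the current question (segment 1).
    enc = tokenizer(" ".join(history), question, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**enc).logits                    # shape: (1, seq_len, 2)
    probs = torch.softmax(logits, dim=-1)[0, :, 1]      # P(token is relevant)
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
    segments = enc["token_type_ids"][0]
    selected = [t for t, p, s in zip(tokens, probs, segments)
                if s == 0 and p > threshold and t.isalpha()]
    # De-contextualized query = raw question + selected (deduplicated) history terms.
    return question + " " + " ".join(dict.fromkeys(selected))

# The expanded query is then issued to a first-stage retriever (e.g. BM25),
# and the top passages are passed on to the MVR reranking stage.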

Cited by 19 publications (15 citation statements)
References 15 publications
“…CQR + BM25 + BERT-base: latency = 5,350 ms. QuReTeC (Voskarides et al., 2020): .476; Few-Shot Rewriter (Yu et al., 2020): .492. CQR + BM25 + BERT-base: latency = 8,025 ms (est.). MVR (Kumar and Callan, 2020): .565…”
Section: Results on CAsT
Mentioning; confidence: 99%
“…(2) There is limited data regarding conversational search for model training. To address the aforementioned challenges, existing papers (Lin et al., 2021c; Yu et al., 2020; Voskarides et al., 2020; Kumar and Callan, 2020) take a multi-stage pipeline approach. They train a conversational query reformulation (CQR) model using publicly available datasets (Elgohary et al., 2019; Quan et al., 2019) and feed the automatically decontextualized queries to an off-the-shelf IR pipeline (Nogueira and Cho, 2019).…”
Section: Introduction
Mentioning; confidence: 99%
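To make the multi-stage pipeline described in this statement concrete, here is a hedged sketch: a seq2seq model rewrites the context-dependent turn into a self-contained query, which is then handed to an off-the-shelf sparse retriever and reranker. The rewriter checkpoint, the turn separator, and the Pyserini index name are assumptions for illustration; the cited systems' exact setups differ in detail.

# Hedged sketch of the CQR-then-retrieve pipeline. The checkpoint name, the
# " ||| " turn separator, and the prebuilt index id are assumptions, not the
# cited papers' exact configurations.
from transformers import T5ForConditionalGeneration, T5Tokenizer

REWRITER = "castorini/t5-base-canard"   # assumed publicly available CQR checkpoint
tok = T5Tokenizer.from_pretrained(REWRITER)
cqr = T5ForConditionalGeneration.from_pretrained(REWRITER)

def rewrite(history: list[str], question: str) -> str:
    """Decontextualize the current question given the dialog history."""
    source = " ||| ".join(history + [question])          # separator is an assumption
    ids = tok(source, return_tensors="pt", truncation=True).input_ids
    out = cqr.generate(ids, max_length=64, num_beams=4)
    return tok.decode(out[0], skip_special_tokens=True)

# The rewritten query then feeds a standard sparse IR pipeline, e.g. with Pyserini:
#   from pyserini.search.lucene import LuceneSearcher
#   searcher = LuceneSearcher.from_prebuilt_index("cast2019")   # index id assumed
#   hits = searcher.search(rewrite(history, question), k=1000)
# followed by a BERT cross-encoder reranker over the retrieved passages.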
“…These solutions aim to reformulate the conversational query to an ad hoc query in the sparse bag-of-words space and then leverage standard sparse retrieval pipelines such as BM25 and BERT ranker [8,22,29]. By design, the vocabulary mismatch problem [6] in these conversational search systems is more severe than their corresponding ad hoc search systems, as the extra query reformulation step is an additional source of errors.…”
Section: Related Work
Mentioning; confidence: 99%
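A plain Okapi BM25 scorer makes the vocabulary-mismatch point above concrete: a passage term that the (reformulated) query never mentions contributes nothing to the score, so any term the rewriting step drops or misphrases is simply invisible to the sparse retriever. The sketch below is generic, with the common k1 = 0.9, b = 0.4 defaults; it is not the cited systems' implementation.

import math
from collections import Counter

def bm25_score(query_terms, doc_terms, doc_freq, num_docs, avg_dl, k1=0.9, b=0.4):
    """Okapi BM25 score of one document for a bag-of-words query."""
    tf = Counter(doc_terms)
    dl = len(doc_terms)
    score = 0.0
    for term in query_terms:
        if term not in tf:      # vocabulary mismatch: no lexical overlap, no credit
            continue
        idf = math.log(1 + (num_docs - doc_freq[term] + 0.5) / (doc_freq[term] + 0.5))
        tf_norm = tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * dl / avg_dl))
        score += idf * tf_norm
    return score

# Toy corpus: the passage saying "dentures" only scores if the rewritten query says it too.
docs = [["false", "teeth", "are", "called", "dentures"], ["dentures", "need", "daily", "care"]]
df = Counter(t for d in docs for t in set(d))
avg_dl = sum(len(d) for d in docs) / len(docs)
print(bm25_score(["what", "are", "false", "teeth", "called"], docs[0], df, len(docs), avg_dl))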
“…Entity linking in conversations. Research on conversational entity linking has been mainly focused on employing traditional entity linking and named entity recognition methods in conversational and QA systems [7,14,15,38,39,56]. Entity linking is also used in multi-party conversations to connect mentions across different parts of a dialogue and map them to their corresponding characters [15].…”
Section: Entity Linking
Mentioning; confidence: 99%