2021
DOI: 10.1162/tacl_a_00377

Decontextualization: Making Sentences Stand-Alone

Abstract: Models for question answering, dialogue agents, and summarization often interpret the meaning of a sentence in a rich context and use that meaning in a new context. Taking excerpts of text can be problematic, as key pieces may not be explicit in a local window. We isolate and define the problem of sentence decontextualization: taking a sentence together with its context and rewriting it to be interpretable out of context, while preserving its meaning. We describe an annotation procedure, collect data on the Wi…
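The task described in the abstract lends itself to a sequence-to-sequence formulation. Below is a minimal, hypothetical sketch of how a fine-tuned seq2seq model might be applied to decontextualize a sentence; the checkpoint name, prompt prefix, and [SEP] marker are illustrative placeholders, not the paper's actual setup.

```python
# Hypothetical sketch: decontextualizing a sentence with a seq2seq model.
# The checkpoint, prompt format, and [SEP] marker are placeholders,
# not the configuration used by Choi et al. (2021).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "t5-base"  # stand-in; assumes a model fine-tuned on decontextualization data

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def decontextualize(context: str, sentence: str) -> str:
    """Rewrite `sentence` so it can be interpreted without `context`."""
    # Present the document context and the target sentence to the model.
    source = f"decontextualize: {context} [SEP] {sentence}"
    inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=512)
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

context = ("The Eiffel Tower was completed in 1889. It held the record as the "
           "tallest man-made structure in the world for 41 years.")
sentence = "It held the record as the tallest man-made structure in the world for 41 years."
print(decontextualize(context, sentence))
# A fine-tuned model would resolve the pronoun, e.g.:
# "The Eiffel Tower held the record as the tallest man-made structure in the world for 41 years."
```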

Cited by 25 publications (40 citation statements)
References 20 publications (24 reference statements)
Citation types: 0 supporting, 24 mentioning, 0 contrasting
“…Conversational QR is initially proposed to improve a model's understanding of the dialogue context (Elgohary et al, 2019), and gets recently adopted to address more downstream tasks like conversational retrieval and question answering (Anantha et al, 2021;Dalton et al, 2020;Yu et al, 2020). Query rewriting is also related to concepts like sentence de-contextualization (Choi et al, 2021) and prompt-based learning (Liu et al, 2021).…”
Section: Conversational Retrieval (mentioning)
confidence: 99%
“…Prior work has paved the way for this application of NLI. Pieces of this pipeline like converting a question to a declarative sentence (Demszky et al, 2018) and reformulating an answer sentence to stand on its own (Choi et al, 2021) have been explored in recent work. Moreover, results from other domains such as fact-checking (Kryscinski et al, 2020;Mishra et al, 2020) and summary reranking (Falke et al, 2019) show the potential for broader applicability of NLI.…”
Section: NLI Model (mentioning)
confidence: 99%
“…Mapping QA to NLI enables us to exploit both NLI and QA datasets for answer verification, but it relies on a pipeline for mapping (question, answer, context) triplet to a (premise, hypothesis) NLI pair. We implement a strong pipeline here: we extract a concise yet sufficient premise through decontextualization (Choi et al, 2021), which rewrites a single sentence from a document such that it can retain the semantics when presented alone without the document. We improve a prior question conversion model (Demszky et al, 2018) with a stronger pretrained seq2seq model (Raffel et al, 2020).…”
Section: NLI Model (mentioning)
confidence: 99%
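The excerpt above describes a pipeline that maps a (question, answer, context) triplet to a (premise, hypothesis) NLI pair. The sketch below illustrates that mapping under stated assumptions: `question_to_statement` and `decontextualize` are hypothetical stand-ins for the question-conversion seq2seq model (Demszky et al., 2018; Raffel et al., 2020) and the decontextualization model (Choi et al., 2021), with trivial placeholder bodies.

```python
# Hedged sketch of the (question, answer, context) -> (premise, hypothesis)
# mapping described above. Both model calls are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class NLIPair:
    premise: str
    hypothesis: str

def question_to_statement(question: str, answer: str) -> str:
    # Stand-in for the seq2seq question-conversion model, e.g.
    # ("Who designed the Eiffel Tower?", "Gustave Eiffel")
    #   -> "Gustave Eiffel designed the Eiffel Tower."
    return f"The answer to '{question}' is {answer}."  # trivial placeholder

def decontextualize(context: str, answer_sentence: str) -> str:
    # Stand-in for the decontextualization model: rewrite the sentence
    # containing the answer so it stands alone as a concise premise.
    return answer_sentence  # placeholder; a real model would resolve references

def qa_to_nli(question: str, answer: str, context: str, answer_sentence: str) -> NLIPair:
    premise = decontextualize(context, answer_sentence)    # concise yet sufficient premise
    hypothesis = question_to_statement(question, answer)   # declarative form of the QA pair
    return NLIPair(premise, hypothesis)

# An NLI model then labels the pair; "entailment" verifies the answer.
pair = qa_to_nli(
    question="Who designed the Eiffel Tower?",
    answer="Gustave Eiffel",
    context="... The tower is named after Gustave Eiffel, whose company designed and built it. ...",
    answer_sentence="The tower is named after Gustave Eiffel, whose company designed and built it.",
)
print(pair.premise)
print(pair.hypothesis)
```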