2019
DOI: 10.1609/aaai.v33i01.330110075
Sequence to Sequence Learning for Query Expansion

Abstract: As far as we are aware, using Sequence to Sequence algorithms for query expansion has not yet been explored in the Information Retrieval literature. We try to fill this gap with a custom Query Expansion system trained and tested on open datasets. One specificity of our engine compared to classic ones is that it does not need the documents to expand the introduced query. We test our expansions on two different tasks: Information Retrieval and Answer preselection. Our method yielded a slight imp…
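The abstract's core idea is a generation model that maps a query directly to expansion terms, with no retrieved documents in the loop. A minimal sketch of that pipeline shape is below; the `generate` callable and the toy mapping are hypothetical stand-ins for the paper's trained sequence-to-sequence model, not its actual implementation:

```python
def expand_query(query: str, generate) -> str:
    """Append generated expansion terms to the original query.

    `generate` stands in for a trained seq2seq model: it takes the query
    string and returns a string of expansion terms. No candidate documents
    are consulted, mirroring the document-free setup the abstract describes.
    """
    expansion = generate(query)
    # Keep only terms not already in the query, to avoid duplicate weighting.
    seen = set(query.lower().split())
    new_terms = [t for t in expansion.split() if t.lower() not in seen]
    return query + " " + " ".join(new_terms) if new_terms else query


# Purely illustrative stand-in for the learned model:
toy_model = {"capital of france": "paris city country europe"}.get

print(expand_query("capital of france", lambda q: toy_model(q, "")))
# → capital of france paris city country europe
```

The expanded string would then be fed to a standard retrieval engine in place of the original query.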


Cited by 4 publications (2 citation statements) · References 5 publications
“…Recent Query Reformulation. There are recent or concurrent studies (Nogueira and Cho, 2017; Zaiem and Sadat, 2019; Yu et al., 2020; Vakulenko et al., 2020; Lin et al., 2020) that reformulate queries with generation models for other retrieval tasks. However, these studies are not easily applicable or efficient enough for OpenQA because: (1) they require external resources such as paraphrase data (Zaiem and Sadat, 2019), search sessions (Yu et al., 2020), or conversational contexts (Lin et al., 2020; Vakulenko et al., 2020) to form the reformulated queries, which are not available or showed inferior domain-transfer performance in OpenQA (Zaiem and Sadat, 2019);…”
Section: Related Work
Confidence: 99%
“…There have been some recent studies on query reformulation with text generation for other retrieval tasks, which, for example, rewrite the queries into context-independent (Yu et al., 2020; Lin et al., 2020; Vakulenko et al., 2020) or well-formed ones. However, these methods require either task-specific data (e.g., conversational contexts, ill-formed queries) or external resources such as paraphrase data (Zaiem and Sadat, 2019) that cannot or do not transfer well to OpenQA. Also, some rely on a time-consuming training process like reinforcement learning (RL) (Nogueira and Cho, 2017) that is not efficient enough for OpenQA (more discussions in Sec.…”
Section: Introduction
Confidence: 99%