2021
DOI: 10.48550/arxiv.2109.00527
Preprint

Boosting Search Engines with Interactive Agents

Abstract: Can machines learn to use a search engine as an interactive tool for finding information? That would have far reaching consequences for making the world's knowledge more accessible. This paper presents first steps in designing agents that learn meta-strategies for contextual query refinements. Our approach uses machine reading to guide the selection of refinement terms from aggregated search results. Agents are then empowered with simple but effective search operators to exert fine-grained and transparent cont…

Cited by 3 publications (4 citation statements) | References 20 publications
“…In the first example, the human decides to search again after removing 'inches', 'width', 'height', and 'white' from the query since product texts often contain abbreviated symbols for these terms like '"', 'w', and 'h'. Thus, search generation is challenging for models since it involves reasoning and adapting to grounded environments, and ideas from query reformulation [37,1] could help alleviate this. Agents also struggle to perform robust semantic matching, which is important in choosing options that contain noisy paraphrases of instruction spans.…”
Section: Discussion
confidence: 99%
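For context, the refinement described in this excerpt amounts to dropping query terms that the product index is unlikely to match verbatim. A minimal sketch of that idea, assuming a hypothetical refine_query helper and a hand-picked DROP_TERMS set (neither comes from the cited work):

```python
# Minimal sketch (not from the cited paper): drop attribute terms that product
# listings tend to abbreviate ('"', 'w', 'h'), as in the quoted example.
# DROP_TERMS and refine_query are hypothetical names used for illustration.

DROP_TERMS = {"inches", "width", "height", "white"}

def refine_query(query: str, drop_terms: set[str] = DROP_TERMS) -> str:
    """Remove terms that the retrieval index is unlikely to match literally."""
    kept = [tok for tok in query.split() if tok.lower() not in drop_terms]
    return " ".join(kept)

print(refine_query("36 inches width 18 inches height white bookshelf"))
# -> "36 18 bookshelf"
```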
“…Leveraging the web for traditional NLP tasks. Several papers have explored the use of the web for information extraction [34] and retrieval [1], question answering [57,25], dialog [45], and training language models on webtext [2]. These approaches primarily use web search engines as a knowledge retriever for gathering additional evidence for the task at hand.…”
Section: Related Work
confidence: 99%
“…HTML Understanding. Autonomous web navigation has been a popular application for neural network models, and a variety of works propose simulated websites for training web-based agents, with application to task fulfillment (Yao et al., 2022; Gur et al., 2021; Burns et al., 2022; Mazumder and Riva, 2020; Shi et al., 2017; Liu et al., 2018) as well as information retrieval or question-answering (Adolphs et al., 2021; Nogueira and Cho, 2016). Simulated websites provide an easy way to evaluate models online, and for this reason we use the existing MiniWoB benchmark (Shi et al., 2017) for our web navigation setting.…”
Section: Related Work
confidence: 99%
“…They suggest web-level QA (like WebGPT) as a direction for future work. Adolphs et al [2021] set up an RL problem that involves performing a series of search queries for short-form question-answering. They train their system in two alternative ways: behavior cloning (BC) on synthetically-generated sequences and RL.…”
Section: Related Work
confidence: 99%
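The behavior-cloning alternative mentioned in this last excerpt can be pictured as ordinary supervised imitation of synthetic search sessions. A minimal sketch, assuming a hypothetical QueryPolicy network and randomly generated (context, expert-token) pairs standing in for the paper's synthetic sequences (none of these names or shapes come from the cited work):

```python
# Behavior-cloning sketch: train a toy policy to predict the "expert" next
# query token from a context, via standard cross-entropy. Illustrative only.
import torch
import torch.nn as nn

class QueryPolicy(nn.Module):
    """Toy policy that scores the next refinement token given context ids."""
    def __init__(self, vocab_size: int = 1000, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, context_ids: torch.Tensor) -> torch.Tensor:
        # (batch, seq) -> (batch, seq, dim) -> mean-pool -> (batch, vocab)
        return self.head(self.embed(context_ids).mean(dim=1))

policy = QueryPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in supervision: contexts (batch, seq) and the expert's next token.
contexts = torch.randint(0, 1000, (32, 16))
expert_tokens = torch.randint(0, 1000, (32,))

optimizer.zero_grad()
logits = policy(contexts)              # (batch, vocab)
loss = loss_fn(logits, expert_tokens)  # imitate the synthetic "expert"
loss.backward()
optimizer.step()
```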