Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.211
Interactive Machine Comprehension with Information Seeking Agents

Abstract: Existing machine reading comprehension (MRC) models do not scale effectively to real-world applications like web-level information retrieval and question answering (QA). We argue that this stems from the nature of MRC datasets: most of these are static environments wherein the supporting documents and all necessary information are fully observed. In this paper, we propose a simple method that reframes existing MRC datasets as interactive, partially observable environments. Specifically, we "occlude" the majorit…
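The abstract's idea of occluding a document and forcing the model to seek information interactively can be illustrated with a minimal sketch. The class below is an assumption-laden toy, not the paper's actual interface: the environment reveals one sentence at a time and accepts "next", "previous", and keyword-search commands, all of which are illustrative names.

```python
class InteractiveMRCEnv:
    """Toy sketch of an interactive, partially observable MRC environment.

    The document is split into sentences, all occluded except one; the
    agent must issue commands to gather evidence before answering. The
    command set here is illustrative, not the paper's exact action space.
    """

    def __init__(self, document: str, question: str):
        self.sentences = [s.strip() for s in document.split(".") if s.strip()]
        self.question = question
        self.pos = 0  # index of the currently revealed sentence

    def observe(self) -> str:
        # Only the current sentence is visible; the rest stay occluded.
        return self.sentences[self.pos]

    def step(self, command: str) -> str:
        if command == "next":
            self.pos = min(self.pos + 1, len(self.sentences) - 1)
        elif command == "previous":
            self.pos = max(self.pos - 1, 0)
        elif command.startswith("search "):
            query = command[len("search "):].lower()
            # Jump to the first sentence containing the query, if any.
            for i, sent in enumerate(self.sentences):
                if query in sent.lower():
                    self.pos = i
                    break
        return self.observe()


doc = ("Machine reading comprehension tests understanding of a passage. "
       "Most datasets expose the full document to the model. "
       "Occluding the text forces the model to seek information actively.")
env = InteractiveMRCEnv(doc, "Why occlude the text?")
print(env.observe())                 # only the first sentence is visible
print(env.step("search occluding"))  # jump to the matching sentence
```

An RL agent would be trained to choose these commands so as to surface the evidence needed to answer the question, which is the partially observable reframing the abstract describes.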


Cited by 10 publications (4 citation statements)
References 59 publications
“…Leveraging the web for traditional NLP tasks. Several papers have explored the use of the web for information extraction [34] and retrieval [1], question answering [57,25], dialog [45], and training language models on webtext [2]. These approaches primarily use web search engines as a knowledge retriever for gathering additional evidence for the task at hand.…”
Section: Related Work
confidence: 99%
“…While behavior evolves with interfaces, users keep parsing results fast and frugally, attending to just a few items. From a similar angle, Yuan et al. [2020] offer promising findings on training QA agents with RL for templatized information-gathering and answering actions. Most of the work in language-related RL is otherwise centered on synthetic navigation/arcade environments [Hu et al., 2019].…”
Section: Related Work
confidence: 99%
“…However, these datasets contain multiple choice questions, and the answer choices provide hints as to what information may be needed. Yuan et al (2020) explore this as well using a POMDP in which the context in existing QA datasets is hidden from the model until it explicitly searches for it.…”
Section: Open-domain Question Answering
confidence: 99%