Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2017
DOI: 10.18653/v1/P17-1167

Search-based Neural Structured Learning for Sequential Question Answering

Abstract: Recent work in semantic parsing for question answering has focused on long and complicated questions, many of which would seem unnatural if asked in a normal conversation between two humans. In an effort to explore a conversational QA setting, we present a more realistic task: answering sequences of simple but inter-related questions. We collect a dataset of 6,066 question sequences that inquire about semi-structured tables from Wikipedia, with 17,553 question-answer pairs in total. To solve this sequential question answering task…

Cited by 162 publications (216 citation statements: 0 supporting, 216 mentioning, 0 contrasting) | References 18 publications (22 reference statements)
“…QBLink [9], CoQA [27], and ShARC [29] are recent resources for sequential QA over text. The SQA resource [16], derived from WikiTableQuestions [25], is aimed at driving conversational QA over (relatively small) Web tables.…”
Section: The ConvQuestions Benchmark, 4.1 Benchmark Creation
Mentioning, confidence: 99%
“…The execution results will be used to calculate the Jaccard coefficient with respect to the labeled answers as the approximated rewards. The use of approximated rewards has been proven effective in (Iyyer et al., 2017).…”
Section: Discussion
Mentioning, confidence: 99%
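The Jaccard-based approximated reward quoted above is straightforward to sketch. Below is a minimal illustration, assuming both the execution result of a candidate parse and the labeled answer are sets of table-cell values; the function name jaccard_reward and the example values are hypothetical, not taken from the cited papers.

```python
def jaccard_reward(predicted, gold):
    """Jaccard coefficient between a candidate parse's execution result
    and the labeled answer set, used as an approximated reward."""
    predicted, gold = set(predicted), set(gold)
    if not predicted and not gold:
        return 1.0  # both empty: treat as a perfect match
    return len(predicted & gold) / len(predicted | gold)

# Hypothetical example: the candidate parse returns three cells,
# two of which appear in the annotated answer set.
print(jaccard_reward({"Brazil", "Chile", "Peru"}, {"Brazil", "Chile"}))  # ~0.667
```

Because the reward is a set overlap rather than exact-match, partially correct execution results still receive nonzero credit, which is what makes it usable as a weak supervision signal during search.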
“…For example, given the precedent query "what's the biggest zone?" and the follow-up query "the smallest one", STAR prefers to recognize "the biggest zone" and "the smallest one" as two spans rather than perform split operations inside them. SplitNet probably fails because the conflicting spans, "the biggest" ↔ "the smallest" and "zone" ↔ "one", are adjacent, which makes it difficult to identify span boundaries well.…”

Model                           Precedent acc.   Follow-up acc.
DynSP (Iyyer et al., 2017)           70.9             35.8
NP (Neelakantan et al., 2016)        58.9             35.9
NP + STAR                            58.9             38.1
DynSP + STAR                         70.9             39.5
DynSP* (Iyyer et al., 2017)          70.4             41.1

Section: Error Analysis
Mentioning, confidence: 99%