Proceedings of the First Workshop on Natural Language Interfaces 2020
DOI: 10.18653/v1/2020.nli-1.4
Efficient Deployment of Conversational Natural Language Interfaces over Databases

Abstract: Many users communicate with chatbots and AI assistants to get help with various tasks. A key component of such an assistant is the ability to understand and answer a user's natural language questions, i.e., question answering (QA). Because data is usually stored in a structured manner, an essential step involves turning a natural language question into its corresponding query-language statement. However, in order to train most state-of-the-art natural-language-to-query-language models, a large amount of training da…
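To make the "natural language question into its corresponding query language" step concrete, here is a minimal rule-based sketch. It is not the paper's model; the table name, questions, and SQL templates are all illustrative assumptions.

```python
# Toy illustration of the NL-to-query-language step described in the abstract.
# A real system would use a trained semantic parser; this sketch just maps a
# handful of canonicalized questions to SQL over a hypothetical "employees" table.
def nl_to_sql(question: str) -> str:
    templates = {
        "how many employees are there": "SELECT COUNT(*) FROM employees;",
        "who earns the most": "SELECT name FROM employees ORDER BY salary DESC LIMIT 1;",
    }
    # Canonicalize: trim whitespace, lowercase, drop a trailing question mark.
    key = question.strip().lower().rstrip("?")
    return templates.get(key, "-- unsupported question")

print(nl_to_sql("How many employees are there?"))
# → SELECT COUNT(*) FROM employees;
```

The point of the sketch is the pipeline shape (canonicalize, then map to a structured query), which is the part that state-of-the-art models learn from large training corpora instead of hand-written templates.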

Cited by 6 publications (9 citation statements)
References 20 publications
“…On the other hand, How2QA [37] and iVQA [69] collected questions and answers by presenting videos to crowd workers. In particular, TutorialVQA [13] and PsTuts-VQA [78] focus on software tutorial videos, collecting questions from crowd workers by presenting answer segments or having software experts craft questions. However, since these questions are artificially generated or automatically generated from transcripts, using these questions can be limiting when developing approaches to address questions from real-world users.…”
Section: Video Question Answering
confidence: 99%
“…Audio QA datasets : DAQA [83] on audio temporal reasoning, Clotho-AQA [84] on binary and multichoice audio QA. Video QA datasets : such as VideoQA [85] for multi-domain, MovieQA [86]/MovieFIB [87]/TVQA [88]/KnowIT VQA [89] for movies and shows, MarioQA [90] for games, PororoQA [91] for cartoons, TutorialVQA [92] for tutorials, CLEVRER [93] for physical & causal reasoning. Multi-modal multi-hop QA datasets : MultiModalQA/MMQA [94] for multi-modal and multi-hop QA, WebQA [95] on web multi-modal QA, MAQA focused on negation learning and testing [96].…”
Section: Datasets
confidence: 99%
“…Mainly working on open-domain question answering (QA) [1]. For this purpose, a huge data corpus drawn from various sources is required. The traditional approach requires training state-of-the-art models that utilize this corpus and learn user behavior over voice commands.…”
Section: Related Work
confidence: 99%
“…The research [1] also shows the use of domain ontology triples, which follow the format <object, relation, property>.…”
Section: Related Work
confidence: 99%
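The <object, relation, property> triple format from the citation above can be represented directly as a small data structure. This is a hypothetical sketch of how such domain ontology triples might be stored and queried; the field names and the sample knowledge base are illustrative, not from the cited work.

```python
from typing import List, NamedTuple, Tuple

class Triple(NamedTuple):
    """One domain ontology fact in <object, relation, property> form."""
    obj: str
    relation: str
    prop: str

# Illustrative knowledge base for a hypothetical HR domain.
kb: List[Triple] = [
    Triple("employee", "has", "salary"),
    Triple("employee", "works_in", "department"),
    Triple("department", "contains", "employee"),
]

def relations_of(obj: str, triples: List[Triple]) -> List[Tuple[str, str]]:
    """Return the (relation, property) pairs known for a given object."""
    return [(t.relation, t.prop) for t in triples if t.obj == obj]

print(relations_of("employee", kb))
# → [('has', 'salary'), ('works_in', 'department')]
```

A lookup like `relations_of` is the kind of primitive a conversational interface could use to ground a user's question ("what does an employee have?") in the ontology before constructing a database query.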