2021
DOI: 10.1609/aaai.v35i14.17550
Dynamic Hybrid Relation Exploration Network for Cross-Domain Context-Dependent Semantic Parsing

Abstract: Semantic parsing has long been a fundamental problem in natural language processing. Recently, cross-domain context-dependent semantic parsing has become a new focus of research. Central to the problem is the challenge of leveraging contextual information of both natural language queries and database schemas in the interaction history. In this paper, we present a dynamic graph framework that is capable of effectively modelling contextual utterances, tokens, database schemas, and their complicated interaction …
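The abstract describes a graph that jointly models utterance tokens and database schema items. As a rough illustration only (this is a minimal sketch under our own assumptions, not the paper's actual architecture or code; all names and the naive exact-match linking rule are hypothetical), such a heterogeneous graph might be constructed like this:

```python
# Hypothetical sketch: building a heterogeneous graph linking multi-turn
# utterance tokens to database schema items. Node/edge labels and the
# exact-match linking heuristic are illustrative assumptions, not the
# paper's method.

def build_interaction_graph(utterances, tables):
    """Return (nodes, edges) connecting utterance tokens to schema items."""
    nodes, edges = [], []
    # One node per token, tagged with its turn index to keep context.
    for turn, utt in enumerate(utterances):
        for tok in utt.split():
            nodes.append(("token", turn, tok))
    # One node per table and per column, with table->column edges.
    for table, columns in tables.items():
        nodes.append(("table", table))
        for col in columns:
            nodes.append(("column", table, col))
            edges.append((("table", table), ("column", table, col), "has-column"))
    # Naive schema linking: connect a token to any schema item it matches.
    for node in nodes:
        if node[0] != "token":
            continue
        _, turn, tok = node
        for table, columns in tables.items():
            if tok.lower() == table.lower():
                edges.append((node, ("table", table), "exact-match"))
            for col in columns:
                if tok.lower() == col.lower():
                    edges.append((node, ("column", table, col), "exact-match"))
    return nodes, edges
```

A real system would add many more relation types (foreign keys, partial matches, cross-turn links) and run a relational GNN over the result; this sketch only shows the graph-construction idea.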

Cited by 29 publications (14 citation statements) · References 27 publications
“…Numerous studies have harnessed previously generated SQL queries to address extended dependencies and enhance parsing accuracy [8], [15], [16]. Additionally, research by Cai and Wan [14] and Hui et al. [11] has leveraged graph neural networks to jointly encode multi-turn questions and schema information. Building upon the accomplishments of pre-trained models like T5, BERT, ALM, GanLM, and BART [22]-[25], ScoRE [9] and Star [26] design pre-training frameworks that leverage contextual information to enrich natural language (NL) utterance and table schema representations for text-to-SQL conversations.…”
Section: Related Work
confidence: 99%
“…This entails the model's ability to effectively establish the entity mapping between the user query and the database schema, while also comprehending the underlying intent of the current question in the given context. Several prior studies [9], [11]-[13] have employed neural network encoders that concatenate the current question, the question context, and the schema. Concurrently, a number of approaches have directly incorporated historically generated SQL queries [8], [14]-[16] to aid the model in SQL parsing for the present question.…”
Section: Introduction
confidence: 99%
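The statement above contrasts parsers that concatenate the current question, its dialogue context, and the schema into a single encoder input. As a hedged illustration (the separator token, ordering, and formatting are our own assumptions; none of the cited systems necessarily uses this exact layout), such a linearization could look like:

```python
# Illustrative sketch, not any cited system's exact input format:
# linearize the dialogue history, the current question, and the schema
# into one string for a sequence encoder.

def linearize(history, question, tables, sep="</s>"):
    """Join context turns, the current question, and the schema into one string."""
    context = f" {sep} ".join(history)  # earlier turns, oldest first
    schema = " | ".join(
        f"{table} : {', '.join(cols)}" for table, cols in tables.items()
    )
    return f"{context} {sep} {question} {sep} {schema}".strip()
```

The alternative family of approaches mentioned in the excerpt would instead (or additionally) append the previously generated SQL query to this input.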
“…Moreover, we train all the models on WikiBio and WikiPerson from scratch, and the training cost is rather expensive: 2.5 days using 4 NVIDIA V100 32G GPUs. Lastly, this paper does not compare against pre-trained language models (PLMs) (Devlin et al., 2019; Raffel et al., 2020; Hui et al., 2021, 2022), though our approach may also benefit from pre-trained table encoders, such as TAPAS (Müller et al., 2021). The main reasons we do not consider PLMs are that they would make for an unfair comparison, introduce more variables, and may make our work lose focus.…”
Section: Limitations
confidence: 99%
“…Two columns of VXI additional bus pins are used to design the storage pool of image information resources in college sports multimedia teaching. At the input and output devices, a multithread control bus design and a serial output bus control structure design are adopted [13], [14]. Based on the ZigBee networking design method, the output end of the college sports multimedia teaching platform under the moving-image analysis mode is established.…”
Section: Multimedia Teaching Resource Database Permission Configurati...
confidence: 99%
“…Wherein, N is the number of statistical samples of image information resources in college sports multimedia teaching, and X and Y are fuzzy constraint factors for users' visits exceeding the load limit of the system, respectively. The optimal solution for the dynamic distribution of image information resources is obtained; the distribution length of image information resources is N, which is converted into xi La = strings, thus establishing a scheduling model of image information resources in college sports multimedia teaching [14].…”
Section: Multimedia User
confidence: 99%