2019
DOI: 10.48550/arxiv.1905.08205
Preprint
Towards Complex Text-to-SQL in Cross-Domain Database with Intermediate Representation

Cited by 23 publications (40 citation statements) · References 34 publications
“…EditSQL uses an editing mechanism to incorporate historical information from user queries, and its exact-match accuracy on the Spider dataset reaches 32.9%. IRNet (Guo et al., 2019) adopts an intermediate representation named SemQL to translate complex SQL queries into a syntax tree. Using a pointer network (Vinyals et al., 2015) for downstream tasks, it achieves an accuracy of 54.7% on the Spider test set.…”
Section: Related Work
confidence: 99%
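The SemQL-style intermediate representation mentioned in the quote can be illustrated with a minimal sketch: a small AST that, unlike raw SQL, is naturally tree-shaped. The class names and grammar below (`Select`, `Agg`, `Col`) are simplified illustrations, not IRNet's actual SemQL grammar:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Hypothetical SemQL-like IR: a query is a tree of typed nodes,
# abstracting away SQL surface details such as the FROM clause.

@dataclass
class Col:
    table: str
    name: str

@dataclass
class Agg:
    func: Optional[str]  # e.g. "count"; None for a bare column
    col: Col

@dataclass
class Select:
    aggs: List[Agg]
    where: List[Tuple[Col, str, object]] = field(default_factory=list)

def to_tree(node, depth=0):
    """Render the IR as an indented syntax-tree string."""
    pad = "  " * depth
    if isinstance(node, Select):
        lines = [pad + "Select"]
        for a in node.aggs:
            lines.append(to_tree(a, depth + 1))
        for col, op, val in node.where:
            lines.append(pad + "  Filter " + op + " " + repr(val))
            lines.append(to_tree(col, depth + 2))
        return "\n".join(lines)
    if isinstance(node, Agg):
        head = pad + ("Agg(" + node.func + ")" if node.func else "Col")
        return head + "\n" + to_tree(node.col, depth + 1)
    if isinstance(node, Col):
        return pad + node.table + "." + node.name

# "How many singers are older than 30?"
q = Select(aggs=[Agg("count", Col("singer", "id"))],
           where=[(Col("singer", "age"), ">", 30)])
```

A decoder that emits such a tree node by node (rather than SQL tokens) is what makes grammar-constrained, AST-based decoding possible.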
“…This type of method requires manual summarization of experience and incurs a high time cost. Moreover, as application scenarios change, existing templates often fail to meet the new requirements and transfer poorly. (2) Deep-learning methods use neural networks for end-to-end implementation (Zhong et al., 2017; Yu et al., 2018a,b; Bogin et al., 2019; Guo et al., 2019). This type of method can self-optimize as sample information is continuously added.…”
Section: Introduction
confidence: 99%
“…The recent state-of-the-art models evaluated on Spider use various attentional architectures for question/schema encoding and AST-based structural architectures for query decoding. IRNet (Guo et al., 2019) encodes the question and schema separately with an LSTM and self-attention respectively, augmenting them with custom type vectors for schema linking. They further use the AST-based decoder of Yin and Neubig (2017) to decode a query in an intermediate representation (IR) that exhibits a higher level of abstraction than SQL.…”
Section: Related Work
confidence: 99%
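The schema-linking step mentioned above (tagging question tokens with custom types based on matches against schema names) can be sketched roughly as follows. The tag names and matching rules here are assumptions for illustration, not IRNet's exact procedure:

```python
# Hedged sketch of name-based schema linking: assign each question
# token a type tag according to whether it exactly or partially
# matches a table or column name. The tag vocabulary below
# ("table-exact", "column-exact", "column-partial", "none") is
# illustrative only.

def link_schema(question_tokens, tables, columns):
    """Return one type tag per question token."""
    tags = []
    for tok in question_tokens:
        t = tok.lower()
        if any(t == name.lower() for name in tables):
            tags.append("table-exact")
        elif any(t == name.lower() for name in columns):
            tags.append("column-exact")
        elif any(t in name.lower().split("_") for name in columns):
            tags.append("column-partial")
        else:
            tags.append("none")
    return tags

tokens = ["show", "singer", "name", "by", "age"]
tables = ["singer", "concert"]
columns = ["singer_name", "age", "concert_id"]
tags = link_schema(tokens, tables, columns)
```

In an encoder, these tags would be embedded as the "custom type vectors" and added to the token representations before attention.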
“…This procedure matches that of Guo et al. (2019), but we use the matching information differently in RAT.…”
confidence: 99%
“…Tables store rich numerical data, and a wide range of recent table-related tasks rely heavily on numerical reasoning, such as spreadsheet formula prediction (Chen et al., 2021a), table structure understanding (Koci et al., 2019), question answering over tables (Chen et al., 2021b; Cheng et al., 2021), and data-to-text generation (Suadaa et al., 2021; Cheng et al., 2021). To achieve better numerical reasoning capabilities, some works explore designing domain-specific languages (Chen et al., 2020, 2021b; Cheng et al., 2021; Guo et al., 2019) and injecting pre-executed operations (Suadaa et al., 2021), but these are task-specific and suffer from a shortage of labels.…”
Section: Introduction
confidence: 99%