2022
DOI: 10.1109/access.2022.3188714
SGPT: A Generative Approach for SPARQL Query Generation From Natural Language Questions

Abstract: SPARQL query generation from natural language questions is complex because it requires an understanding of both the question and the underlying knowledge graph (KG) patterns. Most SPARQL query generation approaches are template-based, tailored to a specific knowledge graph, and require pipelines with multiple steps, including entity and relation linking. Template-based approaches are also difficult to adapt to new KGs and require manual effort from domain experts to construct query templates. To overcome this…
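To illustrate the task the abstract describes, a generative text-to-SPARQL system maps a question directly to an executable query, with no query template. The question, the Wikidata-style identifiers, and the mapping below are a hypothetical example for illustration, not drawn from the paper:

```python
# Hypothetical example of the text-to-SPARQL task (not from the paper):
# a generative model reads the question and emits the query token by token.
question = "Who is the author of Don Quixote?"

# Target output against a Wikidata-style KG; the IDs are illustrative
# (wd:Q480 standing for the book, wdt:P50 for the "author" property).
generated_sparql = (
    "SELECT ?author WHERE { "
    "wd:Q480 wdt:P50 ?author . "
    "}"
)
```

A generative approach like the one in the abstract would learn this question-to-query mapping end to end, rather than filling slots in a hand-built template.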

Cited by 9 publications (8 citation statements)
References 39 publications
“…In this paper, we compare our methods with some classical approaches like AQG-net [35], Multi-hop QGG [51], CLC (+BERT/Tencent Word) [46], and some recently used PLM methods such as BART [16], PGN-BERT (-BERT) [11], SGPT Q,K [52], and T5 [15]. AQG-net [35] employs a generative model based on neural networks to generate an abstract representation of query graphs, capturing the logical structures of queries.…”
Section: Baseline Methods
Mentioning confidence: 99%
“…Capitalizing on the generative capabilities of LMs, KGQA systems that adopt the translation approach take in natural language questions and feed them into fine-tuned LMs to output SPARQL queries. For many systems, 29,33 the grounded queries are generated in a one-shot manner. Meanwhile, LMs could be employed to obtain only query skeletons, which are then grounded with entities and/or relations detected by a separate set of components. Recognizing that one-shot generation of executable SPARQL queries is prone to KG misalignment, especially for unseen entities and relations, but that decoupling entity and relation linking from logical form generation opens up more room for errors, researchers behind the system GETT-QA propose a middle ground, whereby in the first step, SPARQL queries are generated with entity and relation slots already filled with their surface forms as found in the input questions, and in the second step, these labels are grounded to actual KG entities and relations.…”
Section: Related Work
Mentioning confidence: 99%
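The two-step generate-then-ground strategy attributed to GETT-QA above can be sketched minimally. The bracketed label slots, the toy lookup tables, and the `ground` helper are all assumptions for illustration; the actual system uses a fine-tuned LM for step 1 and learned entity/relation linking for step 2:

```python
import re

# Sketch of the two-step idea (all identifiers here are hypothetical):
# step 1 yields a SPARQL query whose entity/relation slots hold surface
# forms; step 2 grounds each surface form to a KG ID via a lookup.

# Step 1 output (in practice produced by a fine-tuned language model):
skeleton = "SELECT ?x WHERE { [Don Quixote] [author] ?x . }"

# Toy label-to-ID indexes standing in for entity and relation linking:
entity_index = {"Don Quixote": "wd:Q480"}
relation_index = {"author": "wdt:P50"}

def ground(query: str) -> str:
    """Replace each [label] slot with its KG ID, if the label is known."""
    def repl(m: re.Match) -> str:
        label = m.group(1)
        return entity_index.get(label) or relation_index.get(label) or m.group(0)
    return re.sub(r"\[([^\]]+)\]", repl, query)

grounded = ground(skeleton)
# grounded == "SELECT ?x WHERE { wd:Q480 wdt:P50 ?x . }"
```

Keeping surface forms in the step-1 output is what distinguishes this middle ground from both one-shot ID generation and fully decoupled skeleton-plus-linking pipelines.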
“…We include both these systems in our evaluation (Table 4). SGPT [31] and STAG [29] both use generative methods for forming the query using pre-trained language models, which is similar to what we do; however, neither of them generates the entity or relation label, or the embeddings. Instead, STAG uses an external entity linking step, while SGPT attempts to generate entity and relation IDs directly.…”
Section: Related Work
Mentioning confidence: 99%