2022
DOI: 10.1007/s41019-022-00182-8
FLAG: Towards Graph Query Autocompletion for Large Graphs

Abstract: Graph query autocompletion (GQAC) takes a user’s graph query as input and generates top-k query suggestions as output, to help alleviate the verbose and error-prone graph query formulation process in a visual interface. To compose a target query with GQAC, the user may iteratively adopt suggestions or manually add edges to augment the existing query. The current state-of-the-art of GQAC, however, focuses on a large collection of small- or medium-sized graphs only. The subgraph features exploited by existing GQ…
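The iterative formulation loop the abstract describes (at each step the user either adopts one of the top-k suggestions or manually adds an edge) can be sketched as follows; `suggest`, `adopt`, and `add_edge` are hypothetical placeholders for illustration, not the paper's API:

```python
def formulate_query(initial_query, suggest, adopt, add_edge, max_steps=10):
    """Sketch of the GQAC interaction loop: the system proposes top-k
    augmented queries; the user adopts one or adds an edge by hand."""
    query = initial_query
    for _ in range(max_steps):
        suggestions = suggest(query)        # top-k query suggestions
        chosen = adopt(query, suggestions)  # user's pick, or None to decline
        query = chosen if chosen is not None else add_edge(query)
    return query
```

For example, modeling a query as a set of edges, a user who always adopts the first suggestion grows the target query one edge per step.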

Cited by 10 publications (2 citation statements)
References 29 publications
“…Firstly, many recent works (Ho, Jain, and Abbeel 2020; Song, Meng, and …) … Another way to accelerate diffusion models is to reduce computation by exploiting the sparsity of the input data or the model parameters. There are several techniques (Dong et al. 2017; Liu et al. 2018; Ren et al. 2018; Liu et al. 2019; Child et al. 2019; Bolya et al. 2023; Han et al. 2023; Bi et al. 2023; Zhang et al. 2023; Yi et al. 2022) available for implementing sparse computation. However, directly applying existing techniques such as tiled sparse convolution (Ren et al. 2018) fails to achieve considerable speedup and image quality in our scenario.…”
Section: Diffusion Model Acceleration
confidence: 99%
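The tile-based sparse computation idea in the statement above can be illustrated with a minimal sketch: split the input into tiles, skip tiles that are entirely (near-)zero, and run the dense kernel only on the active ones. Tile size, threshold, and function names here are assumptions for illustration, not the cited technique:

```python
def tiled_sparse_apply(x, fn, tile=2, eps=1e-8):
    """Apply fn only to tiles of a 2-D grid that contain non-negligible
    values; all-zero tiles are skipped, which is where the savings come from."""
    h, w = len(x), len(x[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(0, h, tile):
        for j in range(0, w, tile):
            block = [row[j:j + tile] for row in x[i:i + tile]]
            if max(abs(v) for row in block for v in row) > eps:
                result = fn(block)  # dense work on active tiles only
                for bi, row in enumerate(result):
                    for bj, v in enumerate(row):
                        out[i + bi][j + bj] = v
    return out
```

In practice, tiled sparse convolution implementations gather the active tiles into a dense batch for hardware-friendly execution rather than looping as above; the loop form only makes the skip-inactive-tiles idea explicit.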
“…In the domains of natural language understanding [1] and knowledge inference [2,3], a comprehensive KG offers significant performance improvements as prior knowledge for downstream tasks such as question answering [4] and recommendation systems [5,6]. However, real-world KGs often represent only a subset of the complete KG, containing a vast amount of undiscovered and poorly organized potential knowledge that is valuable for deep mining and analysis, including node classification [7], node clustering [8], graph query [9–11], entity recognition [12], and link prediction [13]. Consequently, efficient KG representation has become a critical challenge.…”
Section: Introduction
confidence: 99%