2021
DOI: 10.1007/978-3-030-89188-6_38
Automatic Graph Learning with Evolutionary Algorithms: An Experimental Study

Cited by 6 publications (4 citation statements) · References 19 publications
“…The following well-known AutoML tools (listed in alphabetical order) were considered for our study. AutoGL is an open-source tool created at Tsinghua University for AutoML on graphs; it contains four modules: auto feature engineering, model training, hyperparameter optimization, and auto ensemble [23,24]. AutoGluon is an open-source tool created by Amazon that automates machine learning and deep learning for text, image, and tabular datasets [25-27].…”
Section: AutoML Tools (mentioning)
confidence: 99%
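The quoted passage names AutoGL and AutoGluon but does not show how such tools are driven. As a rough illustration, the sketch below uses AutoGluon's documented tabular interface (TabularDataset / TabularPredictor); the file names and the "label" column are placeholder assumptions, and nothing here is taken from the cited study.

    # Minimal AutoGluon sketch; file names and the "label" column are
    # placeholder assumptions, not taken from the cited study.
    from autogluon.tabular import TabularDataset, TabularPredictor

    train_data = TabularDataset("train.csv")   # tabular file containing the target column
    test_data = TabularDataset("test.csv")

    # fit() trains and ensembles a collection of models automatically;
    # model selection and hyperparameter tuning happen internally.
    predictor = TabularPredictor(label="label").fit(train_data)

    # Score on held-out data and produce predictions.
    print(predictor.evaluate(test_data))
    predictions = predictor.predict(test_data)

AutoGL exposes an analogous solver-style interface for node and graph classification on graph datasets, which is why the survey groups the two tools together.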
“…A majority of works on GML focus on developing new algorithms for specific graph tasks and applications [44,48]. By comparison, relatively few recent works [40,13,47,5,49] address the GML model selection problem. They mainly focus on neural architecture search and hyperparameter optimization (HPO) for GML models, especially graph neural networks.…”
Section: Evaluation-based Model Selection (mentioning)
confidence: 99%
“…To achieve more efficient model selection than the naive exhaustive approach (Fig. 1b), they investigated techniques for efficient HPO, including subgraph sampling [40], graph coarsening [13], hierarchical evaluation [47], hypernets [49], and evolutionary algorithms [5]. However, for model selection, they still need to perform model training and/or evaluations on the new graph, which is much more costly than evaluation-free model selection.…”
Section: Evaluation-based Model Selection (mentioning)
confidence: 99%
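The quoted passage lists evolutionary algorithms [5] among the techniques used to make HPO for graph models cheaper than exhaustive search. Below is a minimal, generic sketch of an evolutionary hyperparameter search loop over a GNN-style search space; the search space, the placeholder fitness function, and all names are illustrative assumptions and do not reproduce the method of the cited paper.

    import random

    # Illustrative GNN hyperparameter search space (an assumption for this
    # sketch, not the search space of the cited paper).
    SEARCH_SPACE = {
        "hidden_dim": [16, 32, 64, 128],
        "num_layers": [1, 2, 3],
        "dropout": [0.0, 0.3, 0.5],
        "learning_rate": [1e-3, 5e-3, 1e-2],
    }

    def random_config():
        # Sample one value per hyperparameter.
        return {name: random.choice(values) for name, values in SEARCH_SPACE.items()}

    def mutate(config, rate=0.3):
        # Resample each hyperparameter independently with a small probability.
        child = dict(config)
        for name, values in SEARCH_SPACE.items():
            if random.random() < rate:
                child[name] = random.choice(values)
        return child

    def evaluate(config):
        # Placeholder fitness. In a real run this would train a GNN with
        # `config` on the target graph and return its validation accuracy.
        return random.random()

    def evolutionary_hpo(pop_size=10, generations=5):
        population = [random_config() for _ in range(pop_size)]
        best, best_score = None, float("-inf")
        for _ in range(generations):
            # Score every candidate once per generation (the expensive step).
            scored = sorted(((evaluate(c), c) for c in population),
                            key=lambda t: t[0], reverse=True)
            if scored[0][0] > best_score:
                best_score, best = scored[0]
            parents = [c for _, c in scored[: pop_size // 2]]  # truncation selection
            children = [mutate(random.choice(parents))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return best, best_score

    if __name__ == "__main__":
        config, score = evolutionary_hpo()
        print(config, score)

The point the citing authors make still holds for a loop like this: every fitness evaluation requires training or evaluating a model on the new graph, which is what evaluation-free model selection avoids.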
“…Subsequently, deep learning models such as deep knowledge tracing (DKT) [8] were developed, which model a student's learning process with a recurrent neural network (RNN) and significantly improve prediction performance over traditional Bayesian-based KT. With the development of graph neural networks (GNNs) [9], GNN-based KT models [10,11], which use the natural graph structure among skills to model students' cognition, have attracted considerable attention. Although KT models have developed rapidly in recent years, limitations still exist.…”
Section: Introduction (mentioning)
confidence: 99%