2021
DOI: 10.1007/978-3-030-67664-3_23

Graph-Revised Convolutional Network

Cited by 60 publications (64 citation statements)
References 14 publications
“…Considering the discrete nature of graph structures, one type of method adopts probabilistic models, such as the Bernoulli probability model [12] and the stochastic block model [45]. Another type models structures with node-wise similarity computed by metric-learning functions such as cosine similarity [7] and the dot product [11,53]. Besides, directly treating each element of the adjacency matrix as a learnable parameter is also an effective solution [11,20].…”
Section: Deep Graph Structure Learning
confidence: 99%
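Readers unfamiliar with the metric-learning branch this statement cites may find a concrete sketch useful. The following Python snippet (using PyTorch; the function name, the cosine kernel, and the 0.5 threshold are illustrative assumptions, not details from the cited papers) shows how node-wise similarity can induce an adjacency matrix:

import torch
import torch.nn.functional as F

def cosine_adjacency(x: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    # Build a dense adjacency from pairwise cosine similarity of node features.
    # x: (num_nodes, feat_dim). Threshold sparsification is one common choice.
    z = F.normalize(x, p=2, dim=1)   # unit-norm rows
    sim = z @ z.t()                  # pairwise cosine similarity in [-1, 1]
    return torch.where(sim > threshold, sim, torch.zeros_like(sim))

features = torch.randn(5, 16)        # toy node features
print(cosine_adjacency(features))

Dropping the normalization step yields the dot-product variant mentioned in the quote; the cited methods differ mainly in the choice of metric and sparsification rule.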
“…To tackle the aforementioned problems, deep graph structure learning (GSL) is a promising solution that constructs and improves the graph topology with GNNs [7,12,20,58]. Concretely, these methods parameterize the adjacency matrix with a probabilistic model [12,45], full parameterization [20], or a metric-learning model [7,11,53], and jointly optimize the parameters of the adjacency matrix and the GNN by solving a downstream task (i.e., node classification) [58]. However, existing methods learn graph structures in a supervised scenario, which brings the following issues: (1) The reliance on label information.…”
Section: Introduction
confidence: 99%
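To make the joint-optimization idea in this statement concrete, here is a minimal sketch assuming full parameterization of the adjacency matrix and one GCN-style propagation layer; the class name, sigmoid gating, and toy sizes are hypothetical choices, not the cited architectures:

import torch
import torch.nn as nn
import torch.nn.functional as F

class FullyParamGSL(nn.Module):
    # Every adjacency entry is a free parameter, optimized jointly with
    # a one-layer GCN-style classifier on a downstream node-level task.
    def __init__(self, num_nodes: int, in_dim: int, num_classes: int):
        super().__init__()
        self.adj_logits = nn.Parameter(torch.zeros(num_nodes, num_nodes))
        self.lin = nn.Linear(in_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        adj = torch.sigmoid(self.adj_logits)               # soft edges in (0, 1)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1e-6)
        return self.lin((adj / deg) @ x)                   # row-normalized propagation

model = FullyParamGSL(num_nodes=10, in_dim=8, num_classes=3)
x, y = torch.randn(10, 8), torch.randint(0, 3, (10,))
opt = torch.optim.Adam(model.parameters(), lr=0.01)
opt.zero_grad()
F.cross_entropy(model(x), y).backward()    # gradients reach adj_logits too
opt.step()

Because the adjacency logits are ordinary parameters, the one backward pass updates both the structure and the GNN weights, which is the joint optimization the quote describes.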
“…Others learn to reweight edges in a fully connected graph (i.e., soft pruning). Yu et al. (2019), among others, propose heuristics for regularizing edge weights. Hu et al. (2019) use the question embedding to help predict edge weights.…”
Section: Related Work
confidence: 99%
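A minimal sketch of the soft-pruning idea, assuming a fully connected candidate graph whose pairwise scores are softmax-normalized; the MLP scorer stands in for the heuristics and question-conditioned predictors mentioned in the quote and is not taken from those papers:

import torch
import torch.nn as nn

class SoftPruner(nn.Module):
    # Reweight all n*n candidate edges instead of hard-pruning them.
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, d = x.shape
        left = x.unsqueeze(1).expand(n, n, d)    # source node features
        right = x.unsqueeze(0).expand(n, n, d)   # target node features
        logits = self.score(torch.cat([left, right], dim=-1)).squeeze(-1)
        return torch.softmax(logits, dim=1)      # each row sums to 1

weights = SoftPruner(dim=8)(torch.randn(6, 8))   # (6, 6) soft edge weights

Every edge keeps a nonzero weight, so "pruning" happens softly through the learned distribution rather than by deleting edges outright.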
“…However, early work on GCNs also shares a common limitation, i.e., graph convolution is only used to learn node embeddings conditioned on fixed edges [12], instead of jointly learning optimal embeddings for both nodes and edges. Later efforts move in the direction of learning the weights of edges in the input graph in parallel with learning node embeddings [31,44], which is more powerful than early GCNs and more adaptable to downstream tasks. However, those GCNs have a constraint in common, that is, the edges in the input graph must be of a homogeneous kind, such as the links in a citation graph or the elements in a co-occurrence count matrix.…”
Section: Introduction
confidence: 99%
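For contrast with the fixed-edge convolution criticized above, here is a hedged sketch of learning one weight per edge of a given (homogeneous) input graph jointly with the node-embedding transform; edge_index, the sigmoid gate, and all names are assumptions for illustration, not the cited models:

import torch
import torch.nn as nn

class EdgeWeightedGCN(nn.Module):
    # One learnable weight per edge of a fixed input graph, trained
    # jointly with the node-embedding transform.
    def __init__(self, num_edges: int, in_dim: int, out_dim: int):
        super().__init__()
        self.edge_logits = nn.Parameter(torch.zeros(num_edges))
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        src, dst = edge_index                    # (2, num_edges) endpoints
        w = torch.sigmoid(self.edge_logits)      # learned weights in (0, 1)
        h = self.lin(x)
        out = torch.zeros_like(h)
        out.index_add_(0, dst, w.unsqueeze(1) * h[src])  # weighted aggregation
        return out

x = torch.randn(4, 8)                            # toy graph: 4 nodes, 3 edges
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
out = EdgeWeightedGCN(num_edges=3, in_dim=8, out_dim=16)(x, edge_index)

Because the edge list is fixed and untyped, this sketch also illustrates the homogeneity constraint the quote raises: every edge is treated as the same kind of link.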