2022
DOI: 10.1109/tkde.2022.3144250
Graph Transfer Learning via Adversarial Domain Adaptation with Graph Convolution

Cited by 33 publications (34 citation statements)
References 53 publications
“…Thus, they describe two techniques to exploit node-level knowledge: context prediction and attribute masking, as well as two approaches for graph-level pretraining. More recently, Dai et al. [24] present AdaGCN: a framework for transfer learning based on adversarial domain adaptation with GCNs.…”
Section: Related Work
confidence: 99%
“…Graph-based domain adaptation is categorized based on the availability of cross-domain connections. For domain-exclusive graphs, approaches include SSL with GCNs (Shen and Chung, 2019) and domain-adversarial learning (Dai et al., 2019). For cross-domain connected graphs, co-regularized training (Ni et al., 2018) and joint-embedding (Xu et al., 2017) have been explored.…”
Section: Domain-adversarial
confidence: 99%
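Domain-adversarial learning of the kind cited above is typically implemented by inserting a gradient-reversal layer (GRL) between a shared encoder and a domain classifier: the classifier descends the domain loss while the encoder receives the sign-flipped gradient and so learns domain-confused embeddings. The sketch below is a minimal illustration of that mechanism, not the cited authors' code; the function names, the single linear "encoder", and the toy dimensions are all assumptions.

```python
import numpy as np

def grl_forward(z):
    """Gradient-reversal layer: identity on the forward pass."""
    return z

def grl_backward(upstream_grad, lam=1.0):
    """Backward pass: flip the sign of the incoming gradient, scaled by lam."""
    return -lam * upstream_grad

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))           # toy node features from both graphs
W = rng.normal(size=(3, 2))           # shared encoder weights (one linear layer here)
w = rng.normal(size=2)                # domain-classifier weights
d = np.array([0, 0, 0, 1, 1, 1.0])    # domain labels (source = 0, target = 1)

# Forward: encoder -> GRL -> logistic domain classifier.
Z = X @ W
p = 1.0 / (1.0 + np.exp(-(grl_forward(Z) @ w)))

# Backward: binary cross-entropy gradient with respect to the logits.
g = (p - d) / len(d)
grad_w = Z.T @ g                                  # classifier minimizes the domain loss
grad_Z = grl_backward(g[:, None] * w[None, :])    # encoder sees the reversed gradient
grad_W = X.T @ grad_Z                             # so the encoder maximizes domain confusion
```

Updating `w` with `-grad_w` and `W` with `-grad_W` then plays the two parts against each other: the classifier gets better at telling the graphs apart while the encoder removes exactly the signal it relies on.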
“…assumption [16]. There are a few recent studies addressing the challenging problem of cross-graph node classification, such as CDNE [6], ACDNE [7], AdaGCN [8], and UDA-GCN [9]. Although their performance is preferable to that of methods designed for single-graph learning, three open questions remain to be further explored.…”
Section: Introduction
confidence: 99%
“…Although these methods reach a certain level of local or global consistency, they still lack consideration of the global structural role of a node. • Existing studies align the source and target representations by minimizing domain classification error [7], [9] or distribution discrepancy metrics such as the Wasserstein distance [8]. Since the domain alignment is category-agnostic in these methods, the aligned node representations are possibly not classification-friendly.…”
Section: Introduction
confidence: 99%
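The excerpt above contrasts two alignment objectives: a domain classification loss and a distribution-discrepancy metric such as the Wasserstein distance. As a minimal sketch of the second, the code below computes the empirical 1-D Wasserstein-1 distance per embedding dimension and averages it; for equal-size samples this reduces to the mean absolute difference of order statistics. This per-dimension reduction is a simplification for illustration (AdaGCN's actual objective may differ), and the function name and toy embeddings are assumptions.

```python
import numpy as np

def wasserstein_1d(u, v):
    """Empirical 1-D Wasserstein-1 distance between two equal-size samples.

    For sorted samples of the same size, W1 equals the mean absolute
    difference of corresponding order statistics.
    """
    u, v = np.sort(np.asarray(u, float)), np.sort(np.asarray(v, float))
    assert u.shape == v.shape, "this reduction assumes equal sample sizes"
    return float(np.mean(np.abs(u - v)))

# Toy illustration: source and target "node embeddings" whose target
# distribution is shifted by 1.5 in every dimension.
rng = np.random.default_rng(0)
source = rng.normal(loc=0.0, scale=1.0, size=(256, 4))
target = rng.normal(loc=1.5, scale=1.0, size=(256, 4))

# Mean per-dimension W1; roughly recovers the 1.5 mean shift.
gap = np.mean([wasserstein_1d(source[:, i], target[:, i]) for i in range(4)])
```

In an adversarial setup, a critic estimates this distance and the encoder is trained to drive `gap` toward zero, pulling the two embedding distributions together.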