Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence 2018
DOI: 10.24963/ijcai.2018/467

Deep Attributed Network Embedding

Abstract: Network embedding has attracted a surge of attention in recent years. It aims to learn low-dimensional representations for the nodes of a network, which benefit downstream tasks such as node classification and link prediction. Most existing approaches learn node representations from the topological structure alone, yet in many real-world applications nodes are also associated with rich attributes. It is therefore important and necessary to learn node representations based on both the topological structure…
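To make the task concrete, below is a minimal, illustrative sketch of attributed network embedding in its simplest linear form: factorize the concatenation of the adjacency matrix and the node-attribute matrix to obtain one low-dimensional vector per node. This is a stand-in for the general idea only, not the DANE architecture; all names and the toy data are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD

# Toy attributed network: 6 nodes split into two communities.
A = np.array([[0, 1, 1, 0, 0, 0],   # adjacency (topological structure)
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
X = np.array([[1, 0], [1, 0], [1, 0],                 # node attributes, aligned
              [0, 1], [0, 1], [0, 1]], dtype=float)   # with the two communities

# Embed each node using BOTH structure and attributes by factorizing [A | X].
# A linear stand-in for the deep non-linear encoder the paper proposes.
svd = TruncatedSVD(n_components=2, random_state=0)
Z = svd.fit_transform(np.hstack([A, X]))   # Z[i] is node i's 2-d representation

print(Z.round(3))   # nearby rows = nodes similar in structure and attributes
```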

Cited by 209 publications (170 citation statements: 2 supporting, 168 mentioning, 0 contrasting)
References 12 publications
“…As neither AN2VEC-0 nor AN2VEC-16 exhibited over-fitting, this behaviour is surprising and warrants further exploration beyond the scope of this paper (in particular, it may be specific to the link prediction task). Nonetheless, the higher performance of AN2VEC-S-0 and AN2VEC-S-16 over the vanilla VGAE on Cora and CiteSeer confirms that including feature reconstruction in the constraints of node embeddings can increase link prediction performance when features and structure are not independent, and is consistent with [33,35,34]. An illustration of the embeddings produced by AN2VEC-S-16 on Cora is shown in Figure 4.…”
Section: Methods (supporting)
confidence: 71%
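For readers who want to see what "including feature reconstruction in the constraints of node embeddings" means operationally, here is a hedged sketch of such a joint objective. It is not the AN2VEC or VGAE code; the inner-product edge decoder, the linear feature decoder, and all names (Z, W_x, lam) are illustrative assumptions.

```python
import numpy as np

def joint_loss(Z, A, X, W_x, lam=1.0):
    """Toy joint objective: reconstruct both edges (A) and features (X)
    from node embeddings Z; lam weights the feature-reconstruction term."""
    # Edge reconstruction: sigmoid inner-product decoder, VGAE-style.
    A_hat = 1.0 / (1.0 + np.exp(-Z @ Z.T))
    eps = 1e-9
    loss_edges = -np.mean(A * np.log(A_hat + eps)
                          + (1 - A) * np.log(1 - A_hat + eps))
    # Feature reconstruction: a linear decoder stands in for the real one.
    loss_feats = np.mean((X - Z @ W_x) ** 2)
    return loss_edges + lam * loss_feats

rng = np.random.default_rng(0)
Z = rng.normal(size=(6, 2))                        # toy node embeddings
A = (rng.random((6, 6)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T                     # symmetric adjacency
X = rng.normal(size=(6, 4))                        # toy node attributes
W_x = rng.normal(size=(2, 4))                      # toy decoder weights
print(joint_loss(Z, A, X, W_x, lam=0.5))
```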
“…In our model, different dimensions of the generated embeddings can be dedicated to encoding feature information, network structure, or shared feature-network information separately. Unlike previous embedding methods dealing with features [33,34,35,36], this interaction model [37] allows us to explore the dependencies between the disentangled network and feature information by comparing the embedding reconstruction performance to a baseline case where no shared information is extracted. Using this method, we can identify an optimal reduced embedding, which indicates whether combined information coming from the structure and features is important, or whether their non-interacting combination is sufficient for reconstructing the featured network. In practice, as this method solves a reconstruction problem, it may give important insights about the combination of feature- and structure-driven mechanisms which determine the formation of a given network.…”
mentioning
confidence: 99%
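A minimal sketch of the dimension-partitioning idea this excerpt describes: each embedding is split into a structure-only block, a shared block, and a feature-only block, so the two decoders can read different slices. The block sizes and names are invented for illustration; this is not the cited model's code.

```python
import numpy as np

def split_embedding(z, d_struct, d_shared, d_feat):
    """Partition an embedding into structure-only, shared, and feature-only
    blocks (sizes are illustrative, not taken from the cited paper)."""
    assert z.shape[-1] == d_struct + d_shared + d_feat
    z_struct = z[..., :d_struct]
    z_shared = z[..., d_struct:d_struct + d_shared]
    z_feat = z[..., d_struct + d_shared:]
    return z_struct, z_shared, z_feat

z = np.random.default_rng(0).normal(size=16)            # one 16-d embedding
z_struct, z_shared, z_feat = split_embedding(z, 6, 4, 6)
# A structure decoder would read [z_struct, z_shared]; a feature decoder
# would read [z_shared, z_feat]. Setting d_shared = 0 gives the baseline
# where no shared information is extracted.
```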
“…The training part of the model is implemented with reference to the word2vec part of the TensorFlow tutorials 8 . The evaluation part uses metric functions from scikit-learn 9 , including roc_auc_score, f1_score, precision_recall_curve, and auc. Our model parameters are updated and optimized by stochastic gradient descent with the Adam update rule [17].…”
Section: A1 Implementation Notes (mentioning)
confidence: 99%
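The scikit-learn functions named in this excerpt exist under exactly these names, so an evaluation along the lines described might look like the following; the toy labels and scores, and the 0.5 threshold for F1, are assumptions for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score, precision_recall_curve, auc

# Hypothetical link-prediction output: 1 = edge exists, score = model confidence.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3])

roc_auc = roc_auc_score(y_true, y_score)
precision, recall, _ = precision_recall_curve(y_true, y_score)
pr_auc = auc(recall, precision)                        # area under the PR curve
f1 = f1_score(y_true, (y_score >= 0.5).astype(int))    # F1 at a 0.5 cutoff

print(f"ROC-AUC={roc_auc:.3f}  PR-AUC={pr_auc:.3f}  F1={f1:.3f}")
```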
“…In most studies, node attributes are only used for embedding initialization, but not during model training. DANE [4] proposes a deep non-linear architecture to preserve both aspects.…”
Section: Related Work (mentioning)
confidence: 99%
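To contrast with attribute-as-initialization approaches, here is a toy deep non-linear encoder over the concatenated structure and attribute matrices. It only illustrates the shape of such an architecture (untrained, random weights); it is not DANE itself, and every name and layer size is an assumption.

```python
import numpy as np

def deep_encoder(M, layer_sizes, rng):
    """Toy deep non-linear encoder: a stack of tanh layers with random,
    UNTRAINED weights, illustrating architecture shape only (not DANE)."""
    h = M
    for d_out in layer_sizes:
        W = rng.normal(scale=1.0 / np.sqrt(h.shape[1]), size=(h.shape[1], d_out))
        h = np.tanh(h @ W)   # the non-linearity lets attributes shape training
    return h

rng = np.random.default_rng(0)
A = (rng.random((6, 6)) < 0.4).astype(float)   # toy adjacency
A = np.triu(A, 1); A = A + A.T                 # symmetric, zero diagonal
X = rng.normal(size=(6, 4))                    # toy node attributes
Z = deep_encoder(np.hstack([A, X]), layer_sizes=[8, 2], rng=rng)
print(Z.shape)   # (6, 2): one 2-d embedding per node
```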