Proceedings of the 2018 World Wide Web Conference (WWW '18), 2018
DOI: 10.1145/3178876.3186113

Co-Regularized Deep Multi-Network Embedding

Abstract: Network embedding aims to learn a low-dimensional vector representation for each node in social and information networks, with the constraint to preserve network structures. Most existing methods focus on single network embedding, ignoring the relationship between multiple networks. In many real-world applications, however, multiple networks may contain complementary information, which can lead to further refined node embeddings. Thus, in this paper, we propose a novel multi-network embedding method, DMNE. …
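For intuition, the following is a minimal sketch of the co-regularization idea described in the abstract, not the authors' DMNE implementation (which builds on deep models). It assumes two networks given as dense float adjacency tensors A1 and A2 plus a list of cross-network anchor pairs; each network gets its own embedding matrix trained to reconstruct its adjacency, and a co-regularization term pulls anchor-linked embeddings together.

```python
# Minimal sketch of the co-regularized multi-network embedding idea (illustrative,
# not the authors' DMNE code, which builds on deep autoencoder-style models).
# Assumes: A1, A2 are dense (n1 x n1) and (n2 x n2) float adjacency tensors, and
# `pairs` is a list of (node_in_G1, node_in_G2) anchor links across the two networks.
import torch

def embed_two_networks(A1, A2, pairs, dim=16, lam=1.0, epochs=200, lr=0.01):
    n1, n2 = A1.shape[0], A2.shape[0]
    Z1 = torch.randn(n1, dim, requires_grad=True)  # embeddings for network 1
    Z2 = torch.randn(n2, dim, requires_grad=True)  # embeddings for network 2
    opt = torch.optim.Adam([Z1, Z2], lr=lr)
    i1 = torch.tensor([p[0] for p in pairs])
    i2 = torch.tensor([p[1] for p in pairs])
    for _ in range(epochs):
        opt.zero_grad()
        # Structure preservation: reconstruct each adjacency from embedding inner products.
        rec = torch.norm(Z1 @ Z1.T - A1) ** 2 + torch.norm(Z2 @ Z2.T - A2) ** 2
        # Co-regularization: anchor-linked nodes should receive similar embeddings.
        co = torch.norm(Z1[i1] - Z2[i2]) ** 2
        (rec + lam * co).backward()
        opt.step()
    return Z1.detach(), Z2.detach()
```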

Cited by 56 publications (40 citation statements)
References 28 publications (32 reference statements)
“…$A_{ij}\,\lVert \hat{y}_i - \hat{y}_j \rVert^2$, which directly regulates the predictions of connected nodes to be similar [8], [9], [22], [23]. The assumption could also be implicitly implemented by iteratively propagating node embeddings through the graph so that connected nodes obtain close embeddings and are predicted similarly [3], [7], [10], [11]. Here, $\lambda$ is a hyperparameter to balance the two terms.…”
Section: Graph-based Learning
Mentioning confidence: 99%
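The quoted term is the standard graph (Laplacian) regularizer. A small numerical sketch follows; the function names and the toy adjacency matrix are illustrative, and the Laplacian identity assumes a symmetric adjacency matrix.

```python
# Toy illustration of the quoted regularizer lam * sum_ij A_ij * ||y_hat_i - y_hat_j||^2.
# For a symmetric adjacency A this equals 2 * lam * trace(Y^T L Y) with L = D - A.
import numpy as np

def graph_regularizer(A, Y_hat, lam=0.5):
    """A: (n, n) adjacency; Y_hat: (n, c) node predictions; lam: balancing weight."""
    diff = Y_hat[:, None, :] - Y_hat[None, :, :]   # pairwise prediction differences
    return lam * (A * (diff ** 2).sum(-1)).sum()

def graph_regularizer_laplacian(A, Y_hat, lam=0.5):
    L = np.diag(A.sum(axis=1)) - A                 # unnormalized graph Laplacian
    return lam * 2.0 * np.trace(Y_hat.T @ L @ Y_hat)

A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
Y_hat = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9]])
# Both forms give the same value.
print(graph_regularizer(A, Y_hat), graph_regularizer_laplacian(A, Y_hat))
```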
“…HAN (Wang et al. 2019) employed a graph attention network (Veličković et al. 2017) on each graph, and then applied an attention mechanism to merge the node representations learned from each graph by considering the importance of each graph. However, the existing methods either require labels for training (Wang et al. 2019; Qu et al. 2017; Schlichtkrull et al. 2018), or overlook the node attributes (Liu et al. 2017; Xu et al. 2017; Li et al. 2018; Shi et al. 2018; Zhang et al. 2018a; Ni et al. 2018; Chu et al. 2019). Most recently, Ma et al. (2019) proposed a graph convolutional network (GCN) based method called mGCN, which is not only unsupervised, but also naturally incorporates the node attributes by using GCNs.…”
Section: Related Work
Mentioning confidence: 99%
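To illustrate the attention-based merging the quote attributes to HAN, here is a hedged sketch of graph-level attention over node representations learned on K graphs. The module name, hidden size, and scoring rule are illustrative assumptions, not the paper's code.

```python
# Hedged sketch of graph-level (semantic-style) attention for merging node
# representations learned on K graphs; module name and sizes are illustrative.
import torch
import torch.nn as nn

class GraphLevelAttention(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, 1, bias=False))

    def forward(self, per_graph_embs):
        # per_graph_embs: (K, n, dim) node embeddings, one slice per graph.
        graph_scores = self.score(per_graph_embs).mean(dim=1)        # (K, 1)
        weights = torch.softmax(graph_scores, dim=0)                 # importance of each graph
        return (weights.unsqueeze(-1) * per_graph_embs).sum(dim=0)   # (n, dim) fused embeddings

fused = GraphLevelAttention(dim=32)(torch.randn(3, 100, 32))  # 3 graphs, 100 nodes
```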
“…More recently, there is another line of HIN embedding study: decomposition-based methods. These methods learn node embeddings by decomposing HIN semantics into projected metrics with different relation spaces (such as PME [8], HEER [30] and RHINE [20]), or by decomposing HINs into cross-network relationships in different sub-networks (such as AMN [26], DME [23] and AspEm [29]). However, these methods are not suitable for HINs that have a large number of types, since the metric projection or sub-network decomposition is conducted at the type-pair-specific level, leading to exponential growth of spaces with respect to the number of types.…”
Section: Related Work
Mentioning confidence: 99%
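As a rough illustration of why type-pair-specific decomposition scales poorly, the sketch below allocates one projection space per ordered pair of node types; the class, scoring rule, and toy types are hypothetical and not taken from any of the cited methods.

```python
# Hypothetical sketch (not from any cited method) of type-pair-specific projection:
# every ordered pair of node types gets its own projection space, so the number of
# spaces grows rapidly with the number of types in the HIN.
import itertools
import torch
import torch.nn as nn

class TypePairProjections(nn.Module):
    def __init__(self, node_types, dim):
        super().__init__()
        # One projection matrix per (source type, target type) pair.
        self.proj = nn.ParameterDict({
            f"{s}->{t}": nn.Parameter(torch.randn(dim, dim))
            for s, t in itertools.product(node_types, node_types)
        })

    def score(self, z_u, z_v, src_type, dst_type):
        # Project both endpoints into the relation-specific space, then compare.
        W = self.proj[f"{src_type}->{dst_type}"]
        return -torch.norm(z_u @ W - z_v @ W)

model = TypePairProjections(["author", "paper", "venue"], dim=8)
print(len(model.proj))  # already 9 projection spaces for only 3 node types
```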