2017
DOI: 10.1007/978-3-319-55753-3_11
PPNE: Property Preserving Network Embedding

Cited by 54 publications (28 citation statements). References 19 publications.
“…Factorization strategies vary across different algorithms according to their objectives. For example, in the Modularity Maximization method [31], eigen decomposition is performed on the modularity matrix to learn community-indicative vertex representations [53]; in the TADW algorithm [7], inductive matrix factorization [54] is carried out on the vertex-context matrix to simultaneously preserve vertex textual features and network structure in the learning of vertex representations.…”
(Method taxonomy table embedded in the excerpt: Unsupervised: Social Dim. [31], [32], [33], DeepWalk [6], LINE [1], GraRep [26], DNGR [9], SDNE [19], node2vec [34], HOPE [35], APP [36], M-NMF [28], GraphGAN [37], struct2vec [38], GraphWave [39], SNS [40], DP [41], HARP [42], TADW [7], HSCA [8], pRBM [29], UPP-SNE [43], PPNE [44]. Semi-supervised: DDRW [45], MMDW [46], TLINE [47], GENE [48], SemiNE [49], TriDNR [50], LDE [51], DMF [8], Planetoid [52], LANE [30].)
Section: Categorization (mentioning)
confidence: 99%
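
The excerpt above describes learning community-indicative representations by eigen decomposition of the modularity matrix. As a rough illustration only, and not the cited implementation, the NumPy sketch below builds the standard modularity matrix B = A - d d^T / (2m) and keeps its top eigenvectors as vertex representations; the function name, embedding dimension, and example graph are hypothetical.

# Minimal sketch (assumption: standard Newman modularity matrix), not the
# implementation of the Modularity Maximization method [31].
import numpy as np

def modularity_embedding(adj, dim=2):
    """Top-`dim` eigenvectors of B = A - d d^T / (2m) as vertex representations."""
    degrees = adj.sum(axis=1)                  # vertex degrees d
    two_m = degrees.sum()                      # 2m = total degree
    B = adj - np.outer(degrees, degrees) / two_m
    eigvals, eigvecs = np.linalg.eigh(B)       # B is symmetric, so use eigh
    order = np.argsort(eigvals)[::-1]          # largest eigenvalues first
    return eigvecs[:, order[:dim]]             # n x dim matrix, one row per vertex

# Toy example: two triangles joined by a single edge (two obvious communities).
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
emb = modularity_embedding(A, dim=2)
print(emb.shape)  # (6, 2)

In this toy graph, the leading eigenvector of B separates the two triangles, which is exactly the community-indicative signal the excerpt refers to.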
“…DeepWalk [6], node2vec [34], APP [36], DDRW [45], GENE [48], TriDNR [50], UPP-SNE [43], struct2vec [38], SNS [40], PPNE [44], SemiNE [49] (relatively efficient; only capture local structure). Edge Modeling: LINE [1], TLINE [47], LDE [51], pRBM [29], GraphGAN [37] (efficient; only capture local structure). Deep Learning: DNGR [9], SDNE [19] (capture non-linearity; high time cost).…”
Section: Random Walk (mentioning)
confidence: 99%
“…For instance, SDNE (Wang, Cui, and Zhu 2016) uses the sparse adjacency vector of vertices as raw features for each vertex, and applies an autoencoder to extract short and condensed features for vertices under the supervision of edge existence. PPNE (Li et al. 2017b) directly learns vertex embeddings with supervised learning on positive samples (connected vertex pairs) and negative samples (disconnected vertex pairs), also preserving the inherent properties of vertices during the learning process.…”
Section: Introduction (mentioning)
confidence: 99%
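
The excerpt above summarizes PPNE as optimizing embeddings over connected (positive) and disconnected (negative) vertex pairs. The NumPy sketch below illustrates only that pair objective under an assumed sigmoid scoring and plain SGD; it is not the authors' PPNE implementation and omits the property-preserving constraints. The function name, learning rate, and negative-sampling scheme are assumptions made for the example.

# Minimal sketch of a positive/negative vertex-pair objective (assumed sigmoid
# scoring + SGD). PPNE's property-preserving terms are deliberately left out.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_pair_embeddings(edges, n_vertices, dim=8, epochs=100,
                          neg_per_pos=2, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    emb = rng.normal(scale=0.1, size=(n_vertices, dim))
    edge_set = {tuple(sorted(e)) for e in edges}

    for _ in range(epochs):
        for u, v in edges:
            # Positive pair: pull connected vertices together.
            eu, ev = emb[u].copy(), emb[v].copy()
            g = sigmoid(eu @ ev) - 1.0
            emb[u] -= lr * g * ev
            emb[v] -= lr * g * eu
            # Negative pairs: push u away from sampled disconnected vertices.
            for _ in range(neg_per_pos):
                w = int(rng.integers(n_vertices))
                if w == u or tuple(sorted((u, w))) in edge_set:
                    continue
                eu, ew = emb[u].copy(), emb[w].copy()
                g = sigmoid(eu @ ew)
                emb[u] -= lr * g * ew
                emb[w] -= lr * g * eu
    return emb

# Usage on a toy graph: two triangles bridged by the edge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
emb = train_pair_embeddings(edges, n_vertices=6)
print(emb.shape)  # (6, 8)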