2017
DOI: 10.1007/978-3-319-55753-3_9

Semi-Supervised Network Embedding

Cited by 26 publications (20 citation statements)
References 18 publications
“…Factorization strategies vary across different algorithms according to their objectives. [Table: categorization of network embedding methods. Unsupervised: Social Dim. [31], [32], [33], DeepWalk [6], LINE [1], GraRep [26], DNGR [9], SDNE [19], node2vec [34], HOPE [35], APP [36], M-NMF [28], GraphGAN [37], struct2vec [38], GraphWave [39], SNS [40], DP [41], HARP [42], TADW [7], HSCA [8], pRBM [29], UPP-SNE [43], PPNE [44]. Semi-supervised: DDRW [45], MMDW [46], TLINE [47], GENE [48], SemiNE [49], TriDNR [50], LDE [51], DMF [8], Planetoid [52], LANE [30].] For example, in the Modularity Maximization method [31], eigen decomposition is performed on the modularity matrix to learn community-indicative vertex representations [53]; in the TADW algorithm [7], inductive matrix factorization [54] is carried out on the vertex-context matrix to simultaneously preserve vertex textual features and network structure in the learning of vertex representations.…”
Section: Categorization
mentioning
confidence: 99%
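The statement above names two concrete factorization objectives. As a minimal illustration of the first, the sketch below computes community-indicative representations from the top eigenvectors of the modularity matrix, assuming an undirected, unweighted adjacency matrix; the function name and the choice of the k largest eigenvalues are illustrative, not taken from [31].

```python
import numpy as np

def modularity_embedding(A, k):
    """Top-k eigenvectors of the modularity matrix as vertex representations."""
    d = A.sum(axis=1)               # vertex degrees
    two_m = d.sum()                 # 2 * number of edges
    B = A - np.outer(d, d) / two_m  # modularity matrix B = A - d d^T / (2m)
    vals, vecs = np.linalg.eigh(B)  # symmetric eigen decomposition, ascending eigenvalues
    return vecs[:, -k:]             # eigenvectors of the k largest eigenvalues
```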
“…As extensions of the structure-only preserving versions, algorithms like DDRW [45], GENE [48] and SemiNE [49] incorporate vertex labels with network structure to harness representation learning, PPNE [44] imports vertex attributes, and TriDNR [50] enforces the model with both vertex labels and attributes. As these models can be trained in an online manner, they have great potential to scale up.…”
Section: Categorization
mentioning
confidence: 99%
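The label-augmented methods listed above share a common shape: an unsupervised structure-preserving loss plus a supervised term over the small labelled vertex set. A generic sketch of that joint objective (the symbols λ, ℓ, and W are placeholders, not any single paper's notation) is

\min_{\Phi,\, W} \; \mathcal{L}_{\text{struct}}(\Phi) \;+\; \lambda \sum_{v \in V_L} \ell\big(y_v,\; W\,\Phi(v)\big),

where Φ(v) is the embedding of vertex v, V_L is the set of labelled vertices, y_v is the label of v, and ℓ is a classification loss such as hinge or cross-entropy.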
“…Besides the network topology information, several works have focused on incorporating side information as a complement to improve the quality of node embeddings [11,15,17,23,28,33]. Tu et al. [28] extended the DeepWalk model with a max-margin formulation to incorporate the few available labels.…”
Section: Related Work
mentioning
confidence: 99%
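The max-margin component mentioned above can be made concrete as a multi-class hinge term computed only over labelled vertices. The sketch below is a generic illustration of that supervised term; the names and exact margin formulation are assumptions for clarity, not taken from [28].

```python
import numpy as np

def max_margin_term(Z_lab, y_lab, W, margin=1.0):
    """Multi-class hinge loss over labelled vertices (illustrative).

    Z_lab: (n_l, d) embeddings of labelled vertices, y_lab: (n_l,) class ids,
    W: (d, C) class weight matrix.
    """
    scores = Z_lab @ W                                    # (n_l, C) class scores
    true = scores[np.arange(len(y_lab)), y_lab]           # score of the true class
    margins = np.maximum(0.0, scores - true[:, None] + margin)
    margins[np.arange(len(y_lab)), y_lab] = 0.0           # no penalty on the true class
    return margins.sum(axis=1).mean()
```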
“…Tu et al. [28] extended the DeepWalk model with a max-margin formulation to incorporate the few available labels. Li et al. [15] designed a multi-layer perceptron (MLP) based model to perform semi-supervised network embedding. Yang et al. proposed TADW [33], a matrix-factorization-based model that fuses node attributes (text features) into the embedding process.…”
Section: Related Work
mentioning
confidence: 99%
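For the MLP-based recipe mentioned above, a hedged sketch is shown below: a small encoder maps per-vertex input features to embeddings and is trained with a joint edge-reconstruction plus label-classification loss. The layer sizes, the dot-product edge score, and the loss weighting are assumptions for illustration, not the architecture of [15].

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemiSupervisedEmbedder(nn.Module):
    """MLP encoder trained with a joint structural + label loss (illustrative)."""

    def __init__(self, in_dim, hid_dim, emb_dim, n_classes):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, emb_dim),
        )
        self.classifier = nn.Linear(emb_dim, n_classes)

    def forward(self, x):
        return self.encoder(x)  # per-vertex embeddings

    def loss(self, x, pos_edges, neg_edges, labelled_idx, labels, lam=1.0):
        z = self.encoder(x)
        # structural term: real edges should score high, sampled non-edges low
        pos = (z[pos_edges[0]] * z[pos_edges[1]]).sum(dim=-1)
        neg = (z[neg_edges[0]] * z[neg_edges[1]]).sum(dim=-1)
        struct = F.binary_cross_entropy_with_logits(
            torch.cat([pos, neg]),
            torch.cat([torch.ones_like(pos), torch.zeros_like(neg)]),
        )
        # supervised term: cross-entropy on the few labelled vertices only
        sup = F.cross_entropy(self.classifier(z[labelled_idx]), labels)
        return struct + lam * sup
```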