Proceedings of the 2018 SIAM International Conference on Data Mining
DOI: 10.1137/1.9781611975321.16
AspEm: Embedding Learning by Aspects in Heterogeneous Information Networks

et al.

Abstract: Heterogeneous information networks (HINs) are ubiquitous in real-world applications. Due to the heterogeneity in HINs, the typed edges may not fully align with each other. In order to capture the semantic subtlety, we propose the concept of aspects with each aspect being a unit representing one underlying semantic facet. Meanwhile, network embedding has emerged as a powerful method for learning network representation, where the learned embedding can be used as features in various downstream applications. There…
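The aspect idea in the abstract — splitting a heterogeneous network into semantically compatible subnetworks, embedding each separately, and combining the results per node — can be sketched as follows. This is a minimal illustrative toy, not the paper's actual algorithm: the per-aspect "embedding" here is a simple neighbor-averaging update standing in for a real skip-gram-style objective, and all function names and the example aspects are hypothetical.

```python
import numpy as np

def embed_aspect(edges, nodes, dim, rng):
    """Toy per-aspect embedding: random init, then nudge connected
    nodes toward each other (stands in for a real embedding objective)."""
    emb = {n: rng.normal(size=dim) for n in nodes}
    for _ in range(50):
        for u, v in edges:
            delta = 0.1 * (emb[v] - emb[u])
            emb[u] += delta  # pull u toward v
            emb[v] -= delta  # pull v toward u
    return emb

def aspem_sketch(aspects, dim=4, seed=0):
    """aspects: dict mapping aspect name -> list of (u, v) edges.
    Each aspect is embedded independently; a node's final vector is
    the concatenation of its per-aspect embeddings (zeros where the
    node does not appear in an aspect)."""
    rng = np.random.default_rng(seed)
    per_aspect = {}
    for name, edges in aspects.items():
        nodes = {n for e in edges for n in e}
        per_aspect[name] = embed_aspect(edges, nodes, dim, rng)
    all_nodes = {n for emb in per_aspect.values() for n in emb}
    final = {}
    for n in sorted(all_nodes):
        parts = [per_aspect[a].get(n, np.zeros(dim))
                 for a in sorted(per_aspect)]
        final[n] = np.concatenate(parts)
    return final

# Hypothetical bibliographic HIN with two aspects.
aspects = {
    "author-paper": [("a1", "p1"), ("a2", "p1")],
    "paper-venue": [("p1", "v1")],
}
emb = aspem_sketch(aspects)
print(emb["p1"].shape)  # one 4-dim block per aspect
```

Node `p1` participates in both aspects, so its final representation concatenates two 4-dimensional blocks; nodes appearing in only one aspect are zero-padded in the other block.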

Cited by 88 publications (68 citation statements)
References 31 publications
“…Most of the efforts are designed for homogeneous networks and are inadequate to handle heterogeneous networks. Heterogeneous network representation learning methods include metapath2vec [6], metagraph2vec [37], PTE [28], HNE [4], LANE [11], and ASPEM [23]. Except metapath2vec and metagraph2vec, none of these methods is aligned to our problem of generic unsupervised task-independent network embedding learning preserving the heterogeneity in structure and semantics, as discussed in Section I.…”
Section: Related Work
confidence: 99%
“…A more recent work proposed metagraph2vec [37], which leverages metagraphs in order to capture richer structural contexts and semantics between distant nodes. Other heterogeneous network embedding methods include PTE [28], a semi-supervised representation learning method for text data; HNE [4], which learns a representation for each modality of the network separately and then unifies them into a common space using linear transformations; LANE [11], which generates embeddings for attributed networks; and ASPEM [23], which captures the incompatibility in heterogeneous networks by decomposing the input graph into multiple aspects and learning embeddings independently for each aspect. None of PTE, HNE, LANE, or ASPEM is aligned to the generic task of task-independent heterogeneous network embedding learning.…”
Section: Introduction
confidence: 99%
“…For DBLP, we use the manual class labels of authors from four research areas, i.e., database, data mining, machine learning and information retrieval, provided by [1]. For IMDB, we follow [17] to use all 23 available genres such as drama, comedy, romance, thriller, crime and action as class labels. For Yelp, we extract six sets of businesses based on some available attributes, i.e., good for kids, take out, outdoor seating, good for groups, delivery and reservation.…”
Section: Experimental Settings
confidence: 99%
“…As a large number of social and information networks are heterogeneous in nature, i.e., nodes and edges are of multiple types, heterogeneous network embedding methods have recently garnered attention [29][30][31][35]. They learn node embeddings by exploiting various types of relationships among nodes and the network structure, and use them for general downstream tasks, such as node classification [7,26], link prediction [9,27], and clustering [3,15].…”
Section: Introduction
confidence: 99%