Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence 2020
DOI: 10.24963/ijcai.2020/170
Discrete Embedding for Latent Networks

Abstract: Discrete network embedding emerged recently as a new direction of network representation learning. Compared with traditional network embedding models, discrete network embedding aims to compress model size and accelerate model inference by learning a set of short binary codes for network vertices. However, existing discrete network embedding methods usually assume that the network structures (e.g., edge weights) are readily available. In real-world scenarios such as social networks, sometimes it is imp…
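The compression and speed-up described in the abstract come from replacing floating-point vectors with short binary codes. The following is a minimal sketch of that idea only, not DELN's actual learning algorithm (which is not reproduced on this page); the embeddings here are random stand-ins.

```python
import numpy as np

# Illustrative stand-in: 1000 vertices with 64-dim continuous embeddings.
# A real method such as DELN learns the binary codes directly, end to end.
rng = np.random.default_rng(0)
embeddings = rng.standard_normal((1000, 64))

# Binarize: one bit per dimension, so each vertex needs 8 bytes
# instead of 64 floats.
codes = (embeddings > 0).astype(np.uint8)
packed = np.packbits(codes, axis=1)

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Hamming distance between two packed binary codes."""
    return int(np.unpackbits(a ^ b).sum())

# Inference reduces to cheap bitwise operations rather than
# floating-point dot products.
print(hamming(packed[0], packed[1]))
```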

Cited by 7 publications (3 citation statements). References 11 publications.
“…Neighborhood aggregation algorithms are based on the principle that each vertex receives messages from its neighbors and uses them to update its representation [44]. H. Yang et al. [16, 45] introduced the concept of a Weisfeiler-Lehman matrix to capture the interplay between a node's structure and its attributes. Xuewei Ma et al.…”
Section: A Network Embedding Based on Weisfeiler-Lehman Subtree Kernel (mentioning)
confidence: 99%
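The neighborhood-aggregation principle quoted above can be sketched generically (this is plain mean-pooling message passing, not the cited papers' Weisfeiler-Lehman formulation): each vertex averages its neighbors' current representations and mixes the result with its own.

```python
import numpy as np

def aggregate_step(A: np.ndarray, H: np.ndarray) -> np.ndarray:
    """One round of mean-neighbor message passing.

    A: (n, n) binary adjacency matrix; H: (n, d) vertex representations.
    Each vertex receives its neighbors' vectors and averages them.
    """
    deg = A.sum(axis=1, keepdims=True).clip(min=1)  # guard isolated vertices
    messages = (A @ H) / deg                        # mean over neighbors
    return 0.5 * H + 0.5 * messages                 # mix self and neighborhood

# Toy path graph 0-1-2-3 with one-hot initial features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.eye(4)
print(aggregate_step(A, H))
```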
“…• Discrete Embedding for Latent Networks (DELN) [16] is an end-to-end discrete network embedding method that learns binary representations.…”
Section: Baselines (mentioning)
confidence: 99%
“…However, the tanh function can produce undesirable relaxation errors [10] and degrade the quality of the learned hash codes. Although some works [3, 16] have proposed an alternating algorithm to optimize hash codes, such an algorithm is hard to integrate into a deep neural network in an end-to-end fashion.…”
Section: Introduction (mentioning)
confidence: 99%
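The relaxation error this citing paper points to can be made concrete with a small illustration (generic, not tied to any particular model): training optimizes a tanh output in (-1, 1), but inference rounds it to {-1, +1} with sign, and the gap between the two is exactly the error the quote refers to.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(1000)  # pre-activations from a hypothetical encoder

relaxed = np.tanh(z)   # training-time surrogate: values strictly in (-1, 1)
binary = np.sign(z)    # inference-time hash bits: exactly -1 or +1

# The quantization gap is largest for pre-activations near zero,
# where tanh(z) is close to 0 but sign(z) jumps to +/-1.
gap = np.abs(relaxed - binary)
print(f"mean gap: {gap.mean():.3f}, worst gap: {gap.max():.3f}")
```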