The World Wide Web Conference 2019
DOI: 10.1145/3308558.3313622

Tag2Vec: Learning Tag Representations in Tag Networks

Abstract: Network embedding is a method to learn low-dimensional representation vectors for nodes in complex networks. In real networks, nodes may have multiple tags, but existing methods ignore the abundant semantic and hierarchical information of tags. This information is useful to many network applications and is usually very stable. In this paper, we propose a tag representation learning model, Tag2Vec, which mixes nodes and tags into a hybrid network. Firstly, for tag networks, we define semantic distance as the proxim…
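
The truncated abstract still states the core mechanism: nodes and tags are mixed into one hybrid network and embedded into a shared space. As a rough illustration only (not the authors' implementation; the toy graph, uniform random walks, and the use of gensim's skip-gram are all assumptions), the hybrid-walk idea can be sketched as:

```python
import random
from collections import defaultdict
from gensim.models import Word2Vec

# hypothetical toy data: node -> neighbor nodes, node -> tags
edges = {"n1": ["n2", "n3"], "n2": ["n1"], "n3": ["n1"]}
tags = {"n1": ["t_ml"], "n2": ["t_ml", "t_nlp"], "n3": ["t_cv"]}

# hybrid adjacency: node-node edges plus node-tag edges in both directions
adj = defaultdict(list)
for u, vs in edges.items():
    adj[u].extend(vs)
for u, ts in tags.items():
    for t in ts:
        adj[u].append(t)
        adj[t].append(u)

def random_walk(start, length=10):
    walk = [start]
    for _ in range(length - 1):
        walk.append(random.choice(adj[walk[-1]]))
    return walk

# walk sequences mix node and tag "tokens"; skip-gram embeds both together
walks = [random_walk(v) for v in list(adj) for _ in range(20)]
model = Word2Vec(walks, vector_size=32, window=3, min_count=1, sg=1, epochs=5)
print(model.wv.most_similar("t_ml"))  # nearest nodes/tags in the shared space
```

Because tags appear in the same walk sequences as the nodes they label, downstream tasks can query tag-tag or tag-node similarity in a single embedding space.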

Cited by 17 publications (10 citation statements)
References: 29 publications
“…On the other hand, adversarial perturbations on model parameters demonstrate an entirely new paradigm of adversarial training and show great potential in promoting model effectiveness, especially for those solving transductive embedding problems. A reasonable explanation is urgently needed of why and where APP works well, so that we can determine whether APP can be generalized to other similar scenarios such as word embedding [25,30], tag embedding [45,47], table embedding [13,18,52] and broader domains.…”
Section: Introduction
confidence: 99%
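The perturbation the statement calls APP targets the embedding parameters themselves rather than the inputs. A minimal sketch of that pattern, assuming a toy skip-gram objective in PyTorch (the model, loss form, and epsilon are illustrative assumptions, not taken from the cited papers):

```python
import torch

emb = torch.nn.Embedding(100, 32)
opt = torch.optim.Adam(emb.parameters(), lr=1e-3)

def skipgram_loss(table, centers, contexts):
    # toy skip-gram style objective computed from an explicit weight table
    c, x = table[centers], table[contexts]
    return -torch.log(torch.sigmoid((c * x).sum(-1))).mean()

centers = torch.randint(0, 100, (64,))
contexts = torch.randint(0, 100, (64,))

loss = skipgram_loss(emb.weight, centers, contexts)
# gradient w.r.t. the parameters, graph retained so we can reuse it below
grad, = torch.autograd.grad(loss, emb.weight, retain_graph=True)
eps = 0.5  # assumed perturbation radius
pert = eps * grad / (grad.norm() + 1e-12)
# adversarial pass: same batch, but through the perturbed parameter table
adv_loss = skipgram_loss(emb.weight + pert.detach(), centers, contexts)

opt.zero_grad()
(loss + adv_loss).backward()
opt.step()
```

The distinguishing feature is that the perturbation is added to the embedding table (the parameters), not to any input features, which is what makes the scheme applicable to transductive embedding models that have no separate input layer.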
“…Comparison of AS and NM on Cora, Wiki and Citeseer, revealing the similarity of APP and Momentum in practice. [Garbled extraction residue of two result tables: a Citeseer AS/NM comparison and Table 3, accuracy (%) of multi-class classification on Cora for LINE, node2vec, GF, GraRep, GraphSage, AdvTNE, Cleora and EATNE; the numeric columns are not recoverable.]…”
confidence: 99%
“…found in the social network of animals, and category hierarchy is widely used in many e-commerce sites (e.g., Alibaba, Amazon and Rakuten Ichiba) [11,12,13]. "Women's Fashion → Tops → Sweaters → Long-sleeved knit → Crew neck" is an example of such an organization.…”
Section: Introduction
confidence: 99%
“…They mainly rely on traditional information retrieval (IR) techniques such as keyword matching [13] or a combination of text similarity and Application Programming Interface (API) matching [14]. Recently, many works have applied deep learning methods [3,8,18,20,22] to code search [2, 4, 5, 7, 10-12, 17, 19, 21, 23, 24], using neural networks to capture deep semantic correlations between natural language queries and code snippets, and have achieved promising performance improvements. These methods employ various types of model structures, including sequential models [2,4,5,7,10,17,21,23,24], graph models [6,12], and transformers [4].…”
Section: Introduction
confidence: 99%
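The deep-learning code search the statement summarizes typically follows a bi-encoder pattern: encode the query and each code snippet separately, then rank snippets by similarity. A minimal sketch under assumed toy data (untrained mean-pooled embeddings; the vocabulary and snippets are hypothetical):

```python
import torch

# hypothetical vocabulary shared by query and code tokens
vocab = {"read": 0, "file": 1, "open": 2, "path": 3, "sort": 4, "list": 5}
emb = torch.nn.Embedding(len(vocab), 16)

def encode(tokens):
    # mean-pooled bag of embeddings as a stand-in for a real encoder
    ids = torch.tensor([vocab[t] for t in tokens])
    return emb(ids).mean(dim=0)

query = encode(["read", "file"])
snippets = {
    "open(path).read()": ["open", "path", "read"],
    "sorted(list)": ["sort", "list"],
}
scores = {s: torch.cosine_similarity(query, encode(t), dim=0).item()
          for s, t in snippets.items()}
print(max(scores, key=scores.get))  # highest-scoring snippet (untrained)
```

A trained system would replace the mean-pooled bag of embeddings with the sequential, graph, or transformer encoders the statement enumerates; the retrieval step (similarity ranking in a shared space) stays the same.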