Proceedings of the 27th ACM International Conference on Information and Knowledge Management 2018
DOI: 10.1145/3269206.3271696

Neural Relational Topic Models for Scientific Article Analysis

Cited by 45 publications (24 citation statements); references 19 publications.
“…Topic Models. Well-known topic models, e.g., probabilistic latent semantic analysis (pLSA) (Hofmann, 1999) and latent Dirichlet allocation (LDA) (Blei et al., 2003), have shown advantages in capturing effective semantic representations, and have proven beneficial to various downstream applications, such as summarization (Haghighi and Vanderwende, 2009) and recommendation (Zeng et al., 2018; Bai et al., 2018). For short text data, topic model variants have been proposed to reduce the effects of sparsity on topic modeling, such as the biterm topic model (BTM) (Yan et al., 2013) and LeadLDA (Li et al., 2016b).…”
Section: Related Work
confidence: 99%
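The LDA model referenced in this excerpt can be illustrated with a toy collapsed Gibbs sampler. This is a minimal stdlib-only sketch for intuition, not the implementation used in any of the cited works; the corpus and hyperparameters are made up for the example.

```python
# Toy collapsed Gibbs sampler for LDA (illustrative sketch only).
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics=2, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Return per-document topic counts after Gibbs sampling."""
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})  # vocabulary size
    # z[d][i]: current topic of word i in doc d.
    z = [[rng.randrange(n_topics) for _ in d] for d in docs]
    ndk = [[0] * n_topics for _ in docs]               # doc-topic counts
    nkw = [defaultdict(int) for _ in range(n_topics)]  # topic-word counts
    nk = [0] * n_topics                                # topic totals
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                # Resample the topic proportional to p(topic | everything else).
                weights = [(ndk[d][t] + alpha) * (nkw[t][w] + beta) / (nk[t] + V * beta)
                           for t in range(n_topics)]
                k = rng.choices(range(n_topics), weights=weights)[0]
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return ndk

docs = [["topic", "model", "latent"], ["neural", "network", "latent"],
        ["topic", "model", "text"], ["neural", "network", "text"]]
doc_topics = lda_gibbs(docs)
print(doc_topics)  # per-document topic-assignment counts
```

Each row of `doc_topics` sums to that document's length; normalizing a row gives the document's estimated topic mixture.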
“…Research collaborations in the scientific community have been extensively studied to understand team dynamics in social networks [2]. Co-authorship data provide a means to analyse research collaborations.…”
Section: Collaborations In a Co-authorship Network
confidence: 99%
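Turning co-authorship data into a collaboration graph, as this excerpt describes, can be sketched with the standard library alone. The author names and paper list below are hypothetical, purely for illustration.

```python
# Building a co-authorship graph from paper author lists (stdlib-only sketch;
# all author names are hypothetical).
from itertools import combinations
from collections import Counter

papers = [
    ["Alice", "Bob"],
    ["Alice", "Carol", "Dave"],
    ["Bob", "Carol"],
]

# Each unordered author pair on a paper contributes one collaboration edge.
edges = Counter()
for authors in papers:
    for a, b in combinations(sorted(authors), 2):
        edges[(a, b)] += 1

# Degree = number of distinct collaborators per author.
collaborators = Counter()
for a, b in edges:
    collaborators[a] += 1
    collaborators[b] += 1

print(dict(edges))
print(dict(collaborators))
```

Edge weights (repeated co-authorships) and degrees are the basic quantities most co-authorship analyses start from.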
“…PLANE (Le and Lauw 2014) extracts topics and 2D visualization coordinates simultaneously. NRTM (Bai et al 2018) extends VAE to document networks, outperforming another model RDL (Wang, Shi, and Yeung 2017) that extends DAE. These models capture only the first-order neighborhood.…”
Section: Related Work
confidence: 99%
“…We compare our models with several categories of baseline models as listed below. Following (Chen and Zaki 2017; Bai et al 2018), the activation functions for AE, DAE, CAE, KSAE, and NRTM are sigmoid, while those for VAE and KATE are tanh (hidden) and sigmoid (output), respectively. We use the validation set to choose the best hyperparameters.…”
confidence: 99%
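The activation-function choices this excerpt describes (sigmoid throughout for the AE family vs. tanh hidden / sigmoid output for the VAE family) can be shown with a one-hidden-layer autoencoder forward pass. This is a stdlib-only sketch with random, untrained weights, not any cited model.

```python
# Forward pass of a one-hidden-layer autoencoder with configurable activations
# (stdlib-only sketch; weights are random, not trained).
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, W1, W2, hidden_act, output_act):
    """Encode x to a hidden code, then decode back to a reconstruction."""
    h = [hidden_act(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return [output_act(sum(w * hi for w, hi in zip(row, h))) for row in W2]

rng = random.Random(0)
x = [1.0, 0.0, 1.0, 0.0]                                  # toy bag-of-words input
W1 = [[rng.uniform(-1, 1) for _ in x] for _ in range(2)]  # encoder: 4 -> 2
W2 = [[rng.uniform(-1, 1) for _ in range(2)] for _ in x]  # decoder: 2 -> 4

recon_ae = forward(x, W1, W2, sigmoid, sigmoid)      # sigmoid throughout (AE-style)
recon_vae = forward(x, W1, W2, math.tanh, sigmoid)   # tanh hidden, sigmoid output
print(recon_ae, recon_vae)
```

With a sigmoid output, both reconstructions stay in (0, 1), which is why it suits normalized bag-of-words targets; the tanh hidden layer only changes the code's range, not the output's.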