Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d18-1096

Coherence-Aware Neural Topic Modeling

Abstract: Topic models are evaluated based on their ability to describe documents well (i.e., low perplexity) and to produce topics that carry coherent semantic meaning. In topic modeling so far, perplexity is a direct optimization target. However, topic coherence, owing to its challenging computation, is not optimized for and is only evaluated after training. In this work, under a neural variational inference framework, we propose methods to incorporate a topic coherence objective into the training process. We demonstra…
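As a reference for the coherence objective the abstract describes, the sketch below computes the standard NPMI coherence of each topic's top words from document co-occurrence counts. This is an assumed, illustrative implementation of the evaluation metric the cited works track, not the paper's differentiable training objective; the function and variable names are hypothetical.

```python
import numpy as np

def npmi_coherence(topic_top_words, reference_docs, eps=1e-12):
    """Average NPMI over all word pairs in each topic's top-N words.

    topic_top_words: list of lists of words (top-N words per topic)
    reference_docs:  list of sets, each the unique tokens of one reference document
    """
    n_docs = len(reference_docs)
    topic_scores = []
    for words in topic_top_words:
        pair_scores = []
        for i in range(len(words)):
            for j in range(i + 1, len(words)):
                wi, wj = words[i], words[j]
                p_i = sum(wi in d for d in reference_docs) / n_docs
                p_j = sum(wj in d for d in reference_docs) / n_docs
                p_ij = sum((wi in d) and (wj in d) for d in reference_docs) / n_docs
                if p_ij == 0.0:
                    pair_scores.append(-1.0)  # convention: a never co-occurring pair gets the minimum NPMI
                else:
                    pmi = np.log(p_ij / (p_i * p_j + eps))
                    pair_scores.append(pmi / (-np.log(p_ij) + eps))
        topic_scores.append(float(np.mean(pair_scores)))
    return float(np.mean(topic_scores))
```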

Cited by 56 publications (47 citation statements) · References 11 publications (16 reference statements)
“…The experiments in Ding et al. (2018) provide some insight into this behaviour. They find that when training neural topic models, model fit and NPMI initially tend to improve on each epoch.…”
Section: NPMI Versus NLL As Stopping Criteria
confidence: 95%
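The behaviour this excerpt describes, likelihood and NPMI improving together at first and then diverging, is what motivates monitoring coherence during training. The sketch below is a hypothetical early-stopping loop that checkpoints on validation NPMI rather than NLL; it assumes a PyTorch-style model, and `run_one_epoch` and `evaluate_npmi` are assumed helpers, not functions from the paper or any library.

```python
import copy

def train_with_npmi_stopping(model, run_one_epoch, evaluate_npmi,
                             max_epochs=200, patience=10):
    """Train a neural topic model, keeping the checkpoint with the best
    validation NPMI instead of the lowest negative log-likelihood."""
    best_npmi = float("-inf")
    best_state = None
    epochs_without_improvement = 0

    for epoch in range(max_epochs):
        nll = run_one_epoch(model)     # optimise the usual NLL/ELBO objective
        npmi = evaluate_npmi(model)    # coherence of the current top words

        if npmi > best_npmi:
            best_npmi = npmi
            best_state = copy.deepcopy(model.state_dict())
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break                  # NLL may still be falling, but coherence has peaked

    if best_state is not None:
        model.load_state_dict(best_state)
    return model, best_npmi
```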
“…Since W-LDA is not based on variational inference, we cannot compute the ELBO-based perplexity as a performance metric as in (Miao et al., 2016; Srivastava and Sutton, 2017; Ding et al., 2018). To compare the predictive performance of the latent document-topic vectors across all models, we use document classification accuracy instead.…”
Section: Document Classification
confidence: 99%
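For context on the evaluation swap mentioned in this excerpt, the snippet below sketches the usual recipe for measuring document classification accuracy from document-topic vectors: fit a simple linear classifier on the inferred topic proportions and score it on held-out labels. The helper name and the choice of logistic regression are assumptions for illustration, not a specification from W-LDA or the cited papers.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def topic_vector_classification_accuracy(theta_train, y_train, theta_test, y_test):
    """Train a linear classifier on document-topic vectors (theta) and report
    held-out accuracy, a common stand-in when perplexity is unavailable."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(theta_train, y_train)           # theta_*: (n_docs, n_topics) topic proportions
    return accuracy_score(y_test, clf.predict(theta_test))
```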
“…We select three experimental baseline models that represent diverse styles of neural topic modeling. Each achieves the highest NPMI on the majority of its respective datasets, as well as a considerable improvement over previous neural and non-neural topic models (such as Srivastava and Sutton, 2017; Miao et al., 2016; Ding et al., 2018). All our baselines are roughly contemporaneous with one another and had not been compared head-to-head prior to our work.…”
Section: Experimental Baselines
confidence: 83%