Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence 2021
DOI: 10.24963/ijcai.2021/638

Topic Modelling Meets Deep Neural Networks: A Survey

Abstract: Topic modelling has been a successful technique for text analysis for almost twenty years. When topic modelling met deep neural networks, there emerged a new and increasingly popular research area, neural topic models, with nearly a hundred models developed and a wide range of applications in neural language understanding such as text generation, summarisation and language models. There is a need to summarise research developments and discuss open problems and future directions. In this paper, we provide a focused…


Cited by 68 publications (43 citation statements).
References 7 publications (10 reference statements).
“…Like other TMs, the latent representation of the document is a distribution over K topics, θ_j ∈ Σ^K, each element of which denotes the proportion of one topic in this document. Previous work shows that the data likelihood can help regularize the optimization of a transport-based loss (Frogner et al., 2015; Zhao et al., 2021). To amortize the computation of θ_j and provide additional regularization, we introduce a regularized CT loss as…”
Section: Learning Topic Embeddings and Topic Proportions
Confidence: 99%
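
The statement above treats a document's latent representation as a point on the K-simplex and uses the data likelihood to regularize a transport-based objective. Below is a minimal PyTorch sketch of that combination; the shapes, layer sizes, and the `transport_cost` callable are illustrative assumptions, not the cited paper's actual implementation:

```python
import torch
import torch.nn.functional as F

K, V, H = 20, 5000, 256  # topics, vocabulary size, hidden size (assumed)

# Amortized inference network: BoW counts -> logits over K topics.
encoder = torch.nn.Sequential(
    torch.nn.Linear(V, H), torch.nn.ReLU(), torch.nn.Linear(H, K)
)
topic_word = torch.nn.Parameter(torch.randn(K, V))  # topic-word logits

def regularized_loss(bow, transport_cost, lam=1.0):
    # theta lies on the K-simplex: one topic-proportion vector per document.
    theta = F.softmax(encoder(bow), dim=-1)
    # Mix topic-word distributions into a per-document word distribution.
    word_dist = theta @ F.softmax(topic_word, dim=-1)
    # Data-likelihood term: multinomial log-likelihood of the BoW counts.
    nll = -(bow * (word_dist + 1e-10).log()).sum(dim=-1).mean()
    # Transport-style cost regularized by the data likelihood.
    return transport_cost(theta, bow) + lam * nll
```

Here `lam` trades the transport cost off against the likelihood term; the cited work's actual CT loss and regularization weighting differ in detail.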
“…With recent developments in auto-encoding VI, originating from variational autoencoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014), deep neural networks have been successfully used to develop neural topic models (NTMs) (Miao et al., 2016; Srivastava & Sutton, 2017; Burkhardt & Kramer, 2019; Zhang et al., 2018; Dieng et al., 2020; Zhao et al., 2021). The key advantage of NTMs is that approximate posterior inference can be carried out easily via a forward pass of the encoder network, without the expensive iterative inference schemes required per test observation in both Gibbs sampling and conventional VI.…”
Section: Introduction
Confidence: 99%
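
As this quote emphasizes, inference in a VAE-style NTM is a single encoder forward pass rather than per-document Gibbs sampling or iterative VI. A hedged sketch of such an encoder using the standard reparameterization trick (sizes and names are hypothetical):

```python
import torch
import torch.nn.functional as F

class NTMEncoder(torch.nn.Module):
    """One forward pass yields a document's approximate posterior,
    so no iterative inference is needed per test observation."""
    def __init__(self, vocab_size=5000, hidden=256, n_topics=20):
        super().__init__()
        self.body = torch.nn.Sequential(
            torch.nn.Linear(vocab_size, hidden), torch.nn.ReLU()
        )
        self.mu = torch.nn.Linear(hidden, n_topics)
        self.logvar = torch.nn.Linear(hidden, n_topics)

    def forward(self, bow):
        h = self.body(bow)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * epsilon.
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        # Map the Gaussian sample onto the simplex of topic proportions.
        return F.softmax(z, dim=-1)
```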
“…To the best of our knowledge, most state-of-the-art NTMs, like ours, are based on the VAE framework. For a detailed discussion of these NTMs, we refer readers to Zhao et al. (2021a). Here, we discuss only the two lines of research most relevant to ours.…”
Section: Related Work
Confidence: 99%
“…To address the above limitations, increasing effort has been made to leverage deep neural networks (DNNs) for topic modelling, leading to the so-called neural topic models (NTMs) (Zhao et al., 2021a). Most of these models follow the framework of variational auto-encoders (VAEs) (Kingma and Welling, 2014; Rezende et al., 2014) and adopt an encoder-decoder architecture, in which the encoder transforms the BoW data of each document into the corresponding document-topic embeddings, and the decoder attempts to map these embeddings back to the same data.…”
Section: Introduction
Confidence: 99%
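
To make the encoder-decoder picture concrete, a matching decoder sketch: topic proportions are mapped back to a distribution over the vocabulary, and reconstruction is scored against the same BoW data (again an illustrative assumption, not any particular cited model):

```python
import torch
import torch.nn.functional as F

class NTMDecoder(torch.nn.Module):
    """Map document-topic proportions back to the vocabulary."""
    def __init__(self, n_topics=20, vocab_size=5000):
        super().__init__()
        # Rows of this weight matrix act as unnormalized topic-word vectors.
        self.beta = torch.nn.Linear(n_topics, vocab_size, bias=False)

    def forward(self, theta):  # theta: (batch, n_topics) on the simplex
        return F.log_softmax(self.beta(theta), dim=-1)

def reconstruction_loss(log_word_probs, bow):
    # Negative multinomial log-likelihood of the observed word counts.
    return -(bow * log_word_probs).sum(dim=-1).mean()
```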