2021
DOI: 10.1016/j.ipm.2021.102614

Neural variational sparse topic model for sparse explainable text representation

Abstract: Texts are the major information carrier for internet users, and learning their latent representations has important research and practical value. Neural topic models have been proposed and show strong performance in extracting interpretable latent topics and representations of texts. However, two major limitations remain: 1) these methods generally ignore the contextual information of texts and have limited feature representation ability due to their shallow feed-forward network architecture; 2) sparsi…
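
The abstract describes a variational neural topic model that adds contextual encoding and sparsity to text representations. As a rough illustration of that general recipe — not the paper's SR-NSTM implementation — the following PyTorch sketch pairs a Bi-LSTM document encoder with a Gaussian variational posterior over topic proportions and an entropy penalty that pushes those proportions toward sparsity. Every class name, layer size, and hyperparameter here is an assumption for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseNeuralTopicModel(nn.Module):
    """Hypothetical sketch: Bi-LSTM encoder + variational topic layer.

    Illustrates the general approach the abstract describes; this is
    NOT the paper's SR-NSTM. All names and sizes are assumptions.
    """

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, num_topics=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Bi-LSTM reads the word sequence, unlike bag-of-words encoders.
        self.encoder = nn.LSTM(embed_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        # Gaussian variational posterior over the latent topic vector.
        self.fc_mu = nn.Linear(2 * hidden_dim, num_topics)
        self.fc_logvar = nn.Linear(2 * hidden_dim, num_topics)
        # Decoder maps topic proportions back to a word distribution.
        self.decoder = nn.Linear(num_topics, vocab_size)

    def forward(self, token_ids):
        emb = self.embed(token_ids)            # (batch, seq_len, embed_dim)
        _, (h, _) = self.encoder(emb)          # h: (2, batch, hidden_dim)
        h = torch.cat([h[0], h[1]], dim=-1)    # final fwd + bwd hidden states
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        theta = F.softmax(z, dim=-1)           # document-topic proportions
        log_recon = F.log_softmax(self.decoder(theta), dim=-1)
        return log_recon, mu, logvar, theta

def elbo_with_sparsity(log_recon, bow, mu, logvar, theta, sparsity_weight=0.1):
    # Reconstruction: negative log-likelihood of the bag-of-words counts.
    nll = -(bow * log_recon).sum(-1).mean()
    # KL divergence between the posterior and a standard Gaussian prior.
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
    # Entropy penalty: low entropy = peaked, i.e. sparse, topic proportions.
    entropy = -(theta * (theta + 1e-10).log()).sum(-1).mean()
    return nll + kl + sparsity_weight * entropy
```

In use, one would pass padded token-id tensors and the matching bag-of-words count vectors; the sparsity weight trades off how peaked the per-document topic proportions become against reconstruction quality.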

Cited by 20 publications (2 citation statements)
References 21 publications (27 reference statements)
“…Q. Xie et al. (2021) [15] utilised neural networks to develop effective latent representations of text with a sparse prior distribution, using the semantic reinforcement neural variational sparse topic model (SR-NSTM). They also applied Bi-LSTM to take word sequences into account.…”
Section: Literature Review
confidence: 99%
“…Its purpose is to establish a mechanism for converting natural language carrying human intent into a structured form that is easy for computers to compare and measure. Researchers have conducted related research on text representation [5,6]. However, language processing tasks still face significant difficulties because of the semantic variety of natural-language texts.…”
Section: Introduction
confidence: 99%