2020
DOI: 10.1007/s11280-020-00823-w
Improving biterm topic model with word embeddings

Citations: Cited by 18 publications (13 citation statements)
References: 32 publications
“…Topic knowledge is able to provide additional information for short text classification [17]. Zeng et al. proposed a topic memory network that encodes topic representations and classifies documents using memory networks [43].…”
Section: Related Work
confidence: 99%
“…Wu et al. [59] introduced a clustering method for short texts based on the BG & SLF-Kmeans method. In addition, a novel approach named Noise Biterm Topic Model with Word Embeddings (NBTMWE) was proposed in [60] to address the data sparsity problem. This approach integrates the noise BTM with word embeddings learned from external datasets to improve topic coherence.…”
Section: Short Text Topic Modelling
confidence: 99%
“…where N denotes the number of documents in the dataset, K the number of pre-defined hidden topics, v the number of available time-slices, and W the total number of words. Moreover, a new model called Noise Biterm Topic Model with Word Embeddings (NBTMWE) was developed by Huang et al. (2020) to tackle data sparsity. NBTMWE combines the noise BTM with word embeddings from an external corpus to improve topic coherence.…”
Section: Global Word Co-occurrences Based ASTTM Models
confidence: 99%
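The statements above describe extensions of the Biterm Topic Model (BTM), which models topics over "biterms": unordered pairs of words co-occurring in the same short text. As an illustrative sketch only (not code from the cited paper), the biterm-extraction step that BTM and its variants share can be written as:

```python
# Illustrative sketch of biterm extraction, the preprocessing step
# shared by BTM-style models. Not taken from the cited paper.
from itertools import combinations

def extract_biterms(doc: str) -> list[tuple[str, str]]:
    """Return all unordered word pairs (biterms) from a short text."""
    tokens = doc.lower().split()
    # Sort each pair so (a, b) and (b, a) count as the same biterm.
    return [tuple(sorted(pair)) for pair in combinations(tokens, 2)]

biterms = extract_biterms("improving biterm topic model")
# A 4-word text yields C(4, 2) = 6 biterms, each a corpus-wide
# co-occurrence observation that the model assigns to a topic.
```

The model then infers topic assignments over the aggregated biterm counts rather than per-document word counts, which is what makes BTM robust to the sparsity of individual short texts.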