2019
DOI: 10.1080/02522667.2019.1616911

Deep LDA: A new way to topic model


Cited by 35 publications (13 citation statements)
References 7 publications
“…However, the average accuracies for the 18 models were relatively low even after tuning well-established parameters such as the input dimension (set to 4,000), the learning and dropout rates, and the number of layers. The proposed framework recommends two deep-learning-based approaches for determining the correctness of topic labels in a cold-start situation: deep LDA [15] and GCN [8].…”
Section: Topic Labelling (mentioning)
confidence: 99%
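
As a rough illustration of the tuning knobs this excerpt lists (an input dimension of 4,000, learning and dropout rates, and the number of layers), the sketch below builds a plain feed-forward topic-label classifier in PyTorch. It is not the cited authors' code; the hidden width, number of topic classes, and batch size are hypothetical placeholders.

```python
# Minimal sketch (not the cited authors' code): a feed-forward topic-label
# classifier exposing the hyper-parameters mentioned in the excerpt.
import torch
import torch.nn as nn

INPUT_DIM = 4000      # dimension of the input data, as quoted in the excerpt
NUM_TOPICS = 4        # hypothetical number of topic labels
DROPOUT = 0.5         # tunable dropout rate (illustrative value)
LEARNING_RATE = 1e-3  # tunable learning rate (illustrative value)
HIDDEN_LAYERS = 2     # tunable number of hidden layers (illustrative value)
HIDDEN_DIM = 256      # hypothetical hidden width

layers, in_dim = [], INPUT_DIM
for _ in range(HIDDEN_LAYERS):
    layers += [nn.Linear(in_dim, HIDDEN_DIM), nn.ReLU(), nn.Dropout(DROPOUT)]
    in_dim = HIDDEN_DIM
layers.append(nn.Linear(in_dim, NUM_TOPICS))
model = nn.Sequential(*layers)

optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on random data.
x = torch.randn(32, INPUT_DIM)
y = torch.randint(0, NUM_TOPICS, (32,))
loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```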
“…The experiments followed best practices [15] in setting up the GCN model with two convolution layers, the same hidden node numbers (330 and 130), and the Adam optimizer. As mentioned previously, the experiments used four selected topics and approximately 27K article titles.…”
Section: Experiments on Parameters of GCN Model (mentioning)
confidence: 99%
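
The configuration quoted above (two graph-convolution layers with 330 and 130 hidden nodes, trained with the Adam optimizer) can be sketched as below. This is a minimal illustration assuming PyTorch Geometric, not the cited experiment's implementation; the input feature size, readout layer, learning rate, and dummy graph are placeholders.

```python
# Minimal sketch of a two-layer GCN with the hidden sizes quoted in the
# excerpt (330 and 130), trained with Adam. Assumes PyTorch Geometric.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class TwoLayerGCN(torch.nn.Module):
    def __init__(self, num_features, num_classes=4):  # 4 topics, as in the excerpt
        super().__init__()
        self.conv1 = GCNConv(num_features, 330)  # first convolution layer
        self.conv2 = GCNConv(330, 130)           # second convolution layer
        self.out = torch.nn.Linear(130, num_classes)  # hypothetical readout

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        return self.out(x)

model = TwoLayerGCN(num_features=300)  # hypothetical input feature size
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)  # illustrative lr

# Forward pass on a tiny dummy graph (2 nodes, one undirected edge).
x = torch.randn(2, 300)
edge_index = torch.tensor([[0, 1], [1, 0]], dtype=torch.long)
logits = model(x, edge_index)
```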
“…and the previous vectors $w_{x'}^{t-1}$ and $w_{y'}^{t-1}$ are constant for the $t$-th iteration, the solutions ($w_{x'}^{t}$ and $w_{y'}^{t}$) to equation (37) are also obtained by the GEV method.…”
Section: Figure 4, Directions Selection by Canonical Correlation Analysis (mentioning)
confidence: 99%
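
Equation (37) itself is not reproduced in the excerpt, so the sketch below only illustrates the standard generalized-eigenvalue (GEV) route to CCA directions that the statement refers to: the x-side direction solves $S_{xy} S_{yy}^{-1} S_{yx} w_x = \rho^2 S_{xx} w_x$. The synthetic data and regularization constants are illustrative, not taken from the cited work.

```python
# Minimal sketch of the generalized eigenvalue (GEV) route to CCA directions,
# solved with SciPy's generalized symmetric eigensolver.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))                                   # view 1
Y = X @ rng.standard_normal((5, 3)) + 0.1 * rng.standard_normal((200, 3))  # view 2

Xc, Yc = X - X.mean(0), Y - Y.mean(0)
Sxx = Xc.T @ Xc / len(X) + 1e-6 * np.eye(X.shape[1])  # regularized covariances
Syy = Yc.T @ Yc / len(Y) + 1e-6 * np.eye(Y.shape[1])
Sxy = Xc.T @ Yc / len(X)

# GEV problem for the x-side direction: A w = rho^2 Sxx w
A = Sxy @ np.linalg.solve(Syy, Sxy.T)
rho2, W = eigh(A, Sxx)                      # generalized eigenproblem A w = lam Sxx w
w_x = W[:, -1]                              # direction with the largest correlation
w_y = np.linalg.solve(Syy, Sxy.T @ w_x)     # matching y-side direction (up to scale)
print("leading canonical correlation:", np.sqrt(max(rho2[-1], 0.0)))
```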
“…Text-image recognition has drawn notable attention within machine learning and data mining communities [33][34][35]. In this paper, we extract two kinds of features for text-image recognition: the bag-of-visual SIFT (BOV-SIFT) vector from images [36] and the deep learning-based feature Deep Latent Dirichlet Allocation (DLDA) from texts [37]. (g) The BOV-SIFT feature.…”
Section: Feature Extraction for Text-Image Recognition (mentioning)
confidence: 99%
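
The BOV-SIFT image feature mentioned in the excerpt is commonly built by clustering SIFT descriptors into a visual codebook and then histogramming the codeword assignments of each image. The sketch below shows that generic pipeline (assuming OpenCV and scikit-learn), not the cited paper's exact feature; the codebook size is a placeholder, and the text-side DLDA feature is not shown.

```python
# Minimal sketch (not the cited pipeline): a bag-of-visual-words SIFT
# (BOV-SIFT) image representation built from a k-means visual codebook.
import cv2
import numpy as np
from sklearn.cluster import KMeans

CODEBOOK_SIZE = 128  # hypothetical number of visual words
sift = cv2.SIFT_create()

def sift_descriptors(gray_image):
    """Return the 128-d SIFT descriptor matrix of a grayscale image (or empty)."""
    _, desc = sift.detectAndCompute(gray_image, None)
    return desc if desc is not None else np.empty((0, 128), dtype=np.float32)

def fit_codebook(gray_images):
    """Cluster descriptors from all training images into a visual-word codebook."""
    all_desc = np.vstack([sift_descriptors(img) for img in gray_images])
    return KMeans(n_clusters=CODEBOOK_SIZE, n_init=10, random_state=0).fit(all_desc)

def bov_sift_feature(gray_image, codebook):
    """L1-normalized histogram of visual-word assignments for one image."""
    desc = sift_descriptors(gray_image)
    hist = np.zeros(CODEBOOK_SIZE, dtype=np.float64)
    if len(desc):
        for word in codebook.predict(desc.astype(np.float64)):
            hist[word] += 1
        hist /= hist.sum()
    return hist
```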