2022 32nd Conference of Open Innovations Association (FRUCT)
DOI: 10.23919/fruct56874.2022.9953874

Towards Better Evaluation of Topic Model Quality


Cited by 2 publications (2 citation statements)
References 22 publications
“…4. The dataset with evaluation of topics (Khodorchenko et al, 2022a) contains automatic and human scores for a variety of sampled topics produced by 100 variously configured ARTM models with different numbers of topics, built on datasets 1-3 from this list. To measure the quality of the topics, they were presented as tasks in the Toloka (Tol, 2023) crowdsourcing platform interface.…”
Section: Datasets
confidence: 99%
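As a rough illustration of what "variously configured ARTM models" involves (my sketch, not the cited pipeline), the BigARTM Python API can train models with randomly drawn topic counts and regularizer coefficients and collect each topic's top tokens for later human scoring. The corpus path, run count, and hyperparameter ranges below are hypothetical.

```python
# Minimal sketch (not the cited pipeline): train several differently
# configured ARTM models and gather each topic's top tokens,
# e.g. as candidates for crowdsourced evaluation.
import random
import artm

batch_vectorizer = artm.BatchVectorizer(
    data_path='corpus.vw',            # hypothetical Vowpal Wabbit file
    data_format='vowpal_wabbit',
    target_folder='batches')
dictionary = batch_vectorizer.dictionary

sampled_topics = []
for run in range(5):                  # the cited dataset used 100 models
    model = artm.ARTM(
        num_topics=random.choice([10, 20, 50, 100]),
        dictionary=dictionary,
        scores=[artm.TopTokensScore(name='top_tokens', num_tokens=10)],
        # One additive ARTM regularizer with a randomly drawn
        # coefficient; real configurations may combine several.
        regularizers=[artm.SmoothSparsePhiRegularizer(
            name='sparse_phi', tau=random.uniform(-0.5, 0.0))])
    model.fit_offline(batch_vectorizer=batch_vectorizer,
                      num_collection_passes=10)
    # last_tokens maps each topic name to its top token list.
    for topic, tokens in model.score_tracker['top_tokens'].last_tokens.items():
        sampled_topics.append((run, topic, tokens))
```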
“…Furthermore, the metrics may reflect only a particular side of the produced model's quality (Hoyle et al, 2021). Additionally, the best evaluation metrics can differ from dataset to dataset (Khodorchenko et al, 2022a). A suboptimal choice of the best topic model may cause an inaccurate representation of the data and, therefore, a biased understanding of it.…”
Section: Introduction
confidence: 99%
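For a concrete point of reference on the automatic metrics discussed above (my example, not drawn from the paper), one widely used score is NPMI coherence over a topic's top words, computed from document-level co-occurrence counts. Its dependence on the reference corpus is one reason the best metric can vary across datasets.

```python
# Sketch of NPMI topic coherence: average normalized PMI over all
# pairs of a topic's top words, using document-level co-occurrence.
import math
from itertools import combinations

def npmi_coherence(top_words, documents, eps=1e-12):
    """`documents` is a list of token sets; returns a value in [-1, 1]."""
    n_docs = len(documents)

    def prob(words):
        # Fraction of documents containing every word in `words`.
        return sum(1 for d in documents if all(w in d for w in words)) / n_docs

    scores = []
    for w1, w2 in combinations(top_words, 2):
        p1, p2, p12 = prob([w1]), prob([w2]), prob([w1, w2])
        if p12 == 0:
            scores.append(-1.0)       # never co-occur: minimal NPMI
            continue
        pmi = math.log(p12 / (p1 * p2))
        scores.append(pmi / -math.log(p12 + eps))
    return sum(scores) / len(scores)

# Usage with toy documents and a two-word "topic".
docs = [set(d.split()) for d in ["topic model evaluation",
                                 "topic quality metric",
                                 "neural network training"]]
print(npmi_coherence(["topic", "model"], docs))
```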