2022
DOI: 10.14569/ijacsa.2022.0130209
Comparison of Latent Semantic Analysis and Vector Space Model for Automatic Identification of Competent Reviewers to Evaluate Papers

Abstract: The assignment of reviewers to papers is one of the most important and challenging tasks in organizing scientific events. A major part of it is the correct identification of proper reviewers. This article presents a series of experiments aiming to test whether the latent semantic analysis (LSA) could be reliably used to identify competent reviewers to evaluate submitted papers. It also compares the performance of the LSA, the vector space model (VSM) and the method of explicit document description by a taxonom…

Cited by 3 publications (2 citation statements)
References 14 publications
“…However, finding the correct number of latent themes in a corpus of text remains a problem [3]. There are many topic-modeling methods, such as latent semantic analysis (LSA) [4], [5] and non-negative matrix factorization (NMF) [6], that attempt to model latent topics as probability distributions or as sets of vectors in a topic space by implicitly assuming that the number of topics is known in advance.…”
Section: Introduction
confidence: 99%
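The limitation noted in the citation statement above — that LSA requires the number of latent topics to be fixed in advance — can be illustrated with a minimal sketch of LSA via truncated SVD. The matrix contents and the choice of k below are made up for illustration; this is not the cited paper's actual setup.

```python
import numpy as np

# Toy term-document count matrix: rows = terms, columns = documents.
A = np.array([
    [2, 0, 1, 0],   # term "reviewer"
    [1, 0, 2, 0],   # term "paper"
    [0, 3, 0, 1],   # term "matrix"
    [0, 1, 0, 2],   # term "factorization"
], dtype=float)

# Full SVD, then truncate to k latent topics. k must be chosen up front,
# which is exactly the limitation the quoted passage points out.
k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
doc_topics = (np.diag(s[:k]) @ Vt[:k]).T   # documents in k-dim topic space

print(doc_topics.shape)  # (4, 2): four documents, two latent topics
```

Varying k changes the representation entirely, so in practice k is tuned empirically or chosen by inspecting the singular-value spectrum.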
“…Among many other tasks (Di Gangi, Bosco, and Pilato, 2019; Tseng et al., 2019; Phillips et al., 2021; Hassani, Iranmanesh, and Mansouri, 2021; Ren and Coutanche, 2021; Gupta and Patel, 2021; Kalmukov, 2022), LSA has been used extensively for information retrieval (Zhang, Yoshida, and Tang, 2011; Patil, 2022), by using associations between documents and terms (Dumais et al., 1988; Deerwester et al., 1990; Dumais, 1991). The exact factorization achieved via SVD has been shown to achieve solutions comparable in some ways to those obtained by modern neural network-based techniques (Levy and Goldberg, 2014; Levy, Goldberg, and Dagan, 2015), commonly used to obtain dense word representations from textual corpora (Jurafsky and Martin, 2021).…”
Section: Introduction
confidence: 99%
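The information-retrieval use of LSA mentioned above connects directly to the reviewed paper's task: ranking candidate reviewers against a submitted paper. A hedged sketch, comparing cosine similarity in the raw term space (VSM) with similarity in the reduced LSA space; all matrix contents and dimensions here are illustrative assumptions, not data from the paper.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity with a small epsilon to avoid division by zero.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Rows = terms, columns = reviewer profiles (toy counts built from
# a hypothetical set of each reviewer's past publications).
profiles = np.array([
    [3, 0, 1],
    [2, 0, 0],
    [0, 4, 1],
    [0, 2, 3],
], dtype=float)

paper = np.array([2, 1, 0, 0], dtype=float)  # term vector of the submission

# VSM: compare raw term vectors directly.
vsm_scores = [cosine(paper, profiles[:, j]) for j in range(profiles.shape[1])]

# LSA: project profiles and the paper into a k-dimensional latent space.
k = 2
U, s, Vt = np.linalg.svd(profiles, full_matrices=False)
paper_lsa = paper @ U[:, :k] / s[:k]   # fold the query into latent space
profs_lsa = Vt[:k].T                   # reviewer profiles in latent space
lsa_scores = [cosine(paper_lsa, profs_lsa[j]) for j in range(profs_lsa.shape[0])]

print(vsm_scores, lsa_scores)
```

In both cases the highest-scoring reviewer would be proposed first; LSA can rank reviewers whose publications share no literal terms with the paper, which raw VSM cannot.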