2013
DOI: 10.1007/978-3-642-39593-2_17
Experiments with Semantic Similarity Measures Based on LDA and LSA

Cited by 22 publications (9 citation statements)
References 12 publications
“…Latent semantic analysis (LSA), as used by Niraula and Banjade (2013), and LDA, as applied by Vidhya and Aghila (2010), are two popular mathematical approaches to modelling textual data. Questions posed by algorithm developers and data analysts working with LSA and LDA models motivated the question of how closely LSA's concepts correspond to LDA's topics…”
Section: Method: LDA (mentioning)
Confidence: 99%
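The contrast the excerpt draws between LSA concepts and LDA topics can be illustrated with a small sketch. The corpus, the choice of two latent dimensions/topics, and cosine similarity as the comparison measure are illustrative assumptions, not the paper's experimental setup:

```python
# Illustrative sketch: document similarity under LSA vs. LDA latent spaces.
# Toy corpus and hyperparameters are assumptions, not the paper's setup.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD, LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the cat sat on the mat",
    "a cat lay on a rug",
    "stock markets fell sharply today",
]

# Shared bag-of-words counts feed both models.
counts = CountVectorizer().fit_transform(docs)

# LSA: truncated SVD of the term-document matrix yields "concepts".
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(counts)

# LDA: each document becomes a mixture over latent "topics".
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)

# Cosine similarity in each latent space.
sim_lsa = cosine_similarity(lsa)
sim_lda = cosine_similarity(lda)

# Expect the two cat documents to be closer than cat vs. stocks under LSA.
print(sim_lsa[0, 1], sim_lsa[0, 2])
```

Comparing `sim_lsa` and `sim_lda` row by row is one concrete way to probe how closely the two latent spaces agree.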
“…Recent studies have shown that similarity measures over features are more efficient when based on topic-model techniques than when based on bag-of-words and TF-IDF (Xie & Xing, 2013). In this context, the semantic similarity between two documents was also investigated (Niraula et al., 2013). The work most related to our context is probably the use of topic-modeling features to improve word sense disambiguation by Li & Suzuki (2021), as well as Pavlinek & Podgorelec (2017), who present feature representation with a semi-supervised approach using self-training…”
Section: Related Work (mentioning)
Confidence: 99%
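The TF-IDF baseline that the cited studies compare topic-model features against can be sketched in a few lines. The corpus and default vectorizer settings are illustrative assumptions:

```python
# Illustrative sketch: the bag-of-words/TF-IDF similarity baseline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "topic models capture latent structure",
    "latent topic structure in documents",
    "deep learning for image recognition",
]

# TF-IDF weights each term by its in-document frequency,
# discounted by how common the term is across the corpus.
tfidf = TfidfVectorizer().fit_transform(docs)

# Cosine similarity directly over the sparse TF-IDF vectors.
sim = cosine_similarity(tfidf)
```

Because TF-IDF matches only on surface terms, documents with no word overlap score zero here, which is precisely the weakness that topic-model features are meant to address.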
“…In the general English domain, semantic evaluation (SemEval) STS shared tasks have been organized annually from 2012 to 2017 [1][2][3][4][5][6], and STS benchmark datasets were developed for evaluation [6]. Previous work on STS often used machine learning models [7][8][9] such as support vector machines [10], random forests [11], convolutional neural networks [12], and recurrent neural networks [13], as well as topic-modeling techniques [8] such as latent semantic analysis [14] and latent Dirichlet allocation [15]. Recently, deep learning models based on transformer architectures such as Bidirectional Encoder Representations from Transformers (BERT) [16], XLNet [17], and the Robustly optimized BERT approach (RoBERTa) [18] have demonstrated state-of-the-art performance on the STS benchmark dataset [19] and remarkably outperformed the previous models…”
Section: Introduction (mentioning)
Confidence: 99%
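As a brief aside on how the STS benchmarks mentioned above score systems: the standard SemEval STS metric is the Pearson correlation between system similarity scores and human gold ratings. The numbers below are illustrative, not drawn from any benchmark:

```python
# Illustrative sketch: scoring an STS system with Pearson correlation,
# the standard SemEval STS metric. Ratings below are made up.
from scipy.stats import pearsonr

gold = [5.0, 3.2, 1.0, 4.4, 0.2]  # human similarity ratings on a 0-5 scale
pred = [4.8, 2.9, 1.5, 4.0, 0.5]  # hypothetical system scores

r, _ = pearsonr(gold, pred)
print(f"Pearson r = {r:.3f}")
```

A system that tracks the gold ratings closely, as in this toy example, yields a correlation near 1.0; the transformer models cited above are ranked on exactly this kind of score.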