2020
DOI: 10.1007/s11192-020-03422-8

Analysis of journal evaluation indicators: an experimental study based on unsupervised Laplacian score

Cited by 11 publications (10 citation statements)
References 48 publications
“…Various studies have emphasized the inability of a single indicator to measure the impact and prestige of a journal, and the need for multidimensional journal evaluation (Bornmann et al, 2012; Katritsis, 2019; Thelwall & Fairclough, 2015; Zitt, 2012). Consensus has been reached in the scientometrics community that the complex structure and multiple aspects of a journal's impact cannot be captured by one metric but should rather be reflected by a range of indicators (Bollen et al, 2009; Coleman, 2007; Feng et al, 2020; Haustein, 2012; Haustein & Lariviere, 2014; Schubert, 2015; Zeng & Shi, 2021). However, selecting suitable metrics from the excessive number of indicators in different dimensions is not easy.…”
Section: Literature Review
confidence: 99%
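The excerpt above stresses how hard it is to select suitable metrics from an excessive number of indicators. The unsupervised Laplacian score named in the paper's title is one answer: it ranks each indicator by how well it preserves the local neighborhood structure of the data, with lower scores indicating more informative features. A minimal sketch with NumPy, assuming a journals-by-indicators matrix and illustrative kNN/heat-kernel parameters (not the paper's exact experimental setup):

```python
import numpy as np

def laplacian_scores(X, k=3, t=1.0):
    """Laplacian score (He et al., 2005) for each column (indicator) of X.

    X : (n_samples, n_features) array, e.g. journals x indicators.
    Lower scores mark features that better preserve local structure.
    """
    n = X.shape[0]
    # Pairwise squared Euclidean distances between samples.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # k-nearest-neighbor graph with heat-kernel edge weights.
    S = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]   # skip self at distance 0
        S[i, nbrs] = np.exp(-d2[i, nbrs] / t)
    S = np.maximum(S, S.T)                  # symmetrize the graph
    D = np.diag(S.sum(axis=1))
    L = D - S                               # graph Laplacian
    one = np.ones(n)
    scores = np.empty(X.shape[1])
    for r in range(X.shape[1]):
        f = X[:, r]
        # Subtract the weighted mean so constant shifts do not matter.
        f_t = f - (f @ D @ one) / (one @ D @ one) * one
        denom = f_t @ D @ f_t
        scores[r] = (f_t @ L @ f_t) / denom if denom > 0 else np.inf
    return scores
```

Indicators can then be ranked ascending by score, keeping the top few as the selected evaluation metrics.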
“…Multidimensional information cannot be conflated into one metric due to the inability of a single indicator to reflect the quality of a journal (Coleman, 2007). The difficulty in developing a multidimensional journal evaluation method is how to appropriately identify multiple indicators from an excessive number of metrics and integrate them in a multidimensional metric space to evaluate a journal (Bollen et al, 2009; Feng et al, 2020). To address the above difficulty, this paper has developed a journal evaluation framework based on the Pareto‐dominated set (defined in Section 3.5.2) of a journal in a multidimensional metric space from a systematic perspective.…”
Section: Introduction
confidence: 99%
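The Pareto-dominated set mentioned in the excerpt has a concrete definition: journal A dominates journal B when A is at least as good on every indicator and strictly better on at least one. A small sketch, assuming all indicators are higher-is-better; the journal names and indicator values below are invented for illustration:

```python
def dominates(a, b):
    """True if vector a Pareto-dominates b (higher is better)."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_dominated_set(target, journals):
    """Names of journals whose indicator vectors `target` dominates."""
    t = journals[target]
    return {name for name, v in journals.items()
            if name != target and dominates(t, v)}

# Hypothetical journals with (impact factor, h-index, Eigenfactor) tuples.
journals = {
    "J1": (4.2, 120, 0.05),
    "J2": (3.1,  90, 0.02),
    "J3": (4.5,  80, 0.03),
    "J4": (2.0,  60, 0.01),
}
```

Here `pareto_dominated_set("J1", journals)` yields `{"J2", "J4"}`: J1 beats both on every indicator, while J3 escapes domination because its impact factor is higher than J1's.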
“…The ability to automate the ranking of journals is considered of value both to save on costs and to potentially mitigate human bias (Saarela & Kärkkäinen, 2020; Halim & Khan, 2019). Machine learning classifiers have been applied to categorize journals by subject and quartile (Feng et al., 2020) and into coarse ranks derived from bibliometric-based ranking systems (Drongstrup et al., 2020). Classifiers have also been explored to automate ranking that would normally be performed by a panel of experts (Saarela & Kärkkäinen, 2020).…”
Section: Journal Predictions
confidence: 99%
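To make the quartile-classification idea above concrete, here is a deliberately minimal nearest-centroid classifier: train on journals with known quartiles, then assign a new journal to the quartile whose mean indicator vector is closest. This is an illustration of the task, not the (more elaborate) models used in the cited studies:

```python
import numpy as np

class NearestCentroidQuartiles:
    """Assign a journal to the quartile whose mean indicator vector is
    closest in Euclidean distance (illustrative baseline only)."""

    def fit(self, X, y):
        # One centroid per quartile label, in sorted label order.
        self.labels_ = sorted(set(y))
        y = np.asarray(y)
        self.centroids_ = np.array(
            [X[y == q].mean(axis=0) for q in self.labels_])
        return self

    def predict(self, X):
        # Squared distance from each sample to each centroid.
        d = ((X[:, None, :] - self.centroids_[None, :, :]) ** 2).sum(-1)
        return [self.labels_[i] for i in d.argmin(axis=1)]
```

With synthetic data where Q1 journals have uniformly high indicator values and Q4 journals low ones, the classifier recovers the obvious split; real quartile boundaries are far noisier, which is why the cited studies reach for richer models.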
“…However, it is difficult to assess the real quality of academic articles due to the dynamic change of citation networks [4]. Furthermore, the evaluation result is heavily influenced by the choice of bibliometric indicators or ranking methods [5]. As early as 1972, Garfield proposed the journal impact factor (JIF) to rank academic journals [6].…”
Section: Introduction
confidence: 99%
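The two-year JIF that Garfield proposed is a simple ratio: citations received in year Y to items the journal published in years Y−1 and Y−2, divided by the number of citable items it published in those two years. A worked sketch (the counts below are invented for illustration):

```python
def impact_factor(cites_prev1, cites_prev2, items_prev1, items_prev2):
    """Two-year JIF for year Y: citations in Y to items from Y-1 and
    Y-2, divided by citable items published in Y-1 and Y-2."""
    return (cites_prev1 + cites_prev2) / (items_prev1 + items_prev2)

# Hypothetical journal: in 2020 it drew 300 citations to its 2019
# papers and 260 to its 2018 papers, having published 120 and 130
# citable items in those years.
jif_2020 = impact_factor(300, 260, 120, 130)  # 560 / 250 = 2.24
```

The simplicity of this ratio is exactly why, as the excerpts above argue, it cannot carry the full multidimensional picture of a journal's impact on its own.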