We explore whether and how Microsoft Academic (MA) could be used for bibliometric analyses. First, we examine the Academic Knowledge API (AK API), an interface to access MA data, and compare it to Google Scholar (GS). Second, we perform a comparative citation analysis of researchers by normalizing data from MA and Scopus. We find that MA offers structured and rich metadata, which facilitates data retrieval, handling and processing. In addition, the AK API allows retrieving frequency distributions of citations. We consider these features to be a major advantage of MA over GS. However, we identify four main limitations regarding the available metadata. First, MA does not provide the document type of a publication. Second, the "fields of study" are dynamic and too specific, and the field hierarchies are incoherent. Third, some publications are assigned to incorrect years. Fourth, the metadata of some publications does not include all authors. Nevertheless, we show that an average-based indicator (i.e. the journal normalized citation score, JNCS) as well as a distribution-based indicator (i.e. percentile rank classes, PR classes) can be calculated with relative ease using MA. Hence, normalization of citation counts is feasible with MA. The citation analyses in MA and Scopus yield consistent results: the JNCS and the PR classes are similar in both databases, and, as a consequence, the evaluation of the researchers' publication impact is congruent in MA and Scopus. Given MA's rapid development over the last year, we postulate that it has the potential to be used for full-fledged bibliometric analyses.
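The two indicator families named in this abstract can be sketched in a few lines: the JNCS divides a paper's citations by the mean citations of its journal-year reference set, and PR classes bin papers by their percentile rank in a citation distribution. The function names and the class boundaries below are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of two normalized citation indicators.
# Boundaries in pr_class are a common convention, assumed here for illustration.
from statistics import mean

def jncs(paper_citations, journal_year_citations):
    """Journal normalized citation score: a paper's citations divided by
    the mean citations of all papers in the same journal and year."""
    expected = mean(journal_year_citations)
    return paper_citations / expected if expected else 0.0

def percentile_rank(paper_citations, reference_citations):
    """Share (in %) of papers in the reference set cited less often."""
    below = sum(1 for c in reference_citations if c < paper_citations)
    return 100 * below / len(reference_citations)

def pr_class(percentile, boundaries=(50, 75, 90, 99)):
    """Map a percentile to a rank class (1 = bottom half, 5 = top 1%)."""
    return 1 + sum(percentile >= b for b in boundaries)

journal = [0, 1, 2, 4, 8, 15]          # citations of one journal-year set
print(round(jncs(8, journal), 2))      # 8 / 5.0 -> 1.6
p = percentile_rank(8, journal)        # 4 of 6 papers below -> 66.7
print(round(p, 1), pr_class(p))        # class 2 (above median, below top 25%)
```

A JNCS above 1 means the paper is cited more than the average paper in its journal-year; the PR classes make the same comparison robust to the skew of citation distributions, which is why the abstract treats the two as complementary.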
This study presents humanities scholars' conceptions of research and subjective notions of quality in three disciplines: German literature studies, English literature studies, and art history, captured using 21 Repertory Grid interviews. We identified three dimensions that structure the scholars' conceptions of research: quality, time, and success. Further, the results revealed four types of research in the humanities: positively connoted 'traditional' research (characterized as individual, discipline-oriented, and ground-breaking), positively connoted 'modern' research (cooperative, interdisciplinary, and socially relevant), negatively connoted 'traditional' research (isolated, reproductive, and conservative), and negatively connoted 'modern' research (career-oriented, epigonal, and calculated). In addition, 15 quality criteria for research in these three disciplines were derived from the Repertory Grid interviews.
Research assessment in the social sciences and humanities (SSH) is delicate. Assessment procedures meet strong criticism from SSH scholars, and bibliometric research shows that the methods usually applied are ill-adapted to SSH research. While until recently research on assessment in the SSH disciplines focused on the deficiencies of the current assessment methods, we present some European initiatives that take a bottom-up approach. They focus on research practices in SSH and reflect on how to assess SSH research with its own approaches instead of applying and adjusting the methods developed for and in the natural and life sciences. This is an important development because previous evaluation exercises show that whenever scholars felt that assessment procedures were imposed in a top-down manner without proper adjustments to SSH research, the result was boycotts or resistance. Applying adequate evaluation methods not only helps foster a better valorization of SSH research within the research community, among policymakers and colleagues from the natural sciences, but it will also help society to better understand SSH's contributions to solving major societal challenges. Therefore, taking the time to encourage bottom-up evaluation initiatives should leave the research community better equipped to confront the main challenges facing modern society. This article is published as part of a collection on the future of research assessment.
In May 2016, an article published in Scientometrics, titled 'Taking scholarly books into account: current developments in five European countries', introduced a comparison of book evaluation schemes implemented within five European countries. The present article expands upon this work by including a broader and more heterogeneous set of countries (19 European countries in total) and adding new variables for comparison. Two complementary classification models were used to point out the commonalities and differences between each country's evaluation scheme. First, we employed a double-axis classification to highlight the degree of 'formalization' of each scheme; second, we classified each country according to the presence or absence of a bibliographic database. Each country's evaluation scheme possesses its own unique merits and details; however, the result of this study was the identification of four main types of book evaluation systems, leading to the following main conclusions. First, countries may be differentiated on the basis of those that use a formalized evaluation system and those that do not. Also, countries that do use a formalized evaluation system either have a supra-institutional database, quality labels for publishers and/or publisher rankings in place to harmonize the evaluations. Countries that do not use a formalized system tend to rely less on quantitative evaluation procedures. Each evaluation type has its advantages and disadvantages; therefore, an exchange between countries might help to generate future improvements.
The assessment of research performance in the humanities is linked to the question of what humanities scholars perceive as 'good research'. Even though scholars themselves evaluate research on a daily basis, e.g. while reading other scholars' research, not much is known about the quality concepts scholars rely on in their judgment of research. This chapter presents a project funded by the Rectors' Conference of the Swiss Universities, in which humanities scholars' conceptions of research quality were investigated and translated into an approach to research evaluation in the humanities. The approach involves the scholars of a given discipline and seeks to identify agreed-upon concepts of quality. By applying the approach to three humanities disciplines, the project reveals both the opportunities and limitations of research quality assessment in the humanities: A research assessment by means of quality criteria presents opportunities to make visible and evaluate humanities research, while a quantitative assessment by means of indicators is very limited and is not accepted by scholars. However, indicators that are linked to the humanities scholars' notions of quality can be used to support peers in the evaluation process (i.e. informed peer review).
The assessment of research performance in the humanities is an intricate and highly discussed topic. Many problems have yet to be solved, foremost the question of the humanities scholars' acceptance of evaluation tools and procedures. This article presents the results of a project funded by the Rectors' Conference of the Swiss Universities in which an approach to research evaluation in the humanities is developed that focuses on consensuality. We describe the results of four studies and synthesize from them the limitations and opportunities of research quality assessment in the humanities. The results indicate that while an assessment by means of quantitative indicators exhibits limitations, a research assessment by means of quality criteria presents opportunities to evaluate humanities research and make it visible. Indicators that are linked to the humanities scholars' notions of quality can be used to support peers in the evaluation process (informed peer review).
Research assessments in the humanities are highly controversial. While citation-based research performance indicators are widely used in the natural and life sciences, quantitative measures for research performance meet strong opposition in the humanities. Since there are many problems connected to the use of bibliometrics in the humanities, new approaches have to be considered for the assessment of humanities research. Recently, concepts and methods for measuring research quality in the humanities have been developed in several countries. The edited volume 'Research Assessment in the Humanities: Towards Criteria and Procedures' analyses and discusses these recent developments in depth. It combines the presentation of state-of-the-art projects on research assessments in the humanities by humanities scholars themselves with a description of the evaluation of humanities research in practice presented by research funders. Bibliometric issues concerning humanities research complete the exhaustive analysis of humanities research assessment.