Many academic analyses of good practice in the use of bibliometric data address only technical aspects and fail to account for user requirements, expectations, and actual practice. Bibliometric indicators are rarely the only evidence put before any user group. In the present state of knowledge, it is more important to consider how quantitative evaluation can be made simple, transparent, and readily understood than to focus unduly on precision, accuracy, or scholarly notions of purity. We discuss how the interpretation of ‘performance’ drawn from accurate but summary bibliometrics can change when iterative deconstruction and visualization are applied to the same dataset. For a research manager with limited resources, investment decisions at governmental, funding-program, and institutional levels can easily go awry. By exploring selected real-life data samples, we also show how the specific composition of each dataset can influence interpretive outcomes.
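As a minimal sketch of why such deconstruction matters, the Python snippet below contrasts two hypothetical institutions whose average CNCI is nearly identical but whose underlying distributions differ sharply. The synthetic values and the ‘steady’/‘skewed’ profiles are illustrative assumptions, not data from the studies discussed here.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two hypothetical institutions with near-identical average CNCI but very
# different underlying profiles: the summary indicator hides the difference.
steady = np.full(1000, 1.5)                        # uniformly solid papers
skewed = np.concatenate([np.zeros(600),            # many uncited papers...
                         rng.uniform(3, 5, 400)])  # ...plus a high-impact tail

for name, cnci in [("steady", steady), ("skewed", skewed)]:
    print(f"{name}: mean={cnci.mean():.2f}, median={np.median(cnci):.2f}, "
          f"uncited share={(cnci == 0).mean():.0%}")
```

Both sets report a mean near 1.5, yet the median and the share of uncited papers tell quite different stories, which is exactly the information a summary indicator discards.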
While bibliometric analysis can normally rely on complete publication sets, this is not universally the case. For example, Australia (in ERA) and the UK (in the RAE/REF) use institutional research assessments that may rely on small or fractional parts of researcher output. Using the Category Normalised Citation Impact (CNCI) of the publications of ten universities with similar output (21,000–28,000 articles and reviews) indexed in the Web of Science for 2014–2018, we explore the extent to which a ‘sample’ of institutional data can accurately represent the averages and/or the correct relative status of the population CNCIs. Starting with full institutional data, we find high variance in average CNCI across 10,000 institutional samples when each sample contains fewer than 200 papers; we suggest that 200 papers may therefore be an analytical minimum, although smaller samples may be acceptable for qualitative review. When considering the ‘top’ CNCI paper in researcher sets represented by DAIS-ID clusters, we find that samples of 1,000 papers provide a good guide to relative (but not absolute) institutional citation performance, which is driven by the abundance of high-performing individuals. However, such samples may be perturbed by scarce ‘highly cited’ papers in smaller or less research-intensive units. We draw attention to the significance of this for assessment processes and to the further evidence that university rankings are innately unstable and generally unreliable.
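The sampling experiment described here can be sketched in a few lines: draw repeated random samples of a given size from an institution's per-paper CNCI values and observe how the spread of the sample mean shrinks as the sample grows. The lognormal population below is an illustrative stand-in for real WoS data (citation indicators are typically right-skewed), and the function name is our own.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative stand-in for one institution's per-paper CNCI values; citation
# indicators are right-skewed, so a lognormal shape is a common assumption
# (the study itself used real WoS data for 2014-2018).
population = rng.lognormal(mean=0.0, sigma=0.9, size=25_000)

def mean_cnci_spread(pop, sample_size, n_samples=10_000):
    """Spread of the sample mean CNCI across repeated random samples."""
    means = np.array([
        rng.choice(pop, size=sample_size, replace=False).mean()
        for _ in range(n_samples)
    ])
    return means.std(), means.min(), means.max()

for n in (50, 200, 1000):
    sd, lo, hi = mean_cnci_spread(population, n)
    print(f"sample size {n:>4}: sd of mean CNCI = {sd:.3f}, "
          f"range = [{lo:.2f}, {hi:.2f}]")
```

Runs of this kind make the paper's point visible: below roughly 200 papers, the sample mean ranges widely around the population value, so a small 'sample' can misrepresent an institution's average CNCI.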
National research diversity is explored through the balance of global and national papers across journal categories in the Web of Science (WoS) and Essential Science Indicators (ESI), and we examine the consequences of ‘normalising’ national publication counts against global baselines. The global balance across subject categories became more even as annual WoS indexing grew fourfold between 1981 and 2018, with a relative shift from biomedicine towards environment and technology. Change at country level may have tracked this or been influenced by local policy and funding. We discuss the choice of methods and indices for analysis: WoS categories provide better granularity than ESI; Lorenz curves are explored but found limiting; the Pratt index, Gini coefficient, and Shannon diversity are compared. At the national level, balance generally increases and is greatest in non-Anglophone countries, perhaps due to shifts in language and journal use. Two aspects of national change are revealed: the balance of actual WoS paper counts, and the balance of counts normalised against the world baseline. The broad patterns in these analyses are similar, but normalised data indicate relatively greater evenness. National patterns link to research capacity and regional networking opportunities, whilst international collaboration may blend national differences. A dataset is provided for analytical use.
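The indices compared in this study are straightforward to compute from a vector of category paper counts. The sketch below implements the Gini coefficient, Shannon evenness, and Pratt index under standard textbook definitions (our reading, not the paper's exact code) and applies them to hypothetical raw and baseline-normalised counts.

```python
import numpy as np

def gini(counts):
    """Gini coefficient: 0 = perfectly even, approaching 1 = concentrated."""
    x = np.sort(np.asarray(counts, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return (2 * np.sum(i * x) / (n * x.sum())) - (n + 1) / n

def shannon_evenness(counts):
    """Shannon diversity H normalised by ln(n), so 1 = perfectly even."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p)) / np.log(len(counts))

def pratt(counts):
    """Pratt index of concentration: 0 = even, 1 = all output in one class."""
    q = np.sort(np.asarray(counts, dtype=float))[::-1]
    q = q / q.sum()
    n = q.size
    mean_rank = np.sum(np.arange(1, n + 1) * q)
    return (n + 1 - 2 * mean_rank) / (n - 1)

# Hypothetical national paper counts across five WoS-style categories,
# and the same counts normalised against hypothetical world baselines.
national = [1200, 800, 500, 300, 100]
world = [40000, 35000, 20000, 18000, 9000]
normalised = [n / w for n, w in zip(national, world)]

for label, data in [("raw counts", national), ("normalised", normalised)]:
    print(f"{label}: Gini={gini(data):.3f}, "
          f"evenness={shannon_evenness(data):.3f}, Pratt={pratt(data):.3f}")
```

Comparing the two rows of output illustrates the paper's observation that normalised counts tend to look more even than raw counts, since normalisation removes the size differences between fields.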
The past decade has witnessed a substantial increase in the number of affiliations listed by individual authors of scientific papers. Some authors now list an astonishing number of institutions, sometimes 20, 30, or more. This trend raises concerns about the genuine scientific contributions these authors make at each institution they claim as an affiliation. To address this issue, we conducted a comprehensive regional analysis of the growth of both domestic and international multi-affiliations over the past decade. Our findings reveal that certain countries have experienced an abnormal surge in international multi-affiliation authorships. Coupled with the high numbers of affiliations involved, this underlines the need for careful scrutiny of the actual scientific contributions made by these authors and the importance of safeguarding the integrity of scientific output and networks.
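As an illustration of how such a regional analysis might be organised, the sketch below flags multi-affiliation authorships in a toy table and tracks their share by country and year. The records, country codes, and the 3+ affiliation threshold are all assumptions for demonstration, not the study's definitions.

```python
import pandas as pd

# Hypothetical authorship records: one row per (paper, author) pair, with the
# author's country and the number of distinct institutions listed.
records = pd.DataFrame({
    "year":           [2013, 2013, 2018, 2018, 2023, 2023, 2023],
    "country":        ["DE", "CN", "DE", "CN", "DE", "CN", "CN"],
    "n_affiliations": [1, 2, 1, 3, 2, 5, 8],
})

# Flag 'multi-affiliation' authorships; the 3+ threshold is an assumption.
records["multi"] = records["n_affiliations"] >= 3

# Share of multi-affiliation authorships per country and year; an abnormal
# surge would show as a share rising much faster in one country than others.
share = records.groupby(["country", "year"])["multi"].mean().unstack("year")
print(share)
```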
This report from the Institute for Scientific Information examines the value of shifting from simple metrics of research activity and performance to visually more informative profiles. These profiles help us understand what is going on in research and so enable better policy and management decision-making. The report focuses on four key indicators at the researcher, journal, institutional, and research-field levels.