The literature on publication counting demonstrates the use of various terminologies and methods. In many scientific publications, no information at all is given about the counting methods used. There is a lack of knowledge and agreement about the sort of information provided by the various methods, about the theoretical and technical limitations of the different methods, and about the size of the differences obtained by using various methods. The need for precise definitions and terminology has been expressed repeatedly, but with no success.

Counting methods for publications are defined and analysed using set and measure theory. The analysis depends on definitions of basic units of analysis (three chosen for examination), objects of study (three chosen for examination) and score functions (five chosen for examination). The score functions define five classes of counting methods. However, in a number of cases different combinations of basic units of analysis, objects of study and score functions give identical results. The result is therefore a characterization of 19 counting methods: five complete counting methods, five complete-normalized counting methods, two whole counting methods, two whole-normalized counting methods, and five straight counting methods.

When scores for objects of study are added, the value obtained can be identical with or higher than the score for the union of the objects of study. Some classes of counting methods, including the classes of complete, complete-normalized and straight counting methods, are therefore additive; others, including the classes of whole and whole-normalized counting methods, are non-additive.

An analysis of the differences between scores obtained by different score functions, and therefore of the differences obtained by different counting methods, is presented.
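The five classes of score functions can be made concrete with a small sketch. The toy data and the country-level formulation below are illustrative assumptions, not the paper's formal set-theoretic definitions; the sketch applies each score function to a list of publications (each an ordered list of author countries) and shows why complete-normalized, whole-normalized and straight counting are additive (national scores sum to the number of publications) while complete and whole counting are not.

```python
from collections import defaultdict

# Hypothetical toy data: each publication is the ordered list of its
# authors' countries (order matters only for straight counting).
pubs = [
    ["DK", "DK", "US"],        # 2 Danish authors, 1 US author
    ["US"],                    # single-author publication
    ["FR", "DK", "US", "US"],  # trilateral collaboration
]

def scores(pubs, method):
    s = defaultdict(float)
    for authors in pubs:
        unique = set(authors)
        if method == "complete":          # 1 per author occurrence
            for c in authors:
                s[c] += 1
        elif method == "complete_norm":   # fractional: 1/n per author
            for c in authors:
                s[c] += 1 / len(authors)
        elif method == "whole":           # 1 per distinct country
            for c in unique:
                s[c] += 1
        elif method == "whole_norm":      # 1/k per distinct country
            for c in unique:
                s[c] += 1 / len(unique)
        elif method == "straight":        # first author's country only
            s[authors[0]] += 1
    return dict(s)

for m in ["complete", "complete_norm", "whole", "whole_norm", "straight"]:
    sc = scores(pubs, m)
    # Additive methods return total == len(pubs) == 3; whole counting
    # returns 6 here (2 + 1 + 3 distinct countries per publication).
    print(m, sc, "total =", round(sum(sc.values()), 3))
```

Running the sketch on these three publications, the complete total is 8 (one per author occurrence) and the whole total is 6, while the three additive methods all sum to 3.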
In this analysis we introduce a new kind of object of study, the class of cumulative-turnout networks for objects of study, containing full information on cooperation. Cumulative-turnout networks comprise all authors, institutions or countries contributing to the publications of an author, an institution or a country. The analysis leads to an interpretation of the results of score functions and to the definition of new indicators for scientific cooperation.

We also define a number of other networks: internal cumulative-turnout networks, external cumulative-turnout networks, underlying networks, internal underlying networks and external underlying networks. These networks open new opportunities for quantitative studies of scientific cooperation. [M. Gauffriau et al.: Publication, cooperation and productivity measures, Scientometrics 73 (2007)]
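At the country level, a cumulative-turnout network can be sketched as a simple set construction. The toy data below is hypothetical, and taking the network minus the country itself as its "external" part is our reading of the internal/external distinction, not the paper's formal definition:

```python
# Hypothetical toy data: publications as lists of author countries.
pubs = [
    ["DK", "US"],
    ["DK", "FR", "DE"],
    ["US", "JP"],
]

def cumulative_turnout(pubs, country):
    """All countries contributing to publications in which `country` takes part."""
    network = set()
    for authors in pubs:
        if country in authors:
            network.update(authors)
    return network

network = cumulative_turnout(pubs, "DK")
external = network - {"DK"}   # cooperation partners outside the country itself
print(sorted(network), sorted(external))
```

The same construction works with authors or institutions as elements, since only set membership is used.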
Using a database of publications established at CEST and covering the period 1981 to 2002, the differences in national scores obtained by different counting methods have been measured. The results are supported by analysis of data from the literature. Special attention has been paid to the comparison between the EU and the USA. There are large differences between scores obtained by different methods. In one instance, the reduction in scores when going from whole to complete-normalized (fractional) counting is 72 per cent. In the literature, often not enough information is given about the methods used, and there is no sign of a clear and consistent terminology or of agreement on the properties of and results from different methods. In fact, whole counting favours certain countries, especially countries with a high level of international cooperation. The problems are growing with time because of the ever-increasing national and international cooperation in research and the increasing average number of authors per publication. The need for a common understanding and a joint effort to rectify the situation is stressed.
For all rankings of countries' research output based on numbers of publications or citations relative to population, GDP, R&D and public R&D expenditure, and other national characteristics, the counting method is decisive. Total counting (full credit to a country when at least one of the authors is from that country) and fractional counting (a country receives a fraction of full credit for a publication equal to the fraction of its authors from that country) give widely different results. Counting methods must be stated, rankings based on different counting methods cannot be compared, and fractional counting is to be preferred.
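Why the choice is decisive can be seen in a minimal sketch: with hypothetical toy data, total and fractional counting can even reverse a two-country ranking.

```python
# Hypothetical toy data: country A publishes only in two-author
# international collaborations; country C publishes alone.
pubs = [["A", "B"], ["A", "B"], ["A", "B"], ["C"], ["C"]]

def total_count(pubs, country):
    # Full credit whenever at least one author is from the country.
    return sum(1 for authors in pubs if country in authors)

def fractional_count(pubs, country):
    # Credit equal to the fraction of the authors from the country.
    return sum(authors.count(country) / len(authors) for authors in pubs)

# Total counting ranks A above C; fractional counting reverses the order.
print(total_count(pubs, "A"), total_count(pubs, "C"))            # 3 2
print(fractional_count(pubs, "A"), fractional_count(pubs, "C"))  # 1.5 2.0
```

A benefits from total counting because every collaboration counts in full, which is exactly the effect noted above for countries with a high level of international cooperation.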
Most publication and citation indicators are based on datasets with multi-authored publications, and thus a change in counting method will often change the value of an indicator. Therefore, it is important to know why a specific counting method has been applied. I have identified arguments for counting methods in a sample of 32 bibliometric studies published in 2016 and compared the results with discussions of arguments for counting methods in three older studies. Based on the underlying logic of the arguments, I have arranged them into four groups. Group 1 focuses on arguments related to what an indicator measures, Group 2 on the additivity of a counting method, Group 3 on pragmatic reasons for the choice of counting method, and Group 4 on an indicator's influence on the research community or how it is perceived by researchers. This categorization can be used to describe and discuss how bibliometric studies with publication and citation indicators argue for counting methods.