One aim of science evaluation studies is to determine quantitatively the contribution of different players (authors, departments, countries) to the whole system. This information is then used to study the evolution of the system, for instance to gauge the results of special national or international programs. Taking articles as our basic data, we want to determine the exact relative contribution of each coauthor or each country. These numbers are then aggregated to obtain country scores, department scores, and so on. It turns out, as we show in this article, that different scoring methods can yield totally different rankings. Moreover, a relative increase according to one counting method can go hand in hand with a relative decrease according to another. Indeed, we present examples in which a country (or author) c has a smaller relative score under total counting than under fractional counting, yet ranks higher under total counting than under fractional counting. Similar anomalies are constructed for total versus proportional counts and for total versus straight counts. Consequently, a ranking of countries, universities, research groups, or authors based on one particular accrediting method does not contain an absolute truth about their relative importance. Different counting methods should be used and compared. The differences are illustrated with a real-life example. Finally, we show that some of these anomalies can be avoided by using geometric instead of arithmetic averages.
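The contrast between counting methods can be illustrated with a small sketch. The data below are hypothetical (not taken from the article): each paper is represented by the list of its authors' countries. Under total (whole) counting, every country appearing on a paper receives one full credit; under fractional counting, each paper's single credit is split equally among its authors. With suitably chosen data, the two methods rank the same two countries in opposite order:

```python
from collections import Counter

def total_counts(papers):
    # total (whole) counting: each country on a paper gets one full credit
    scores = Counter()
    for countries in papers:
        for country in set(countries):
            scores[country] += 1
    return scores

def fractional_counts(papers):
    # fractional counting: one credit per paper, split equally over its authors
    scores = Counter()
    for countries in papers:
        share = 1 / len(countries)
        for country in countries:
            scores[country] += share
    return scores

# Hypothetical data: one inner list per paper, one entry per author.
papers = [["A", "B"], ["A", "B"], ["A", "B"], ["C"], ["C"]]

total = total_counts(papers)
fractional = fractional_counts(papers)
print(total)       # A: 3, B: 3, C: 2  -> A outranks C
print(fractional)  # A: 1.5, B: 1.5, C: 2.0 -> C outranks A
```

Country A appears on more papers than C (3 versus 2), so total counting ranks A higher; but A's papers are coauthored, so fractional counting awards A only 1.5 credits against C's 2.0, reversing the ranking. This is the kind of rank reversal the article analyzes.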