Based on the foundation laid by the h-index, we introduce and study the R- and AR-indices. These new indices eliminate some of the disadvantages of the h-index, especially when used in combination with it. The R-index measures the h-core's citation intensity, while the AR-index goes one step further and takes the age of publications into account. This yields an index that can actually increase and decrease over time. We propose the pair (h, AR) as a meaningful indicator for research evaluation. We further prove a relation characterizing the h-index in the power law model.
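A minimal sketch of these indices, assuming the standard definitions from the literature (the abstract does not spell them out): R is the square root of the total citations of the h-core, and AR divides each h-core paper's citation count by its age before summing, so AR can decline as papers age. The citation and age values below are illustrative only.

```python
import math

def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(ranked, start=1):
        if c >= rank:
            h = rank
    return h

def r_index(citations):
    """R: square root of the total citations received by the h-core."""
    ranked = sorted(citations, reverse=True)
    return math.sqrt(sum(ranked[:h_index(citations)]))

def ar_index(citations, ages):
    """AR: square root of the summed citations-per-year over the h-core.
    Ages (in whole years, >= 1) are aligned with the citation counts."""
    ranked = sorted(zip(citations, ages), reverse=True)
    return math.sqrt(sum(c / age for c, age in ranked[:h_index(citations)]))

cites = [10, 8, 5, 4, 3, 1]   # hypothetical citation counts
ages = [5, 4, 4, 2, 1, 1]     # hypothetical paper ages in years
print(h_index(cites))                    # 4
print(round(r_index(cites), 2))          # 5.2
print(round(ar_index(cites, ages), 2))   # 2.69
```

Unlike h, which can only grow, AR shrinks when the same citations are spread over more years, which is what allows the pair (h, AR) to register decline.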
h-index, A-index, R-index, AR-index, g-index, performance evaluation, power law
Author cocitation analysis (ACA), a special type of cocitation analysis, was introduced by White and Griffith in 1981. This technique is used to analyze the intellectual structure of a given scientific field. In 1990, McCain published a technical overview that has been largely adopted as a standard. Here, McCain notes that Pearson's correlation coefficient (Pearson's r) is often used as a similarity measure in ACA and presents some advantages of its use. The present article criticizes the use of Pearson's r in ACA and sets forth two natural requirements that a similarity measure applied in ACA should satisfy. It is shown that Pearson's r does not satisfy these requirements. Real and hypothetical data are used in order to obtain counterexamples to both requirements. It is concluded that Pearson's r is probably not an optimal choice of a similarity measure in ACA. Still, further empirical research is needed to show whether, and if so to what extent, the use of similarity measures in ACA that fulfill these requirements would lead to objectively better results in full-scale studies. Further, problems related to incomplete cocitation matrices are discussed.
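One of the criticized properties can be demonstrated in a few lines. The sketch below uses hypothetical cocitation profiles (not data from the article): appending authors cocited with neither of the two compared authors adds only zeros to both profiles, yet Pearson's r changes, which is arguably undesirable for an ACA similarity measure.

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient of two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical cocitation profiles of two authors against four other authors.
a = [4, 3, 0, 1]
b = [3, 4, 1, 0]
print(pearson_r(a, b))                    # about 0.80
# Two further authors, cocited with neither: both profiles gain only zeros,
# yet the correlation shifts upward.
print(pearson_r(a + [0, 0], b + [0, 0]))  # about 0.87
```

A measure such as the cosine, being insensitive to shared zero coordinates, does not exhibit this behavior.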
Social network analysis (SNA) is not a formal theory in sociology but rather a strategy for investigating social structures. As it is an idea that can be applied in many fields, we study, in particular, its influence in the information sciences. Information scientists study publication, citation and co-citation networks, collaboration structures and other forms of social interaction networks. Moreover, the Internet represents a social network of an unprecedented scale. In all these studies social network analysis can successfully be applied. SNA is further related to recent theories concerning the free market economy, geography and transport networks. The growth of SNA is documented and a co-author network of SNA is drawn. Centrality measures of the SNA network are calculated.
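As an illustration of the kind of centrality calculation mentioned, here is a sketch of normalized degree centrality on a small, hypothetical co-author network (the abstract does not give its actual network or which centrality measures it computes):

```python
def degree_centrality(edges):
    """Normalized degree centrality: each node's degree divided by (n - 1),
    where n is the number of nodes in the undirected network."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    n = len(deg)
    return {node: d / (n - 1) for node, d in deg.items()}

# Hypothetical co-author network: an edge joins authors who wrote together.
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C")]
centrality = degree_centrality(edges)
print(sorted(centrality.items()))   # A is most central, with value 1.0
```

Closeness and betweenness centrality follow the same pattern but require shortest-path computations over the graph.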
The h-index (or Hirsch-index) was defined by Hirsch in 2005 as the number h such that, for a general group of papers, h papers received at least h citations while the other papers received no more than h citations. This definition is extended here to the general framework of Information Production Processes (IPPs), using a source-item terminology. It is further shown that in each practical situation an IPP always has a unique h-index. In Lotkaian systems h = T^(1/α), where T is the total number of sources and α is the Lotka exponent. The relation between h and the total number of items is highlighted.
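The Lotkaian relation h = T^(1/α), with T the total number of sources and α the Lotka exponent, is easy to evaluate numerically; the values below are illustrative, not taken from the article:

```python
def lotkaian_h(T, alpha):
    """Theoretical h-index of a Lotkaian system: h = T**(1/alpha),
    assuming T sources and Lotka exponent alpha > 1."""
    return T ** (1 / alpha)

print(lotkaian_h(10_000, 2))     # 100.0
print(lotkaian_h(10_000, 2.5))   # about 39.8: a steeper Lotka law lowers h
```

Note how slowly h grows in T: multiplying the number of sources by 100 only multiplies h by 100^(1/α), e.g. by 10 when α = 2.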
One aim of science evaluation studies is to determine quantitatively the contribution of different players (authors, departments, countries) to the whole system. This information is then used to study the evolution of the system, for instance to gauge the results of special national or international programs. Taking articles as our basic data, we want to determine the exact relative contribution of each coauthor or each country. These numbers are then brought together to obtain country scores, or department scores, etc. It turns out, as we will show in this article, that different scoring methods can yield totally different rankings. In addition to this, a relative increase according to one method can go hand in hand with a relative decrease according to another counting method. Indeed, we present examples in which country (or author) c has a smaller relative score in the total counting system than in the fractional counting one, yet this smaller score has a higher importance than the larger one (fractional counting). Similar anomalies were constructed for total versus proportional counts and for total versus straight counts. Consequently, a ranking between countries, universities, research groups or authors, based on one particular accrediting method does not contain an absolute truth about their relative importance. Different counting methods should be used and compared. Differences are illustrated with a real‐life example. Finally, it is shown that some of these anomalies can be avoided by using geometric instead of arithmetic averages.
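A small sketch of the kind of ranking reversal described, using standard definitions of total counting (each country gets one full credit per paper it appears on) and fractional counting (each of the k author slots carries 1/k credit). The papers below are hypothetical, not the article's example.

```python
from collections import Counter

def total_counts(papers):
    """Total counting: one full credit per paper a country appears on."""
    scores = Counter()
    for countries in papers:
        for c in set(countries):
            scores[c] += 1
    return scores

def fractional_counts(papers):
    """Fractional counting: each of the k author slots carries 1/k credit."""
    scores = Counter()
    for countries in papers:
        for c in countries:
            scores[c] += 1 / len(countries)
    return scores

# Hypothetical papers, each given as the list of its authors' countries.
# Country A always publishes with two C coauthors; country B publishes alone.
papers = [["A", "C", "C"], ["A", "C", "C"], ["A", "C", "C"], ["B"], ["B"]]
t, f = total_counts(papers), fractional_counts(papers)
print(t["A"], t["B"])   # A outranks B under total counting (3 vs 2)
print(f["A"], f["B"])   # B outranks A under fractional counting (~1 vs 2)
```

The two methods rank A and B in opposite orders, so a league table built from either one alone carries no absolute truth.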
The objective of this article is to further the study of journal interdisciplinarity, or, more generally, knowledge integration at the level of individual articles. Interdisciplinarity is operationalized by the diversity of subject fields assigned to cited items in the article's reference list. Subject fields and subfields were obtained from the Leuven‐Budapest (ECOOM) subject‐classification scheme, while disciplinary diversity was measured taking variety, balance, and disparity into account. As diversity measure we use a Hill‐type true diversity in the sense of Jost and Leinster‐Cobbold. The analysis is conducted in 3 steps. In the first part, the properties of this measure are discussed, and, on the basis of these properties it is shown that the measure has the potential to serve as an indicator of interdisciplinarity. In the second part the applicability of this indicator is shown using selected journals from several research fields ranging from mathematics to social sciences. Finally, the often‐heard argument, namely, that interdisciplinary research exhibits larger visibility and impact, is studied on the basis of these selected journals. Yet, as only 7 journals, representing a total of 15,757 articles, are studied, albeit chosen to cover a large range of interdisciplinarity, further research is still needed.
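A sketch of a Hill-type true diversity with a similarity matrix in the Leinster–Cobbold style, assuming the form D = (Σᵢ pᵢ (Zp)ᵢ^(q−1))^(1/(1−q)) for q ≠ 1, where p holds the shares of reference subject fields and Z their pairwise similarities; the shares and similarities below are invented for illustration.

```python
def true_diversity(p, Z, q):
    """Hill-type diversity of order q (q != 1) under similarity matrix Z:
    D = (sum_i p_i * (Zp)_i ** (q - 1)) ** (1 / (1 - q))."""
    Zp = [sum(Z[i][j] * p[j] for j in range(len(p))) for i in range(len(p))]
    return sum(pi * zpi ** (q - 1) for pi, zpi in zip(p, Zp)) ** (1 / (1 - q))

# Shares of an article's references over three hypothetical subject fields.
p = [0.5, 0.3, 0.2]
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]      # fields fully distinct
similar = [[1, 0, 0], [0, 1, 0.9], [0, 0.9, 1]]   # fields 2 and 3 overlap
print(true_diversity(p, identity, 2))   # inverse Simpson, about 2.63
print(true_diversity(p, similar, 2))    # lower: low disparity cuts diversity
```

With the identity matrix the measure reduces to the ordinary Hill number (for q = 2, the inverse Simpson index); introducing similarity between fields lowers the value, which is how disparity enters alongside variety and balance.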