A new approach to the field normalization of the classical journal impact factor is introduced. This approach, called the audience factor, takes into consideration the citing propensity of journals for a given cited journal, specifically, the mean number of references of each citing journal, and fractionally weights the citations from those citing journals. Hence, the audience factor is a variant of a fractional citation-counting scheme, but computed on the citing journal rather than the citing article or disciplinary level, and, in contrast to other cited-side normalization strategies, is focused on the behavior of the citing entities. A comparison with standard journal impact factors from Thomson Reuters shows a more diverse representation of fields within various quintiles of impact, significant movement in rankings for a number of individual journals, but nevertheless a high overall correlation with standard impact factors.
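The citing-side weighting described above can be sketched in a few lines. This is a minimal illustration, not the authors' exact formula: it assumes each citation from a journal is down- or up-weighted by the ratio of an overall mean reference count to that citing journal's own mean number of references, and then normalized by the cited journal's article count. All names and the choice of the global mean as reference point are illustrative assumptions.

```python
def audience_factor(citations_to_j, mean_refs_by_citing, n_articles_j, global_mean_refs):
    """Hedged sketch of a citing-side fractional weighting.

    citations_to_j      -- {citing_journal: number of citations to journal J}
    mean_refs_by_citing -- {citing_journal: mean references per article}
    n_articles_j        -- number of citable articles in journal J
    global_mean_refs    -- mean references per article across all journals
                           (assumed here as the normalization point)
    """
    # Each citation counts for less when it comes from a journal with a
    # high citing propensity (many references per article), and more when
    # it comes from a journal with a low one.
    weighted = sum(
        n * global_mean_refs / mean_refs_by_citing[src]
        for src, n in citations_to_j.items()
    )
    return weighted / n_articles_j


# Example: 10 citations from a reference-heavy journal (40 refs/article)
# and 5 from a reference-light one (20 refs/article), global mean 30.
af = audience_factor({"A": 10, "B": 5}, {"A": 40.0, "B": 20.0},
                     n_articles_j=10, global_mean_refs=30.0)
```

In this example the 10 citations from the heavy-citing journal and the 5 from the light-citing one contribute equal weight (7.5 each) after normalization, which is the intended effect of the scheme.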
As citation practices strongly depend on fields, field normalisation is recognised as necessary for fair comparison of figures in bibliometrics and evaluation studies. However, fields may be defined at various levels, from small research areas to broad academic disciplines, and thus normalisation values are expected to vary. The aim of this project was to test the stability of citation ratings of articles as the level of observation (hence the basis of normalisation) changes. A conventional classification of science based on ISI subject categories and their aggregates at various scales was used, namely at five levels: all science, large academic discipline, subdiscipline, speciality and journal. Among various normalisation methods, we selected a simple ranking method (quantiles), based on the citation score of the article in each particular aggregate (journal, speciality, etc.) it belonged to at each level. The study was conducted on articles in the full SCI range, for publication year 1998 with a four-year citation window. Stability is measured in three ways: overall comparison of article rankings; individual trajectory of articles; survival of the top-cited class across levels. Overall rank correlations on the observed empirical structure are benchmarked against two fictitious sets that keep the same embedded structure of articles but reassign citation scores either in a totally ordered or in a totally random distribution. These sets act respectively as a 'worst case' and 'best case' for the stability of citation ratings.
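The quantile-ranking step described above can be sketched as follows. This is a simplified illustration under stated assumptions: articles are ranked by citation count within one aggregate (a journal, speciality, etc.) and assigned to quantile classes; ties and the exact class boundaries used in the study are not reproduced here, and all names are illustrative.

```python
def quantile_rank(citations, n_quantiles=100):
    """Assign each article a quantile class within one aggregate.

    citations   -- list of citation counts, one per article in the aggregate
    n_quantiles -- number of classes (100 gives percentile classes)

    Returns a list of class indices (0 = least cited, n_quantiles-1 = most
    cited), aligned with the input order. Simplified: ties are broken by
    input position rather than shared across a class.
    """
    # Sort article indices by citation count, ascending.
    order = sorted(range(len(citations)), key=lambda i: citations[i])
    ranks = [0] * len(citations)
    for pos, i in enumerate(order):
        ranks[i] = int(pos * n_quantiles / len(citations))
    return ranks


# Four articles split into quartile classes within their aggregate.
classes = quantile_rank([0, 5, 2, 10], n_quantiles=4)
```

Repeating this within each aggregate at each of the five levels yields, for every article, one quantile rating per level; the study's stability question is how much these ratings agree across levels.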
The results show that: (a) the average citation rankings of articles substantially change with the level of observation; (b) observation at the journal level is very particular, and the results differ greatly in all test circumstances from all the other levels of observation; (c) the lack of cross-scale stability is confirmed when looking at the distribution of individual trajectories of articles across the levels; (d) when considering the top-cited fractions, a standard measure of excellence, it is found that the content of the 'top-cited' set is completely dependent on the level of observation. The instability of impact measures should not be interpreted in terms of lack of robustness but rather as the coexistence of various perspectives, each having its own form of legitimacy. A follow-up study will focus on the micro levels of observation and will be based on a structure built around bibliometric groupings rather than conventional groupings based on ISI subject categories.
Although the impact factor and related measurements are the best-known features of scientific journals, other characteristics are of particular interest. The way a journal reflects the internationalized nature of science may be determined by many methods, one of which is based on the distribution of authoring and citing countries. This can be systematically measured either by a comparison of these distributions with the average profiles of a discipline or specialty, or by concentration indexes. This paper focuses on the first approach. As the average profile of science drifts with the level of visibility, stratification by impact level is discussed. In this study, experimental internationalization indexes were calculated on the SCI for journals belonging to Earth & Space and Applied Biology. Convergence of measurements (types of indexes, type of normalization, publication vs citation scope) is addressed. Internationalization indexes may have a variety of applications, including characterization of the scientific publishing market and sampling of the SCI for science indicators.
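The profile-comparison approach above can be sketched with a simple similarity measure. This is an assumption-laden illustration, not the paper's actual index: it compares a journal's authoring-country distribution to its field's average profile using cosine similarity, where 1.0 means the journal exactly matches the field profile and lower values indicate a more distinctive (e.g. more national) country mix.

```python
import math


def profile_similarity(journal_counts, field_counts):
    """Cosine similarity between a journal's country distribution and the
    average profile of its field (illustrative choice of index).

    journal_counts -- {country: number of authoring (or citing) papers}
    field_counts   -- {country: count in the field's average profile}
    """
    countries = sorted(set(journal_counts) | set(field_counts))
    a = [journal_counts.get(c, 0) for c in countries]
    b = [field_counts.get(c, 0) for c in countries]
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    # Guard against empty distributions.
    return dot / (na * nb) if na and nb else 0.0


# A journal whose country mix matches the field profile scores 1.0.
sim = profile_similarity({"US": 2, "FR": 2}, {"US": 50, "FR": 50})
```

The same skeleton accommodates either the authoring or the citing distribution, which is one way to examine the convergence of publication- and citation-scope measurements mentioned in the abstract.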
Indicators in a research institute ought to be readable at several decision levels, and particularly with different breakdowns of the publication set chosen as reference. Citation transactions between journals have been widely used to structure scientific subfields in ISI databases. We tried a seed-free structuring of SCI/CMCI journals (a) to test convergence of purely citation-built specialties (roughly 150) on SCI/CMCI journals with existing classifications at the subfield level, and (b) to explore the interest and the limits of this approach for upper levels of aggregation (roughly 30 fields). A few limits of journal-level classification are addressed. At the subfield level, the convergence is large, with some discrepancies worth noticing. At the subdiscipline level, the method is not sufficient to achieve a satisfactory 30-level delineation, but gives a good basis for informed expert validation.