A key step in any citation analysis study is to map the coded citation data from a high-dimensional dataset to a lower-dimensional one while detecting the groups, clusters, patterns, or other features of the citation relationships. Over the years, many methods have been used in various studies, including multidimensional scaling, Pathfinder networks, and Kohonen self-organizing maps. Many of these methods are fundamentally different, yet their results are similar and comparable. In this study, we applied four mapping methods to the same dataset, the author co-citation matrix of the top 100 highly cited information scientists. The results permit instructive comparisons among the mapping algorithms as well as the complementary views each provides of the dataset.
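One of the mapping methods named above, multidimensional scaling, can be illustrated with a minimal sketch. The author names, co-citation counts, and the count-to-distance transform below are all hypothetical; the classical (Torgerson) MDS steps are standard: convert counts to dissimilarities, double-center the squared dissimilarities, and take the top two eigenvectors as 2-D coordinates.

```python
import numpy as np

# Hypothetical symmetric co-citation count matrix for four authors.
authors = ["A", "B", "C", "D"]
C = np.array([
    [0, 8, 7, 1],
    [8, 0, 6, 2],
    [7, 6, 0, 1],
    [1, 2, 1, 0],
], dtype=float)
n = len(authors)

# Convert counts to dissimilarities: heavily co-cited pairs sit close together.
# (The max+1 - count transform is one simple, ad hoc choice.)
D = np.where(np.eye(n, dtype=bool), 0.0, C.max() + 1 - C)

# Classical (Torgerson) MDS: double-center the squared dissimilarities,
# then embed using the two largest eigenvalue/eigenvector pairs.
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
vals, vecs = np.linalg.eigh(B)           # eigenvalues in ascending order
order = np.argsort(vals)[::-1]           # reorder descending
top = order[:2]
coords = vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

for name, (x, y) in zip(authors, coords):
    print(f"{name}: ({x:+.2f}, {y:+.2f})")
```

In the resulting 2-D map, the strongly co-cited pair A and B land near each other, while the weakly co-cited author D is placed far from the rest, which is the clustering behavior such maps are read for.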
In their 1998 article "Visualizing a discipline: An author cocitation analysis of information science, 1972-1995," White and McCain used multidimensional scaling, hierarchical clustering, and factor analysis to display the specialty groupings of 120 highly cited ("paradigmatic") information scientists. These statistical techniques are traditional in author cocitation analysis (ACA). It is shown here that a newer technique, Pathfinder Networks (PFNETs), has considerable advantages for ACA. In PFNETs, nodes represent authors, and explicit links represent weighted paths between nodes, the weights in this case being cocitation counts. The links can be drawn to exclude all but the single highest counts for author pairs, which reduces a network of authors to only the most salient relationships. When these are mapped, dominant authors can be defined as those with relatively many links to other authors (i.e., high degree centrality). Links between authors and dominant authors define specialties, and links between dominant authors connect specialties into a discipline. Maps are made with one rather than several computer routines and in one rather than many computer passes. Also, PFNETs can, and should, be generated from matrices of raw counts rather than Pearson correlations, which removes a computational step associated with traditional ACA. White and McCain's raw data from 1998 are remapped as a PFNET. It is shown that the specialty groupings correspond closely to those seen in the factor analysis of the 1998 article. Because PFNETs are fast to compute, they are used in AuthorLink, a new Web-based system that creates live interfaces for cocited author retrieval on the fly.
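The pruning idea described above can be sketched for the common PFNET(r = ∞, q = n − 1) variant: with r = ∞, a path's weight is its largest edge, and a direct link survives only if no alternative path has a smaller maximum. This is not White's implementation; it is a minimal illustration with a hypothetical four-author count matrix, using a Floyd-Warshall-style minimax pass.

```python
import numpy as np

# Hypothetical symmetric co-citation count matrix for four authors.
counts = np.array([
    [0, 10, 8, 2],
    [10, 0, 9, 0],
    [8, 9, 0, 3],
    [2, 0, 3, 0],
], dtype=float)
n = counts.shape[0]

# Turn raw counts into distances: strongly co-cited pairs are "close".
INF = float("inf")
d = np.full((n, n), INF)
nz = counts > 0
d[nz] = 1.0 / counts[nz]
np.fill_diagonal(d, 0.0)

# Minimax distances (r = infinity): a path costs its LARGEST edge,
# and we want, for each pair, the path minimizing that maximum.
m = d.copy()
for k in range(n):
    for i in range(n):
        for j in range(n):
            m[i, j] = min(m[i, j], max(m[i, k], m[k, j]))

# An edge survives only if no multi-edge path dominates its direct distance.
edges = sorted((i, j) for i in range(n) for j in range(i + 1, n)
               if counts[i, j] > 0 and d[i, j] <= m[i, j])
print(edges)  # surviving links, as (author_index, author_index) pairs
```

On this toy matrix the weaker A-C and A-D links are pruned because stronger chains (A-B-C, A-...-C-D) dominate them, leaving only the most salient path through the network, exactly the reduction the abstract describes.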
In their article "Requirements for a cocitation similarity measure, with special reference to Pearson's correlation coefficient," Ahlgren, Jarneving, and Rousseau fault traditional author cocitation analysis (ACA) for using Pearson's r as a measure of similarity between authors because it fails two tests of stability of measurement. The instabilities arise when rs are recalculated after a first coherent group of authors has been augmented by a second coherent group with whom the first has little or no cocitation. However, AJ&R neither cluster nor map their data to demonstrate how fluctuations in rs will mislead the analyst, and the problem they pose is remote from both theory and practice in traditional ACA. By entering their own rs into multidimensional scaling and clustering routines, I show that, despite r's fluctuations, clusters based on it are much the same for the combined groups as for the separate groups. The combined groups when mapped appear as polarized clumps of points in two-dimensional space, confirming that differences between the groups have become much more important than differences within the groups, an accurate portrayal of what has happened to the data. Moreover, r produces clusters and maps very like those based on other coefficients that AJ&R mention as possible replacements, such as a cosine similarity measure or a chi-square dissimilarity measure. Thus, r performs well enough for the purposes of ACA. Accordingly, I argue that qualitative information revealing why authors are cocited is more important than the cautions proposed in the AJ&R critique. I include notes on topics such as handling the diagonal in author cocitation matrices, lognormalizing data, and testing r for significance.
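The claim that Pearson's r and the cosine measure tell much the same story can be checked on toy data. The co-citation profiles below are hypothetical (three authors profiled over five cited authors, with a and b in one specialty and c in another); the point is only that both coefficients rank the author pairs the same way.

```python
import math

def pearson(x, y):
    """Pearson's r between two co-citation profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def cosine(x, y):
    """Cosine of the angle between the raw count vectors."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny)

# Hypothetical co-citation profiles over five cited authors.
a = [10, 8, 6, 1, 0]
b = [9, 9, 5, 2, 1]   # same specialty as a
c = [1, 0, 2, 9, 10]  # different specialty

print("pearson:", round(pearson(a, b), 3), round(pearson(a, c), 3))
print("cosine: ", round(cosine(a, b), 3), round(cosine(a, c), 3))
```

Both measures score the within-specialty pair (a, b) far above the cross-specialty pair (a, c), so a clustering or mapping routine fed either coefficient would group the authors the same way, which is the substance of the rebuttal.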
Bibliometric measures for evaluating research units in the book-oriented humanities and social sciences are underdeveloped relative to those available for journal-oriented science and technology. We therefore present a new measure designed for book-oriented fields: the "libcitation count." This is a count of the libraries holding a given book, as reported in a national or international union catalog. As librarians decide what to acquire for the audiences they serve, they jointly constitute an instrument for gauging the cultural impact of books. Their decisions are informed by knowledge not only of audiences but also of the book world, e.g., the reputations of authors and the prestige of publishers. From libcitation counts, measures can be derived for comparing research units. Here, we imagine a matchup between the departments of history, philosophy, and political science at the University of New South Wales and the University of Sydney in Australia. We chose the 12 books from each department that had the highest libcitation counts in the Libraries Australia union catalog during 2000-2006. We present each book's raw libcitation count, its rank within its LC class, and its LC-class normalized libcitation score. The latter is patterned on the item-oriented field-normalized citation score used in evaluative bibliometrics. Summary statistics based on these measures allow the departments to be compared for cultural impact. Our work has implications for programs such as Excellence in Research for Australia and the Research Assessment Exercise in the United Kingdom. It also has implications for data mining in OCLC's WorldCat.
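Since the abstract does not give the formula, one plausible reading of the LC-class normalized libcitation score, patterned on the item-oriented field-normalized citation score, is a book's count divided by the mean count of books in its LC class. The titles, classes, and counts below are invented for illustration.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical holdings: (title, LC class, libcitation count from a union catalog).
books = [
    ("Book A", "D", 40),   # history
    ("Book B", "D", 20),
    ("Book C", "B", 15),   # philosophy
    ("Book D", "B", 5),
    ("Book E", "B", 10),
]

# Group counts by LC class to get each class's expected (mean) count.
by_class = defaultdict(list)
for _, lc, count in books:
    by_class[lc].append(count)
class_mean = {lc: mean(cs) for lc, cs in by_class.items()}

# Normalized score: the book's count relative to its LC-class mean,
# so scores are comparable across classes with different holding levels.
scores = {title: count / class_mean[lc] for title, lc, count in books}
for title, s in scores.items():
    print(f"{title}: {s:.2f}")
```

A score above 1.0 means the book is held more widely than a typical book in its LC class, which makes departments collecting in differently sized classes comparable on one scale.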
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the citing article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.