Research on bias in peer review examines scholarly communication and funding processes to assess the epistemic and social legitimacy of the mechanisms by which knowledge communities vet and self‐regulate their work. Despite vocal concerns, a closer look at the empirical and methodological limitations of research on bias raises questions about the existence and extent of many hypothesized forms of bias. In addition, the notion of bias is predicated on an implicit ideal that, once articulated, raises questions about the normative implications of research on bias in peer review. This review provides a brief description of the function, history, and scope of peer review; articulates and critiques the conception of bias unifying research on bias in peer review; characterizes and examines the empirical, methodological, and normative claims of bias in peer review research; and assesses possible alternatives to the status quo. We close by identifying ways to expand conceptions and studies of bias to contend with the complexity of social interactions among actors involved directly and indirectly in peer review.
The authors apply a new bibliometric measure, the h-index (Hirsch, 2005), to the literature of information science. Faculty rankings based on raw citation counts are compared with those based on h-counts. There is a strong positive correlation between the two sets of rankings. It is shown how the h-index can be used to express the broad impact of a scholar's research output over time in more nuanced fashion than straight citation counts.
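The h-index mentioned above has a simple operational definition: an author has index h if h of their papers have at least h citations each. A minimal sketch of that computation (the function name and example citation counts are illustrative, not drawn from the study):

```python
def h_index(citations):
    """Return the h-index (Hirsch, 2005): the largest h such that
    the author has h papers each cited at least h times."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:   # paper at this rank still has >= rank citations
            h = rank
        else:
            break
    return h

# Hypothetical author with five papers:
print(h_index([10, 8, 5, 4, 3]))  # → 4 (four papers with >= 4 citations)
```

Note how the example illustrates the abstract's point: the same raw total (30 citations) could come from one blockbuster paper or many modestly cited ones, but the h-index distinguishes these cases.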
We chronicle the use of acknowledgments in 20th‐century scholarship by analyzing and classifying more than 4,500 specimens covering a 100‐year period. Our results show that the intensity of acknowledgment varies by discipline, reflecting differences in prevailing sociocognitive structures and work practices. We demonstrate that the acknowledgment has gradually established itself as a constitutive element of academic writing, one that provides a revealing insight into the nature and extent of subauthorship collaboration. Complementary data on rates of coauthorship are also presented to highlight the growing importance of collaboration and the increasing division of labor in contemporary research and scholarship.
Classical assumptions about the nature and ethical entailments of authorship (the standard model) are being challenged by developments in scientific collaboration and multiple authorship. In the biomedical research community, multiple authorship has increased to such an extent that the trustworthiness of the scientific communication system has been called into question. Documented abuses, such as honorific authorship, have serious implications in terms of the acknowledgment of authority, allocation of credit, and assigning of accountability. Within the biomedical world it has been proposed that authors be replaced by lists of contributors (the radical model), whose specific inputs to a given study would be recorded unambiguously. The wider implications of the 'hyperauthorship' phenomenon for scholarly publication are considered.
The idea of a unified citation index to the literature of science was first outlined by Eugene Garfield [1] in 1955 in the journal Science. Science Citation Index has since established itself as the gold standard for scientific information retrieval. It has also become the database of choice for citation analysts and evaluative bibliometricians worldwide. As scientific publication moves to the web, and novel approaches to scholarly communication and peer review establish themselves, new methods of citation and link analysis will emerge to capture often liminal expressions of peer esteem, influence and approbation. The web thus affords bibliometricians rich opportunities to apply and adapt their techniques to new contexts and content: the age of ‘bibliometric spectroscopy’ [2] is dawning.
Citation analysis does not generally take the quality of citations into account: all citations are weighted equally irrespective of source. However, a scholar may be highly cited but not highly regarded: popularity and prestige are not identical measures of esteem. In this study we define popularity as the number of times an author is cited and prestige as the number of times an author is cited by highly cited papers. Information Retrieval (IR) is the test field. We compare the 40 leading researchers in terms of their popularity and prestige over time. Some authors are ranked high on prestige but not on popularity, while others are ranked high on popularity but not on prestige. We also relate measures of popularity and prestige to date of Ph.D. award, number of key publications, organizational affiliation, receipt of prizes/honors, and gender.
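The popularity/prestige distinction drawn above can be sketched directly from its definitions: popularity counts all citations equally, while prestige counts only citations arriving from highly cited papers. The sketch below is a minimal illustration under an assumed operationalization; the threshold for "highly cited" and the function names are hypothetical, not taken from the study:

```python
def popularity(citing_paper_counts):
    """Popularity: the total number of times the author is cited."""
    return len(citing_paper_counts)

def prestige(citing_paper_counts, threshold=100):
    """Prestige: citations received from highly cited papers.
    Here 'highly cited' means a citing paper with at least
    `threshold` citations of its own (threshold is illustrative)."""
    return sum(1 for c in citing_paper_counts if c >= threshold)

# Each entry is the citation count of one paper citing the author:
cites = [250, 3, 120, 40, 7, 500]
print(popularity(cites))  # → 6
print(prestige(cites))    # → 3 (citing papers with >= 100 citations)
```

Two authors with equal popularity can thus diverge sharply in prestige, which is the asymmetry the study exploits in ranking the 40 IR researchers.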
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context in which an article is cited and describe whether the citing article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.