The National Institutes of Health (NIH) is the largest source of funding for biomedical research in the world. Funding decisions are based largely on the outcome of a peer-review process intended to provide a fair, equitable, timely, and unbiased review of the quality, scientific merit, and potential impact of the research. There have been concerns about the criteria reviewers use, and recent changes in review procedures at the NIH now make it possible to analyze how reviewers evaluate applications for funding. This study examined the criterion and overall impact scores recorded by assigned reviewers for R01 grant applications. The results suggest that all of the scored review criteria, including innovation, are related to the overall impact score. Further, good scores are necessary on all five scored review criteria, not just the score for research methodology, in order to achieve a good overall impact score.
Background: Blinding reviewers to applicant identity has been proposed as a way to reduce bias in peer review. Methods: This experimental test used 1,200 NIH grant applications: 400 from Black investigators, 400 matched applications from White investigators, and 400 randomly selected applications from White investigators. Applications were reviewed by mail in standard and redacted formats. Results: Redaction reduced, but did not eliminate, reviewers' ability to correctly guess features of applicant identity. The primary, pre-registered analysis hypothesized a differential effect of redaction according to investigator race in the matched applications. A set of secondary analyses (not pre-registered) used the randomly selected applications from White scientists and tested the same interaction. Both analyses revealed similar effects: standard-format applications from White investigators scored better than those from Black investigators. Redaction cut the size of the difference by about half (e.g., from a Cohen's d of 0.20 to 0.10 in matched applications); redaction caused applications from White scientists to score worse but had no effect on scores for Black applicants. Conclusions: Grant-writing considerations and halo effects are discussed as competing explanations for this pattern. The findings support further evaluation of peer-review models that diminish the influence of applicant identity. Funding: Funding was provided by the NIH.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context in which an article is cited and indicate whether it provides supporting or contrasting evidence. scite is used by students and researchers around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.