The evidence-based medicine movement has championed the need for objective and transparent methods of clinical guideline development. The Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) framework was developed for that purpose. Central to this framework are criteria for assessing the quality of evidence from clinical studies and the impact that a body of evidence should have on our confidence in the clinical effectiveness of a therapy under examination. GRADE has been adopted by a number of professional medical societies and organizations as a means of orienting the development of clinical guidelines. As a result, the GRADE method has implications for how health care is delivered and for patient outcomes. In this paper, we identify several issues with the underlying logic of GRADE that warrant further discussion. First, the definitions of the "grades of evidence" provided by GRADE, while explicit, are functionally vague. Second, the "criteria for assigning grade of evidence" are seemingly arbitrary and arguably logically incoherent. Finally, the GRADE method is unclear on how to integrate evidence grades with other important factors, such as patient preferences and trade-offs between costs, benefits, and harms, when proposing a clinical practice recommendation. Much of the GRADE method requires judgement on the part of the user, making it unclear how the framework reduces bias in recommendations or makes them more transparent, both goals of the programme. In our view, the issues presented in this paper undermine GRADE's justificatory scheme, thereby limiting the usefulness of GRADE as a tool for developing clinical recommendations.
Contemporary health care has become preoccupied with evidence. "Evidence-based practice" has permeated several (if not all) health care professions and most aspects of service provision. That fact is evident in the many articles published in this issue of the Journal of Evaluation in Clinical Practice. It is good that decisions about diagnostic tests, management of care, and the organization and allocation of health care resources should be based on evidence. I doubt anyone would believe (or admit) that it should be otherwise. The mere suggestion that health care should be "evidence-based" implies that health care activities can also be "not evidence-based." This raises the question of what makes something "evidence-based" (and, for that matter, what makes something "evidence"). What are the alternatives to "evidence-based"? The answers to these questions have significant implications for which interventions are selected for practice and how they are studied in an evidence-based
A rational thinker uses all available evidence to formulate beliefs. The GRADE criteria seem to suggest that we discard some of that information when other, more favoured information (e.g., that derived from clinical trials) is available. The GRADE framework should strive to ensure that the whole evidence base is considered when determining confidence in the effect estimate. The incremental value of such evidence in determining confidence in the effect estimate should be assigned in a manner that is theoretically or empirically justified, such that confidence is proportional to the evidence, both for and against it.
Rationale, aims and objectives The COVID‐19 pandemic has impacted every facet of society, including medical research. This paper is the second part of a series of articles that explore the intricate relationship between the different challenges that have hindered biomedical research and the generation of novel scientific knowledge during the COVID‐19 pandemic. In the first part of this series, we demonstrated that, in the context of COVID‐19, the scientific community was faced with numerous challenges with respect to (1) finding and prioritizing relevant research questions and (2) choosing study designs that are appropriate for a time of emergency. Methods During the early stages of the pandemic, research conducted on hydroxychloroquine (HCQ) sparked several heated debates with respect to the scientific methods used and the quality of knowledge generated. Research on HCQ is used as a case study in both papers. The authors explored biomedical databases, peer‐reviewed journals, pre‐print servers and media articles to identify relevant literature on HCQ and COVID‐19, and examined philosophical perspectives on medical research in the context of this pandemic and previous global health challenges. Results This second paper demonstrates that a lack of research prioritization and methodological rigour resulted in the generation of fleeting and inconsistent evidence that complicated the development of public health guidelines. The reporting of scientific findings to the scientific community and general public highlighted the difficulty of finding a balance between accuracy and speed. Conclusions The COVID‐19 pandemic presented challenges in terms of (3) evaluating evidence for the purpose of making evidence‐based decisions and (4) sharing scientific findings with the rest of the scientific community.
This second paper demonstrates that the four challenges outlined in the first and second papers have often compounded each other and have contributed to slowing down the creation of novel scientific knowledge during the COVID‐19 pandemic.
Rationale, aims, and objectives One of the sectors challenged by the COVID‐19 pandemic is medical research. COVID‐19 originates from a novel coronavirus (SARS‐CoV‐2) and the scientific community is faced with the daunting task of creating a novel model for this pandemic or, in other words, creating novel science. This paper is the first part of a series of two papers that explore the intricate relationship between the different challenges that have hindered biomedical research and the generation of scientific knowledge during the COVID‐19 pandemic. Methods During the early stages of the pandemic, research conducted on hydroxychloroquine (HCQ) was chaotic and sparked several heated debates with respect to the scientific methods used and the quality of knowledge generated. Research on HCQ is used as a case study in both papers. The authors explored biomedical databases, peer‐reviewed journals, pre‐print servers, and media articles to identify relevant literature on HCQ and COVID‐19, and examined philosophical perspectives on medical research in the context of this pandemic and previous global health challenges. Results This paper demonstrates that a lack of prioritization among research questions and therapeutics was responsible for the duplication of clinical trials and the dispersion of precious resources. Study designs, aimed at minimising biases and increasing objectivity, were, instead, the subject of fruitless oppositions. The duplication of research works, combined with poor‐quality research, has greatly contributed to slowing down the creation of novel scientific knowledge. Conclusions The COVID‐19 pandemic presented challenges in terms of (1) finding and prioritising relevant research questions and (2) choosing study designs that are appropriate for a time of emergency.
One concern that has been raised throughout the COVID-19 pandemic is the scarcity and uneven quality of information. As such, a focus of building more resilient systems has been on improving our capacity to collect