Systematic reviews are difficult to keep up to date, but failure to do so leads to a decay in review currency, accuracy, and utility. We are developing a novel approach to systematic review updating termed "Living systematic review" (LSR): systematic reviews that are continually updated, incorporating relevant new evidence as it becomes available. LSRs may be particularly important in fields where research evidence is emerging rapidly, current evidence is uncertain, and new research may change policy or practice decisions. We hypothesize that a continual approach to updating will achieve greater currency and validity, and increase the benefits to end users, with feasible resource requirements over time.
Julian Elliott and colleagues discuss how the current inability to keep systematic reviews up to date hampers the translation of knowledge into action. They propose living systematic reviews as a contribution to evidence synthesis to enhance the accuracy and utility of health evidence.
Background
Electronic cigarettes (ECs) are electronic devices that heat a liquid into an aerosol for inhalation. The liquid usually comprises propylene glycol and glycerol, with or without nicotine and flavours, and is stored in disposable or refillable cartridges or a reservoir. Since ECs appeared on the market in 2006 there has been steady growth in sales. Smokers report using ECs to reduce the risks of smoking, but some healthcare organizations, tobacco control advocacy groups and policy makers have been reluctant to encourage smokers to switch to ECs, citing a lack of evidence on efficacy and safety. Smokers, healthcare providers and regulators want to know whether these devices can help smokers quit and whether they are safe to use for this purpose. This review is an update of a review first published in 2014.
Objectives
To evaluate the safety and effect of using ECs to help people who smoke achieve long-term smoking abstinence.
Search methods
We searched the Cochrane Tobacco Addiction Group's Specialized Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, and PsycINFO for relevant records from 2004 to January 2016, together with reference checking and contact with study authors.
Selection criteria
We included randomized controlled trials (RCTs) in which current smokers (motivated or unmotivated to quit) were randomized to EC or a control condition, and which measured abstinence rates at six months or longer. As the field of EC research is new, we also included cohort follow-up studies with at least six months of follow-up. For assessment of adverse events (AEs), we included randomized cross-over trials, RCTs and cohort follow-up studies that included at least one week of EC use.
Data collection and analysis
We followed standard Cochrane methods for screening and data extraction.
Our main outcome measure was abstinence from smoking after at least six months follow-up, and we used the most rigorous definition available (continuous, biochemically validated, longest follow-up). We used a fixed-effect Mantel-Haenszel model to calculate the risk ratio (RR) with a 95% confidence interval (CI) for each study, and where appropriate we pooled data from these studies in meta-analyses.
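The fixed-effect Mantel-Haenszel pooling described above can be sketched in a few lines. This is an illustrative implementation only (the function name, study tuples, and numbers are ours, not from the review), using the Greenland-Robins variance estimator for the confidence interval:

```python
import math

def mh_pooled_rr(studies, z=1.96):
    """Fixed-effect Mantel-Haenszel pooled risk ratio with a 95% CI.

    Each study is a tuple (events_trt, total_trt, events_ctl, total_ctl).
    """
    num = den = p_sum = 0.0
    for a, n1, c, n0 in studies:
        n = n1 + n0
        num += a * n0 / n  # R_i: weighted treatment-arm events
        den += c * n1 / n  # S_i: weighted control-arm events
        # Greenland-Robins variance component for log(RR_MH)
        p_sum += (n1 * n0 * (a + c) - a * c * n) / n**2
    rr = num / den
    se = math.sqrt(p_sum / (num * den))
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Two hypothetical trials: (quitters_EC, n_EC, quitters_control, n_control)
rr, lo, hi = mh_pooled_rr([(10, 100, 5, 100), (20, 200, 10, 200)])
```

For a single stratum the Mantel-Haenszel estimate reduces to the crude risk ratio, which is a convenient sanity check.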
Background
Evidence mapping describes the quantity, design and characteristics of research in broad topic areas, in contrast to systematic reviews, which usually address narrowly focused research questions. The breadth of evidence mapping helps to identify evidence gaps, and may guide future research efforts. The Global Evidence Mapping (GEM) Initiative was established in 2007 to create evidence maps providing an overview of existing research in Traumatic Brain Injury (TBI) and Spinal Cord Injury (SCI).
Methods
The GEM evidence mapping method involved three core tasks:
1. Setting the boundaries and context of the map: definitions for the fields of TBI and SCI were clarified, the prehospital, acute inhospital and rehabilitation phases of care were delineated, and relevant stakeholders (patients, carers, clinicians, researchers and policymakers) who could contribute to the mapping were identified. Researchable clinical questions were developed through consultation with key stakeholders and a broad literature search.
2. Searching for and selection of relevant studies: evidence search and selection involved development of specific search strategies, development of inclusion and exclusion criteria, searching of relevant databases, and independent screening and selection by two researchers.
3. Reporting on yield and study characteristics: data extraction was performed at two levels, 'interventions and study design' and 'detailed study characteristics'. The evidence map and commentary reflected the depth of data extraction.
Results
One hundred and twenty-nine researchable clinical questions in TBI and SCI were identified. These questions were then prioritised into high (n = 60) and low (n = 69) importance by the stakeholders involved in question development.
Since 2007, 58,263 abstracts have been screened, 3,731 full-text articles have been reviewed and 1,644 relevant neurotrauma publications have been mapped, covering 53 high-priority questions.
Conclusions
GEM Initiative evidence maps have a broad range of potential end-users, including funding agencies, researchers and clinicians. Evidence mapping is at least as resource-intensive as systematic reviewing. The GEM Initiative has made advancements in evidence mapping, most notably in the area of question development and prioritisation. Evidence mapping complements other review methods for describing existing research, informing future research efforts, and addressing evidence gaps.
While it is important for the evidence supporting practice guidelines to be current, that is often not the case. The advent of living systematic reviews has made the concept of "living guidelines" realistic, with the promise of providing timely, up-to-date and high-quality guidance to target users. We define living guidelines as an optimization of the guideline development process to allow updating individual recommendations as soon as new relevant evidence becomes available. A major implication of that definition is that the unit of update is the individual recommendation and not the whole guideline. We then discuss when living guidelines are appropriate, the workflows required to support them, the collaboration between living systematic review and living guideline teams, the thresholds for changing recommendations, and potential approaches to publication and dissemination. The success and sustainability of living guidelines will depend on those of their major pillar, the living systematic review. We conclude that guideline developers should both experiment with and research the process of living guidelines.
New approaches to evidence synthesis, which use human effort and machine automation in mutually reinforcing ways, can enhance the feasibility and sustainability of living systematic reviews. Human effort is a scarce and valuable resource, required when automation is impossible or undesirable, and includes contributions from online communities ("crowds") as well as more conventional contributions from review authors and information specialists. Automation can assist with some systematic review tasks, including searching, eligibility assessment, identification and retrieval of full-text reports, extraction of data, and risk of bias assessment. Workflows can be developed in which human effort and machine automation can each enable the other to operate in more effective and efficient ways, offering substantial enhancement to the productivity of systematic reviews. This paper describes and discusses the potential-and limitations-of new ways of undertaking specific tasks in living systematic reviews, identifying areas where these human/machine "technologies" are already in use, and where further research and development is needed. While the context is living systematic reviews, many of these enabling technologies apply equally to standard approaches to systematic reviewing.
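As an illustration of the kind of automation mentioned above for eligibility assessment, the sketch below trains a minimal Naive Bayes classifier on labelled titles and scores new records by log-odds of relevance. Everything here (function names, tokeniser, example data) is a hypothetical toy, not any production screening tool:

```python
import math
from collections import Counter

def tokenize(text):
    # Crude tokeniser: lowercase, alphabetic words only
    return [t for t in text.lower().split() if t.isalpha()]

def train(labelled):
    """labelled: list of (text, is_relevant) pairs."""
    counts = {True: Counter(), False: Counter()}
    docs = Counter()
    for text, label in labelled:
        docs[label] += 1
        counts[label].update(tokenize(text))
    vocab = set(counts[True]) | set(counts[False])
    return counts, docs, vocab

def relevance_score(model, text):
    """Log-odds that a record is relevant; > 0 suggests 'include'."""
    counts, docs, vocab = model
    total = docs[True] + docs[False]
    log_odds = math.log(docs[True] / total) - math.log(docs[False] / total)
    for tok in tokenize(text):
        if tok not in vocab:
            continue
        # Laplace-smoothed per-class token probabilities
        p_t = (counts[True][tok] + 1) / (sum(counts[True].values()) + len(vocab))
        p_f = (counts[False][tok] + 1) / (sum(counts[False].values()) + len(vocab))
        log_odds += math.log(p_t) - math.log(p_f)
    return log_odds

# Toy training data: relevant trial titles vs clearly irrelevant records
model = train([
    ("randomized trial of nicotine patches", True),
    ("randomized controlled trial of smoking cessation", True),
    ("weeknight dinner recipes", False),
    ("a travel guide to coastal towns", False),
])
```

In practice such scores would rank records for human screeners rather than make final inclusion decisions, matching the human/machine division of labour described above.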
Background: While the potential of clinical practice guidelines (CPGs) to support implementation of evidence has been demonstrated, it is not currently being achieved. CPGs are both poorly developed and ineffectively implemented. To improve clinical practice and health outcomes, both well-developed CPGs and effective methods of CPG implementation are needed. We sought to establish whether there is agreement on the fundamental characteristics of an evidence-based CPG development process and to explore whether CPG development handbooks provide sufficient guidance for their users to apply it.
The recent proliferation of strategies designed to increase the use of research in health policy (knowledge exchange) demands better application of contemporary conceptual understandings of how research shapes policy. Predictive models, or action frameworks, are needed to organise existing knowledge and enable a more systematic approach to the selection and testing of intervention strategies. Useful action frameworks need to meet four criteria: have a clearly articulated purpose; be informed by existing knowledge; provide an organising structure to build new knowledge; and be capable of guiding the development and testing of interventions. This paper describes the development of the SPIRIT Action Framework. A literature search and interviews with policy makers identified modifiable factors likely to influence the use of research in policy. An iterative process was used to combine these factors into a pragmatic tool which meets the four criteria. The SPIRIT Action Framework can guide conceptually-informed practical decisions in the selection and testing of interventions to increase the use of research in policy. The SPIRIT Action Framework hypothesises that a catalyst is required for the use of research, the response to which is determined by the capacity of the organisation to engage with research. Where there is sufficient capacity, a series of research engagement actions might occur that facilitate research use. These hypotheses are being tested in ongoing empirical work.