Interventions to support the resilience and mental health of frontline health and social care professionals during and after a disease outbreak, epidemic or pandemic: a mixed methods systematic review (Pollock et al.)
Background Following the initial identification of coronavirus disease 2019 (covid-19), the subsequent months saw a substantial increase in published biomedical research. Concerns have been raised in both the scientific and lay press about the quality of some of this research. We assessed clinical research from major clinical journals, comparing the methodological and reporting quality of covid-19 papers published in the first wave of the pandemic (defined here as December 2019 to May 2020 inclusive) with non-covid papers published over the same period.

Methods We reviewed research publications (print and online) from The BMJ, Journal of the American Medical Association (JAMA), The Lancet, and New England Journal of Medicine, from first publication of a covid-19 research paper (February 2020) to May 2020 inclusive. Paired reviewers were randomly allocated to extract data on methodological quality (risk of bias) and reporting quality (adherence to reporting guidance) from each paper using validated assessment tools. A random 10% of papers were assessed by a third, independent rater. Overall methodological quality for each paper was rated high, low or unclear. Reporting quality was described as the percentage of total items reported.

Results Of 168 research papers, 165 were eligible, including 54 (33%) with a covid-19 focus. For methodological quality, 18 (33%) covid-19 papers and 83 (73%) non-covid papers were rated as low risk of bias (OR 6.32, 95% CI 2.85 to 14.00). The difference in quality was maintained after adjusting for publication date, results, funding, study design, journal and raters (OR 6.09, 95% CI 2.09 to 17.72). For reporting quality, adherence to reporting guidelines was poorer for covid-19 papers: the mean percentage of total items reported was 72% (95% CI 66 to 77) for covid-19 papers and 84% (95% CI 81 to 87) for non-covid papers.
Conclusions Across various measures, we have demonstrated that covid-19 research from the first wave of the pandemic was potentially of lower quality than contemporaneous non-covid research. While some differences may be an inevitable consequence of conducting research during a viral pandemic, poor reporting should not be accepted.
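The unadjusted odds ratio in the results above can be approximately recovered from the counts given in the abstract. A minimal sketch, assuming a standard 2x2 Wald calculation on the log-odds scale; the published estimate (OR 6.32, 95% CI 2.85 to 14.00) was presumably computed on the full analytic dataset, so this back-of-envelope figure differs slightly:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio for a 2x2 table with a Wald 95% CI.

    a, b: events / non-events in group 1; c, d: the same in group 2.
    The CI is computed on the log scale, then exponentiated.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Counts taken from the abstract: 83 of 111 non-covid papers vs
# 18 of 54 covid-19 papers rated low risk of bias.
or_, lo, hi = odds_ratio_ci(83, 111 - 83, 18, 54 - 18)
print(f"OR {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```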
Introduction Randomised controlled trials (RCTs) that fail to meet their recruitment target risk increasing research waste. Acute stroke RCTs experience notable recruitment issues, but the efficiency of recruitment to stroke rehabilitation RCTs has not been explored.

Aims and objectives To explore recruitment efficiency and the trial features associated with efficient recruitment to stroke rehabilitation RCTs.

Methods A systematic review of stroke rehabilitation RCTs published between 2005 and 2015, identified by searching the Cochrane Stroke Group (CSG) Trials Register (drawing on 35 electronic databases, e.g. Medline, CINAHL, EMBASE), clinical trial registers, and hand-searching. Inclusion criteria were: a stroke rehabilitation intervention, delivered by a member of the rehabilitation team, in a clinically relevant environment. We extracted data on recruitment efficiency and trial features.

Results We screened 12,939 titles, 1270 abstracts and 788 full texts before extracting data from 512 included RCTs (n = 28,804 stroke survivor participants), making this the largest systematic review of recruitment to date. A third of stroke survivors screened consented to participate (median 34%, IQR 14-61), sites recruited on average 1.5 participants per site per month (IQR 0.71-3.22), and one in twenty (median 6%, IQR 0-13) dropped out during the RCT. Almost half (48%) of those screened in the community were recruited, compared with 27% in hospital settings. Similarly, almost half (47%) of those screened at least 6 months after stroke participated, compared with 23% of stroke survivors screened within a month of stroke. When one recruiter screened multiple sites, a median of one stroke survivor was recruited every 2 months, compared with more than two per month when there was a dedicated recruiter per site.
RCT recruitment was significantly faster per site, with fewer dropouts, for trials conducted in Asia (almost three stroke survivors monthly; 2% dropout) than for European trials (approximately one stroke survivor monthly; 7% dropout).

Conclusions One third of stroke survivors screened were randomised to rehabilitation RCTs, at a rate of between one and two per month per site. One in twenty did not complete the trial. Our findings will inform recruitment plans for future stroke rehabilitation RCTs. Limited reporting of recruitment details restricted the subgroup analyses performed.

Trial registration Prospective Register of Systematic Reviews (PROSPERO), registration number CRD42016033067.
Background Poor recruitment of patients is the predominant reason for early termination of randomized clinical trials (RCTs). Systematic empirical investigations and validation studies of existing recruitment models, however, are lacking. We aim to provide evidence-based guidance on how to predict and monitor recruitment of patients into RCTs. Our specific objectives are: (1) to establish a large sample of RCTs (target n = 300) with individual patient recruitment data from a large variety of RCTs, (2) to investigate participant and study site recruitment patterns and their association with the overall recruitment process, (3) to investigate the validity of a freely available recruitment model, and (4) to develop a user-friendly tool to assist trial investigators in planning and monitoring the recruitment process.

Methods Eligible RCTs need to have completed the recruitment process, used a parallel group design, and investigated any healthcare intervention where participants had the free choice to participate. To establish the planned sample of RCTs, we will use our contacts to national and international RCT networks, clinical trial units, and individual trial investigators. From included RCTs, we will collect patient-level information (date of randomization), site-level information (date of trial site activation), and trial-level information (target sample size). We will examine recruitment patterns using recruitment trajectories and stratifications by RCT characteristics. We will investigate associations of early recruitment patterns with overall recruitment using correlation and multivariable regression. To examine the validity of a freely available Bayesian prediction model, we will compare model predictions with the collected empirical data of included RCTs. Finally, we will user-test any promising tool using qualitative methods to guide further improvement.
Discussion Through a comprehensive, concerted, international effort, this research will contribute to a better understanding of participant recruitment to RCTs, which could enhance efficiency and reduce the waste of resources in clinical research.
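For intuition about what such planning and monitoring involves, the simplest possible recruitment projection can be sketched as a deterministic model. This is purely illustrative and is not the Bayesian model the protocol will validate; the function name and inputs are hypothetical, with the 1.5 participants/site/month figure borrowed from the stroke rehabilitation review above:

```python
def expected_recruits(activation_months, rate_per_site_month, t):
    """Expected cumulative recruits by month t, assuming each site
    recruits at a constant rate from its activation month onward."""
    return rate_per_site_month * sum(max(0.0, t - a) for a in activation_months)

# Five sites activated over the first four months, each recruiting
# 1.5 participants/month (the median per-site rate reported above):
projection = expected_recruits([0, 1, 2, 2, 4], 1.5, 12)
print(projection)  # 76.5 expected recruits by month 12
```

A real recruitment model would add uncertainty (e.g. Poisson variation per site) and site-level heterogeneity, which is exactly the gap the protocol's validation work addresses.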
Purpose Stroke survivors are at high risk of developing cognitive syndromes such as delirium and dementia. Accurate prediction of future cognitive outcomes may aid timely diagnosis, intervention planning, and stratification in clinical trials. We aimed to identify, describe and appraise existing multivariable prognostic rules for predicting post-stroke cognitive status.

Method We systematically searched four electronic databases from inception to November 2019 for publications describing a method to estimate an individual's probability of developing a cognitive syndrome following stroke. We extracted data from selected studies using a pre-specified proforma and applied the Prediction model Risk Of Bias ASsessment Tool (PROBAST) for critical appraisal.

Findings Of 17,390 titles, we included 10 studies (3143 participants) presenting the development of 11 prognostic rules: 7 for post-stroke cognitive impairment and 4 for delirium. The most commonly incorporated predictors were demographics, imaging findings, stroke type and symptom severity. Among studies assessing predictive discrimination, the area under the receiver operating characteristic curve (AUROC) in apparent validation ranged from 0.80 to 0.91. The overall risk of bias for each study was high, and only one prognostic rule had been externally validated.

Discussion/conclusion Research into the prognosis of cognitive outcomes following stroke is an expanding field, still in its early stages. Recommending specific prognostic rules is limited by the high risk of bias in all identified studies and the lack of supporting evidence from external validation. To ensure the quality of future research, investigators should adhere to current, endorsed best-practice guidelines for the conduct of prediction model studies.
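As context for the AUROC figures above: the AUROC equals the probability that a randomly chosen case who develops the outcome receives a higher predicted risk than a randomly chosen case who does not. A minimal rank-based sketch with made-up scores (not data from any study in this review):

```python
def auroc(pos_scores, neg_scores):
    """Rank-based AUROC: the fraction of positive/negative pairs in which
    the positive case outscores the negative one (ties count as half)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical predicted risks for 3 cases who developed a cognitive
# syndrome (pos) and 3 who did not (neg):
print(auroc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]))  # 8/9 = 0.888...
```

An AUROC of 0.5 is chance-level discrimination and 1.0 is perfect separation, so the reported 0.80 to 0.91 range indicates good apparent discrimination, pending external validation.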