Clinical research should ultimately improve patient care. For this to be possible, trials must evaluate outcomes that genuinely reflect real-world settings and concerns. However, many trials continue to measure and report outcomes that fall short of this clear requirement. We highlight problems with trial outcomes that make evidence difficult or impossible to interpret and that undermine the translation of research into practice and policy. These complex issues include the use of surrogate, composite and subjective endpoints; a failure to take account of patients’ perspectives when designing research outcomes; publication and other outcome reporting biases, including the under-reporting of adverse events; the reporting of relative measures at the expense of more informative absolute outcomes; misleading reporting; multiplicity of outcomes; and a lack of core outcome sets. Trial outcomes can be developed with patients in mind, however, and can be reported completely, transparently and competently. Clinicians, patients, researchers and those who pay for health services are entitled to demand reliable evidence demonstrating whether interventions improve patient-relevant clinical outcomes.
Objective: To review the evidence from studies relating SARS-CoV-2 culture to the results of reverse transcriptase polymerase chain reaction (RT-PCR) testing and other variables that may influence the interpretation of the test, such as time from symptom onset. Methods: We searched LitCovid, medRxiv, Google Scholar and the WHO Covid-19 database up to 10 September 2020. We included studies attempting to culture or observe SARS-CoV-2 in specimens with RT-PCR positivity. Studies were dual extracted and the data summarised narratively by specimen type. Where necessary, we contacted corresponding authors of included papers for additional information. We assessed quality using a modified QUADAS-2 risk of bias tool. Results: We included 29 studies reporting attempts at culturing, or observing tissue infection by, SARS-CoV-2 in sputum, nasopharyngeal or oropharyngeal, urine, stool, blood and environmental specimens. The quality of the studies was moderate, with a lack of standardised reporting. The data suggest a relationship between the time from symptom onset to specimen collection, cycle threshold (Ct) and symptom severity. Twelve studies reported that Ct values were significantly lower, and log copy numbers higher, in specimens producing live virus culture. Two studies reported that the odds of live virus culture were reduced by approximately 33% for every one-unit increase in Ct. Six of eight studies reported detectable RNA for longer than 14 days, but infectious potential declined after day 8 even among cases with ongoing high viral loads. Four studies reported viral culture from stool specimens. Conclusion: Complete live viruses, not the fragments identified by PCR, are necessary for transmission. Prospective routine testing of reference and culture specimens, and their relationship to symptoms, signs and patient co-factors, should be used to define the reliability of PCR for assessing infectious potential.
Specimens with high cycle threshold values are unlikely to have infectious potential.
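The reported Ct–culturability relationship can be sketched numerically. Assuming a per-unit odds ratio of roughly 0.67 (from the abstract's "approximately 33%" reduction per one-unit Ct increase) and a hypothetical baseline culture probability at a reference Ct, the decline in culture probability compounds quickly:

```python
# Sketch of how a per-unit odds ratio of ~0.67 compounds over Ct.
# The 0.67 odds ratio reflects the abstract's "~33% reduction" figure;
# the baseline (Ct = 20, p = 0.9) is a hypothetical illustration, not study data.

def culture_probability(ct, baseline_ct=20, baseline_p=0.9, odds_ratio=0.67):
    """Probability of a positive viral culture at a given Ct, assuming
    the odds scale by `odds_ratio` for each unit of Ct above baseline."""
    baseline_odds = baseline_p / (1 - baseline_p)
    odds = baseline_odds * odds_ratio ** (ct - baseline_ct)
    return odds / (1 + odds)

for ct in (20, 25, 30, 35):
    print(f"Ct {ct}: culture probability {culture_probability(ct):.3f}")
```

Under these assumed parameters, the probability of recovering live virus falls steeply between Ct values in the mid-20s and mid-30s, consistent with the abstract's conclusion that high-Ct specimens are unlikely to be infectious.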
Background: Discrepancies between pre-specified and reported outcomes are an important source of bias in trials. Despite legislation, guidelines and public commitments on correct reporting from journals, outcome misreporting continues to be prevalent. We aimed to document the extent of misreporting, establish whether it was possible to publish correction letters on all misreported trials as they were published, and monitor responses from editors and trialists to understand why outcome misreporting persists despite public commitments to address it. Methods: We identified five high-impact journals endorsing the Consolidated Standards of Reporting Trials (CONSORT) (New England Journal of Medicine, The Lancet, Journal of the American Medical Association, British Medical Journal, and Annals of Internal Medicine) and assessed all trials published over a six-week period to identify every correctly and incorrectly reported outcome, comparing published reports against published protocols or registry entries, using CONSORT as the gold standard. A correction letter describing all discrepancies was submitted to the journal for every misreported trial, and detailed coding sheets were shared publicly. The proportion of letters published and the delay to publication were assessed over 12 months of follow-up. Correspondence received from journals and authors was documented, and themes were extracted. Results: Sixty-seven trials were assessed in total. Outcome reporting was poor overall, with wide variation between journals on pre-specified primary outcomes (mean 76% correctly reported; journal range 25–96%), secondary outcomes (mean 55%; range 31–72%), and number of undeclared additional outcomes per trial (mean 5.4; range 2.9–8.3). Fifty-eight trials had discrepancies requiring a correction letter (87%; journal range 67–100%). Twenty-three letters were published (40%), with extensive variation between journals (range 0–100%).
Where letters were published, there were delays (median 99 days; range 0–257 days). Twenty-nine trials had a pre-trial protocol publicly available (43%; range 0–86%). Qualitative analysis demonstrated extensive misunderstandings among journal editors about correct outcome reporting and CONSORT. Some journals did not engage positively when provided with correspondence identifying misreporting; we identified possible breaches of ethics and publishing guidelines. Conclusions: All five journals were listed as endorsing CONSORT, yet all exhibited extensive breaches of this guidance, and most rejected correction letters documenting those shortcomings. Readers are likely to be misled by this discrepancy. We discuss the advantages of prospective methodology research that shares all data openly and proactively in real time as feedback on critiqued studies. This is the first empirical study of major ...
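The headline proportions follow directly from the raw counts given in the abstract (67 trials assessed, 58 requiring a correction letter, 23 letters published). This is a minimal arithmetic check of those figures, not part of the study's own analysis:

```python
# Reproduce the abstract's headline proportions from the stated counts.
trials_assessed = 67
trials_needing_correction = 58
letters_published = 23

pct_needing_correction = trials_needing_correction / trials_assessed
pct_letters_published = letters_published / trials_needing_correction

print(f"{pct_needing_correction:.0%} of trials needed a correction letter")
print(f"{pct_letters_published:.0%} of those letters were published")
```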